DP120 Data Protector 9.0X
Essentials
Student Guide
Use of this material to deliver training without prior written permission from HP is prohibited.
© Copyright 2014 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP
products and services are set forth in the express warranty statements accompanying such products
and services. Nothing herein should be construed as constituting an additional warranty.
HP shall not be liable for technical or editorial errors or omissions contained herein.
This is an HP copyrighted work that may not be reproduced without the written permission of HP.
Microsoft®, Windows®, and Windows® Vista are U.S. registered trademarks of Microsoft Corporation.
Adobe and Acrobat are trademarks of Adobe Systems Incorporated.
Java is a registered trademark of Oracle and/or its affiliates.
Oracle® is a registered US trademark of Oracle Corporation, Redwood City, California.
UNIX® is a registered trademark of The Open Group.
LiveVault® is a registered trademark of Autonomy Corporation plc.
DP120 Data Protector 9.0X Essentials
Student Guide
Rev 1.2 December 2014
Contents
Module 1 — Introduction 1
1–3. SLIDE: Welcome ............................................................................................................................................. 2
1–4. SLIDE: Overview ............................................................................................................................................. 3
1–5. SLIDE: Agenda................................................................................................................................................ 5
1–7. SLIDE: Additional Resources ......................................................................................................................... 6
Module 3 — Architecture 1
3–3. SLIDE: HP Data Protector .............................................................................................................................. 2
3–4. SLIDE: HP Data Protector history .................................................................................................................. 4
3–5. SLIDE: Protected environment ...................................................................................................................... 5
3–6. SLIDE: Backup options ................................................................................................................................... 6
3–7. SLIDE: Direct attached backup ...................................................................................................................... 7
3–8. SLIDE: Network backup ................................................................................................................................. 8
3–9. SLIDE: SAN attached backup ......................................................................................................................... 9
3-10. SLIDE: Array based replica backup (ZDB) ...................................................................................................10
3-11. SLIDE: Backup and replication methods ....................................................................................................12
3-12. SLIDE: Cell concept .....................................................................................................................................13
3-13. SLIDE: Client server architecture ................................................................................................................16
3-14. SLIDE: Cell Manager (CM) ............................................................................................................................17
3-15. SLIDE: Disk Agent (DA) ................................................................................................................................18
3-16. SLIDE: Media Agent (MA).............................................................................................................................19
3-17. SLIDE: Integration Agent (IA) ......................................................................................................................20
3-18. SLIDE: Installation Server (IS) .....................................................................................................................21
3-19. SLIDE: User Interface ..................................................................................................................................22
3-20. SLIDE: Granular Recovery Extension Agent ...............................................................................................23
3-21. SLIDE: Internal Database (IDB) ...................................................................................................................24
3-22. SLIDE: Typical Data Protector session .......................................................................................................25
3-23. SLIDE: Variables used in this training .........................................................................................................27
3-24. SLIDE: DP Tuning via global file ..................................................................................................................28
3-25. SLIDE: DP Tuning via omnirc file.................................................................................................................29
3-26. SLIDE: Support matrix ................................................................................................................................30
Module 4 — Installation 1
4–3. SLIDE: Installation Overview ........................................................................................................................ 2
4–4. SLIDE: Cell Manager Platform Support DP 9.0X ........................................................................................... 3
Module 9 — Backup 1
9–3. SLIDE: Backup, high level view ......................................................................................................................2
9–4. SLIDE: Backup specification execution..........................................................................................................4
9–5. SLIDE: Backup Specification Content ............................................................................................................5
9–6. SLIDE: Creating backup specification ............................................................................................................6
9–7. SLIDE: Backup context / Group view .............................................................................................................8
9–8. SLIDE: Creating backup specification ............................................................................................................9
9–9. SLIDE: Creating backup specification: Wizards .......................................................................................... 11
9-10. SLIDE: Creating backup specification: Sources ......................................................................................... 12
9-11. SLIDE: Creating backup specification: Destination.................................................................................... 14
9-12. SLIDE: Dynamic device allocation 1/2 ....................................................................................................... 16
9-13. SLIDE: Dynamic device allocation 2/2 ....................................................................................................... 17
9-14. SLIDE: Static device allocation .................................................................................................................. 20
9-15. SLIDE: Object mirroring 1/2 ....................................................................................................................... 21
9-16. SLIDE: Object mirroring 2/2 ....................................................................................................................... 23
9-17. SLIDE: Creating backup specification: Options .......................................................................................... 25
9-18. SLIDE: Creating backup specification: Filesystem options 1/2 ................................................................. 26
9-19. SLIDE: Creating backup specification: Filesystem options 2/2 ................................................................. 29
9-20. SLIDE: Scheduler Overview ........................................................................................................................ 31
9-21. SLIDE: Scheduler – Feature Comparison ................................................................................................... 33
9-22. SLIDE: Using the Legacy Scheduler 1/2 ..................................................................................................... 34
9-23. SLIDE: Using the Legacy Scheduler 2/2 ..................................................................................................... 36
9-24. SLIDE: Using the Advanced Scheduler 1/2 ................................................................................................ 37
9-25. SLIDE: Using the Advanced Scheduler 2/2 ................................................................................................ 39
9-26. SLIDE: Using an incremental backup chain ............................................................................................... 43
9-27. SLIDE: Protection of a backup chain .......................................................................................................... 45
9-28. SLIDE: Creating Backup Spec: Backup Object Summary ........................................................................... 46
9-29. SLIDE: Backup Object Summary – Object Properties 1/2 .......................................................................... 47
9-30. SLIDE: Backup Object Summary – Object Properties 2/2 .......................................................................... 49
9-31. SLIDE: Preview backup session ................................................................................................................. 50
9-32. SLIDE: Pre- and post- execution................................................................................................................ 51
9-33. SLIDE: Performing backups ....................................................................................................................... 52
9-34. SLIDE: Backup session message output .................................................................................................... 53
9-35. SLIDE: Resume/Restart failed Backup sessions ........................................................................................ 54
9-36. SLIDE: Missed job executions .................................................................................................................... 56
9-37. SLIDE: Reconnect broken connections ...................................................................................................... 57
Module 10 — Restore 1
10–3. SLIDE: What is Restore?...............................................................................................................................2
10–4. SLIDE: Restore methods ..............................................................................................................................3
10–5. SLIDE: Restore prerequisites .......................................................................................................................4
Module 15 — Deduplication 1
15–3. SLIDE: Deduplication technology ................................................................................................................ 2
15–4. SLIDE: How Deduplication works ................................................................................................................ 3
15–5. SLIDE: Supported Deduplication Configurations ........................................................................................ 5
15–6. SLIDE: Target side Deduplication ................................................................................................................ 6
15–7. SLIDE: Source side Deduplication................................................................................................................ 8
15–8. SLIDE: Server side Deduplication ................................................................................................................ 9
15–9. SLIDE: Multi side Deduplication.................................................................................................................10
15-10. SLIDE: Backup-to-Disk (B2D) devices ......................................................................................................11
15-11. SLIDE: Configure a Backup to Disk device 1/6..........................................................................................13
15-12. SLIDE: Configure a Backup to Disk device 2/6..........................................................................................14
15-13. SLIDE: Configure a Backup to Disk device 3/6..........................................................................................15
15-14. SLIDE: Configure a Backup to Disk device 4/6..........................................................................................16
15-15. SLIDE: Configure a Backup to Disk device 5/6..........................................................................................17
15-16. SLIDE: Configure a Backup to Disk device 6/6..........................................................................................18
15-17. SLIDE: Gateway Configuration for Source Side Deduplication.................................................................19
15-18. SLIDE: Gateway Configuration for Target Side Deduplication .................................................................20
15-19. SLIDE: Gateway Configuration for Server Side Deduplication .................................................................21
15-20. SLIDE: Creating a backup specification ....................................................................................................22
15-21. SLIDE: Running Backup with Data Deduplication .....................................................................................26
15-22. SLIDE: Creating an Object Replication specification ................................................................................27
Module 17 — Auditing 1
17–3. SLIDE: Auditing overview.............................................................................................................................2
17–4. SLIDE: Backup session auditing...................................................................................................................3
17–5. SLIDE: Enhanced Event logging ...................................................................................................................7
Module 19 — Patching 1
19–3. SLIDE: Data Protector Enhancements and Fixes ........................................................................................2
19–4. SLIDE: How to download Fixes and Enhancements ....................................................................................4
19–5. SLIDE: Download from Software Support Online (SSO) ..............................................................................5
19–6. SLIDE: GR Patch Installation ........................................................................................................................7
19–7. SLIDE: Step 1: Update the Installation Server (IS) ......................................................................................8
19–8. SLIDE: Step 2: Update the Client .................................................................................................................9
19–9. SLIDE: List installed Data Protector Patches ........................................................................................... 10
Module 20 — Troubleshooting 1
20–3. SLIDE: Log files ............................................................................................................................................2
20–4. SLIDE: Debug (Execution Tracing) ...............................................................................................................5
20–5. SLIDE: Debug Log Collector .........................................................................................................................8
20–6. SLIDE: Message Details ............................................................................................................................ 11
20–7. SLIDE: Network Connectivity .................................................................................................................... 12
20–8. SLIDE: Services ......................................................................................................................................... 14
20–9. SLIDE: Backup Devices.............................................................................................................................. 17
20-10. SLIDE: Backup and Restore ..................................................................................................................... 19
20-11. SLIDE: omnihealthcheck .......................................................................................................................... 22
20-12. SLIDE: HealthCheckConfig file ................................................................................................................. 23
20-13. SLIDE: omnitrig -run_checks ....................................................................................................................24
Contents
Module 1 — Introduction 1
1–3. SLIDE: Welcome ......................................................................................................................... 2
1–4. SLIDE: Overview......................................................................................................................... 3
1–5. SLIDE: Agenda ........................................................................................................................... 5
1–7. SLIDE: Additional Resources ..................................................................................................... 6
Module 1
Introduction
Welcome
Introduction
•Overview
•Agenda
•Logistics
•Additional Resources
3 © Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Welcome to the DP120 Data Protector 9.0X Essentials course.
This course is designed for system administrators who will be responsible for the installation,
configuration, and management of HP Data Protector software.
This course covers the HP Data Protector software product functionality of version 9.0X.
Throughout this course, the product name “HP Data Protector software” will be shortened to just
Data Protector or DP for simplicity.
Overview
Introduction
The following courses for HP Data Protector software are available:
• Basic Course: DP120 Essentials (4 days)
• Update Course: DP121 Update (2 days)
Overview
This 4-day course covers the main features and functions of Data Protector software. It explains
the product architecture and installation, and shows how to configure and run backups and
restores in Data Protector. In addition, it explains the Internal Database used by the product, the
special handling required in case of a disaster, and how to troubleshoot the product. At the end of
the course, the product and licensing structure is explained.
The 2-day course provides IT professionals with information about the new HP Data Protector
software 8.10 version, explains the required steps to install the software in your
environment, and shows how to update or migrate from a previous version. The course explains the new
Internal Database and the architecture changes that improve scalability and performance in Data
Protector, and covers the new features and functions that are part of this version.
The course offers hands-on labs on all the key features and changes to ensure a thorough
understanding of the course contents.
The 4-day HP Data Protector software DP200 Advanced Windows Integration course focuses
on HP Data Protector software integrations with the main MS Windows-based application and
database solutions, such as MS Exchange, MS SQL Server, and MS SharePoint. The course explains
how to configure the integrations, how to run backups and restores, and how to perform a Disaster
Recovery. It covers all the supported integration methods, such as using the online backup API of
the application or database, or utilizing the Volume Shadow Copy framework for backup and
restore. In addition, the Granular Recovery module for Exchange and SharePoint, used to perform
single item recovery, is explained in this 4-day course.
The 2-day HP Data Protector software DP220 VMware Integration course explains the functions
and features of the Data Protector VMware Integration agent, from single ESX server integrations
up to large VMware vCloud Director configurations. In addition, the function of the HP Data
Protector software VMware Granular Recovery agent, which allows the restore of single files and
directories from a VMware backup, is explained in detail.
Agenda
Introduction
The following chapters will be covered in the DP120 HP Data Protector software 9.0X Essentials
course:
Agenda
The DP120 HP Data Protector software 9.0X Essentials course is a 4-day course delivered as a
classroom or remote course, both with hands-on labs included.
Additional resources
Introduction
• Training
• Contact Autonomy Education at:
https://registration.autonomy.com/dp
Additional Resources
Hewlett Packard provides several additional resources designed to make you successful with our
products. These include:
• Product documentation
A soft copy (Acrobat PDF format) is included with your HP Data Protector software
distribution. Additional product manuals, support matrices and technical papers are
available for download on the HP Software Support Online portal (listed below) or released
as part of a Data Protector Documentation Patch.
This is the main entry point for general Data Protector information, with links to the latest
whitepapers and solution briefs.
Visit the link to see Data Protector as an integrated part of the advanced data protection
suite from Autonomy, an HP company. You will find links to download a Data Protector trial
version and videos that explain Data Protector's industry-leading features, together with
whitepapers and contact information.
For consulting requests contact your local sales representative or visit us on the internet
via: http://www.autonomy.com/work/services/professional-services
For support, patches and additional information about the product visit the
Software Support Online (SSO) portal on:
https://softwaresupport.hp.com/group/softwaresupport/
(Note: HP Passport registration and SAID Contract Identifier required)
• Training: https://registration.autonomy.com/dp
For Data Protector training courses, their schedules, and registration, visit our training
course registration page.
Contents
Module 2 — HP’s Adaptive Backup and Recovery Solutions 1
2–3. SLIDE: HP Adaptive Backup and Recovery concept ................................................................. 2
2–4. SLIDE: HP Adaptive Backup and Recovery Suite ...................................................................... 3
2–5. SLIDE: HP Data Protector ......................................................................................................... 4
2–6. SLIDE: Introducing Data Protector 9.0 ..................................................................................... 5
2–7. SLIDE: Non-Staged Granular Recovery for VMware................................................................. 6
2–8. SLIDE: Data Protector Federated Deduplication ...................................................................... 7
2–9. SLIDE: Data Protector Catalyst Over Fiber Channel ................................................................. 8
2-10. SLIDE: Data Protector Enhanced UI Preview ............................................................................ 9
2-11. SLIDE: HP Backup Navigator................................................................................................... 10
2-12. SLIDE: Data Protector Management Pack .............................................................................. 11
2-13. SLIDE: For more information .................................................................................................. 12
Module 2
HP’s Adaptive Backup and Recovery Solutions
Core Capabilities
I. Prioritization
Set policies based on data & application priority & business criticality
II. Prediction
Real-time operational analytics drive optimal resource utilization
III. Recommendation
Actionable suggestions to mitigate potential conflicts & ensure SLAs are met
IV. Automation
Self-learning system enables automated provisioning adjustments
(Slide graphic: Zero Downtime Backup, Instant Restores, Tiered Backup, Granular Control, Operational Analytics, and Integrations layered over Applications, OS, File Systems, Hypervisor, Networking, Compute, Storage, Archive, and Cloud)
HP Adaptive Backup and Recovery (ABR) is a new, innovative & game-changing technology and
vision for the backup market. It contains the following core capabilities:
I. Prioritization
Set policies based on data & application priority & business criticality
II. Prediction
Real-time operational analytics drive optimal resource utilization
III. Recommendation
Actionable suggestions to mitigate potential conflicts & ensure SLAs are met
IV. Automation
Self-learning system enables automated provisioning adjustments
ABR is based on a phased rollout approach, with Prioritization and Prediction already available.
HP Data Protector
(Slide graphic: HP Data Protector at the center of the technology stack, from Applications, OS, File Systems, Hypervisor, Networking, Compute, Storage, Archive, and Cloud, flanked by the HP Data Protector Management Pack for real-time monitoring and HP Backup Navigator for reporting and analyzing)
HP is revolutionizing how backup and recovery is addressed. Our backup and recovery platform, HP
Data Protector, delivers a framework that is deeply integrated into the technology stack and
workload aware. You benefit from a solution that provides zero-downtime, zero-performance-impact
backups to facilitate instant recovery and disaster recovery planning for core datacenters,
regional offices, and branch offices.
Regardless of your size, you will build your data protection strategy on enterprise-class software
that uses key storage technologies to address your capacity and security needs now and into the
future. Add to this platform the complementary products HP Backup Navigator, for reporting,
analysis, trending, and forecasting, and HP Data Protector Management Pack, for real-time
monitoring of the backup and recovery infrastructure, and you will be assured that you are
investing in a company whose vision and execution is based on evolving the backup and recovery
process to be just as agile as the datacenter and capable of shifting just as quickly as the
workloads you are tasked with protecting.
HP Data Protector
Meaning Based Data Protection
• Centralized management:
Perform global backup and recovery operations from a single
console that is extremely powerful, yet simple and easy to use,
install, and configure.
• Advanced backup to disk, tape, and cloud:
Get integrated protection across a continuum of storage
options.
• Zero-downtime backup and Instant Recovery:
Protect critical applications such as databases, messaging
platforms, and enterprise platforms through advanced
integration with storage hardware snapshots, and recover them
in minutes instead of hours.
• Granular recovery extension:
Recover single items faster, providing admin-centric recovery
capabilities to improve recovery-related SLAs.
HP Data Protector
Data Protector is the industry's first unified meaning-based data protection solution. It utilizes
an intelligent data management approach to seamlessly protect and harness data based on its
meaning, from edge to datacenter and across physical, virtual, and cloud environments.
• StoreOnce Integrations
Scale-out store management with Federated Catalyst for
B6200 and 6500, Support for high performance backups
with StoreOnce Catalyst over fiber channel
• Enhanced UI Preview
New HP One View look and feel user interface for select
modules: Advanced Scheduler, Global options & Missed job
executions
The most recent Data Protector version 9.0 introduces the following key features:
• StoreOnce Integrations
Scale-out store management with Federated Catalyst for B6200 and 6500, Support for high
performance backups with StoreOnce Catalyst over fiber channel
• Enhanced UI Preview
New HP One View look and feel user interface for select modules: Advanced Scheduler, Global
options & Missed job executions
These features are explained in more detail on the following pages.
Description
– Direct, non-staged single file recovery for VMware
– Based on the newly introduced SmartCache Device
– Restore driven by the VMware administrator
– Browse to select file for restore
Usage
– Single item recovery from image-based VMware backups
Benefit
– Accelerated single item recovery from large virtual machines
Description
The non-staged recovery feature in GRE for VMware introduces the presentation and recovery of
files from a VMware backup without restoring the backed up VMDK files into a staging area. As a
requirement, the backup has to be performed to the newly introduced Data Protector SmartCache
device.
Usage
During a non-staged VMware Granular Recovery, the appropriate backed up VMDK file is directly
mounted on the mount proxy host, enabling the user to browse the disk and select the file(s) to
recover. Hence, with non-staged recovery, it is not necessary to restore a disk (or a whole disk
chain) of any particular backup in order to recover files.
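The mount-and-browse idea, recovering individual files out of a larger backup image without first restoring the whole image to a staging area, can be sketched with Python's standard zipfile module. This is an analogy only: Data Protector mounts backed up VMDK files on the mount proxy host, not ZIP archives.

```python
import io
import zipfile

# Build a small "backup image" in memory: an archive holding several files.
image = io.BytesIO()
with zipfile.ZipFile(image, "w") as zf:
    zf.writestr("etc/hosts", "127.0.0.1 localhost\n")
    zf.writestr("var/log/app.log", "line1\nline2\n")

# Staged recovery would extract everything first; non-staged recovery opens
# the image in place, browses it, and reads only the requested member.
with zipfile.ZipFile(image) as zf:
    names = zf.namelist()              # "browse the disk"
    data = zf.read("var/log/app.log")  # recover a single item, nothing else
```

However large the image grows, only the selected member is read, which mirrors why non-staged GRE scales well for large virtual machines.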
Benefit
Using the non-staged recovery feature in GRE for VMware significantly accelerates single item
recovery from large virtual machines, because the backed up VMware data files no longer need to be
restored first.
StoreOnce Enhancements
Description
– Deduplication store can span multiple nodes (B6200 & 6500 only)
– Supported with Application Source and Backup Server deduplication
– Stores are teamed within the StoreOnce UI
Usage
– Balance capacity, performance and growth over multiple nodes
– Best practice is to group like data types together
Benefit
– Easier capacity and performance planning as well as management
(Slide graphic: Applications, Virtualization, File Servers, and Databases feeding a multi-node federated store)
Description
The latest generation of the StoreOnce Catalyst release supports the configuration of federated
deduplication devices for the StoreOnce models B6200 and 6500. These federated stores can span
up to 4 nodes and allow the configuration of significantly larger stores.
Federated stores need to be configured within the StoreOnce UI.
Usage
Federated stores automatically balance capacity, performance and growth over multiple nodes.
Within Data Protector, federated stores can be configured within B2D devices like normal stores.
For best performance it is recommended to configure separate stores for the separate data types
you are backing up, e.g. filesystem stores, Oracle stores, VMware stores.
Benefit
Federated deduplication simplifies capacity and performance planning as well as StoreOnce
management.
StoreOnce Enhancements
Description
– Catalyst protocol now available over Ethernet and Fibre Channel
– Application Source & Backup Server deduplication
– Limited to Windows & Linux media agents
Usage
– Catalyst based backup in the data center where Fibre Channel infrastructure is available
Benefit
– Improved backup & restore performance
– Leverage existing backup SAN infrastructure
Description
In addition to federated deduplication, the latest generation of the StoreOnce Catalyst release
introduces support for Ethernet and Fibre Channel interfaces. Application Source and Backup
Server Deduplication are supported in such configurations. With the current release the feature is
limited to Windows and Linux gateway systems only.
Usage
The feature allows Catalyst based backup in data centers where Fibre Channel infrastructure is
available. In Data Protector, enter the so-called Catalyst over Fibre Channel (COFC) address instead
of an IP or FQDN of the store during the B2D configuration. The store needs to be configured
within the StoreOnce UI, where the COFC address/alias is generated and displayed.
Benefit
StoreOnce Catalyst backups over Fibre Channel take advantage of an existing backup SAN
infrastructure and significantly improve backup & restore performance.
Updated for:
• Advanced scheduler
• Missed job executions
• Global options
Data Protector now supports the new HP OneView look and feel user interface for the following
modules:
• Advanced Scheduler
• Global options
• Missed job executions
The updated modules provide a new look and feel while offering the same functionality.
HP Backup Navigator
Comprehensive Backup/Recovery Reporting for HP Data Protector
Key Features
• Central monitor for multiple DP Cell Managers
• Simplified tracking of infrastructure changes across the complete Data Protector environment
• Performance, capacity trending & future planning and simplified error analysis
• Customizable reporting and dashboard
HP Backup Navigator
HP Data Protector Management Pack for Microsoft® System Center Operations Manager (SCOM)
delivers real-time intelligent monitoring, analysis, isolation, remediation, and reporting for Data
Protector environments. It continually monitors the health and state of each component in the
backup and recovery infrastructure to provide you with actionable insight that can increase the
effectiveness of your data protection services.
HP Data Protector Management Pack for Microsoft® SCOM adds Data Protector specific monitors,
rules, views, tasks, knowledge and reports into an existing SCOM installation.
Monitoring the health and performance of the backup and recovery infrastructure is the first step in
identifying what has happened. The next logical step is to identify how one issue may relate to the
next. The diagnostic and actionable insight provided by the HP Data Protector Management Pack
allows for the separation of cause and effect while uncovering often-unrelated dependencies that
affect the infrastructure.
Using graphical cues and visualization interfaces, HP Data Protector Management Pack delivers
actionable insight by providing the tasks that can be used to address the issues uncovered in the
diagnostic analysis. In real time, IT staff can quickly isolate issues and execute solutions that make
use of HP Data Protector best practices.
www.adaptive-backup.com
Contents
Module 3 — HP Data Protector Architecture 1
3–3. SLIDE: HP Data Protector .......................................................................................................... 2
3–4. SLIDE: HP Data Protector history .............................................................................................. 4
3–5. SLIDE: Protected environment .................................................................................................. 5
3–6. SLIDE: Backup options............................................................................................................... 6
3–7. SLIDE: Direct attached backup .................................................................................................. 7
3–8. SLIDE: Network backup ............................................................................................................. 8
3–9. SLIDE: SAN attached backup ..................................................................................................... 9
3-10. SLIDE: Array based replica backup (ZDB) ............................................................................... 10
3-11. SLIDE: Backup and replication methods ................................................................................ 12
3-12. SLIDE: Cell concept ................................................................................................................. 13
3-13. SLIDE: Client server architecture............................................................................................ 16
3-14. SLIDE: Cell Manager (CM) ........................................................................................................ 17
3-15. SLIDE: Disk Agent (DA)............................................................................................................ 18
3-16. SLIDE: Media Agent (MA) ........................................................................................................ 19
3-17. SLIDE: Integration Agent (IA) .................................................................................................. 20
3-18. SLIDE: Installation Server (IS)................................................................................................. 21
3-19. SLIDE: User Interface .............................................................................................................. 22
3-20. SLIDE: Granular Recovery Extension Agent ........................................................................... 23
3-21. SLIDE: Internal Database (IDB) ............................................................................................... 24
3-22. SLIDE: Typical Data Protector session ................................................................................... 25
3-23. SLIDE: Variables used in this training..................................................................................... 27
3-24. SLIDE: DP Tuning via global file ............................................................................................. 28
3-25. SLIDE: DP Tuning via omnirc file ............................................................................................ 29
3-26. SLIDE: Support matrix ............................................................................................................ 30
Module 3
Architecture
What is it?
• Software that provides automated data protection for
businesses with 24x7 availability needs
HP Data Protector
HP Data Protector is a backup solution that provides reliable data protection and high accessibility
for your fast growing business data. Data Protector offers comprehensive backup and restore
functionality specifically tailored for enterprise-wide and distributed environments. Major features include:
• Supporting clusters to ensure fail-safe operation and support backup of virtual nodes.
• Enabling the Data Protector Cell Manager itself to run on a cluster.
• Supporting all popular online database Application Programming Interfaces.
• Providing best in class support for HP Storage based high-availability solutions like the HP
StorageWorks P6000 EVA Disk Array Family, HP StorageWorks P9000 XP Disk Array Family,
or HP StorageWorks P10000 3PAR array
• Providing various disaster recovery methods for Windows and UNIX platforms.
• Offering methods of duplicating backed up data during and after the backup to improve
fault tolerance of backups or for redundancy purposes.
For detailed documentation describing the features of Data Protector, including integrations, as
well as the latest platform and integration support information, consult the HP Data Protector
home page at: http://www.hp.com/go/dataprotector.
The first version of Data Protector was released in 1994 as an HP-UX-only product and was called
OmniBack II at the time. The original name was taken from a backup tool that was originally
developed by Apollo Computers, a company that HP took over in 1989. After several releases with a
fast growing installed base, the decision was made to change the name to Data Protector in 2002.
For backwards compatibility reasons, the existing directory structure and binary names were kept,
so today all Data Protector installation directories and binary names still contain references to
the original product name.
Protected environment
(Diagram: regional office, branch office, and data center connected via LAN/WAN)
Data Protector is able to provide backup services from small and medium, up to enterprise-sized
installations. It is able to manage challenging installations such as highly available applications and
database setups, as well as virtualized environments. In addition, it supports complex multi-site
setups with regional and branch offices, connected over LAN/WAN to business critical systems and
applications.
Based on geographical, network connectivity or security reasons the client systems might be
managed by a single or by multiple Data Protector cells. Even if you choose to have multiple cells,
Data Protector allows you to easily configure common policies and concepts among the cells and
share the available backup infrastructure with all configured cells.
Backup options
Data Protector offers several ways of performing a backup and running a restore. Advanced backup
and restore functionality offered by Data Protector are Zero Downtime Backup (ZDB) and Instant
Recovery (IR).
Direct attached backup
(Diagram: Cell Manager on the LAN; a host with a directly attached tape backup device)
The concept of direct attached backup means that only one host is included in the backup process.
The data is read by the Data Protector Disk Agent and written to the backup device by the Data
Protector Media Agent. In this scenario both agents (Disk and Media) are running on the same
system; no other host is included in the backup process.
A direct attached backup device is a device which is controlled by the Media Agent installed on the
same host.
Network backup
• Backup devices connected to a dedicated backup host
• Application host and Backup host are part of the backup process
• Data transferred via LAN from application disk to backup device
(Diagram: Cell Manager on the LAN; backup host with an attached tape library; application host
with a disk array, direct or SAN attached)
Network backup
In contrast to direct attached backup, where both agents (Disk and Media Agent) run on the same
system, with a network backup the network (LAN) is included in the backup process.
The Disk Agent installed on one system reads the backup data and sends it via LAN to the Media
Agent installed on the system to which the backup device is connected.
SAN attached backup
(Diagram: Cell Manager and clients on the LAN; multiple hosts sharing a tape library over the SAN)
Data Protector supports the Storage Area Network (SAN) concept by enabling multiple systems to
share backup devices in the SAN environment. The same physical device can be accessed from
multiple systems. Thus, any SAN connected system can perform a local backup to these devices
without using any other system.
Because data is transferred over the SAN, backups do not need any bandwidth on the conventional
LAN. This type of backup is sometimes referred to as a “LAN-free” backup.
Array based replica backup (ZDB)
(Diagram: Cell Manager on the LAN; application and backup hosts connected to a disk array over
the SAN; optional tape or disk backup)
Zero Downtime Backup (ZDB) is a backup approach in which Disk Array based mirror and snapshot
techniques are used to minimize the impact of backup operations on an application system. A
replica of the data to be backed up is created first, and all subsequent backup operations are
performed on the replicated data rather than the original data.
As the backup occurs in the background while the application remains online and available for use,
the impact on your environment during a backup is minimal. The recovery window is dramatically
reduced by using the Instant Recovery (IR) functionality, which enables recovery of vast amounts of
data in minutes rather than hours. This makes the ZDB and IR capabilities suitable for high-availability
systems and mission-critical applications.
The following are the basic principles behind ZDB and IR:
• Create, at high speed, a copy of the data to be backed up and then perform backup
operations on the copy, rather than on the original data.
• Restore a backup copy of data, held on the array, to its original location on the array
to facilitate high-speed recovery.
Data Protector offers three ZDB variants:
• ZDB to tape
• ZDB to disk
• ZDB to disk+tape
The main difference between a traditional tape backup and ZDB is that during a traditional tape
backup, application operation is affected until the streaming of data to the backup medium is
complete. Using ZDB, application operation is only affected during the time it takes to create a
replica. As this process is almost instantaneous, the impact on the application is considerably
reduced. After the replica is created, the application is returned to normal operation, and backup to
tape is done without impacting the application. Regardless of the fact that the backup is physically
performed from a backup host, the backup is handled in the IDB as if it was done directly on the
application host.
ZDB to tape
The basic concept of ZDB to tape is the following: create, at a high speed, a copy of data (a replica)
from the source volumes at a specific point in time, and use this replica for a backup to a standard
backup medium, typically to tape, but of course a disk backup is also possible. The replica is
presented to a backup host to minimize the backup impact on the application host. After the
backup, the created replica may be overwritten. Restore is done from tape and does not differ from
a normal restore.
ZDB to disk
With ZDB to disk a replica is created and kept on the array. The replica is temporarily presented to
the backup host to validate the consistency of the created volume, but no tape backup is performed.
A high speed recovery of the backed up data can be performed using Instant Recovery.
Note: ZDB to disk method only creates the replica. No backup to a disk or tape device is
performed.
ZDB to disk+tape
From the functionality point of view, ZDB to disk+tape is ZDB to disk with the added capability to
stream data from the replica to tape or disk medium, after replication.
Restore of the backed up data can be performed either by utilizing Instant Recovery or by running a
traditional tape restore.
Note: The ZDB to disk+tape method internally consists of two Data Protector sessions, the
replica creation part and the optional backup to a disk or tape device, while externally only
one session ID is used. Both normal restore and IR-based restore are possible using the same
session ID.
Most companies today are challenged to reduce or eliminate downtime; this includes a requirement
to reduce or eliminate the backup windows.
Many companies are opting for Disk-to-Disk (D2D) backup as a way to reduce the time spent
executing backup jobs and in addition to have better performance when restoring single files. While
Disk-to-Tape (D2T) is still necessary to meet long term archiving and compliance requirements,
D2D is fast becoming the primary backup method. To meet these challenges, Data Protector
provides an ever increasing set of possibilities and combinations to meet the data security and
service levels required. Shown above are the backup possibilities offered with Data Protector.
Cell concept
• Backup domain
• Logical organization of systems
• Can match your organization or geographical region
• Heterogeneous system support
• Independent, but centrally managed
(Diagram: a cell consisting of the Cell Manager with IDB and GUI, client systems, backup devices,
and backup specifications; multiple cells, Cell1 and Cell2, managed by a Manager of Managers (MoM))
Cell concept
The Data Protector cell is a network environment that has a Cell Manager, client systems, and
backup devices. The Cell Manager is the central control point that manages all backup and restore
operations in the cell and runs the Internal Database (IDB). After the installation of Data Protector
agent software on client systems these systems become Data Protector client systems that are
part of the cell and their data is backed up to media in configured backup devices.
The Data Protector IDB keeps track of the files you back up so that you can browse and easily
recover single files or the entire system. Data Protector facilitates backup and restore jobs. You can
do an immediate (or interactive) backup using the Data Protector Graphical User Interface (GUI) or
using the Command Line. You can also schedule your backups to run unattended.
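As an illustration of the command-line option mentioned above, the sketch below merely assembles an `omnib` call that would start an interactive backup of a backup specification. The specification name `daily_fs` is a hypothetical example, and the exact option set should be verified against the CLI reference of your Data Protector version.

```python
# Sketch only: assemble the argv for starting an interactive backup of a
# backup specification ("datalist") with the Data Protector omnib CLI.
# "daily_fs" is a hypothetical specification name, not from this guide.

def build_omnib_command(datalist, mode="full"):
    """Return the argv list for omnib: -datalist selects the backup
    specification, -mode the backup level (e.g. full or incr)."""
    return ["omnib", "-datalist", datalist, "-mode", mode]

cmd = build_omnib_command("daily_fs")
print(" ".join(cmd))  # omnib -datalist daily_fs -mode full
```

In practice the list would be handed to a process launcher (for example `subprocess.run(cmd)`) on a host where the Data Protector User Interface component is installed.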
The Data Protector architecture breaks down the size and complexity of the enterprise network by
allowing systems to be configured into Data Protector cells. This cell is a loosely coupled collection
of systems, organized to allow for central management of backup processes.
Important: A client system can only belong to one cell at a time.
Cells are generally independent parts of the enterprise network. They are administered and
operate independently of each other. Data Protector has the capability to monitor and administer
all the cells from a central administration point utilizing the Cell Console or Enterprise Console or
the Manager of Managers console.
Note: If client systems are configured in different time zones, some of the Data Protector
session messages might be confusing, as the local client time is shown. In DP
configuration tasks the Cell Manager’s time zone and time settings are used.
Manager of Managers—MoM
Data Protector can be managed in larger environments by implementing the Manager of Managers
(MoM) layer. An existing Data Protector Cell Manager can be configured as the Manager of
Managers (MoM) which allows remote administration and monitoring of many cells from a single
GUI. A centralized media management database (CMMDB), cross-cell device sharing as well as
central license management may also be configured with MoM.
To efficiently structure and manage large-scale environments hierarchically, you can combine
single DP cells into a Manager-of-Managers (MoM) environment. A MoM can manage up to 50 Data
Protector cells. An environment structured in such a way allows you to manage up to 50,000
clients from a single MoM setup.
Such a setup allows you to manage an unlimited number of Data Protector clients from one central
location while distributing administrative and managerial rights to different Data Protector users
and user groups.
The maximum number of clients that can still efficiently be managed within one Data Protector cell
depends on the following factors:
• Data Protector Internal Database (IDB) load: filesystem log level, types of objects backed up
(disk image, application database, other object types), zero downtime backup sessions,
NDMP backup sessions, and so on.
• Network traffic and system load: local versus network backup, level of concurrent backup
and other activities, network traffic and system load unrelated to Data Protector.
• Maintenance tasks: user management, configuration of backup specifications, upgrading,
patching.
Note: Check the HP Data Protector Product Announcements, Software Notes, and
References, where all limitations are documented.
Client–server architecture
(Diagram: DP GUI and clients communicating with the Cell Manager and its IDB over
communication port 5555)
The basic Data Protector implementation utilizes only two architecture layers, the Cell Manager,
and the DP Client layers. The User Interface is installed on the Cell Manager but it may be installed
on clients as well.
• Disk Agent – responsible for read/write actions from disk drives for backup and restore
• Media Agent – responsible for read/write actions to backup media (tape or disk drives)
Backed up data is sent directly from the Disk Agent to the Media Agent, using a SAN or LAN
connection. The basis of the client/server model is that the Data Protector software consists of
client modules and a server module. These modules can all be installed on a single system (a single
client cell) or be distributed across many systems (up to 5000 in one Data Protector cell).
Communication between modules is accomplished via TCP/IP sockets, initiated on port 5555.
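Because all inter-module communication is initiated on TCP port 5555, a quick connectivity probe from any host can rule out basic network problems before digging into Data Protector itself. The helper below is a generic TCP check, a sketch rather than a Data Protector tool; it verifies only that the port accepts connections, not that the Data Protector services are healthy.

```python
# Sketch: test plain TCP reachability of a Data Protector client or
# Cell Manager on the default inet port 5555. This checks network
# connectivity only, not Data Protector functionality.
import socket

def dp_port_open(host, port=5555, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `dp_port_open("cellmgr.example.com")` (hypothetical hostname) returning False often points to a firewall blocking port 5555, a common cause of failed client imports.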
• Cell Manager contains:
– Cell Services (daemons): CRS, MMD, KMS, HPDP-IDB, HPDP-IDB-CP, HPDP-AS
– Session Managers
– User Interface
– Internal Database (IDB)
– Scheduler data and configuration files
– Agents and User Interface (CLI/GUI)
– Installation Server (optional)
– Disk, Media and Integration Agents
The Cell Manager is the main control center for the cell and contains the Internal Database (IDB). It
runs the core Data Protector software and the Session Manager, which starts and stops backup and
restore sessions and writes session information to the IDB.
Any system within a chosen cell environment can be set up as a Data Protector Client. Essentially, a
client is a system that can be backed up, a system connected to a backup device with which the
backup data can be saved, or both. The role of the client depends on whether it has a Disk Agent or
a Media Agent installed.
A client that will be backed up using Data Protector must have a Disk Agent installed. Data
Protector controls the access to the disk. The Disk Agent lets you back up information from, or
restore information to, the client system.
A client system with connected backup devices must have a Media Agent installed. This software
controls the access to the backup device. A Media Agent controls reading from and writing to, a
backup device’s media.
(Diagram: source data is read by the Disk Agent (DA) and passed to the Media Agent (MA))
The Disk Agent is a component needed on a client to back it up and restore it. The Disk Agent
controls reading from and writing to a disk. During a backup session, the Disk Agent reads data from a disk
and sends it to the Media Agent, which then moves it to the device. During a restore session the
Disk Agent receives data from the Media Agent and writes it to the disk. During an object
verification session the Disk Agent receives data from the Media Agent and performs the
verification process, but no data is written to disk. The Disk Agent component consists of
specialized processes that are started on demand by the respective Backup or Restore Manager
process (Session Manager).
Refer to the Platform and Integration Support Matrix for a list of currently supported platforms.
The Media Agent is a process that controls reading from and writing to a backup device, which
reads from or writes to a medium (typically a tape). During a backup session, a Media Agent receives data
from the Disk Agent and sends it to the backup device for writing it to the medium. During a restore
session, a Media Agent locates data on the backup medium and sends it to the Disk Agent for
processing. A Media Agent also manages the robotics control of a library.
A Media Agent component must be installed on the client system to which the backup device is
physically attached (direct attached or SAN attached). The Media Agent component consists of
specialized processes that are started on demand by the respective Backup, Restore, Copy,
Consolidation or Media Management Session Managers.
(Diagram: the Integration Agent (IA) transfers application database (DB) data to the Media Agent (MA))
Data Protector provides a set of integration components that enable data to be exchanged
between the most popular applications (databases) and Data Protector. Data Protector accesses
the application vendor's API in order to perform online backups and restores. The ability to perform
online backups is a highly desirable feature in mission-critical, high-availability environments. Data
Protector also provides integrations with many other applications that assist in areas such as high
availability, system control, and monitoring.
Application Integrations
• Oracle
• SAP ERP
• IBM Informix, DB2 and Lotus Domino
• Microsoft SQL, Exchange, SharePoint, VSS, DPM
• VMware
• Citrix Xen Server … and many more
Note: Refer to the Platform and Integration Support Matrix for a list of currently
supported platforms.
(Diagram: Cell Manager with IDB, served by a UNIX-based Installation Server and a Windows-based
Installation Server)
Data Protector Installation Server is a computer system that holds a repository of the Data
Protector software packages for a specific architecture. The Installation Server is used for remote
installation of Data Protector clients. In mixed environments at least two Installation Servers are
needed: one for UNIX systems and one for Windows systems. The Installation Server must be
registered as such with a Cell Manager.
Note: The Installation Server is not restricted to a single cell, it can be imported into
several cells, but it is limited to distribution services for its native platform
(Windows only or UNIX only)
When the Cell Manager system pushes agent software to a client system, the particular Installation
Server from which the software is to be obtained is specified.
Data Protector patches are applied to the Installation Server(s) and then distributed to clients
during an update/push request from the Cell Manager.
Note: Refer to the Platform and Integration Support Matrix for a list of currently
supported platforms.
User Interface
• Graphical User Interfaces (GUI)
• Also known as MFC GUI
(Diagram: the GUI connects over the network to the Cell Manager and its IDB)
User Interface
Data Protector provides easy access to all configuration and administration tasks using the Data
Protector GUI on Windows and UNIX platforms. You can use the original Data Protector GUI (on
Windows) or the Data Protector Java GUI (on Windows and UNIX). Both user interfaces can run
simultaneously on the same computer. Additionally, a command-line interface is available on
Windows and UNIX platforms.
The Data Protector architecture allows you to flexibly install and use the Data Protector user
interface. The user interface does not have to be used from the Cell Manager system; you can
install it on any desktop system and it allows you to transparently manage Data Protector cells
with Cell Managers on all supported platforms.
Data Protector provides a rich and powerful command line interface. The CLI can be used in
situations where a GUI is not available, for example, when dialing in to a system for remote
support, or when writing shell scripts or batch files. Most of the Data Protector commands will
reside in the bin directory below the product home.
In addition to the regular Database and Application Integration Agents Data Protector offers
Granular Recovery Extension Agents for:
• MS Exchange
• MS SharePoint and
• VMware
While regular Database and Application agents allow only restore and recovery of whole Exchange
databases, SharePoint content databases, and VMware virtual machines, the Granular Recovery
Extension Agents allow single item recovery for these applications, such as recovery of a single
email from an Exchange user mailbox, a single picture or Word document from a SharePoint web
page, or a single file from a VMDK image backup.
The Granular Recovery Extension Agent is fully integrated into the named application and does not
require the use of the Data Protector GUI. The application administrator is able to trigger the
application database restore into a cache area, and the end user is able to extract the missing item
from that cache area back into the running application.
Note: The Granular Recovery Extension requires a separate license for each database system
that requires a single item recovery.
• Backup management
Information about performed backup, restore, copy and consolidation sessions
• Media management
Stores information about all media used in backup, copy and consolidation sessions, manages
protection of stored data and tracks the location of backed up data on media for fast restore,
as well as the location of media in tape libraries
• Encryption/decryption management
In the case of encrypted backup operations, encryption keys are stored in the IDB and retrieved
in case of a restore
The Data Protector Internal Database (IDB) is an embedded database, located on the Cell Manager,
which keeps information regarding what data is backed up; on which media it resides; the result of
backup, restore, copy, and media management sessions; and what devices and libraries are
configured.
Note: For more detailed information refer to the IDB chapter of this training.
[Slide diagram: Cell Manager process interaction — the User Interface sends a request to the CRS,
the CRS starts a BSM session, and the HPDP-IDB-CP process reads/writes the IDB catalog.]
There are several processes that execute while backup or restore jobs are performed. The slide
illustrates the location of the processes that execute on the various systems, as well as their roles.
Note: Data from the backup flows directly between Disk and Media agent, and does not
flow through the Cell Manager.
Remote Processes
Data Protector is a distributed application and relies heavily on multiple cooperating local and
remote processes. Its Inter-Process Communication (IPC) mechanisms are designed and
implemented with great care to maximize system response time and data throughput.
Data Protector concentrates on simple bi-directional messaging for both data and message
transfer.
As both network capacity and backup device speed are expected to increase significantly during the
lifetime of the Data Protector product, all IPC channels are carefully designed to avoid
communication bottlenecks. Data Protector uses the following fast and reliable IPC mechanisms,
available on all major platforms today:
Within this training, the following variables are used to refer to important Data Protector directories:
$DP_VAR ... Data Protector IDB, log and temporary files directory
• Global Option Tuning possible through DP GUI: Internal Database Global Options
Global options affect the entire Data Protector cell and cover various aspects of Data Protector,
such as timeouts and limits. All global options are described in the global options file, which you
can edit directly through the DP GUI to customize Data Protector. Change to the Internal Database
context, expand Internal Database, and select Global Options to open the Global Option Tuning
page. To tune an option, identify the entry you want to change and modify its Value.
Note: The global options file can also be modified directly with a text editor, but in that case no
syntax check is performed. In case of a typo or a value outside the supported range, DP falls
back to the documented default setting without updating the global file.
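As an illustration, entries in the global options file use a simple Option=Value form. MaxBSessions is a real Data Protector global option that limits concurrent backup sessions, but the value shown here is only an example; check the descriptions in the file itself before changing anything:

```
# Uncommented entry in the global options file (comment lines start with '#').
# MaxBSessions limits how many backup sessions may run concurrently.
MaxBSessions=10
```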
WINDOWS: DP_HOME\omnirc
UNIX: DP_HOME/.omnirc
Note: omnirc does not exist by default. The template needs to be copied/renamed for activation:
Unix: .omnirc.TMPL -> .omnirc
Windows: omnirc.tmpl -> omnirc
The behavior of each Data Protector client can be modified through omnirc, a file that needs to be
stored under the locations listed above.
By default only a template file exists (omnirc.tmpl) that needs to be copied to omnirc or .omnirc,
respectively. To override the default Data Protector behavior on a specific client, uncomment the
appropriate option in the omnirc file on that client and set it to the new value. A short description
of each available option in the omnirc file explains the purpose of the variable and the supported
values. Changes in omnirc require no restart of the Data Protector services; just re-run the
operation that was supposed to be changed by that parameter.
Besides tuning, the file is often used for troubleshooting or for activating undocumented
product features that are introduced by special test modules. Follow the instructions from Data
Protector Support on how to activate these functions via omnirc.
Note: There is neither DP GUI nor DP CLI support for central omnirc tuning.
The omnirc file needs to be edited locally on each DP client.
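The copy-and-edit procedure above can be sketched as a short shell session. The sketch works in a scratch directory so it can be tried safely; on a real UNIX client, DP_HOME would be the install root (e.g. /opt/omni), and OB2_SSH_ENABLED is the omnirc variable quoted later in this chapter:

```shell
# Demonstrate omnirc activation in a scratch directory (a real client
# would use its DP install root instead of mktemp).
DP_HOME=$(mktemp -d)

# Fake the shipped template; on a real client this file already exists.
printf '# OB2_SSH_ENABLED=0|1\n# Default: 0\n' > "$DP_HOME/.omnirc.TMPL"

# Step 1: copy the template once -- omnirc does not exist by default.
[ -f "$DP_HOME/.omnirc" ] || cp "$DP_HOME/.omnirc.TMPL" "$DP_HOME/.omnirc"

# Step 2: set the desired variable (append it if still commented out).
grep -q '^OB2_SSH_ENABLED=' "$DP_HOME/.omnirc" || \
    echo 'OB2_SSH_ENABLED=1' >> "$DP_HOME/.omnirc"

# No service restart is needed; just re-run the affected operation.
grep '^OB2_SSH_ENABLED' "$DP_HOME/.omnirc"   # -> OB2_SSH_ENABLED=1
```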
Support Matrices
Important:
Always check the latest support matrix online on SSO Portal
https://softwaresupport.hp.com/group/softwaresupport/support-matrices#DP
Support matrix
Data Protector supports all of today's major operating systems, databases, and applications. A
detailed listing of the supported platforms and integrations can be obtained from the Platform and
Integration Support Matrix. A similar support matrix exists for all the backup devices that are
supported by Data Protector Media Agents and the disk arrays that are supported for Zero
Downtime Backup and Instant Recovery operations.
Support matrices are included in the product documentation, can be push-installed as part of the
Data Protector documentation module, and are stored under:
WINDOWS : DP_HOME\docs\support_matrices
UNIX : DP_HOME/doc/C/support_matrices
Note: Always check the latest support matrices online, as they are updated frequently.
The updated versions of the support matrices are available on the SSO portal:
https://softwaresupport.hp.com/group/softwaresupport/support-matrices#DP
Contents
Module 4 — HP Data Protector Installation 1
4–3. SLIDE: Installation Overview .................................................................................................... 2
4–4. SLIDE: Cell Manager Platform Support DP 9.0X....................................................................... 3
4–5. SLIDE: Localization ................................................................................................................... 4
4–6. SLIDE: DP 9.00 DVD Packaging ................................................................................................. 5
4–7. SLIDE: Product Documentation ................................................................................................ 7
4–8. SLIDE: Overall Installation Sequence ....................................................................................... 8
4–9. SLIDE: Plan the layout of the cell ........................................................................................... 10
4–10. SLIDE: Check hardware and software requirements ............................................................. 13
4–11. SLIDE: Preparation on Windows ............................................................................................ 15
4–12. SLIDE: Installation Wizard on Windows ................................................................................. 16
4–13. SLIDE: CM Installation on Windows cont. ............................................................................. 17
4–14. SLIDE: CM Installation on Windows cont. ............................................................................. 18
4–15. SLIDE: CM Installation on Windows cont............................................................................... 19
4–16. SLIDE: Preparation on UNIX ................................................................................................... 20
4–17. SLIDE: DP 9.00 Cell Manager Installation on UNIX Systems ................................................. 23
4–18. SLIDE: DP 9.00 Cell Manager processes ................................................................................ 25
4–19. SLIDE: Client installation overview ........................................................................................ 26
4–20. SLIDE: Remote push installation ........................................................................................... 27
4–21. SLIDE: Remote push installation cont. ................................................................................. 28
4–22. SLIDE: Remote push installation cont. ................................................................................. 30
4–23. SLIDE: Windows Firewall push installation ........................................................................... 31
4–24. SLIDE: Push installation with secure shell............................................................................. 35
4–25. SLIDE: Local client installation - Windows ............................................................................ 36
4–26. SLIDE: Local Client Installation - Unix .................................................................................... 37
4–27. SLIDE: Export of clients .......................................................................................................... 38
4–28. SLIDE: Import of clients ......................................................................................................... 39
4–29. SLIDE: Adding components to clients .................................................................................... 40
Module 4
Installation
• Push Installation
• Local Installation
Installation Overview
The overall Data Protector rollout requires careful planning, which is also discussed in this
chapter.
Before starting a Cell Manager installation, ensure that the planned operating system is fully
supported as a Cell Manager platform. The slide above lists the support status of the Data Protector
9.00 Media Release (MR) version without any patches or service packs.
For updates refer to the latest Platform and Integration Support Matrix, available on the SSO portal
at: https://softwaresupport.hp.com/group/softwaresupport/support-matrices#DP
Localization
Localized content:
• Online Help
• Subset of the Product Documentation:
Admin Guide, Getting Started Guide, Concepts Guide, Installation Guide,
Troubleshooting Guide, Disaster Recovery Guide, Product Announcements
Localization
In general, Data Protector is orderable in the English version only. Selected localized versions of
Data Protector exist.
Data Protector 9.0 requires the installation of Patch Bundle 9.02 to enable the following
localization versions:
• French localization
• Japanese localization
• Simplified Chinese localization
Note: Only a subset of the Data Protector product documentation is available in localized versions.
All other product manuals are available in English only.
Data Protector 9.00 can be ordered as an electronic delivery or on physical media. The physical media
set contains three DVDs, packaged as in previous major releases. Three DVDs are
required to cover the various operating systems and processor architectures that DP supports.
All three DVDs contain the DP Starter Packs, the Cell Manager and Installation Server for the
respective platform, all manuals in PDF format (in the DOCS directory), and the HP Software
Integration Packages.
DVD 1 also includes the agents for OpenVMS clients, while DVD 2 and DVD 3 each include the
agents for HP-UX, Solaris, and Linux clients.
In detail the contents of the DVDs are:
DVD 1
- Cell Manager and Installation Server for Windows
- The complete set of English guides in the electronic PDF format
- Windows IA-64 clients
- HP OpenVMS clients (Alpha and IA-64 systems)
- Product information
- HP software integration packages
DVD 2
- Cell Manager, Installation Server, and clients for HP-UX
- Clients for other UNIX systems
- Clients for Mac OS X systems
- The complete set of English guides in the electronic PDF format
- HP software integration packages
DVD 3
- Cell Manager, Installation Server, and clients for Linux systems
- Clients for other UNIX systems
- Clients for Mac OS X systems
- The complete set of English guides in the electronic PDF format
- HP software integration packages
All Data Protector installation files for Microsoft Windows systems are digitally signed by HP. The
Readme.txt file contains the instructions to verify the HP signature.
In addition, Data Protector can be downloaded for evaluation purposes with the 60-day Instant-On
license from www.hp.com/go/dataprotector under “Trials and Demos”.
Documentation
• Bundled with the Data Protector Cell Manager installation and part of the push-able DP
Documentation modules
Note: Updated versions are available as a direct download on the HP SSO Portal or as part of a DP
Documentation patch
Product Documentation
A complete set of the Data Protector product documentation is included with each software
distribution (on DVD or by electronic download) and is automatically installed as a default
component during each Cell Manager installation. The location of the documentation is:
DP_HOME\docs
The product documentation contains the installation and integration guides, the concepts guide, and
the product announcements. In addition it contains a set of the support matrices and the CLI guide.
The Data Protector product documentation and online Help can be push-installed to any remote
Data Protector client system in the same way as any other Data Protector component. Just select
the English Documentation component or a localized documentation component for client
installation.
Updated product documentation, whitepapers, and the latest versions of the support matrices can
be downloaded from the SSO portal (HP Passport login and SAID support identifier required):
https://softwaresupport.hp.com/group/softwaresupport/support-matrices#DP
Client Installation
Maintenance
This slide provides an overview of the overall Data Protector installation sequence.
Install Clients
• After you have installed the Cell Manager and Installation Servers, the Data Protector client
systems may be installed remotely, using the DP GUI to push the respective agent
components from the depot on the Installation Server through the network to the clients,
or the clients can be installed manually from the local DVD media.
Clients that are manually installed from the media must be imported into the cell after their
installation completed.
• An Instant-On license is automatically created when the product is first installed. This gives
you usage for 60 days, during which time you must apply for and install a permanent
license.
Maintenance
• Cell Maintenance is an ongoing process. After completion of the initial installation new
client systems might have to be added to the cell in order to back them up. New Data
Protector components need to be pushed to existing clients to cover newly installed
applications and databases.
• In case a new Data Protector version is available with features that are required in your
environment, an upgrade to the newer version needs to be performed. In case of a Cell
Manager platform change (e.g. 32-bit to 64-bit, Windows to Linux), a Cell Manager migration
needs to be performed. If both steps have to be performed, run the migration first, then
the upgrade. More details about this process can be found in the Upgrade chapter.
• Ongoing maintenance also includes Data Protector patch installation sessions, which often
require downtime for the cell (depending on the patch module) and should be carefully
planned. Data Protector patches can be downloaded from the SSO Portal under:
https://softwaresupport.hp.com/group/softwaresupport/support-matrices#DP
(HP Passport login and SAID Support Identifier required)
This slide lists the questions that must be answered during the installation planning.
Which platforms will be backed up?/Which applications will be backed up using a Data Protector
integration agent?
Check the HP Data Protector Platform and Integration Support Matrix to see whether DP supports
these systems, applications and databases. In addition refer to the Device Support Matrix to ensure
that your planned backup devices (Tape, Disk) are supported with DP.
Which system(s) will be the Installation Server(s)?/Are there several Installation Servers (IS)
needed?
Any system that is supported as a Cell Manager can also be used as Installation Server. About 2 GB
is needed for the depot directory that stores the DP client agent components.
Hardware Requirements
Windows – Linux – HP-UX – Cluster (64bit)
The table shows the requirements for Cell Manager, Installation Server, Windows GUI and DP
agents.
Product Download
Data Protector can also be downloaded from the web. It is available at:
www.hp.com/go/dataprotector
https://softwaresupport.hp.com/group/softwaresupport/support-matrices#DP
Depending on your environment there might be additional pre-requisites that must be satisfied
before installation. See the HP Data Protector Installation and licensing guide for more details and
configuration steps.
Preparation on Windows
• Log on as (Domain) Administrator
• Make sure the Cell Manager (and Installation Server) have static IP addresses
• Verify that port number 5555 is free on all systems in the cell
• Verify that ports 7112, 7113 and 7116 are free on the Cell Manager
• Verify that ports 9990 and 9999 are free on the Cell Manager
• Java JRE 1.7 64-bit must be installed – if not, it will be installed automatically
Preparation on Windows
To install a Data Protector Cell Manager on Windows you must have Administrator rights. The Data
Protector Cell Manager system must meet the following requirements:
Installation on Windows
• Insert the Windows installation DVD
• In the HP Data Protector start-up window, select 'Install Data Protector'
• Alternatively, on the installation media, go to the WINDOWS_OTHER\<platform> directory
Run setup.exe to start the installation wizard
To start the Data Protector installation, insert the Windows installation DVD-ROM on your system.
In the HP Data Protector autorun window, select 'Install Data Protector' to start the Data Protector
Setup installation wizard.
Alternatively run setup.exe directly from the installation medium, located under:
WINDOWS_OTHER\<platform>
Installation Procedure:
1. Proceed past the welcome screen
2. Accept the License Agreement
7. Accept or change the IDB service and Application Server account (default CRS user
account) and the service ports for the IDB and JBoss Application Server
The installation verifies whether the ports and accounts can be used on this system.
8. Windows Firewall configuration: By default, Data Protector opens inbound ports as
needed, while required outbound ports need to be configured manually.
9. The overview shows the configuration for a final check; before the installation starts,
click Install.
After the installation, a log file can be checked for warnings or error messages.
Preparation on UNIX
• Log on as root
• Verify that sufficient system memory (4 GB) is available
• Check or adjust the kernel parameter shmmax to >= 2.5 GB
• Verify that the inetd or xinetd daemon is up and running
• HP-UX only: Check in /etc/inetd.conf whether identd is activated
• Make sure the Cell Manager (and Installation Server) have static IP addresses
• Verify that port number 5555 is free on all systems in the cell
• Verify that ports 7112, 7113, 7116 and 9999 are free on the Cell Manager
• Verify that host name resolution is enabled (DNS)
• Create an OS user hpdp in an OS group hpdp (no root permission required)
• Java JRE 1.7 64-bit must be installed – if not, it will be installed automatically
• On the DP IDB partition, prepare for future IDB growth (DCBF)
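The shmmax item from the checklist can be verified on Linux as follows; the ~2.5 GB threshold is the value from the list above (on HP-UX the parameter is queried with the kernel tuning tools instead):

```shell
# Linux: compare the shared-memory segment limit against the ~2.5 GB
# recommended for the Data Protector IDB. awk is used because recent
# kernels default shmmax to a value larger than a signed 64-bit integer.
awk -v req=$((2560 * 1024 * 1024)) '{
    if ($1 + 0 >= req) print "shmmax OK (" $1 " bytes)";
    else print "increase kernel.shmmax (currently " $1 " bytes)";
}' /proc/sys/kernel/shmmax
```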
Preparation on UNIX
The listed preparation steps apply to Linux and HP-UX. First log on to the planned Cell Manager
system with root privileges.
If the service is commented out, remove the comment, save the inetd.conf file, and restart the
inetd service.
Make sure that Cell Manager and Installation Server have a static IP address
Data Protector licensing is based on the IP address of the Cell Manager. Therefore you need to
assign a fixed IP address in Active Directory/DNS to the Cell Manager system and request your
permanent license for it.
A separate Installation Server does not require a static IP address; it only needs one when Cell
Manager and Installation Server run on the same system.
Verify that port number 5555 is free on all systems in the cell
Port 5555 is used for initial communication between the Cell Manager and client systems and needs
to be available on all clients. It is possible to change the default port on UNIX and Windows, but
ensure that the newly configured port is available and changed on all members of the DP cell, or
within all DP cells in case of a MoM environment. Refer to the Data Protector Installation Guide
on how to change the default Inet communication port on the Cell Manager.
Verify that ports 7112, 7113, 7116 and 9999 are free on the Cell Manager
For the IDB services, ports 7112, 7113, 7116, 9999 and 5555 should be available.
For verification run (the check includes port 5555):
LINUX:
netstat -anp --inet | grep -e :7112 -e :7113 -e :7116 -e :9999 -e :5555
netstat -anp --inet6 | grep -e :7112 -e :7113 -e :7116 -e :9999 -e :5555
HPUX:
netstat -an -f inet | grep -e .7112 -e .7113 -e .7116 -e .9999 -e .5555
netstat -an -f inet6 | grep -e .7112 -e .7113 -e .7116 -e .9999 -e .5555
If none of these ports is occupied, both commands return no output; otherwise the output shows
which ports are occupied and cannot be used by DP. In case you need to change the ports, you
need to create a DP.dat file as mentioned on the next slide (4-17)
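The per-port checks above can also be wrapped in a single loop that prints one status line per port (Linux netstat syntax; the port list is taken from the text, and the pattern covers both the Linux `:port` and HP-UX `.port` separators):

```shell
# Report, for each Cell Manager port, whether something is already
# listening on it (merges the IPv4 and IPv6 checks shown above).
for port in 7112 7113 7116 9999 5555; do
    if netstat -an 2>/dev/null | grep -q "[:.]$port .*LISTEN"; then
        echo "port $port is occupied"
    else
        echo "port $port is free"
    fi
done
```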
Verify that host name resolution is enabled (DNS)
Data Protector uses host names for all its operations, so you need consistent hostname resolution
in your environment. Therefore it is highly recommended to use DNS in your environment.
Create an OS user hpdp in an OS group hpdp
If the user hpdp is not yet defined, create it accordingly. Ensure that a home directory exists for
the hpdp user and add it to the password file (here: /home/hpdp).
Run grep to check whether the group hpdp exists and the user hpdp is part of it:
HPUX: $ grep -e hpdp /etc/group
hpdp:!:1000:hpdp
Java JRE 1.7 64-bit must be installed – if not, it will be installed automatically
In case of problems, check whether a Java Runtime Environment (JRE) version 1.7 (or newer) for
64-bit is installed. The installed Java version is checked during the Cell Manager installation; if no
Java version is found, one is installed automatically under /opt/omni/jre, so no action needs to be
taken here.
Note: Check the latest information about the prerequisites for DP 9.00 installation and upgrade.
Refer to the release versions of these manuals:
HP Data Protector Product Announcements, Software Notes, and References, and Installation and
Licensing Guide, Chapter 2: "Installing Data Protector on your network", Section: "Installing a UNIX
Cell Manager".
Changing the default Data Protector IDB ports and user accounts:
• Create the file /tmp/omni_tmp/DP.dat
• Add the parameter with the value you need to change
• Default entries are shown in the example
Default entries for the DP.dat file:
PGPORT=7112
PGCPPORT=7113
APPSSPORT=7116
APPSNATIVEMGTPORT=9999
PGOSUSER=hpdp
The procedure to start the DP 9.00 installation of the Cell Manager (and Installation Server) is the
same on HP-UX and Linux:
Insert and mount the appropriate UNIX installation DVD and run the omnisetup.sh installation
script from the LOCAL_INSTALL directory (the agent packages reside in the DP_DEPOT directory).
The platform-specific directory (platform_dir) is:
hpux -- HP-UX systems
linux_x86_64 -- Linux systems on AMD64/Intel EM64T
On UNIX it is not possible to change the IDB ports and the IDB user directly with omnisetup.sh.
Instead, create the file
/tmp/omni_tmp/DP.dat
and add the new settings line by line into this file.
In addition you can change the default IDB user account: PGOSUSER
Default: hpdp
Example DP.dat file with changed Application Server (hpdp-as) management port:
PGPORT=7112
PGCPPORT=7113
APPSSPORT=7116
APPSNATIVEMGTPORT=7200
PGOSUSER=hpdp
DP 9.00 CM Processes
DP Processes on the Cell Manager:
• Windows : running as Services
• Unix : running as Daemons
Since DP 8.00: RDS and uiproxy no longer exist
Windows Services
Name Description
crs The Cell Request Server (CRS) service starts and controls backup & restore sessions in the cell
When the installation is finished, the processes listed in the table above are running on the Cell
Manager. During the installation, Data Protector configures the required files to ensure that these
processes are started whenever the system is booted.
The Data Protector processes run as services on Windows and as daemons on UNIX.
You can manage the Data Protector processes with the omnisv command.
Note: The processes rds and uiproxy, part of the DP Cell Manager processes before version 8.00,
are no longer available in DP 8.X. If you run customized monitoring scripts, you need to update
these scripts to reflect the changes in DP 8.00 and higher.
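The typical omnisv invocations look as follows (option names as in the Data Protector CLI; on UNIX the command lives in the DP sbin directory, e.g. /opt/omni/sbin). The guard lets the sketch run harmlessly on a system without Data Protector:

```shell
# Manage the Cell Manager services/daemons with omnisv.
if command -v omnisv >/dev/null 2>&1; then
    omnisv -status   # show the state of the DP services (CRS, MMD, IDB, ...)
    omnisv -stop     # stop all Data Protector services
    omnisv -start    # start all Data Protector services
else
    echo "omnisv not found - Data Protector is not installed here"
fi
```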
Support:
• Local installation available for: All supported client platforms
• Remote installation available for: All supported client platforms except
OpenVMS and NetWare
After the Cell Manager and Installation Server are installed, it is time to install the Data Protector
agent software on all systems that should be backed up. Those systems then become DP clients
belonging to this cell. The installation can be performed via two mechanisms:
Local installation:
With a local installation it is required to log into the system and start the installation on the
corresponding system. A local installation is available for all supported platforms.
Remote installation:
The Installation Server offers the possibility to push the software to the corresponding
systems. There are two different Installation Servers, one for Windows and one for UNIX. That
means that if the software has to be pushed to Windows and UNIX clients, both Installation
Servers must be installed.
The remote installation is available for most platforms, but some have to be installed
locally, e.g. HP OpenVMS.
1. Select Add Clients
2. Select the target platform (this defines which Installation Server platform should be used)
3. Select the Installation Server
4. Press Next
If a new Data Protector client should be added to the cell, the Data Protector agent software must
be installed for the first time on that system. This can be achieved via local installation or via the
remote push mechanism offered by the Installation Server. This and the following slides show how
the remote push mechanism works.
1. Start the Data Protector GUI (on any system) and right mouse click Clients within the
Clients context
2. Select the correct platform of the remote system (either Windows or Unix)
3. Select the Installation Server that should be used for the installation. Note that only
those Installation Servers are listed that belong to the platform selected in step 2.
4. Select Next.
5. Specify the system name
6. Press Add
7. Select all required components (nearly 40 components are selectable)
8. Press Finish
7. Select which Data Protector component or agent software should be installed. For
some dedicated components, like MS SharePoint Granular Recovery Extension, it is
required to specify dedicated user credentials. If more than one client is selected and it
is required to install different components on each client, select the option I want to
customize these options for client systems independently.
Example:
9. The installation session starts
10. If required, DP asks for user credentials
11. The installation finishes
The session starts and lists all important operations in the session output window.
9. Data Protector tries to connect to the system and requests user credentials if
required.
10. The installation continues and installs all selected components on the system.
The client is also imported automatically into the cell:
Windows Firewall
If the firewall is enabled on the remote system, the firewall must be configured so that
• port 5555 is enabled
• outbound connections are allowed
If the firewall is not configured correctly, the push installation might fail with the
following messages:
or:
In case the Cell Manager runs Windows 2003 and the clients run Windows 2008, the user that
starts the remote installation must have Administrator privileges on the remote host, and the
Remote Administration (NP-In) inbound rule must be enabled on the remote system.
For security reasons, it is recommended to use secure shell for the Data Protector remote
installation. If secure shell is not available, the legacy UNIX tools rsh and rexec are used
automatically by the Data Protector remote installation.
To use secure shell, install and set up OpenSSH on both the client and the Installation Server.
A secure shell installation helps you to protect your client and Installation Server by installing Data
Protector components in a secure way. A high level of protection is achieved by:
• Authenticating the Installation Server user to the client in a secure way through the public-
private key pair mechanism.
• Sending encrypted installation packages over the network.
NOTE: Secure shell installation is supported on UNIX systems only and requires the
omnirc variable OB2_SSH_ENABLED set on the Installation Server.
# OB2_SSH_ENABLED=0|1
# Default: 0
# Allows SSH protocol to be used for remote installation of DP agents. This
# secures the remote connections while distributing the agents. Set this variable
# on Installation Server host. It is applicable only on UNIX platforms.
A Data Protector client installation starts quite similarly to a Cell Manager installation. While
the default installation on Windows is always a remote installation, it might be required to run
a local installation.
3. Check the Data Protector client installation status in the summary window
Note: For a cluster node installation, please check the HP Data Protector Installation and
Licensing Guide
The default installation type for a UNIX client is the remote installation. As with a Cell Manager
installation, omnisetup.sh has to be used in case of a local DP client installation.
Examples
Install the components Disk Agent (DA), Media Agent (MA), User Interface (CC), and English
Documentation (Docs):
CLI: omnisetup.sh -server ita030.dpdom.com -install DA,MA,CC,Docs
• HP-UX: swinstall
• Linux: rpm
• Solaris: pkgadd
Details are explained in the HP Data Protector Installation and Licensing Guide.
Export of clients
1. Select the Clients context.
2. Right-click the client and select Delete.
3. Select Yes for Export with DP uninstall, or No for Export only.
Rules:
A DP client always belongs to one DP cell only.
A DP client can be exported from its cell and then imported into another cell.
Optional: The export can be performed together with a client software uninstall.
After a client export, all backed up data of that client stays protected until the protection expires.
Export of clients
Exporting a client from a Data Protector cell means removing its references from the Internal
Database on the Cell Manager, with or without uninstalling the Data Protector client software from
the client. This can be done in the Data Protector GUI using the procedure shown above.
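The export can also be scripted from the Cell Manager CLI; a hedged sketch (omnicc -export_host is the CLI counterpart of the GUI procedure; the client name is an example):

```shell
# Remove the client's references from the IDB without uninstalling the
# DP software from the client (equivalent to "Export only" in the GUI).
omnicc -export_host client01.dpdom.com
```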
Import of clients
1. Select the Clients context.
2. Right-click Clients and select Import Client.
3. Enter the client name and select Finish.
Note: A DP client always belongs to one DP cell only, so the import will fail if the client still belongs to another cell.
Import of clients
This slide describes how to import clients. If the client still belongs to another cell, an error
message occurs and the import fails.
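From the CLI, the corresponding command would be (a hedged sketch; omnicc -import_host is the CLI counterpart, and the client name is an example):

```shell
# Import a previously exported client into this cell. The command fails
# with an error if the client still belongs to another cell.
omnicc -import_host client01.dpdom.com
```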
Adding components to existing Data Protector client systems is done with the following tasks:
1. In the Scoping Pane, click Clients, then right-click the appropriate client.
2. Click Add Components.
3. The wizard allows you to add additional clients on which the components should be
installed. This allows adding several components on several client systems within one
operation.
4. Select the component(s) that should be added and press Finish.
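Afterwards, the result can be verified from the Cell Manager CLI; a hedged sketch (omnicellinfo lists the cell's clients together with their installed components, although the exact output format differs between versions):

```shell
# List all clients of the cell together with their installed DP components.
omnicellinfo -cell
```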
Contents
Module 5 —Upgrade and Migration 1
5–3. SLIDE: Upgrade and Migration Scenarios .................................................................. 2
5–4. SLIDE: Upgrade and Migration Scenarios cont. .......................................................... 3
5–5. SLIDE: Upgrade & Migration Windows ...................................................................... 4
5–6. SLIDE: Upgrade & Migration paths HP-UX, Linux and Solaris ........................................ 5
5–7. SLIDE: Upgrading devices with default block size ....................................................... 7
5–8. SLIDE: Upgrade of a Windows Cell Manager 1/4 ......................................................... 8
5-9. SLIDE: Upgrade of a Windows Cell Manager 2/4 ........................................................ 10
5-10. SLIDE: Upgrade of a Windows Cell Manager 3/4....................................................... 11
5-11. SLIDE: Upgrade of a Windows Cell Manager 4/4....................................................... 12
5-12. SLIDE: IDB Migration Concept............................................................................... 13
5-13. SLIDE: IDB Core Migration ................................................................................... 15
5-14. SLIDE: After Core Migration ................................................................................. 17
5-15. SLIDE: Manually Catalog Migration ....................................................................... 19
5-16. SLIDE: The omnimigrate command ....................................................................... 21
5-17. SLIDE: Report old Catalog ................................................................................... 24
5-18. SLIDE: IDB Size and Update Duration ..................................................................... 26
5-19. SLIDE: Upgrade of a UNIX Cell Manager .................................................................. 27
5-20. SLIDE: Upgrade of a MoM Environment .................................................................. 31
Module 5
Upgrade and Migration
Direct Upgrade
• Upgrade from DP 6.2X, DP 7.0X and DP 8.x directly to DP 9.0X
Indirect Upgrade
• DP 6.1X and older need to be upgraded to DP 6.2X, DP 7.0X,
or DP 8.x first, before upgrading to DP 9.0X
Upgrade Scenarios
Direct Upgrade
The one-step upgrade from DP 6.2X, DP 7.0X, or DP 8.x to DP 9.0X.
Indirect Upgrade
DP 6.1X and older versions need to be upgraded to DP 6.2X or higher before upgrading to DP 9.0X.
Migration
Important:
DP 8.00 and higher supports 64-bit Cell Managers only.
Migration task:
Existing 32-bit based CMs must be transitioned to 64-bit first.
Migration Scenarios
Beginning with DP 8.00, only 64-bit platforms are supported for the Cell Manager.
A Cell Manager installation on a 32-bit OS must be migrated to 64-bit first. This applies to both
HP-UX and Windows. A Data Protector Cell Manager was never supported on 32-bit Linux, so this
issue does not apply to Linux.
Data Protector 9.0X does not support a Cell Manager on Windows 2003 (32- or 64-bit), so you need
to migrate to a Windows version that is supported by both DP 9.0X and the currently used DP
version first, before starting the DP 9.0X upgrade, e.g. migrate to Windows 2008 64-bit.
HP-UX PA-RISC systems have been obsoleted as a CM platform. Therefore, CM systems on HP-UX
PA-RISC must be migrated to HP-UX IA64 first (Itanium, HP-UX 11.31).
The term Upgrade is used for the upgrade from a DP version to a newer one.
5–6. SLIDE: Upgrade & Migration paths HP-UX, Linux and Solaris
Upgrade/Migration paths on HP-UX
Only Itanium systems are supported as a 9.00 CM!
HP-UX:
The possible upgrade paths and migration paths for HP-UX on PA-RISC and Itanium are shown in a
graphical representation.
Light grey boxes contain everything that has to be done before upgrading to DP 9.00.
Blue/dark grey boxes contain the DP 9.00 upgrade step.
The term Upgrade is used for the upgrade from a DP version to a newer one.
Linux:
The Linux Cell Manager was always supported on a 64-bit Linux OS only.
See the HP Data Protector 9.00 Installation Guide for detailed instructions on how to perform this
migration.
Starting with Data Protector 8.00, the default block size of Data Protector logical devices has been
increased from 64 KB to 256 KB for better performance.
Devices with a user-defined block size, e.g. 64 KB or 512 KB, retain their setting after the upgrade.
Devices configured with the block size setting “Default (64)” were accidentally reconfigured to
“Default (256)” after the upgrade to DP 8.00. This caused problems during tape overwrite or append
operations after the upgrade, because 256 KB data blocks cannot be written or appended to a
64 KB formatted medium.
The problem is resolved in DP 8.10 and higher by retaining the old 64 KB block size setting.
Important: All newly configured devices in DP 8.x and higher now get a default block size
of 256 KB. If you still use media/devices with 64 KB configured, either
switch back to a block size of 64 KB for new devices or create a different pool for
256 KB formatted media, and use descriptive names for your backup
devices and pools for separation.
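To check which block size an existing logical device is configured with, its definition can be dumped from the IDB; a hedged sketch (the device name is an example, and the BLKSIZE attribute name is an assumption about the device definition format):

```shell
# Dump the device definition and filter for the configured block size.
omnidownload -device LTO_Drive_1 | grep -i blksize
```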
• Make sure the Cell Manager system fulfills the DP 9.00 prerequisites:
- 64-bit OS used
- See the “DP 9.00 Platform and Integration Support Matrix” for supported Windows
versions and configurations
For upgrading an earlier version to DP 9.00, the same prerequisites must be met as for a plain new
DP installation. (Check the HP DP Product Announcements and the HP DP Installation Guide for
details about the prerequisites.)
Important: The existing Data Protector must NOT be de-installed before the upgrade.
The upgrade is supported only on a DP version with a fully operational IDB.
For the export of the existing IDB, all DP services need to be up and running.
The IDB migration is the major task during the upgrade to DP 9.00 and consists of the following
steps:
- Export of the existing IDB data to flat files
- Deletion of the old DP product files
- Installation of the new DP 9.00 binary files
- Creation of a new DP IDB using PostgreSQL
- Import of the IDB from flat files into the new PostgreSQL IDB
Upgrade
• Log in as the DP CM Administrator.
• Ensure that no backup, restore or IDB maintenance session is running.
• Insert the Windows installation DVD and wait for autorun to bring up the DP product
splash screen, or
• run the setup.exe command from the folder \Windows_other\x8664\.
As shown on slide 5-10, the DP installation wizard automatically finds the earlier DP version and
shows the information on the Welcome screen. This is the same as with upgrades to new versions
in the past.
On the following Component Selection window you can choose to install the same, more, or fewer
components than the wizard finds in the existing DP installation.
Click Next.
After the Welcome screen, Data Protector detects the current DP version. In the example above,
DP version 6.20 is shown.
As usual, the Component Selection window allows choosing the components to install on the CM.
All components already installed with the current version are automatically selected.
Hitting the Space button, DP shows the size of the available disks with their free space. The last
column shows the required space for the DP 9.00 CM and the selected components.
After the Welcome and Component Selection windows, the next screen, “Internal Database and
Application Server options”, prompts for the user name and password of the IDB Service and
Application Server account. By default, the CRS service account is listed.
The port numbers 7112, 7113, and 7116 are the default ports for the IDB service, the IDB
Connection Pool (CP) service, and the Application Server (AS) service, and should only be changed if
these ports are already used for other purposes on the system.
The installed JBoss Application Server can also be managed directly in a browser window, outside
of Data Protector. The management port used for this (default: 9999) can be customized in this
window as well.
The next screen asks for permission to perform all required firewall configuration changes to
ensure proper Data Protector operation.
The last screen shows a summary of the configured upgrade operation. Hit Install to start the installation.
Note: Together with the new PostgreSQL IDB, a JBoss Application Server technology stack,
a Job Control Engine, and a Java Runtime Environment are installed.
In the first upgrade step, the wizard exports the old IDB, mainly the directories cdb and mmdb, to a
temporary location: DP_VAR\tmp\export
Ensure that enough free space is available on that volume for the ASCII dump of the RAIMA IDB.
The export is done using CLI commands of the installed old DP version. That is one of the main
reasons why the old DP version must not be removed before DP 9.00 is deployed. If you do, the
upgrade will fail.
After exporting the old IDB, the wizard removes the old DP binaries and installs the new DP 9.00
product binaries. The wizard creates an empty PostgreSQL-based IDB under
DP_VAR\server\db80
and imports the old IDB data from the DP_VAR\tmp\export directory.
DP then continues the installation as usual and finally presents the “Installation Status” window.
Important: IDB migration is only required for a DP upgrade from pre-DP 8.x to DP 9.00.
The migration from an existing IDB on a DP 6.x or 7.x Cell Manager to the new PostgreSQL IDB has
two parts: the Core migration and the Catalog migration.
After the Core migration, the DP 9.00 Cell Manager is fully functional. Media, pool, and device
related information has been migrated and is now accessible in the DP GUI/CLI. The biggest part of
the IDB, the Catalog Database (CDB), was only partly migrated to shorten the overall required IDB
downtime and to quickly allow new backups to run.
A special wrapper is installed that allows the new IDB to access the non-migrated part of the IDB,
which was set to read-only status after the Core migration. All new backups start creating new
catalog files under DP_VAR\server\db80. Media overwrite and append operations create new DCBF
format 2.0 files and trigger the deletion of the corresponding old DCBF files.
It is also possible to migrate all old DCBFs into the new DCBF 2.0 format. This step is called Catalog
migration and, depending on the amount and size of old DCBFs, it might run from several hours to
several days. The Catalog migration does not require downtime and is able to run in the
background.
The Catalog migration might be required in environments with a high number of media with long
retention times or even permanently protected media.
• Old DCBF files and fnames.*, dirs.*, fn*.* are not migrated and are kept in their old locations:
DP_VAR\db40\dcbf (1,2,3..)
DP_VAR\db40\datafiles\cdb
The IDB Core migration migrates all the critical information, such as the entire MMDB and the vital
part of the CDB (sessions, objects, object versions, media positions), into the new PostgreSQL-based IDB.
The new PostgreSQL-based IDB data files are located under:
Windows: DP_VAR\server\db80\idb\PG_9.1_201105231
UNIX: DP_VAR/server/db80/idb/PG_9.1_201105231
The Core migration is started and controlled implicitly by the omnimigrate command.
The usage of the omnimigrate command is the same on Windows, HP-UX, and Linux.
It is a Perl script located in DP_HOME\bin.
1.) Exporting the RAIMA part of the old IDB into temporary flat files
By executing omnidbutil -writedb -cdb <location> -mmdb <location>, the old RAIMA part
of the IDB is exported to a temporary location. The location is hardcoded to
DP_VAR\tmp\export
and cannot be changed. In case of a default installation this points to:
C:\ProgramData\Omniback\tmp\export
Note: The biggest RAIMA data files, fnames.*, fn*.* and dirs.*, are excluded from the
export operation and remain in read-only mode in the original folder:
DP_VAR\db40\datafiles\cdb
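Run by hand, this export step would look as follows; a hedged sketch (command and fixed target location as quoted above, shown with the default Windows path):

```shell
# Export the RAIMA part of the old IDB (CDB and MMDB) to flat files.
# The upgrade wizard runs this implicitly; the location cannot be changed.
omnidbutil -writedb -cdb C:\ProgramData\Omniback\tmp\export -mmdb C:\ProgramData\Omniback\tmp\export
```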
3.) Importing the ASCII data into the new PostgreSQL IDB
Within this step, the prepared data is loaded into an empty PostgreSQL database. The location is
listed on the previous page. The size of the new PostgreSQL database is smaller than the old
RAIMA database, because the biggest part, the filename and path information of the backed up
data, is now stored within the DCBF.
In addition, the Session Messages Binary Files (SMBF), stored outside of the RAIMA IDB part, are
moved from the old db40 folder into the new location:
DP_VAR\server\db80\msg
Important: The Data Protector IDB Core migration starts automatically as part of the DP 9.00
installation process. There is no option to run a DP 9.00 installation on a DP
Cell Manager system with an older DP version installed without starting the
Core migration. So ensure that your existing IDB is prepared for this upgrade
and that you have enough space available.
Important: After the upgrade, configure and perform a full IDB backup!
Old IDB backups cannot be used in Data Protector 9.00.
After the Core migration, the DP 9.00 Cell Manager is fully functional.
The migration keeps the old RAIMA filename and DCBF files and changes them to read-only mode.
The files are still located in their original folders under: DP_VAR\db40\datafiles\cdb
This way, all objects whose catalogs are still protected and which were backed up prior to the
migration have their filename and backup version info available in the old format. DP 9.00 is able to
understand the old format and can start restore sessions from it.
Over time the catalogs expire. When there is no more data-protected file on the medium that
a DCBF belongs to, the old DCBFs are deleted automatically by the daily maintenance, the same way as
in DP 6.x/7.x.
New DCBF files are created when new backups are performed in DP 9.00. The backup catalog
information is written into new DCBF files. The new DCBF 2.0 format files are kept in a new folder:
Windows: DP_VAR\server\db80\dcbf
UNIX: DP_VAR/server/db80/dcbf
When an old medium is appended to, a new DCBF 2.0 file is also created. The advantage of
this approach is that for the upgrade of an existing DP setup, only the core part of the IDB needs to be
migrated during the installation of the DP 9.00 software.
The time-consuming migration of the DCBF files can be done at a more convenient time, or can
possibly be avoided completely.
As the slide shows, after the successful migration, 5 new DCBF directories are created under the
above mentioned directory.
Within Data Protector, the 5 new DCBF 2.0 folders and the old DCBF folders are listed in the DP GUI
IDB context.
• Migrates the (remaining) DCBF files into new DCBF 2.0 format files
• DCBF files are transferred sequentially one by one, permanently protected media first
• The migration process can last a very long time, but can be stopped and continued as needed
• New DCBF 2.0 files are located in: DP_VAR\server\db80\dcbf
• After the catalog migration, the space-consuming old DCBFs are deleted automatically during the daily maintenance
Note:
The new DCBF 2.0 format contains backup versions and filenames. It
needs approximately three times more space than the old DCBF file.
The Catalog migration can be triggered manually after the upgrade by starting:
omnimigrate.pl -start_catalog_migration
It converts the old DCBF files to the new DCBF 2.0 format and stores them, according to the
configured DCBF allocation policies, in one of the 5 created DCBF folders under:
DP_VAR\server\db80\dcbf
During this process, the names of the backed up files are taken from the RAIMA data files and stored
within the new DCBF files. Therefore, the average size of the new DCBF files is three times larger
than before. The user has to take care that there is enough space at the new DCBF location.
After all DCBFs are migrated, fnames.dat, dirs.dat, fn*.ext and their respective .key files are no
longer needed and can be deleted via omnimigrate.pl -remove_old_catalog.
The catalog migration process can last for a very long time, because it migrates the media one by
one. It can, however, always be paused and continued at a more convenient time. Some older media
can be expected to expire by themselves during the migration process.
If there are no permanently protected objects and media in a DP cell and if the retention period is
short, then the manual catalog migration is not needed at all, as the old DCBF files will be deleted
automatically once there is no more protected file on the respective media.
On the other hand, the old RAIMA files, in particular the space-consuming fnames.dat, cannot be
deleted as long as there is a single protected filename, and they continue to occupy disk space.
Ideally, the DP administrator should wait until most of the old media have expired and then trigger
the migration for the remaining permanently protected media.
To help make this decision, a short report about the "Space consumption of remaining old DCBFs and
filenames files" is generated as an alarm once per week in the event log (displayed in the GUI
Reporting context).
• Show the amount of space still occupied by old DCBF and filename data files:
omnimigrate.pl -report_old_catalog [media | sessions | objects]
• Delete all output and control files from the DP_VAR\tmp directory;
allows a restart of the migration after an unexpected termination or system crash:
omnimigrate.pl -cleanup
• Delete all old DCBF files and the old RAIMA data files;
irreversibly ends the migration process:
omnimigrate.pl -remove_old_catalog
The omnimigrate.pl command supports various options and is used for both the Core and the Catalog
migration. The most important options for the user-triggered Catalog migration are listed above, such
as how to start and stop the Catalog migration and how to get reports about the process.
Once the Catalog migration is completed, the old RAIMA data files and the remaining old DCBFs can be deleted.
Be careful with the -remove_old_catalog option, as it deletes all of the old DCBF files and the old
RAIMA data files, which means it irreversibly ends the migration process.
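Put together, a typical user-triggered Catalog migration could follow this sequence; a sketch built only from the options listed above:

```shell
# 1. Check how much old catalog data is still around
omnimigrate.pl -report_old_catalog

# 2. Start the migration (permanently protected media are converted first)
omnimigrate.pl -start_catalog_migration

# 3. Watch progress; pause with -stop_catalog_migration if needed
omnimigrate.pl -report_catalog_migration_progress

# 4. Only when everything is migrated: irreversibly remove the old files
omnimigrate.pl -remove_old_catalog
```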
omnimigrate.pl -start_catalog_migration
This command starts the catalog migration. The migration progress is written to standard
output.
Example:
perl omnimigrate.pl -start_catalog_migration
When running the script, DP creates and follows a priority sequence list for the media which need to
be migrated:
- Permanently protected media are migrated first, as they will never expire
- Then the youngest media, as they are least likely to expire during the migration process
omnimigrate.pl -stop_catalog_migration
This command stops/pauses the catalog migration process. The current DCBF upgrade is finished
and logged. When running -start_catalog_migration again, the process continues where it
stopped before.
omnimigrate.pl -report_catalog_migration_progress
This command displays the progress of the catalog migration.
Example:
omnimigrate.pl -report_catalog_migration_progress
omnimigrate.pl -report_old_catalog
If no additional option is specified, the command lists the amount of space still occupied by old
DCBF files and filename data files:
Example:
omnimigrate.pl -report_old_catalog
omnimigrate.pl -cleanup
With the -cleanup option, the command cleans up the DP_VAR\tmp directory of all the
export/import files. This includes the migration's priority and control file. Thus, after a cleanup, the
migration will not continue with the next medium, but restarts by recalculating which of
the remaining media should be migrated first. The omnimigrate -cleanup command is useful
to restart the migration after a system crash.
Example:
omnimigrate.pl -cleanup
Done!
Note: Instead of the option -output_dir, also -shared_dir and even -input_dir can be used with the
same effect.
omnimigrate.pl -remove_old_catalog
This command deletes all of the old DCBF files and the old RAIMA data files, which means it
irreversibly ends the migration process.
Example:
omnimigrate.pl -remove_old_catalog
This will permanently remove all of the non-migrated old catalog of the Data Protector Internal Database.
Are you sure you want to proceed (y/n)? y
Done.
The space consumption of remaining old DCBFs and filename data files is shown as an
alarm once per week in the event log.
This slide shows the once-a-week generated report on the space consumption of the remaining old
DCBFs and filename data files. omnitrig uses the command omnimigrate.pl -report_old_catalog
to generate this weekly report, and it is shown as an alarm in the Event Log.
omnimigrate.pl -report_old_catalog [media | sessions | objects]
When the media, sessions, or objects option is specified, the command shows the respective 'old'
items and their expiration dates individually.
Examples:
omnimigrate.pl -report_old_catalog media
Time estimates:
Old DCBF size                 24 MB    48 MB    97 MB
Upgrade duration (min:sec)     1:51     3:33     6:13
New DCBF size                 91 MB   181 MB   417 MB
DCBF sizes
After the DCBFs are upgraded, the size of the DCBF is increased roughly by a factor
of 3.5. The duration of the upgrade is determined by the DCBF size.
New DB size
An empty PostgreSQL DB is about 150 MB in size.
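The quoted growth factor can be used for a rough pre-migration space estimate; a minimal sketch (the 3.5 multiplier comes from the text above, while the measured samples in the table range up to about 4.3, so treat the result as a lower bound):

```shell
# Estimate the new DCBF size from the old size using the ~3.5x growth factor.
old_mb=24                                 # total size of old DCBF files in MB
new_mb=$(awk -v o="$old_mb" 'BEGIN { printf "%.0f", o * 3.5 }')
echo "old=${old_mb}MB estimated_new=${new_mb}MB"
```

For the 24 MB sample in the table, this yields an 84 MB estimate against a measured 91 MB.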
Make sure the Cell Manager system fulfills the DP 9.00 prerequisites:
• Physical memory >= 4 GB + shared memory (>= 2.5 GB)
• Availability of ports 7112, 7113, 7116, 9999
• Group and user 'hpdp' created
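On a Linux Cell Manager candidate, the shared memory prerequisite can be pre-checked with a small script; a hedged sketch (the 2.5 GB threshold is the documented minimum above; /proc/sys/kernel/shmmax is Linux-specific, HP-UX would use kctune instead):

```shell
# Compare the kernel SHMMAX value against the documented 2.5 GB minimum.
required=2684354560                                  # 2.5 GB in bytes
shmmax=$(cat /proc/sys/kernel/shmmax 2>/dev/null || echo 0)
verdict=$(awk -v s="$shmmax" -v r="$required" \
  'BEGIN { if (s + 0 >= r + 0) print "SHMMAX ok"; else print "SHMMAX too small" }')
echo "$verdict"
```

awk is used for the comparison because SHMMAX values can exceed the shell's integer range.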
If one or more of these port numbers are already used by other products, you can change the ports
during the installation by creating the file /tmp/omni_tmp/DP.dat before starting the installation.
For more details about the prerequisites, refer to the IDB installation chapter of this DP 9.00
training. Also look for the latest information in the “HP Data Protector Installation Guide”,
chapter 2 'Installing Data Protector on your network', section 'Installing a UNIX Cell Manager'.
The existing DP must NOT be de-installed before the upgrade. For the export of the existing IDB,
all DP services should be up and running.
The upgrade consists of: export of the existing IDB data to flat files, deletion of the old DP product
files, installation of the new DP 9.00 product files, creation of a new DP IDB using PostgreSQL, and
import of the IDB from the flat files into the new PostgreSQL IDB.
To perform the upgrade, you should be logged on as the DP Cell Administrator on that system.
Ensure that there are no running sessions before starting the upgrade. Then insert and mount the
DP 9.00 product medium.
Change directory to LOCAL_INSTALL and run the ./omnisetup.sh command from that directory.
The following listing is the command line output of an upgrade on a Linux Cell Manager. Note that
many more client and integration packages will be installed if an Installation Server is also present
on the system:
$ cd DP900
$ ll
drwxrwxrwx. 3 root sys 4096 Feb 5 23:08 linux_x86_64
drwxr-xr-x. 2 root sys 4096 Feb 5 22:54 LOCAL_INSTALL
$ cd LOCAL_INSTALL
$
$ ./omnisetup.sh -CM
Data Protector version A.07.00 found
Cell Manager detected...
Client detected, installed components: cc da ma docs
Passed: The user account "hpdp" will be used for the IDB service.
Passed: Port number "7112" will be used for the "hpdp-idb" service.
Passed: Port number "7113" will be used for the "hpdp-idb-cp" service.
Passed: Port number "7116" will be used for the "hpdp-as" service.
Passed: The kernel parameter value: SHMMAX = 68719476736.
The minimum required parameter value is "2 684 354 560".
Passed: There are "6166810624" bytes of available system memory.
4 GB of system memory are required.
Passed: Data Protector installation requires 2125824 kilobytes of free
storage space on the "/" filesystem.
The filesystem "/" has 34835268 kilobytes of free space.
Exporting DONE!
Important:
During the upgrade (core migration) in a MoM environment, none of the
Cell Managers in the MoM environment is operational.
To upgrade your MoM environment to Data Protector 9.00, you need to upgrade the MoM Manager
system first. During the upgrade in a MoM environment, none of the Cell Managers in the MoM
environment should be operational.
After this is done, all Cell Managers of the previous versions that have not been upgraded yet are
able to access the central MMDB and central licensing and perform backups, but other MoM
functionality is not available.
Note that device sharing between the Data Protector 9.00 MoM cell and cells with earlier
versions of the product installed is not supported.
Contents
Module 6 – Licensing and Product Structure 1
6–3. SLIDE: Data Protector licensing................................................................................................ 2
6–4. SLIDE: Data Protector licensing schemes ................................................................................ 4
6–5. SLIDE: Data Protector product structure – traditional ............................................................ 6
6–6. SLIDE: New Capacity based license method ............................................................................ 9
6–7. SLIDE: License key validity ..................................................................................................... 11
6–8. SLIDE: License reporting and checking .................................................................................. 12
6–9. SLIDE: License reporting tool ................................................................................................. 13
6-10. SLIDE: General hints ............................................................................................................... 14
Module 6
Licensing
Data Protector requires a valid license to operate. Licenses are checked, and if missing licenses are
found, the operation will not be started.
After a fresh Data Protector installation, you can start using it for 60 days because of a built-in
instant-on password. This 60-day period is granted for the customer to request and install
permanent passwords. The Data Protector products are shipped with an Entitlement Order Number
(EON) that entitles the customer to obtain permanent passwords.
Emergency passwords are available in case the currently installed passwords do not match the
current system configuration due to an emergency (e.g. a loss of the Cell Manager and subsequent
recovery using a new system). Active emergency passwords can be requested from HP Support and
will activate all Data Protector features on any system for two weeks.
To obtain the appropriate country support phone number for emergency license
requests, please visit http://support.openview.hp.com/contact_list.jsp
Permanent passwords can be requested via the HP licensing portal: http://www.webware.hp.com
In addition, it is possible to request the password from the Software Update Portal:
http://www.hp.com/software/updates
Run omnicc without any option to force a reload of the lic.dat file.
Note: Run omnicc -password_info to check your cell for active licenses.
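Both hints combine into a quick license sanity check on the Cell Manager (a minimal sketch; both invocations are taken from the note above):

```shell
# Force a reload of the lic.dat file, then list the active licenses.
omnicc
omnicc -password_info
```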
Traditional licensing
• based on required features and backup targets
• three main categories:
- Cell Manager Starter Packs
- Backup targets, such as drive extensions, Advanced Backup to Disk and
ZDB licenses
- Functional extensions, such as online backup licenses for integrations,
MoM licenses, encryption, NDMP or GRE licenses
• OS-dependent licenses (UNIX licenses work on all platforms, while Windows
licenses work only on Windows and Linux)
Traditional licenses are OS-dependent: all UNIX-based licenses work on all platforms,
while Windows licenses work only on Windows and Linux.
Note: The only exception is the AES Software Encryption functionality, which requires a
separate license.
Capacity based licensing is perpetual and covers all existing and new systems, arrays and
applications. It is based on a customer contract and includes all license purchases, even if they are
used in different cells and at different customer sites.
Important: It is not possible to mix the two licensing methods, nor to migrate from
one method to the other.
Note: For a physical version (DVD or printed license certificate), always remove the “E” at the end of the product number
This slide provides an overview of the Data Protector product structure when traditional
licensing is used. It lists the available Data Protector licenses with product numbers for the
different Data Protector functionalities.
The product structure table is divided into different sections to ease the ordering of
a Data Protector solution. LTU stands for license-to-use.
• Starter Pack
• Drive & Library extension
• Functional Extension
1. Select a Starter Pack. The appropriate product number depends on the operating system of
your Cell Manager system.
2. Determine the number of backup drives configured in the customer’s environment and the
tape libraries involved.
The required minimum is a Starter Pack license, which includes a DP Cell Manager license and one
Drive license.
Below you can find a brief description of a selection of Data Protector licenses.
It is recommended to always check the latest version of the Data Protector QuickSpecs at:
http://www.hp.com/go/dataprotector
Select Resources, then QuickSpecs, then your area to access the latest version.
All UNIX Starter Packs can also be used as a substitute for a Windows or Linux Starter Pack.
Library extension
Includes the license-to-use (LTU) for managing tape libraries with the number of physically
available slots within one Data Protector Cell. Required once per library.
Functional Extensions
On-line extension
Includes the license-to-use (LTU) to perform online backup of databases and applications running
on the specified platform. Required once per server, no matter how many databases are running
on the system. Even if databases of different types run on the same system, only one
license is required.
Available for UNIX and Windows.
Note: Installed GRE licenses are handled like pool licenses and are locked to a particular database or
application server for 1 year after the first Granular Recovery is performed on that server.
After expiration the lock is removed and the license can be used by other servers.
More details are shown via: omnicc -gre_license_info
Encryption extension
Includes the license-to-use (LTU) and media to encrypt all backup data of one HP Data Protector
client server or workstation with the HP Data Protector AES 256-bit encryption software. Required
once for each HP Data Protector client (Agent / Application Agent) with encryption configured.
Drive-based encryption does not require this license.
Manager-of-Managers extension
Includes the license-to-use (LTU) for each Data Protector management server (CM), running on the
specified platform, to be part of a Manager-of-Managers environment.
Licensed separately:
AES SW Encryption License, 1 x 1-server / 1 x 10-servers: BB618AAE/BB618BAE
Capacity based licensing includes the Cell Manager, MoM, Drive, Library Slot Extension, Online
Backup, GRE, ZDB/IR, Advanced Backup to Disk and NDMP licenses. In other words, capacity
based licensing includes almost all currently available licenses. Be aware, however, that future DP
functionality may still require separate licenses. (It may not fall under the capacity licensing.)
At the moment, Software Encryption is the only exception to this global replacement, so this
license needs to be ordered separately.
Granularity of 1 TB
The license is available per TB of “full backup” capacity. If the customer wants to protect 5 TB, he
needs 5 one-TB licenses (TF521AA/E).
The capacity purchase tier is based on the number of TBs of capacity licenses the customer
already owns (ordered under the same customer contract ID) plus the capacity the customer wants to order.
See the purchase examples below.
Currently there is no migration path from the traditional to the new license model.
Officially announced on December 1st, 2012, capacity based licensing requires DP 7.01 (DP 7.00 + patch
bundle BDL701) or a higher version of Data Protector.
The different licensing models cannot be mixed within a cell or in a MoM environment.
Capacity Calculations:
• Example 1: If a customer has 200 TB of data to protect and backs up all 200 TB to a
de-duplication store that only uses 20 TB, he still needs 200 one-TB capacity licenses.
• Example 2: If a customer has 100 TB of data but only protects 10 TB of it, he only
needs 10 one-TB capacity licenses.
Purchase Examples:
• Example: A customer makes three DP capacity license purchases over the course of
multiple years.
4 TB in year 1, 20 TB in year 2 and 40 TB in year 3
Year 1 = 4 x TF521AAE (01 to 09 TB tier)
Year 2 = 20 x TF542AAE (10 to 49 TB tier)
Year 3 = 40 x TF543AAE (50 to 99 TB tier)
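The tier arithmetic in this example can be sketched in a few lines of Python. This is an illustration only: the tier boundaries and product numbers are taken from the purchase example above, and any higher tiers are omitted because they are not listed in this guide.

```python
# Sketch of the capacity purchase-tier arithmetic from the example above.
# The tier boundaries and product numbers (TF521AAE, TF542AAE, TF543AAE)
# are taken from the three-year purchase example; higher tiers are omitted
# because they are not listed in this guide.

def purchase_tier(owned_tb: int, order_tb: int) -> tuple:
    """Return (product number, quantity) for a new capacity order.

    The tier is chosen from the TBs already owned under the same customer
    contract ID plus the capacity being ordered; the quantity equals the
    number of 1 TB licenses ordered.
    """
    total = owned_tb + order_tb
    if 1 <= total <= 9:
        product = "TF521AAE"   # 01 to 09 TB tier
    elif 10 <= total <= 49:
        product = "TF542AAE"   # 10 to 49 TB tier
    elif 50 <= total <= 99:
        product = "TF543AAE"   # 50 to 99 TB tier
    else:
        raise ValueError("tier not covered by this example")
    return product, order_tb

# Reproducing the three-year purchase example:
owned = 0
for order in (4, 20, 40):
    product, qty = purchase_tier(owned, order)
    print(f"{qty} x {product}")   # 4 x TF521AAE, 20 x TF542AAE, 40 x TF543AAE
    owned += order
```

Note how the year-2 order lands in the 10 to 49 TB tier because the cumulative capacity (4 + 20 = 24 TB), not the order size alone, determines the tier.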
Note: Upgrading from DP 8.1X to DP 9.0X does not require new license keys.
Data Protector licensing uses the Cell Manager IP address as the basis for license key creation.
Up to Data Protector 7.X, the Cell Manager needed a static IPv4 address configured in order to use
the built-in OVkey3 mechanism for license key management. The Cell Manager system might
have both IPv4 and IPv6 addresses configured, but only IPv4 addresses could be used for
licensing. A pure IPv6-based Cell Manager was not supported up to version 7.X.
With Data Protector 8.0X it became possible to create license keys for IPv6-based Cell Manager IP
addresses. A newer OVkey version, OVkey4, supports both IPv4- and IPv6-based Cell Manager
IPs for license key creation. In addition, existing OVkey3 licenses from older DP
versions can be used together with newer OVkey4 licenses, so no license migration was required.
Data Protector 8.1X/9.0X and higher supports only OVkey4 licenses. The Cell Manager IP can
still be of either format, so IPv4- and IPv6-based Cell Manager IP addresses can both be used for license key
creation. New in Data Protector 8.1X is the need for a license migration when an upgrade from an older
Data Protector version is performed. None of the older Data Protector licenses can be used in this
version, regardless of whether the existing license keys are OVkey3- or OVkey4-based. So even an upgrade from
Data Protector 8.0X to 8.1X requires this license migration. During the Data Protector upgrade to
version 8.1X, a new 60-day instant-on license is created to prevent license issues.
Data Protector passwords, and with them licenses, are checked and, if missing, reported during
various Data Protector operations, for example:
• As a part of the Data Protector checking and maintenance mechanism, the licenses are
checked and, if missing, reported in the Data Protector Event Log.
• When the Data Protector User Interface is started, if there are any missing licenses
reported in the Data Protector Event Log, an Event Log notification is displayed.
• When a Data Protector session is started, the licenses are checked and, if missing, reported.
Data Protector licenses are checked according to their characteristics:
Cell Manager related licenses – such as Starter Packs, the Manager-of-Managers Extension and the Single
Server Edition. When a certain DP component, such as the Cell Manager (included in the Starter
Pack) or the MoM, is present in the DP cell, only the presence of the required basic or special license
is checked. Licenses are installed on the Cell Manager and are specific to its IP address.
Entity based licenses – such as drive extensions for certain platforms and library extensions for a
number of slots – are checked at session start time. The session will not start if the licenses are not
covered.
Capacity based licenses – such as ZDB/IR licenses – are also checked at session start and will prevent
a backup or Instant Recovery session from starting if the license is not covered.
The license reporting tool, traditionally referred to as the license checker, reports whether the correct
product licenses are in place. It reports nothing if adequate licenses are present. On
the other hand, it issues warnings through the Event Log if there are not sufficient licenses. The
license checker is not a license enforcement tool, just a reporting tool.
License reporting is implemented through the omnicc command with the option -check_licenses. It
can be used to report the license status on demand. To produce a report about licensing-related
information from the cell, run:
omnicc -check_licenses
With the -detail option, for every license in the cell the license name, the number of licenses installed,
the number of licenses used, and the additional licenses (capacity) required are returned (plus possibly
drive information in the case of Drive Extension LTUs).
General hints
All UNIX Licenses-To-Use (LTU) can be used for Microsoft Windows and Linux systems if applicable.
Or in other words: The UNIX product licenses operate on all platforms, providing the functionality
regardless of the platform, while the Windows product licenses operate on the Windows and Linux
platforms only.
The Manager-of-Managers allows you to configure centralized licensing for the whole environment.
All licenses are then generated for the IP address of the single MoM server system, and installed and
kept there. (When converting an existing environment from separate cells to centralized
licensing, a complete “license move” must be performed.) The instant-on evaluation license does
not permit MoM license configuration.
The instant-on license must be replaced with permanent licenses within 60 days of
installation. Data Protector leverages the product numbers of previous Data Protector versions.
Existing Data Protector licenses remain valid after an upgrade up to DP 8.0X; in DP 8.1X a license
migration needs to be performed.
License passwords are bound to the IP address of the Cell Manager (CM) and are valid for the
entire Data Protector cell. (If you change the IP address of the CM, you need to transfer the license.)
DP clients do not require any license for file system or disk image backups.
License Enforcement: Backup drive licenses are nominally required for all operations, including restores.
However, to ease restores in the case of a full disaster recovery, all restore operations can run without any
license installed. Please note that during a Data Protector object copy session a restore agent is
used to read the data and forward it to the copy agent. This internal restore within object copy
requires a license for the reading devices (as well as for the target devices), so only
during a pure restore session is licensing not enforced.
Using the “Automating Disaster Recovery module” does not require any license.
Contents
Module 7 — Backup Devices 1
7–3. SLIDE: DP Device types ............................................................................................................. 2
7–4. SLIDE: The logical device .......................................................................................................... 5
7–5. SLIDE: Physical to logical device mapping ............................................................................... 6
7–6. SLIDE: Data Protector tape format .......................................................................................... 7
7–7. SLIDE: Tape based Storage Devices ......................................................................................... 9
7–8. SLIDE: HP Tape drive portfolio ............................................................................................... 10
7–9. SLIDE: Tape drive performance considerations ..................................................................... 11
7-10. SLIDE: Tape library terminology ............................................................................................ 13
7-11. SLIDE: SAN connected SCSI Library – example configuration ............................................... 15
7-12. SLIDE: Multiple devices........................................................................................................... 16
7-13. SLIDE: Multipath devices ........................................................................................................ 18
7-14. SLIDE: SCSI library – Autoconfiguration ................................................................................. 20
7-15. SLIDE: SCSI Library – Properties 1/4 ...................................................................................... 22
7-16. SLIDE: SCSI Library – Properties 2/4 ...................................................................................... 23
7-17. SLIDE: SCSI Library – Properties 3/4 ...................................................................................... 25
7-18. SLIDE: SCSI Library – Properties 4/4 ...................................................................................... 27
7-19. SLIDE: SCSI Library – Drive Properties 1/4 ............................................................................. 28
7-20. SLIDE: SCSI Library – Drive Properties 2/4 ............................................................................. 29
7-21. SLIDE: SCSI Library – Drive Properties 3/4 ............................................................................. 31
7-22. SLIDE: SCSI Library – Drive Properties 4/4 ............................................................................. 34
7-23. SLIDE: Device preparation on Windows ................................................................................. 35
7-24. SLIDE: Automatically discover changed SCSI address ........................................................... 38
7-25. SLIDE: Disk based Backup Devices ......................................................................................... 40
7-26. SLIDE: Virtual Tape Library - Overview .................................................................................. 41
7-27. SLIDE: Configure a VTL in Data Protector .............................................................................. 43
7-28. SLIDE: Backup to Disk device (B2D) - Overview ..................................................................... 44
7-29. SLIDE: File Library - Overview ................................................................................................ 45
7-30. SLIDE: File Library – Configuration 1/3 .................................................................................. 49
7-31. SLIDE: File Library – Configuration 2/3 .................................................................................. 50
7-32. SLIDE: File Library – Configuration 3/3 .................................................................. 52
7-33. SLIDE: Disk Staging................................................................................................................. 53
7-34. SLIDE: Device tools: Devbra, uma .......................................................................................... 55
7-35. SLIDE: Device tools cont.: SANConf, LTT ................................................................................ 58
Module 7
Backup Devices
DP Device types
Tapes or disks: what should you use for backup? This is a valid question.
The answer is: “It depends . . .”
Years ago, magnetic tape was the only medium used to store backup data. Until
the mid-1980s a tape stored 135 MB of data: roughly six times more than the disks available
at the time, and eight times cheaper. Today, tape and disk have roughly the same
capacity and are available for the same money.
Both media technologies have advantages and disadvantages. If we compare access time, power
consumption, reliability, read/write speed, I/Os per second, RAID, vaulting and other factors,
choosing the right backup medium becomes more complex. Data Protector supports all important
devices sold by HP and other vendors. The DP Device Support Matrix lists tape-based devices from
HP, IBM, Quantum, Sony, Exabyte, Tandberg, ADIC and others.
This module shows you what kinds of tape-based and disk-based devices can be configured in
Data Protector.
Standalone
A standalone device can be a tape, file or null device.
Tape:
A standalone tape device is a simple device with one drive that reads from or writes to one medium
at a time.
These devices are normally connected to one system and are used for small-scale backups. There
is no robotics and there are no repository slots. As soon as the medium is full, an operator must
manually replace it with a new medium for the backup to proceed.
Disk:
The standalone file device is the simplest disk-based device. It is a file in a specified directory to
which data is backed up instead of writing to a tape. This device saves data in the form of files. The
file can be located on a local or external hard drive (as long as Data Protector knows its path). The
path is specified when configuring the file device.
Null:
The null device is an operating-system-specific special file (/dev/null on UNIX, NUL on Windows) that
can be configured in Data Protector as a File device to help test backup performance. The null
device discards all data written to it, but reports all writes as successful. The null device is therefore a
useful test tool for gauging the performance of a backup up to the point where the data would be
written to a real device.
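The same null-device idea can be tried outside Data Protector. The following Python sketch (chunk size and data volume chosen arbitrarily) measures how fast a producer can write to the OS null device:

```python
import os
import time

def null_write_rate(total_mb: int = 64, chunk_kb: int = 256) -> float:
    """Write total_mb of zeros to the OS null device and return MB/s.

    The null device discards the data while acknowledging every write, so
    the measured rate reflects only the producing side of the pipeline --
    the same idea behind configuring a null device in Data Protector for
    backup performance testing. The 256 KB chunk mirrors the default
    Data Protector block size; the data volume is arbitrary.
    """
    chunk = b"\0" * (chunk_kb * 1024)
    writes = (total_mb * 1024) // chunk_kb
    start = time.monotonic()
    with open(os.devnull, "wb") as null:
        for _ in range(writes):
            null.write(chunk)
    elapsed = time.monotonic() - start
    return total_mb / elapsed if elapsed > 0 else float("inf")

print(f"null device write rate: {null_write_rate():.0f} MB/s")
```

In Data Protector itself the producing side includes the Disk Agent, network and Media Agent, so the measured ceiling is for the whole chain up to the device.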
B2D Device
The Backup-to-Disk (B2D) device is a relatively new device type. It consists of two parts, the ‘gateway’ and
the ‘store’. It can be based on a physical box, the HP StoreOnce appliance, or it can be completely
configured on a DP client, similar to a file library. Its main feature is a very efficient de-duplication
engine, running either on the HP StoreOnce appliance or on the DP client. The B2D
device, de-duplication, and the different ways to configure it are described in detail in the module
“De-duplication”.
Stacker
A stacker is a single device that usually has only one drive. It loads media in a sequential order. A
stacker takes a medium from a "stack" (its repository) and inserts the medium into its drive. This
exchange is always limited to ejecting the medium already in the drive and inserting the next
medium from the stack. The load is done automatically, except the first medium has to be loaded
manually. When a tape is full, it is ejected and the next tape is loaded automatically. When all the
tapes are used in a stacker magazine, the magazine has to be dismounted manually and the next
one has to be inserted. Again the first tape has to be loaded manually into the drive.
SCSI Library
SCSI library devices are large backup devices. This can be an ‘Autoloader’ with ~40 cartridges and 2
or 4 drives, or a ‘Tape library’ with several hundred media. A robotic mechanism loads and unloads
the drives.
A typical library device has a SCSI ID (Windows) or a device file (UNIX) for each drive in the device
and one for the library’s robotic mechanism. For example, a library with four drives has five SCSI
IDs: four for the drives and one for the robotic mechanism. Many library devices use media with
barcodes, which Data Protector can use for media labeling, quick scanning of a library’s repository
and identification of cleaning media.
Jukebox
The jukebox is a library device which was created for magneto-optical jukeboxes, e.g. the HP
StorageWorks 2200mx. It is mainly used for archiving data. The optical jukebox device is configured
as a set of disks representing each side of the optical platters in the jukebox.
If the device is used to contain file media, it is known as a ‘file jukebox device’. It contains slots
whose size is defined by the user during the initial device configuration. If used to contain file media,
the device writes to disk instead of tape. The file jukebox device saves data in the form of files;
each of these files is the equivalent of a slot in a tape device.
For further information on the file jukebox device, see the Jukebox (File) section.
File Library
A file library device is a device which resides in a directory on an internal or external hard disk drive
and consists of a set of directories. When a backup is made to the device, files are automatically
created in these directories. The files contained in the file library directories are called file depots.
The file library device can be located on a local hard drive or on a network share, as long as Data
Protector knows its path. The directory path is defined when configuring the file library device.
The file library can be used for disk staging, which is explained later in this module.
External Control
External control is a means to control libraries not known to Data Protector. If Data Protector does
not support a particular device, a user can write a script or program that runs the robotic control to
load a medium from a particular slot into the specified drive.
ACSLS Library
Using Oracle StorageTek Automated Cartridge System Library Software (ACSLS) it is possible to
manage several libraries from one single point. The software is used to manage large library silos
like the Oracle StorageTek Powderhorn model.
Physical properties: device file, SCSI path, serial ID, control host, data host, repository slots, mail slot, cleaning slot (stored in the MMDB)
Data Protector does not reference physical devices directly; instead, it uses a logical
representation of the physical device, known as a logical device. This logical representation allows
for easier, more flexible configuration and management of devices.
In order to use a physical device with Data Protector, a logical device must be configured: for each
physical device to be used by Data Protector, at least one logical device must be configured
against it.
The logical device is made up of physical properties (taken from the physical device; such as the
device path, SCSI ID, Serial ID) and logical properties (automatically set by Data Protector or
manually configured; such as device name, device type, device options). The logical device
definition is stored in the Data Protector Media Management Database (MMDB).
Logical devices are used for operations involving access to devices (such as scanning media, media
initialization, formatting media, backup and restore).
[Slide diagram: physical-to-logical device mapping examples showing one-to-one, one-to-many and many-to-many mappings between logical devices and library drives (HP LTO 6, HP ESL E-Series, HP MSL 4048).]
As Data Protector uses a logical representation of the physical device, known as a logical device, it
provides flexibility in how devices can be configured. The physical device must be configured
and mapped to at least one logical device; however, it can also be configured as multiple logical devices.
There are numerous ways in which a device can be configured in Data Protector. With a logical
device it is possible to:
• Configure a SAN-based device as one logical device that uses multiple paths (one-to-one)
• Configure a single physical device as multiple logical devices, with each logical device
having a different name and set of properties (one-to-many)
• Configure a physical device to have many device files, with a separate logical device
configured for each device file (one-to-many)
• Configure many single physical devices into one single logical device, known as a device
chain (many-to-one)
• Configure a single physical library into multiple logical devices, with each logical device
configured with a subset of the physical library’s available drives and slots. This is known
as library partitioning (many-to-many)
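As a conceptual sketch (the class and attribute names are illustrative, not Data Protector internals), the one-to-many case can be modeled like this:

```python
from dataclasses import dataclass

@dataclass
class PhysicalDevice:
    """Physical properties, as recorded for a device in the MMDB."""
    serial_id: str
    device_path: str

@dataclass
class LogicalDevice:
    """Logical properties plus a reference to the underlying physical device."""
    name: str
    device_type: str
    physical: PhysicalDevice

# one-to-many: one physical drive exposed as two logical devices with
# different names (and, in practice, different options)
drive = PhysicalDevice(serial_id="HU123456", device_path="Tape0:0:1:0")
fs_dev = LogicalDevice(name="FS_Backups", device_type="lto", physical=drive)
db_dev = LogicalDevice(name="DB_Backups", device_type="lto", physical=drive)

assert fs_dev.physical is db_dev.physical   # both map to the same drive
```

The other mapping variants differ only in the cardinality of this reference: a device chain points one logical device at several physical ones, and library partitioning splits one library's drives and slots across several logical devices.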
[Slide diagram: Data Protector tape format. A tape image consists of data segments (2000 MB default segment size) with catalog information at the end of each segment, followed by an end-of-data (EOD) marker. Block size and data segment size are tuned in the DP GUI under Device Settings, Advanced, Sizes; the catalog size is tuned via the omnirc variable OMNIMAXCATALOG (default = 12 MB, 60 MB = max).]
Tape Sections
• Dynamic Segment Size --- Segment size is not fixed, but rather variable. The segment size
parameter is used to specify the maximum size of the segment on tape. The segment size used
is determined by the segment size parameter, or a system specific parameter named
OMNIMAXCATALOG_<device_name>. By specifying a catalog size per device on a particular
system, you can limit how large the catalog segment will be on the tape. The default segment
size is 12 MB, and can range from 1 to 60 MB. Data Protector may adjust the size of the
segment if the catalog reaches the defined limit. The catalog size takes precedence over the
specified segment size. The parameter to define the catalog limit must be in the omnirc file on
the system where the device is connected.
• Data Blocks --- Data stored within the segments is written in blocks. The block size for all
Data Protector devices is 256 KB by default. This default is now used for both UNIX and
Windows devices. Set the block size to equal values if you want to exchange tapes
between different devices. In many cases, when backing up a large data set, a larger block size
may improve performance.
• Catalog Information — Catalog information is stored after each segment is written and
records what data (file names, etc. ...) was backed up in that segment. When the data is written
to tape, the catalog information is kept in memory and then written to the tape at the end of
each segment. The larger the segment, the more memory is required to keep the backup
information. The catalog information is also stored within the Data Protector database. This
information is later used during the restore process. Catalog information may be read from the
tape into the database by performing a media import. (Media Import is covered in the next
module). The size of the catalog per segment by default is 12 MB, but can range from 1 to 60
MB.
• Block Size — You can change the block size for a device for better performance or
compatibility; be mindful of the following when doing so:
• Each logical device has a default block size that can be customized
• The default block size is set based upon the type of device
• Data Protector adjusts the block size automatically during a restore
• A Data Protector backup cannot append to a tape originally written with a different block
size than the one configured for the current device
• Some versions of Omniback (pre-Data Protector) do not support the same block size
features as the current release. Consult your device/interface documentation to
verify support for larger block sizes.
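As a back-of-the-envelope illustration of how these defaults interact, the following sketch estimates the number of data segments and the worst-case catalog volume for a backup, assuming the 2000 MB default segment size and 12 MB default catalog size mentioned in this module:

```python
import math

# Defaults mentioned in this module: 2000 MB maximum data segment size
# and 12 MB catalog size per segment.
SEGMENT_MB = 2000
CATALOG_MB = 12

def segment_estimate(backup_mb: int) -> tuple:
    """Return (number of data segments, worst-case total catalog MB).

    Catalog information for a segment is held in memory until the segment
    is written, so larger segments need more memory per segment, while the
    total catalog volume grows with the number of segments.
    """
    segments = math.ceil(backup_mb / SEGMENT_MB)
    return segments, segments * CATALOG_MB

print(segment_estimate(10_000))   # a 10 GB backup -> (5, 60)
```

Actual catalog sizes depend on the number and names of the files backed up, so this is an upper bound under the stated defaults, not a prediction.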
For Data Protector tape backup device support refer to the latest version of the
Device Support Matrix (available on the HP SSO portal)
In the past, a backup was the process of dumping critical data to a locally attached tape device.
Of course this way of performing server backups has changed over the years, but tape-based backups
were the only reliable backup solution for a very long time. In the last few years, with the
development of high-capacity, fast-access disk devices, their role has changed, so you typically find
a mixture of disk- and tape-based backup in today’s backup concepts. Highly available systems
and replicated data in hot or standby mode reduce the need for backup and therefore the need for
tape backups. But tape backup technology has made incredible progress: today’s tape media can
hold several TB of data, the data can be encrypted and protected against overwrites, and the media
are easy to transport.
On the following pages we will explain how Tape based Storage Devices are handled in Data
Protector.
* To help keep the LTO Ultrium drive streaming, HP Adaptive Tape Speed (ATS) adjusts the drive to the data rate from the source.
** Provides encryption with an LTO-4 medium only.
The slide above provides an overview of the various tape drives supported by HP
libraries. The first column lists the maximum capacity of the drive-specific media; the next column
lists the maximum native transfer rate. Next are the supported interface type, whether WORM
(write once, read many) is supported, and whether drive-based data encryption is supported.
Often, the transfer rate is quoted for compressed data, assuming a compression ratio of 2:1. However,
the compression ratio varies with the data, and the likelihood of an average compression ratio
of exactly 2:1 is very small. With database file backups a higher compression ratio may be seen; with other
data (e.g. MS Office documents, PDFs, binaries) the compression ratio is usually much smaller.
Therefore, the table above lists the native speed. If the data is highly compressible, the throughput
might be much higher. If the data cannot be compressed, the maximum throughput will be the
native rate.
Example calculation:
Backup device: LTO-5 (140 MB/s)
Average Data compression ratio:
• SAP R/3 data: 3.5:1
• FS data: 1.5:1
Resulting Read Performance:
• 3.5 x 140 MB/s = 490 MB/s (for SAP R/3 data)
• 1.5 x 140 MB/s = 210 MB/s (for FS data)
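The calculation above generalizes: to keep a drive streaming, source data must arrive at the native rate multiplied by the assumed compression ratio. A minimal sketch:

```python
def required_feed_rate(native_mb_s: float, compression_ratio: float) -> float:
    """MB/s of source data needed to keep a drive streaming.

    A drive writing native_mb_s after compressing the stream by
    compression_ratio:1 consumes source data that much faster.
    """
    return native_mb_s * compression_ratio

# LTO-5 at 140 MB/s native, as in the example above:
print(required_feed_rate(140, 3.5))   # 490.0 MB/s for SAP R/3 data
print(required_feed_rate(140, 1.5))   # 210.0 MB/s for file system data
```

The compression ratios here are the averages assumed in the example; real ratios vary with the data, so treat the result as a planning figure.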
Technical Challenge:
Can the data be provided fast enough from disk, over the SAN/LAN via the interfaces to the system and
from there via other interfaces over SAN/SCSI to the backup device? How many processes are running
in parallel to access: disks, interfaces, libraries?
Note:
Use native tools such as HP Library & Tape Tools (L&TT) to determine possible disk and tape performance.
The capabilities and limits of the tape device and the system infrastructure (file size, directory depth
and data compressibility) have an impact on system performance. Actual transfer rates depend upon
the type of drive, the number of drives, and the number of drives connected to the SCSI bus or Fibre
Channel. The library robotics imposes minimal load on the bus. Tape drives also support a burst
rate that is higher than the sustained rate, but limited by the bus/channel speed.
Assumptions:
The compression ratio describes an average, which implies that a considerable amount of data
might have much higher or lower compression. Nevertheless, it is recommended to provide the data
to the drive at the speed calculated from the compression ratio in order to sustain the drive.
To sustain maximum drive speed (LTO-5) for SAP R/3 file system data with an average compression
ratio of 3.5:1, data needs to be provided to the drive at a rate of 490 MB/s.
To sustain maximum drive speed (LTO-5) for file system data with an average compression ratio of
1.5:1, data needs to be provided to the drive at a rate of 210 MB/s.
Therefore, the HBA (SCSI or Fibre Channel) should be able to transfer data at maximum tape speeds.
As a best practice, it is recommended to use a separate HBA for each tape drive. It is also important
to check that the Storage Area Network (SAN) has enough bandwidth. SAN switches, for example,
have tools for measuring performance, which can be used to ensure that the SAN provides the
needed bandwidth. In addition, tools such as HP L&TT can be used to check disk and tape
(read and write) performance.
Types of Connection
The type of connection between the servers and clients to be backed up and the secondary storage
system affects the backup performance. This connection is typically one of the following:
• Directly connected tape device: Devices connected directly to the server through a SCSI or
USB connection.
• Network connection between client and backup server: The LAN bandwidth affects the
speed at which data can be transmitted between the client devices and the backup server.
• Fibre Channel connection between backup server and tape device: Data transmitted over a
Fibre Channel connection to the tape device is very fast: 4 Gb/s ≈ 400 MB/s and
8 Gb/s ≈ 800 MB/s. These are theoretical values; roughly 80% can be reached in practice.
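The relationship between theoretical link speed and realistic throughput can be sketched numerically. The 80% efficiency factor is the rule of thumb from the text; the function name is illustrative:

```python
# Sketch: theoretical vs realistic Fibre Channel throughput.
# FC uses 8b/10b encoding at these speeds, so usable MB/s is roughly
# link Gb/s x 100; the ~80% factor is the real-world rule of thumb.

def fc_throughput_mbs(link_gbps: int, efficiency: float = 0.8) -> tuple:
    theoretical = link_gbps * 100   # e.g. 4 Gb/s -> 400 MB/s
    realistic = theoretical * efficiency
    return theoretical, realistic

for gbps in (4, 8):
    theo, real = fc_throughput_mbs(gbps)
    print(f"{gbps} Gb/s FC: {theo} MB/s theoretical, ~{real:.0f} MB/s realistic")
```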
Repository slots
• Cartridge/magazine slot
• Storage element
Media exchanger
• Robotics
• Transport element
Mail slot(s)
• Import/export slot
• Eject element
Barcode reader
The tape library is a complex system used for near-line or off-line storage of data. Data is typically
written onto high capacity tape cartridges by utilizing multiple tape drives simultaneously. The
tape library system differs from a standalone tape drive in many ways, not least is the automated
handling (load and unload) of media to and from the embedded tape drives. Most tape library
systems contain the following components:
• Tape drive(s)
• Repository slots
• Media transport/exchanger/robotic
• Mail slot(s)
• Barcode scanner
• Management interface
The SCSI interface within the library presents the various components to all attached host systems
as objects. There are usually four objects, commonly referred to as elements, that are presented:
• Drive
− Tape drives such as DLT, SDLT, LTO which are used to write data to and from media
− In SCSI terms, “Data Transfer Element”
• Slot
− Repository of tape cartridges, stored in magazines in the library, where the physical
media is located and held in a library
− In SCSI terms, “Storage Element”
• Transport
− The robotic that moves tape cartridges between slots and drives
− In SCSI terms, “Medium Transport Element”
• Ports
− Commonly referred to as import/export slot or mail slot, a means to move media into
and out of the library
− In SCSI terms, “Import/Export Element”
In addition, the library may have a barcode scanner for media identification using the media barcode,
and a management interface to allow for easier management of the library.
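The component-to-element mapping described above can be kept as a simple lookup table. The names follow the SCSI terminology listed in the text; the table itself is purely illustrative:

```python
# Sketch: library component -> SCSI element name, as described above.

SCSI_ELEMENTS = {
    "drive":     "Data Transfer Element",     # tape drives (DLT, SDLT, LTO)
    "slot":      "Storage Element",           # repository of tape cartridges
    "transport": "Medium Transport Element",  # the robotic
    "port":      "Import/Export Element",     # mail slot
}

print(SCSI_ELEMENTS["transport"])
```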
• Data Protector has two possibilities for SAN-based device configuration: multiple devices or multipath
A Storage Area Network (SAN) is a network dedicated to data storage. The SAN provides off-loading
storage operations from application servers to a separate network. Data Protector supports this
technology by enabling multiple hosts to share storage devices connected over a SAN, which allows
multiple-system to multiple-device connectivity. Therefore, each device in the SAN can be accessed
through several paths.
The slide above provides an example of a small SAN environment. In this example, we have
multiple systems connected to a SAN through multiple HBAs. In addition, the tape library
containing two drives is also connected with multiple fabrics. Therefore, we have the possibility for
each system to access the library and drives through multiple paths
(multiple-system to multiple-device connectivity).
There are two possibilities when configuring SAN attached devices with Data Protector:
• Multiple Devices configuration
• Multipath Device configuration
Multiple devices
• One logical device is configured for each physical path available between a system and a device
• A Lock Name is used to avoid conflicts where Data Protector tries to use two or more logical devices
that point to the same physical device
• One control path for the robotics is configured; to allow other systems to control the robotics in
the event of a failover, direct access can be configured using the libtab file
• Multiple devices is no longer the recommended way to configure SAN attached devices in Data
Protector
Multiple devices
Multiple Device configurations require a separate logical device to be created for each host
connected to each physical device. One logical device is configured for each physical path available
between a system and a device. Therefore one physical device is configured as multiple logical
devices, each one corresponding to a physical path that exists between a drive and a system. This
approach also covers multiple HBAs on one system. To eliminate potential conflicts arising due to
Data Protector attempting to use two or more logical devices that point to one physical device, a
lock name is deployed.
The lock name is identical for all logical device definitions which use the same physical device. It is
assigned during the device configuration. It can be subsequently changed manually if needed. The
lock name consists of an alphanumeric string. The default lock name configured when the device
has been configured through auto configuration consists of the device name, vendor, model and
serial number separated by colons:
Drive1:HP:DAT160:HU171200LU
In the slide above, we have an example of a Multiple Device configuration. Both system 1 and
system 2 are configured with drive 1 and drive 2. A logical device is created for each path between
the system and the physical drive. Therefore, four logical devices have been created for the two
physical drives in the library:
• System 1 to Drive 1
• System 1 to Drive 2
• System 2 to Drive 1
• System 2 to Drive 2
The path to be used would be selected when configuring the backup specification. For failover,
another drive would be included in the backup specification and load balancing configured.
For the library robotics, only one control path can be configured. To allow other systems to control
the robotics in the event of a failover, direct library access can be configured on each system
requiring access to the robotic using the Data Protector libtab file. The libtab file must be created
manually at the following locations on all systems where direct library access is needed for this
functionality to work:
Windows: DP_HOME\libtab
UNIX: DP_HOME/.libtab
The libtab entry consists of the system name, the address of the robotic and the name of the
library.
Example:
libtab file on Windows system ‘zala’
zala.company.com scsi:2:0:0:0 SAN_LIB_1_zala
zala.company.com scsi:2:0:0:0 SAN_LIB_2_zala
zala.company.com scsi:2:0:0:0 SAN_LIB_3_zala
zala.company.com scsi:2:0:0:0 SAN_LIB_4_zala
Example:
libtab file on HP-UX system ‘fiat’
fiat.company.com /dev/spt/lib SAN_LIB_1_fiat
fiat.company.com /dev/spt/lib SAN_LIB_2_fiat
fiat.company.com /dev/spt/lib SAN_LIB_3_fiat
fiat.company.com /dev/spt/lib SAN_LIB_4_fiat
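The libtab line format shown above (system name, robotic address, library name) can be parsed with a few lines of code. This is a minimal, assumed parser for illustration only; the real file may allow variations not handled here:

```python
# Sketch: parse libtab lines of the form
#   <system name> <robotic address> <library name>
# as in the Windows and HP-UX examples above.

def parse_libtab(text: str) -> list:
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        system, address, library = line.split(None, 2)
        entries.append({"system": system, "address": address, "library": library})
    return entries

sample = "fiat.company.com /dev/spt/lib SAN_LIB_1_fiat"
print(parse_libtab(sample))
```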
The Multiple Device approach continues to be supported; however, configuring multiple devices,
lock names, and libtab files is impractical for large environments, considering the number of logical
devices that need to be configured and managed. For example, if 10 systems were connected to a
single device, 10 logical devices with the same lock name would need to be configured.
Therefore, the recommended method for configuring devices within a SAN in Data Protector is
Multipath.
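The scaling argument above can be expressed as a simple count. The function name is illustrative; the figures match the 10-system example and the four-device slide configuration:

```python
# Sketch: why Multiple Device configurations do not scale. With one logical
# device per host/drive path, the count grows multiplicatively, while
# Multipath needs only one logical device per physical drive.

def logical_devices(hosts: int, drives: int, multipath: bool) -> int:
    return drives if multipath else hosts * drives

print(logical_devices(10, 1, multipath=False))  # 10 devices, same lock name
print(logical_devices(10, 1, multipath=True))   # 1 multipath device
print(logical_devices(2, 2, multipath=False))   # the 4-device slide example
```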
Multipath devices
• Assign multiple paths (host name and SCSI address/device file) to a single physical device
• Simplifies device configuration
• Easier management of devices
• Increases system resilience
Multipath devices
With the Multipath device configuration approach, only one device is created per physical device
and the physical paths between the system and device are stored as attributes of the logical
device. Multipath allows multiple paths (for example, system plus SCSI address) to physical devices
to be defined in one logical device.
Both drives and the robotic are supported with Multipath and can include mixed platforms. During
auto configuration, for each device, the various paths from the systems to the device that are
identified by Data Protector are stored as part of the device definition. A priority can then be
assigned to each of these paths. This priority can be changed at any time. When Data Protector
uses a Multipath device, it will use the paths according to the priority set in the device definition.
In the slide above we have an example of a Multipath configuration. Both system 1 and system 2
are configured with drive 1 and drive 2. One logical device has been created for the two drives and
for the robotic. For each logical device created, the available paths from the systems to the device
are included in the logical device definition. Here you can see that drive 2 has two paths configured,
one to each system in the example given:
To automatically configure a SCSI library, select Devices & Media in the Context List. In the Scoping
Pane, right click on Devices:
1. Select Autoconfigure Devices. In the results pane, all client systems in the cell that
support automatic device configuration will be listed. In the example, both client
systems are listed.
Once the device discovery utility devbra has completed the scan, the results pane will contain a list
of the devices discovered and available to be configured. Each discovered library has its robotic
path and all associated drives listed.
Listed under each client system are the available paths. Two robotics paths are listed because
Multipath is selected by default and, in the example environment, there are two client systems.
In the example configuration, the two client systems, the library, and its two drives are listed. In the
results pane:
2. Select each client system to be configured with the library robotic path and drives. In
the example configuration, both systems are to be configured with the library robotics
and the two drives, so all available paths are selected to be configured.
3. The option to select the Automatically discover changed SCSI address feature is
available. The feature is not selected by default. In the example configuration this
option is selected.
Those paths selected will now be configured and the configuration completed. The
logical device is now ready to be used with Data Protector.
The auto device configuration process may produce the desired device settings but in many cases
you may wish to change the default device properties. Once the auto device configuration is
completed, all the library and drive properties can be viewed and changed as needed in the Data
Protector GUI. Starting with the library, in the Library Properties General tab:
• When configuring a virtual device with capacity based licensing, the Virtual tape library – TB
based licensing (Advanced backup to disk) must be selected after the auto configuration is
complete. This is because Data Protector cannot determine whether it is to be a real tape
device or an emulated device during the auto configuration, and so this option is not
automatically configured.
• The Management Console URL (web access) to the tape library is not discovered
automatically and must be entered manually. This allows users to open the library
management console quickly and conveniently from the Data Protector GUI.
1. Select the library and switch to the Control tab
2. Select the client
3. Discover the path
4. Add the path
The auto device configuration process associates the library (robotics and drives) to the configured
paths (client systems and SCSI address). All the configured paths can be viewed in the library
properties Control tab. In the list of configured paths, Data Protector allows the selection of the
preferred path for each robotic and drive. The set of available paths constitute the failover
mechanism. The first path in the list is the preferred path. If the preferred path fails, the Media
Agent attempts to use the next configured path until a usable path is found; if none of the listed
paths are usable, the session aborts. This failover/preferred path mechanism is used for non-local
Disk Agent access to the devices via the Media Agents. The arrow buttons can be used to move the
selected path up or down the list or to move it to the first or last position. The Apply button must be
selected after any changes in order to save the path priority in the Media Management Database
(MMDB).
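The failover behavior described above can be sketched as a loop over the prioritized path list. The function and the `try_path` callback are illustrative stand-ins for the real Media Agent logic:

```python
# Sketch of preferred-path failover: try configured paths in priority order;
# if none is usable, the session aborts. try_path stands in for the real
# Media Agent connection attempt.

def select_path(paths, try_path):
    for path in paths:           # first entry = preferred path
        if try_path(path):
            return path
    raise RuntimeError("no usable path - session aborts")

paths = ["system1:scsi:2:0:0:0", "system2:scsi:3:0:1:0"]
# Assume the preferred path has failed and the second one works:
chosen = select_path(paths, lambda p: p.startswith("system2"))
print(chosen)  # -> system2:scsi:3:0:1:0
```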
In addition to managing the preferred paths already configured, paths can be deleted, added, and
further prioritized. To add a new path to the device configuration:
1. First select the library where the properties are to be changed; right click on the library
in the scoping pane, select Properties, and then select the Control tab.
2. Select the client system on which the path is to be configured.
3. Press the arrow at the right hand side of the field labeled SCSI address of library
robotic. This starts the path discovery on the selected client system. The paths
available will then be listed. Select the path to be configured.
4. Click Add. The selected path will then be added to the list of Configured paths below.
5. To prioritize the configured paths for failover; use the arrow buttons to move the
selected path up or down the list, or to move it to the first or last position. Use the
Apply button if any changes to the device configuration are to be saved in the MMDB.
To delete a configured path, simply select the path and use the Delete button.
Other options in the device properties that can be configured from the Control tab are busy drive
handling, barcode reader support, changed SCSI address discovery and SCSI reserve and release.
The Busy drive handling option determines what Data Protector should do if it encounters
unexpected media in a drive. For example, the media may be left in a drive by an operation
outside of Data Protector or because of a Data Protector session that was unable to complete
successfully. There are three available options that can be selected:
For the backup to continue automatically, one of the eject options should be selected. As the
ejected media may be moved to an unknown slot, the library should be scanned before the next
backup.
Many libraries and media support barcode readers; in order to enable Data Protector to use this
feature, the Barcode reader support option must be selected. There is also the option to select Use
barcode as medium label on initialization; with this option selected the barcode will be written as a
medium label to the medium header on the tape each time a medium is initialized. If this option is
not selected, Data Protector will generate medium labels based on media pool names.
The Changed SCSI address discovery feature detects and manages device replacements and SCSI
path changes caused by SAN modifications and system reboots. This option is by default not
selected.
The option SCSI Reserve/Release (robotic control) prevents the SCSI robotic control from being
used by any other process or application, reserving the robotic control only for Data Protector
operations. This option should only be selected when the device is shared between Data Protector
and another application or if the device is shared between two Data Protector cells that do not have
a Centralized Media Management Database (CMMDB). If the device is to be used by Data Protector
only within one Data Protector cell or in multiple Data Protector cells employing a CMMDB, do not
select this option.
The configuration of the Repository allows for all or some of the available slots to be selected and
allocated to a particular library. The slots can be specified in a range or as individual slot numbers.
The slots do not need to be sequential although this is most common. It is possible for a physical
library to be configured as more than one logical device; each logical device allocated a different
set of physical slots. This is particularly useful when the library contains more than one device type
(for example, LTO and DLT).
The Cleaning slot option allows you to specify which (if any) of the repository slots contains a
cleaning tape. If configured, Data Protector will use this slot with any logical device that has the
Detect dirty drive option enabled. At the time of a backup, if the drive issues a ‘cleanme’ request,
Data Protector will load the cleaning tape from the cleaning slot and return it once the drive has
been cleaned. The Detect dirty drive option is configured in the drive properties.
The Settings tab is where the Media Type used in the library is displayed and cannot be modified
once the library is configured.
• Multipath device configured as default
As with the library and the robotic, all the drive properties can be modified once the auto
configuration has completed. Select the drive in the scoping pane and then in the General tab:
1. Select the drive
2. Select the client
3. Discover the path
4. Add the path
With Multipath configured, in the Drive tab all the configured paths to the drive will be listed. This is
identical to the path configuration for the library robotic. The set of available paths constitute the
failover mechanism. The first path in the list is the preferred path. If the preferred path fails, the
Media Agent attempts to use the next configured path until a usable path is found; if none of the
listed paths are usable, the session aborts. The arrow buttons can be used to move the selected path
up or down the list or to move it to the first or last position. The Apply button must be selected
after any changes in order to save the path priority in the Media Management Database (MMDB).
In addition to managing the preferred paths already configured, paths can be deleted, added, and
further prioritized.
1. First select the drive where the properties are to be changed; right click on the drive in
the scoping pane, select Properties, and then select the Drive tab.
2. Select the client system on which the path is to be configured.
3. Press the arrow at the right hand side of the field labeled SCSI address of data drive.
This starts the path discovery on the selected client system. The paths available will
then be listed. Select the path to be configured.
4. Click Add. The selected path will then be added to the list of Configured paths below.
5. To prioritize the configured paths for failover; use the arrow buttons to move the
selected path up or down the list, or to move it to the first or last position. Use the Set
and Apply buttons if any changes to the device configuration are to be saved in the
MMDB.
To delete a configured path, simply select the path and use the Delete button.
Additional options configurable on the Drive tab are hardware compression, automatically discover
changed SCSI address and drive index.
Most modern backup devices provide built-in hardware compression. To enable this in Data
Protector the option Hardware Compression can be selected. If this option is set, Data Protector
sends the device an instruction to use hardware compression. A device receives the original data
from the Media Agent client and writes it to the tape in compressed mode. Hardware compression
increases the speed at which a tape drive can receive data, because less data is written to the tape.
For multipath devices, this option is set for each path separately.
The Changed SCSI address discovery feature detects and manages device replacements and SCSI
path changes caused by SAN modifications and system reboots. This option is by default not
selected.
The Drive index is the number that identifies the mechanical position of a drive inside a library
device. This number is used by the robotic control to access a drive.
• Default Media Pool
• Mount request options
• Configure concurrency
• Lock name is automatically configured
• Select additional options
The Settings tab displays the media type and cannot be modified once the library is configured. The
Default Media Pool allows a specific media pool (for that media type) to be used by the drive. When
initializing or importing media; the drive will add the media to this default pool. When the drive is
used for backup, media from this default pool will be used. An existing pool can be selected from
the drop-down list, or a new media pool can be created by entering its name in the available box.
In addition, on the Settings tab is the Advanced button, click this button in order to configure
advanced drive properties. Once selected, an Advanced Options window is opened with three tabs;
Settings, Sizes and Other.
• Concurrency: This defines the maximum number of concurrent data streams (from disk
agents or application specific agent) that the device will receive. Setting this to an optimum
value for a particular device type allows the device to stream. The default configured is
dependent on the drive type.
• Eject after session: Only needs to be considered when using a standalone drive, this option
specifies whether the tape should be ejected after the operation accessing it has been
completed. By default, this option is not selected
• CRC Check: The CRC check is an enhanced checksum function. When this option is selected,
cyclic redundancy check sums (CRC) are written to the media during backup. The CRC check
allows the media to be verified after a backup. Data Protector re-calculates the CRC during
a restore and compares it to the CRC on the medium. It is also used while verifying and
copying media or verifying objects
• Rescan: This option is only available for drives in a library. This option instructs Data
Protector to rescan the device repository before a backup starts. This is useful if manual
media changes were performed since the last media scan. This rescan synchronizes the
Data Protector media database with the media that is currently present within the library
repository. For devices that support barcode readers, this is a barcode scan; otherwise the
scan requires each tape to be loaded into a drive to scan the header. By default, this option
is not selected
• Detect dirty drive: When selected Data Protector will detect when a drive is in need of
cleaning. When the drive sends a ‘cleanme’ request, Data Protector will either
automatically insert the cleaning tape into the drive itself or issue a mount request for a
cleaning tape to be loaded. By default, this option is not selected
• Use direct library access: Only needs to be configured for non-Multipath devices. When
configuring devices with Multipath, this option does not need to be selected. By default,
when configuring Multiple Devices, the library robotics is configured as belonging to only
one host. This option enables every system to send control commands directly to library
robotics. In the case of multiple systems operating the same library, this communication
has to be synchronized. The libtab file must be created on the Media Agent client for this
functionality to work. If direct access is enabled for multipath libraries, local paths (paths
on the destination client) are used to control library robotics first, regardless of the
configured path order
• Block size: The device hardware processes data it receives using a device type specific
block size. Data Protector allows the adjustment of the size of blocks it sends to the device.
The default for all devices is now 256 KB (introduced with Data Protector 8.0). For Data Protector
to use tapes for backup in different devices, the block size must be set the same for all
devices.
• Segment size: Use this drop-down list to enter the size of the data segments on the media.
The segment size affects the speed of restore and of the import of media. A smaller
segment size requires additional space on the media because each segment has a fast-
search mark. The additional fast-search marks result in faster restores because the Media
Agent can quickly locate the segment containing the restore data. However, with smaller
segments there are more catalog segments, which makes the importing of media slower.
An optimal segment size depends on the media type used in the device and the kind of data
backed up. The default segment size depends on the media type. The minimum value you
can specify is 10.
• Disk agent buffers: The Data Protector Media Agent and Disk Agent use memory buffers
during data transfer. This memory is divided into a number of buffer areas. The buffer size
is the number of Disk Agent blocks that a Media Agent can hold in its buffer. Values from 1-
32 can be specified. The default number of Disk Agent blocks is 8. There are two basic
reasons to change this setting; shortage of memory or lack of streaming. The shared
memory required for a Media Agent can be calculated as follows:
DAConcurrency*NumberOfBuffers*BlockSize
Reducing the number of buffers from 8 to 4, for instance, results in a 50% reduction in
memory consumption, which could have performance implications. If the available
network bandwidth varies significantly during backup, then it becomes more important
that a Media Agent has enough data ready for writing to keep the device in streaming
mode. In this case, increase the number of buffers.
• Mount request (Delay & Script): The script to be executed after a mount prompt request
has been outstanding for the number of minutes configured as the Mount Prompt Delay.
The default script sends an alert notification containing the relevant details. The delay is
the time in minutes that must have elapsed since a mount prompt was issued before the
script is executed. The default value for the delay is 30 minutes.
• Device Lock Name: The device lock name prevents Data Protector from using the same
physical device which has been configured as two separate logical devices. When selected,
the ‘Use Lock Name’ option will lock the device during backup and restore sessions. For
example, if you configure two logical devices using one physical device, you must use the
same lock name for both logical devices. By default, when a device is configured
automatically this option is enabled and the lock name is generated automatically by Data
Protector. When configured manually, this option is not enabled and lock names must be
entered manually.
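The shared-memory formula given above (DAConcurrency * NumberOfBuffers * BlockSize) can be checked numerically. This is a minimal sketch; the concurrency value of 4 is an assumption for illustration, while the 256 KB block size and the 8-buffer default come from the text:

```python
# Sketch: Media Agent shared memory per the formula in the text,
#   DAConcurrency * NumberOfBuffers * BlockSize
# using the DP 8.0+ default block size of 256 KB.

def ma_shared_memory_kb(concurrency: int, buffers: int, block_kb: int = 256) -> int:
    return concurrency * buffers * block_kb

default = ma_shared_memory_kb(concurrency=4, buffers=8)  # assumed concurrency
reduced = ma_shared_memory_kb(concurrency=4, buffers=4)  # buffers cut 8 -> 4
print(default, reduced, reduced / default)  # memory halves: ratio 0.5
```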
Device Policies
Device Tag
On the Policies tab, device policies can be selected. These options are only available for drives in a
library and are not enabled by default. A device with one of these options checked may replace any
device with the same Device Tag (see Device Tag below for details):
Device may be used for restore: If the original device is not available for a restore session, Data
Protector automatically selects an alternative device with the same Device Tag located in the same
library.
Device may be used as source device for object copy: If the original device is not available as
source device for an object copy session, Data Protector automatically selects an alternative device
with the same Device Tag located in the same library.
Device Tag: Specify a name for the Device Tag. Devices with the same Device Tag name can replace
each other if needed. Ensure that such devices are of the same media type and from the same
library; otherwise, the automatic replacement cannot be successful. The name can consist of a
maximum of 80 characters, including spaces.
The Windows operating system facilitates the scanning, discovery, and configuration of the device.
Once the device is connected, the Windows operating system will scan and discover the device
(either when booting the system or when requested to scan for hardware changes in the system
Device Manager) and configure the device in the operating system Device Manager.
The information listed in the Device Manager for each robotic and drive will depend on the class
drivers installed and configured. In the example below, there are no vendor-specific class drivers
installed and available on the system. Therefore, the Microsoft drivers for both the robotic and the
tape drives have been used to configure the device.
With the device configured with the Microsoft class drivers, the library robotics is listed under
Medium Changer devices as Unknown Medium Changer, and the drives are listed under Other
devices as HP Ultrium 4-SCSI Sequential Device:
Therefore, if you have the device vendor drivers installed and Data Protector is unable to see the
device, then disable those drivers and move to the Microsoft class drivers. If the device vendor
drivers are not installed and Data Protector is unable to see the device, then install the device
vendor driver and check again whether the device is visible to Data Protector.
If the device is connected to the Windows system through multiple paths then it is recommended
to enable Microsoft Multipath I/O (MPIO).
To enable MPIO:
1. Click Start, select Administrative Tools, and then click Server Manager
2. Click Features
3. Click Add Features
4. In Add Features, on the Select Features page, select the Multipath I/O check box, and
then click Next
5. On the Confirm Installation Selections page, click Install
6. Once the installation has completed, select Close
7. After restarting the system, MPIO installation will be completed.
8. Click Close
Once added, MPIO will check the system and automatically identify and manage multiple path
devices.
Once the device is visible in the Windows Device Manager and listed by the Data Protector devbra
utility, the device is available and ready to be configured in Data Protector.
• SCSI addresses can change dynamically, which can result in failed backup
sessions
• Data Protector stores the SCSI address and Device Serial ID in the
MMDB part of the Data Protector Internal Database (IDB)
• Data Protector verifies the SCSI address and Device Serial ID each time the device is used.
If they do not match, a device path discovery is started
• If a new path is discovered for the Device Serial ID, the stored configuration
is updated in the IDB and the session will start using the updated settings
• In case a SCSI tape drive has been replaced, the new Serial ID of the
replacement drive needs to be updated in the DP logical device configuration
24 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
The SCSI address of a device can change dynamically in some SAN environments. Some operating
systems assign LUNs based upon the order in which targets are presented to the operating system
during the boot process. On every boot cycle, the LUNs may shift depending on the presence or
absence of targets.
Bridges can also modify SCSI and I/O addresses if rebooted for the following reasons:
• One of the devices connected to the bridge fails to respond because it has malfunctioned
or been turned off. During the bridge reboot, all the devices connected to the bridge with
their pre-assigned addresses that follow the failed device are assigned new addresses
• A new device has been connected and configured online. In this case, the new device is
assigned the first free address available. During the bridge reboot, the sequence in which the
addresses are assigned may vary: the new device may become the first device to be
assigned a new address, which in turn causes the remaining devices to acquire new
addresses
Therefore, during a bridge reboot, all the devices connected to the bridge may be assigned a new
address.
The Automatically discover changed SCSI address feature provides a robust and tolerant solution
to dynamic SCSI address changes, enabling Data Protector to detect them and adapt accordingly.
This removes the need to reconfigure all the affected logical devices whose SCSI address has
changed.
Data Protector stores the device serial number in the IDB. When enabled, the Media Agent retrieves
the corresponding serial number during the first media operation and saves it to the IDB. From then
on, the serial number forms the basis for device identification. Automatically discover changed SCSI
address is enabled only after the serial number is stored in the IDB.
Thereafter, every time a device is used, Data Protector compares the device serial number against
the serial number stored in the IDB. If the serial numbers do not match, the following actions are
taken:
• The Media Agent invokes devbra, which discovers the device path at the new address; if
found, the path is updated in the IDB
• The updated device configuration in the IDB will then be used for all subsequent sessions.
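The verify-then-rediscover logic described above can be sketched in a few lines. This is an illustrative model only; the function, dictionary layout, and device names are hypothetical and not part of the Data Protector API:

```python
# Illustrative sketch of the "automatically discover changed SCSI address"
# logic; all names here are hypothetical, not the Data Protector API.

def resolve_device_path(idb, logical_device, scan_paths):
    """Return a usable SCSI path for the device, updating the record on change.

    idb        -- dict: logical device name -> {"path": ..., "serial": ...}
                  (stands in for the MMDB part of the IDB)
    scan_paths -- dict: SCSI path -> serial number, as a devbra-style scan
                  of the Media Agent host would report it
    """
    stored = idb[logical_device]
    # Normal case: the serial number at the stored path still matches.
    if scan_paths.get(stored["path"]) == stored["serial"]:
        return stored["path"]
    # Mismatch: discover the new path for the stored serial number.
    for path, serial in scan_paths.items():
        if serial == stored["serial"]:
            stored["path"] = path  # update the stored configuration
            return path
    # Serial number not found at all: the drive was likely replaced, so the
    # new serial ID must be updated in the logical device configuration.
    raise LookupError("serial %s not found; drive replaced?" % stored["serial"])
```

For example, if the drive's path moved between reboots but its serial number is unchanged, the sketch returns the newly discovered path and updates the stored record, mirroring the session behavior described above.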
If the Automatically discover changed SCSI address option is not selected and the SCSI address of
the device has changed, the backup session may fail because the device is no longer available to
Data Protector. Therefore, in SAN environments it is recommended to use this option.
It is not recommended, however, to use dynamic addressing in large UNIX SAN environments
because an ioscan run can be very time-consuming before it returns its output. For large SAN
environments, it is advised to activate dynamic addressing only on Windows, not in large UNIX
environments.
The Reload button and the serial number field are enabled only if the Automatically discover
changed SCSI address option is selected. The serial number field is not editable and is always
grayed out. On clicking Reload, the device serial number is replaced with the text “Reload on next
operation”.
On selecting ‘Reload’, the serial number stored in the IDB is deleted. This then allows a faulty device
to be replaced with a new device without having to configure a new logical device in Data Protector.
The first media operation on the new device will store the device serial number in the IDB to be
used for all subsequent sessions.
Out of the box, Data Protector is able to perform disk-based backups to a directory or network
share. The following Data Protector disk-based backup devices can be used to perform disk
backups to any type of storage:
• Standalone (media type: file)
• Jukebox (media type: file)
• File Library (media type: file)
• Backup to Disk device (used for StoreOnce Software Deduplication and the Non-Staging
GRE feature of the DP/VMware integration)
The only requirement for such backups is that the storage/disk/LUN is supported by the operating
system to which it is mounted/presented.
In addition, many vendors offer hardware- or software-based backup appliances (Virtual Tape
Libraries (VTL), Disk-to-Disk (D2D) appliances, StoreOnce Backup systems). These appliances
need to be listed in the Device Support Matrix before they can be used as the following disk-based
backup devices in Data Protector:
• SCSI Library (Check option: Virtual tape library)
• Backup to Disk device (in case a StoreOnce Backup System or Data Domain System is used)
The Virtual Tape Library (VTL) is a disk appliance that has special software to emulate tape libraries
including tape drives and media inside the library to the backup system. Backup software cannot
distinguish between a physical tape library and an emulated library.
The configuration of a VTL is very similar to the configuration of a SCSI Library. Because Data
Protector cannot distinguish between a physical and a virtual tape library, the administrator needs
to tell Data Protector to handle a manually configured or autoconfigured tape library as a Virtual
Tape Library.
There is a difference in the licensing of a VTL compared to a SCSI Library. A normal SCSI Library
requires licenses for each configured drive and a Slot Extension license in case of more than 60
configured slots. A VTL with the same configuration is licensed on the native disk capacity of the
VTL only (in case the VTL is exclusively used by DP; click on License details for more information
about the licensing); the number of devices and number of slots are not licensed in this case.
Therefore, the estimated library capacity consumption in TB is required after the Virtual tape
library option is checked.
You need to ensure that an Advanced Backup to Disk license (TB based) with a value equal to or
higher than the entered capacity is available on your Cell Manager.
SW-based Deduplication: StoreOnce Software Deduplication
HW-based Deduplication: StoreOnce Backup system
A Backup to Disk (B2D) device is a device that backs up data to physical disk storage and with the
introduction of the Virtual Storage Appliance (VSA) also to a virtual storage. The B2D device
supports multi-host configurations. This means that a single physical storage can be accessed
through multiple hosts called gateways. B2D devices are mainly used by StoreOnce Deduplication
backups within DP. The B2D device supports Hardware based Deduplication (B6200, B6500
StoreOnce backup systems) and Software based Deduplication using the Data Protector Software
Store.
In case of a Data Protector Software Store, you need to install a StoreOnce Software Deduplication
Agent only on the system that keeps the Data Protector Software Store.
Note: There is no need to install a StoreOnce Software Deduplication Agent on a Data Protector
client if no Data Protector Software Store is required. Deduplication functionality for
StoreOnce and Data Domain Boost is part of the regular Data Protector Media Agent.
For more details about deduplication and the B2D device, refer to Module 15 “Deduplication”.
The File Library is basically a group of files in one or more configured directories to which data is
backed up instead of to a tape.
The File Library is the most sophisticated disk-based device available in Data Protector and, as a
result, has a number of benefits over the jukebox file device. Indeed, the jukebox file device was
originally introduced to Data Protector for testing purposes and is therefore not as sophisticated or
as powerful as the File Library. Therefore, when backing up to disk with Data Protector, it is
highly recommended to use the File Library device.
A file to which data is backed up in a File Library is called a ‘file depot’. A file depot is created
each time a backup or copy session is made to the File Library. If the amount of data being backed
up is larger than the maximum file depot size, Data Protector creates more file depots as required
for the backup session. Therefore, the backed-up object can span two or more file depots. A
file depot is equivalent to a tape medium in a slot, whereas the directories represent the repository
(slots) part of a library. This means many media operations can be applied, e.g. scan, format,
recycle, export, etc. However, some operations are not available, e.g. import, eject.
The name of each file depot is a unique identifier which is automatically generated by the system. It
looks similar to a Data Protector media ID; however, it is not actually a media ID but just a
unique file name. An example file depot name (including path) is:
C:\data\backup\0100007f54106d9295058c50008.fd
Since each file depot contains backed up or copied data, a corresponding Detail Catalog Binary File
(DCBF) keeps the detail catalog information for it in the Data Protector Internal Database (IDB).
Thus for each file depot a corresponding DCBF file exists.
The global option DCDirAllocation determines the algorithm used to select the DCBF
directory for a new detail catalog file.
It is recommended to change the allocation policy from fill in sequence (default) to balance size.
The global file is located under:
Windows: DP_CONFIG\Options
UNIX: DP_CONFIG/options
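Assuming the usual key=value syntax of the global options file, the entry might look like the following sketch. The numeric values shown here are an assumption, so verify them against the comments in your own global file before changing anything:

```
# DP_CONFIG\Options\global (Windows) or DP_CONFIG/options/global (UNIX)
# DCBF directory selection policy (values assumed; verify in your global file):
#   DCDirAllocation=0  -> fill in sequence (default)
#   DCDirAllocation=1  -> balance size (recommended)
DCDirAllocation=1
```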
There is no maximum capacity for a File Library depot that is set by Data Protector. The only limit
on the size of a depot is the maximum file size that can be saved on the file system of the
operating system on which the device is being run. For example, the maximum size of a File Library
depot running on Linux is the maximum size of a file you can save on that operating system. The
capacity of a file depot is specified when the media is first configured. It is possible to reset the
sizing properties of the File Library at any time during use of the device in the Data Protector GUI.
The File Library drives are called ‘writers’. The number of writers configured defaults to the number
of directories added to the File Library.
There are a number of benefits gained from using the File Library compared to the File Jukebox:
• Configuration: The first benefit comes in the configuration of the File Library. This is easier
and quicker than configuring a jukebox file device. In the configuration of a File Library, you
only need to select the number of writers to be configured, which are then configured
automatically by Data Protector. In the jukebox file device configuration, each drive and
each slot must be configured manually. With a newly created File Library, however, no slots
or file depots need to be created. You only need to configure the directory; each slot or file
depot for the File Library will then be created automatically.
• Efficient disk space management: By default, all file depots are non-appendable. This is
very useful for efficient disk space management. Only one session is stored in one
(or more) file depots, and as soon as the protection of the session expires, the file depot
can be reused. Only where sessions have a small amount of data, such as backups of
logical and archive logs, should the media usage policy of the media pool be changed to
appendable. In addition, other space management options available with the File Library
allow for the configuration of the minimum free disk space to create a new file depot, the
amount of disk space which should remain free on the disk, and the option to trigger a
Data Protector event if the free disk space drops below a certain percentage
• Improved disk full handling: In order to determine that there is enough disk space
available to complete the backup of the current data segment, Data Protector pre-allocates
the amount of disk space needed to complete the write task, in particular to complete the
write of the catalog segment. This avoids failed backup sessions when there is insufficient
space available to complete the current write task
• Support for Data Protector Synthetic backup and Virtual Full backup: Synthetic backup is
an advanced backup solution that eliminates the need to run regular full backups.
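The pre-allocation idea behind the improved disk full handling above can be sketched in a few lines. This is a hypothetical helper, not Data Protector code: a segment write is only started when the whole segment (plus any reserved space) still fits on the disk.

```python
# Hypothetical sketch of segment pre-allocation; not Data Protector code.

def can_write_segment(free_mb, segment_mb, reserved_mb=0):
    """Only start a write if the whole data/catalog segment fits.

    free_mb     -- free disk space currently available (MB)
    segment_mb  -- size of the segment about to be written (MB)
    reserved_mb -- disk space which should stay free on the disk (MB)
    """
    return free_mb - reserved_mb >= segment_mb
```

Checking the full segment size up front, rather than writing until the disk fills, is what prevents a session from failing partway through a write.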
The File Library is disk array independent, so it can be deployed on a multitude of different storage
devices, from a single disk or low-cost JBOD to higher-end storage arrays. The File Library device
can be located on a local hard drive, or even on a network share, as long as Data Protector knows
its path. The directory path is defined at configuration of the File Library device. However, it is
recommended to use a local disk or an FC-connected disk. Disks connected via NFS/CIFS links
provide only a slow connection, which is also less reliable.
There are now a number of Virtual Tape Library (VTL) devices that provide both VTL and NAS
targets. This means that the File Library device can be configured on the VTL NAS partition. Such
VTL devices can therefore accommodate the initial full, incremental, and Synthetic Full backups.
The full backup is written to the VTL target device and the incremental backups are written to the
File Library device configured on the NAS partition. With this approach, the VTL provides one single
device where full, incremental, and Synthetic Full jobs can be run, providing easy management of
‘incremental forever’ backups.
C:\File_Library\44d1914454e8d9cc850d2050013.fd
Therefore, to add the slot to the File Library, use the omnimm CLI command located in
DP_HOME\bin.
Example:
omnimm -add_slots File_Library C:\File_Library\44d1914454e8d9cc850d2050013.fd
The File Library can be easily created by using the Data Protector GUI. Within the Data Protector
GUI, select the Devices & Media context:
1. Right click Devices in the scoping pane and select Add Device....
3. Using the Device Type pull down list, select File Library.
4. Using the Client pull-down list, select the associated client system on which the File
Library is to be configured.
The next configuration step is to define the directories, their properties and define the number of
writers:
6. Specify a directory or a set of directories where the File Library should reside. Manually
add the directory or use a browser (click on the Browse button) to select a directory to
be used as the container for the file depots. Multiple selections are not possible using
the browser.
If the directory is on a network share (Windows), then it must be entered manually
since browsing is not possible. The directories to be added must be on different file
systems and must exist on the disk as Data Protector will not create them. The disk on
which the File Library will reside must be visible in the filesystem.
It is recommended that the disk on which the file library resides should be local to the
Media Agent; otherwise there could be an impact on performance. It is critical that the
directory created for the file library is not deleted from the disk. If it is deleted, any
data within the file library device will be lost. At the time the file library is created it will
not contain any file depots. The file depots will be created as needed when a backup is
made to the device.
7. Specify the Number of writers to be configured with the File Library. A writer is the
equivalent of a drive. For each writer used to write data during a backup, a separate
Backup Media Agent (BMA) is started. Increasing the number of writers may improve
performance but will also consume more system resources, e.g. memory. The default
number of writers is the number of configured directories. The naming convention for
writers (drives) is <FileLibraryName>_Writer<Number>. The name cannot be changed
during the creation of the File Library. However, the name (as well as other writer
properties) can be modified once the creation of the File Library is completed.
Maximum size of a file depot: This is equivalent to the media capacity of tapes (format size). If the
amount of data backed up in one session is bigger than this value, a new file depot will be used to
continue the backup. The default value is 50 GB.
Minimum free disk space to create new file depot: The default value is 2 MB.
Amount of disk space which should stay free on disk: The default value is 0 MB.
Event if the free disk space drops below (%): An event will be triggered if the free disk space drops
below this value. The default is 10%.
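The sizing and space-management thresholds above can be illustrated with a small sketch. These are hypothetical helpers, not Data Protector code; they only model the rules the defaults describe:

```python
# Hypothetical sketch of File Library space management; not Data Protector code.
import math

MAX_DEPOT_SIZE_MB = 50 * 1024    # maximum size of a file depot (default 50 GB)
MIN_FREE_FOR_NEW_DEPOT_MB = 2    # minimum free disk space to create a new depot
RESERVED_FREE_MB = 0             # disk space which should stay free on disk
LOW_SPACE_EVENT_PCT = 10         # trigger an event below this free-space percentage

def depots_needed(session_size_mb):
    """A session larger than one depot spans as many depots as required."""
    return max(1, math.ceil(session_size_mb / MAX_DEPOT_SIZE_MB))

def can_create_depot(free_mb):
    """A new file depot is only created if enough free space remains."""
    return free_mb - RESERVED_FREE_MB >= MIN_FREE_FOR_NEW_DEPOT_MB

def low_space_event(free_mb, total_mb):
    """An event is raised when free space drops below the percentage threshold."""
    return 100.0 * free_mb / total_mb < LOW_SPACE_EVENT_PCT
```

For example, with the 50 GB default, a 120 GB session spans three file depots, matching the spanning behavior described earlier for the File Library.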
9. The Media Type can only be File when configuring a File Library.
10. Select the Distributed file media format (DFMF) option to enable the File Library for
Virtual Full backup. If virtual full backup is not to be used, do not select this option.
11. Check and confirm the configuration. Click Finish.
In the results pane, a summary of the File Library configuration is displayed. After the File Library
configuration, check and confirm the File Library properties using the summary provided. The
configured File Library will be listed in the scoping pane under ‘Devices’. The properties of each
directory and of the writers can be viewed and modified in the results pane.
Backup:
• Store on disk
• Scheduled copy to disk or tape
• Expire the backup on disk after the copy to tape
Restore:
(1) Fast restore from disk if the data is still available there
(2) Restore directly from tape
Disk Staging
The concept of disk staging is based on backing up data in several stages to improve the
performance of backups and restores, reduce costs of storing the backed up data and increase the
data availability and accessibility for restore.
The backup stages consist of backing up data to media of one type and later moving it to secondary
media. The data is backed up to media with high performance and accessibility but limited capacity
such as system disks. These backups are usually kept accessible for restore for a period of time
when a restore is the most probable. Single-file restores are executed with excellent
performance by disk technologies. This is very helpful for selective file restores (particularly
multiple times) where time is an important issue. No tape must be loaded and positioned, which is a
major advantage over tape. After a certain period of time, the data is moved to media with lower
performance and accessibility but high capacity, such as tape.
Therefore, disk staging acts as a buffer allowing media drives to operate at maximum speed and
provides the option to do automatic data replication during off-peak hours. This technique is highly
recommended when backing up numerous small files, to prevent poor transfer rates to the tape
drive. The continuous backup of transaction log files, for example, does not incur overhead through
media loads and unloads, and for tape drives there are no issues with start/stop mode.
Other use cases where disk staging can provide benefits are the backup of slow clients without
multiplexing and tape-less backup of branch offices.
In the example above, we have File Library devices configured for the first-stage backup and tape
devices configured for the second-stage backup. The File Library writer block size must be the same
as that of the tape device used for the second-stage backup. The data is streamed in the first stage
to the File Library and stored with a data protection period that will not expire before the objects
are copied to tape. The data then remains in the File Library for the period of time in which a
restore is most probable. Once this period of time has passed, the data is scheduled to be streamed
from the File Library to the tape device using the object copy functionality. Once the object copy is
completed, the data can be removed from the File Library and the data can be restored directly
from the tape.
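The restore-side decision described above can be modelled with a short sketch. The function and parameter names are hypothetical, not Data Protector code: restore from the fast disk copy while it is still retained in the File Library, otherwise fall back to the tape copy.

```python
# Hypothetical sketch of the disk-staging restore decision; not DP code.
from datetime import date

def restore_source(backup_date, disk_retention_days, today):
    """Prefer the fast disk copy while it is still retained, else use tape."""
    age_days = (today - backup_date).days
    return "disk" if age_days <= disk_retention_days else "tape"
```

For instance, a backup made nine days ago with a 14-day disk retention is restored from the File Library; an older backup whose disk copy has expired is restored directly from tape.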
Device tools
Data Protector provides tools that can be used to determine that a device is connected, visible, and
configured correctly with the operating system. Three tools can be used to determine that a device
is ready to be configured in Data Protector:
• Device Browser (devbra)
• Utility Media Agent (uma)
• SANConf
The devbra tool resides in the following directories:
Windows: DP_HOME\bin
UNIX: DP_HOME/lbin
When executed, devbra will scan and discover all the devices that are connected and visible to the
local Media Agent system. The result of the scan is then displayed.
Devbra reports details on both robotics (listed as Exch) and drives (listed as Tape). The SCSI path of
the discovered devices is included in the output. Once the device paths are known, the robotic paths
can be tested using the Data Protector Utility Media Agent (uma) to determine operational status.
The uma tool resides in the following directories:
Windows: DP_HOME\bin
UNIX: DP_HOME/lbin
The Utility Media Agent (uma) is used for tape library management and can also be used as a tool
for testing library robotic operations (such as moving media between drives and slots within a
library). The output from devbra can be used for identifying the device path to use with the uma
-ioctl parameter as shown in the following example.
Note: The Utility Media Agent (uma) only works with the library robotic and will only
work with a robotic (listed as Exch/Changer in devbra)
You cannot use uma in combination with a tape SCSI path.
Using uma to manipulate the library robotic provides confirmation that the SCSI path is correct and
available, and that the library can be configured with Data Protector. Once testing with uma is
completed, it is recommended to return all the media used to test the robotic back to the library
repository; no media should be left within the library’s tape drives.
Locate the device path for the tape library robotics (media changer) via devbra.
Invoke the utility media agent to interact with the tape library (load/unload tapes, status inquiry,
etc.):
SANConf
The sanconf tool is a command-line utility, available on the Data Protector Cell Manager system,
that can be used to scan, discover, and configure devices in SAN environments, in single Data
Protector cells as well as in Manager of Managers (MoM) multi-cell environments using the
Centralized Media Management Database (CMMDB). It can automatically configure a library within
a SAN environment by gathering information on drives from multiple clients and configuring them
into a single library.
The sanconf command can be run on the Data Protector Cell Manager or on Data Protector clients.
It resides in the following directories:
Windows: DP_HOME\bin
UNIX: DP_HOME/lbin
The example on the slide shows the output of the sanconf -list_devices command executed on
the Cell Manager. The output provides a summary of all local and SAN-attached devices available
within the Data Protector cell. This information can then be used to verify that Media Agent
systems have the correct access to the desired devices prior to device configuration.
HP Library and Tape Tools (L&TT) is a tool for managing HP libraries and tape drives. L&TT is a
collection of storage hardware management and diagnostic tools for tape, tape automation, and
archival products. L&TT assembles these tools into a single, convenient program. It is available for
download free of charge from the following URL:
http://www.hp.com/support/tapetools
In addition, further information regarding L&TT and the L&TT User Guide can be found at the above
URL.
Contents
Module 8 — Media Management 1
8–3. SLIDE: The Media Pool ............................................................................................................... 2
8–4. SLIDE: Media pool properties - General .................................................................................... 5
8–5. SLIDE: Media pool properties - Allocation 1/2 .......................................................................... 7
8–6. SLIDE: Media pool properties - Allocation 2/2 .......................................................................... 9
8–7. SLIDE: Media pool properties - Condition factors ................................................................... 11
8–8. SLIDE: Media pool properties – Media pool usage .................................................................. 13
8–9. SLIDE: Media pool properties – Media pool quality ................................................................ 14
8-10. SLIDE: Creating a Media pool 1/4 ........................................................................................... 15
8-11. SLIDE: Creating a Media pool 2/4 ........................................................................................... 16
8-12. SLIDE: Creating a Media pool 3/4 ........................................................................................... 17
8-13. SLIDE: Creating a Media pool 4/4 ........................................................................................... 18
8-14. SLIDE: Free pool concept ........................................................................................................ 19
8-15. SLIDE: Multiple free pools ...................................................................................................... 21
8-16. SLIDE: Free pool properties .................................................................................................... 22
8-17. SLIDE: Create a Free Pool ....................................................................................................... 24
8-18. SLIDE: Medium Properties ...................................................................................................... 25
8-19. SLIDE: Location Tracking and Priority .................................................................................... 27
8-20. SLIDE: Media Management actions ........................................................................................ 28
8-21. SLIDE: Formatting Tape Media ............................................................................................... 30
8-22. SLIDE: Media Export & Import 1/2.......................................................................................... 33
8-23. SLIDE: Media Export & Import 2/2.......................................................................................... 34
8-24. SLIDE: Vaulting with Media Pools ........................................................................................... 36
Module 8
Media Management
Media pools
Data Protector offers more than just a mechanism for backing up data. It provides the user with a
powerful mechanism to manage the backed-up data through a set of media management functions
and stores all media-management-related information in the Media Management Database
(MMDB), an important part of the Data Protector Internal Database.
Media Pool
Data Protector organizes backup media into Media Pools. The Media Pool is simply a logical
collection or grouping of media within the Media Management Database (MMDB).
Media pools organize media that are of the same physical type (LTO-Ultrium, T9940, Exabyte, etc.)
or media that are related in some way and share similar usage requirements.
Note: A simplified way to think about media pools is to view them as a destination for
backup, while considering the devices to be the transfer mechanism between the
data and the media pools.
It is also possible to have all backups share the same single media pool. This approach has certain
disadvantages. The quantity of media in the pool may be too large, and managing the pool may be
difficult. It would be difficult to verify that you have sufficient media in the pool to complete all the
backups that utilize it (as each backup has a different requirement).
Grouping media used for similar kinds of backup into a Media Pool allows you to apply common
media handling policies on a group level. In this case, you will not have to bother with each medium
individually. All media in a pool are tracked as one set and have the same media allocation and
usage policies.
Library Features
Media residing within the tape libraries managed by Data Protector are tracked in the MMDB. Data
Protector offers several features for library media management.
• Online Catalog
Data Protector maintains a database record of all the data that has been backed up along with the
media used to perform the backup. When it is necessary to restore data, the on-line catalog can be
browsed to locate the file to be restored and to find the candidate backups that could be used. The
media database stores tape location as well as file position on tape.
• Location Tracking
Once a backup has been performed, the media usually is moved physically from one location to
another, for example to offsite storage or a fire-safe. Data Protector can keep track of the physical
location of the media by use of predefined vaulting locations.
In addition to tracking external changes to the media locations, Data Protector stores the current
physical location of media. When a tape is inserted into a logical device and then accessed by Data
Protector, its position in the device repository is stored in the database. This media tracking
provides quick access to known tapes. This device repository feature is available for tape libraries
as well as standalone devices.
• Barcode Labeling
Media located in a virtual or physical tape library use barcodes as labels for faster identification. It
is possible to use the assigned barcode of a medium as the name/label of that medium in Data
Protector to simplify the media management work.
Protection Features
Besides protecting the data that is stored on the media from being overwritten, Data Protector
also offers media protection features. The most important ones are listed below:
• Media Labeling
Data Protector media contain header information that enables the media usage to be tracked and
controlled. Each new medium must be initialized with the Data Protector header, which will contain
a unique medium ID. Whenever a medium is used, it is first verified (by the header) to be the
correct medium for the designated session, whether for backup, restore, or copy. After a session
completes, the header information is verified again.
• Media Duplication
For extra security, it may be necessary to have multiple copies of a particular backup. For example,
if the data were being changed in some way, or removed after the backup has taken place, the only
place that the original data would reside is on the backup media.
In this situation, it is desirable to have multiple copies of the backup available in case there is a
fault with the original copy or it is somehow lost. Data Protector provides several methods for
media and object level duplication. See the “Media Management and Replication” and “Object
Consolidation” modules for more details.
Note: Data Protector Media Management was initially developed for the daily work with physical
media. In environments with mainly disk-based backup devices, some features no longer apply
(number of overwrites, medium export/import). In order to provide easy and consistent media
management, disk- and tape-based media are handled in the same way, but tape-related
settings are safe to ignore for disk-based media.
SLIDE: Media pool properties – supported media types, media allocation, media conditions, usage,
quality; the pool description is limited to a maximum of 64 characters.
4 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Media Pool properties are defined at the time a media pool is created and can be partly modified
at any later time. The name of the pool can contain up to 32 characters; spaces are allowed but not
recommended (they complicate scripting). The description field is optional, limited to 64
characters, and may be used to convey the purpose or usage characteristics of the pool.
The media type is selected when the pool is created and is not modifiable. To change the media
type of a pool you must first delete the pool and then re-create it.
The Allocation policy as well as the Usage policy may be altered for new or existing pools. The life
expectancy and number of overwrites should be set according to the media manufacturer's
recommendations. Data Protector simply provides a default value based upon the media type.
In addition to being a logical container for your media, a Media Pool is configured so that the media
within the pool exhibit particular characteristics. These characteristics depend on the properties
and policies that you have set for the Media Pool.
Media Type
This defines what type of media the pool may contain; a pool may contain only one type of media.
The currently supported media types are:
• AIT • SAIT
• CSF-R • SD-3
• DDS • SuperDLT
• DLT • T10000
• DTF • T3480/T4890/T9490
• ExaByte • T3590
• File • T3592
• LTO-Ultrium • T9840
• Optical • T9940
• QIC • Tape
NOTE During Data Protector installation, default pools for all media types were created
(Default DDS, Default DLT, and so on). It is safe to delete those default pools and
create customized pools for the media types that are available in your backup
environment.
Pool Name and Description can be changed afterwards, but changing the Media Type is not possible.
In order to change the media type you need to delete and recreate the pool.
SLIDE: Media pool properties – Usage
appendable
• multiple backups can be appended to the same medium
• very useful when backing up small amounts of data very often
• media that are most full, but still have spare capacity, will be used
non-appendable
• medium is used for just one backup
• medium that has been used the least number of times is chosen
appendable on incrementals only
• same as appendable, except only incremental backups can be appended to existing backups
Data Protector picks media from a pool in a certain order, which is determined by the defined
media usage policy and the media allocation policy. These policies need to be configured for each
pool and depend on the media type used and the purpose of the pool.
The media usage policy defines how new backups are added to already used media and influences
which media are selected for backup. There are three media usage policies available to choose
from:
• Appendable
• Non-appendable
• Appendable on incrementals only
Appendable
This enables Data Protector to append multiple backups to the same piece of media. This can be
very useful when backing up small amounts of data throughout the day, for example Database
Archive Logs.
When using this policy, Data Protector will request the medium that has the most data on it but is
not yet full, also taking the media allocation policy into account (see Media Allocation Policy).
When a backup is performed, it is directed to a specific media pool via the Logical Device selection,
either using the default pool of the specified Logical Device or using a dedicated media pool that
was assigned to the Logical Device for a particular backup job.
Data Protector will choose the particular medium to be used from this pool based on certain factors.
If the media pool usage policy is appendable, the medium that is the most full but still has spare
capacity is used. Ideally, Data Protector wants to fill up existing media before going on to use
empty media. This policy will generally be less expensive in terms of media cost, but will not
allow for easy tape rotation. Data Protector will continue to request the medium until it is filled.
Note If the media pool usage policy is appendable and the backup requires more
than one medium, only the first medium used can contain backed up data from a
previous session. Subsequently, Data Protector will use empty or unprotected
media only.
Non-Appendable
If the media pool usage policy is non-appendable, Data Protector will always write to a medium
from the beginning. Data Protector will request a medium with no or expired protection that has
been used the least number of times (see Media Allocation Policy).
In this way, Data Protector ensures even wear across all media, rather than the same tape being
used each time. This may make media more reliable due to less wear as a result of fewer loadings.
To make the selection visible to the user, Data Protector assigns an allocation number, also called
an order number, to each medium. The allocation/order number is viewable from the Media
Management GUI: select the Devices & Media context, right-click a media pool, and select
Properties. Data Protector will pick the medium with the lowest order number (1).
SLIDE: Media pool properties – Allocation
strict
• allocation order enforced
• even media usage enforced
• could result in more mount requests
loose
• allocation order not enforced
• even media usage not enforced
• fewer mount requests
Media allocation policy defines the order in which media are allocated within a media pool. There
are two different options in Data Protector:
• Loose allocation
• Strict allocation
The loose policy defines that even though Data Protector requests a particular medium, it still
accepts an alternative that is available for use, so the overall number of mount requests is reduced.
The strict policy determines that the medium Data Protector requests must be used. The allocation
order is strictly enforced, which might result in more mount requests. In addition, the media must
already be formatted; unformatted media are not used.
The most commonly used setting is loose because it is more forgiving. Loose is also required when
you want the ability to use a new, unformatted medium.
When this feature is set, Data Protector initializes and uses blank media in preference to media
that are already initialized (fewer overwrites). Tape libraries loaded with new media must be
scanned (barcode scan) prior to using this feature, to allow Data Protector to identify where the
unformatted media are located.
# InitOnLoosePolicy=0 or 1
# default: 0
# This option is used by the Backup Session Manager. When using
# loose policy media checking, this option determines whether the
# Session Manager should automatically initialize new media.
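Assuming the default global options file location on the Cell Manager (<DP_CONFIG>/options/global), enabling this behavior is a one-line change; the line below is only an illustration of the syntax documented above:
Example:
InitOnLoosePolicy=1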
Use the Free Pool – Move Free Media to the Free Pool
Free pools are special media pools that are automatically created by Data Protector and contain
free, unprotected media that can be used for backups by all the media pools that share the same
free pool. Once the option Use the Free Pool is selected, select a free pool from the list or type in
a new name for a free pool that will then be created.
If the option Move Free Media to the Free Pool is selected, all free media from the pool are moved
to the specified free pool after clicking OK.
See the “Free Pool” topic later in this module for more details.
SLIDE: Media pool properties – Condition factors; based on the defined settings, tapes are graded
by quality.
Media condition factors define how long media are considered reliable for backups. There are
two media condition factors in Data Protector: the Valid for (Months) age limit and the Maximum
number of overwrites.
Check the recommendations from the media vendor on how to set these condition factors.
Maximum # of Overwrites
In addition to the number of months that a medium is to be considered valid, the number of
overwrites can also be configured. Again, when this threshold is reached the medium is marked
Poor. Tapes reaching 80% of this threshold are marked as Fair.
NOTE Both the age and overwrite fair thresholds may be altered via the MMFairLimit
parameter in the DP_CONFIG/options/global file. Eighty percent is the
default threshold for fair quality marking.
# MMFairLimit=PercentageOfPoor
# default: 80
# This limit is used for detecting "Fair" (almost "Poor") media.
# When a medium exceeds a specified percentage of the limit
# specified for "Poor" media (this limit can be set for each
# pool), it is marked as "Fair". Data Protector Media Management
# uses such ("Fair") media only if there is no "Good" media
# available.
Media that are marked as Poor should not be reinitialized and registered as a new medium
unless the poor condition was the result of a tape drive failure and the media were
accidentally set to poor. In this case the tape quality may be verified by scanning and/or
verifying the tape (see the OLH for Object Verification).
In such a scenario the media pool's media condition factors are simply greyed out and align with
those of the free pool, as shown below.
The pie chart displays estimated free space and used space in the media pool.
This pie chart does not show the free disk space available for media pools created for file library
devices. To get this information, you need to use operating system tools on the system where the
file library device was defined. For example, bdf (HP-UX systems), df -k (other UNIX systems), and
Explorer -> Properties (Windows systems).
Data Protector’s File Library media pool by default has a Non-appendable media usage policy. The
media pool's free disk space will always be indicated as 0%, even if there is enough free space
available.
The media quality in a media pool determines the overall quality of the media pool.
1. Poor
Media errors (read/write errors) detected or media condition factor limits specified for the
media pool have been exceeded. Data Protector will not use media in poor condition for
backup.
2. Fair
Media which exceed 80% (default), i.e., 81 to 100%, of the specified lifetime/usage limits.
The default percentage can be changed through the global variable MMFairLimit (see the
previous pages). Media in condition Fair are used in the same way as media in condition
Good.
3. Good
All media that are not in condition Poor or Fair.
CLI:
omnimm -create_pool <name> <type> <policy> <age> <overwrites> <options> ...
Media pools can be created via the GUI or via the CLI using the omnimm command. Data Protector
provides a set of default media pools, one for each media type. To create a media pool using the DP
GUI, follow these steps:
1. In the Context List, click Devices & Media.
2. Expand Media in the Scoping Pane.
3. Right-click Pools.
4. Select Add Media Pool to open the Media pool creation wizard.
The following slides will further explain the Media pool creation wizard.
It is also possible to create a media pool from the CLI. Using the CLI offers several possibilities for
automation. The following example creates a media pool called MSL04_DailyBkps of type
LTO-Ultrium as an appendable, loose pool with media condition factors of 36 months and 250
overwrites, and with free pool usage:
Example:
omnimm -create_pool MSL04_DailyBkps LTO-Ultrium App+Loose 36 250 -free_pool
Pool MSL04_DailyBkps successfully created.
On the first page of the wizard specify the Pool Name, an identifier with a maximum length of 32
characters. Enter a Description for easier identification of the pool, up to 80 characters in length.
Next select a Media Type from the list.
Among other purposes, this information is used to calculate the free space of a medium and can be
used for capacity planning. The default capacity of supported media types is listed in the Data
Protector global file and can be overwritten or updated if required. See the Media Class parameter
MC_x in the global file for details.
Note Free space calculation based on Media Type is used for reporting purposes only.
Data Protector always writes to the end of a tape (EOT flag) and does not stop
writing at the specified capacity value.
On the next page of the wizard, configure the allocation policies, such as Usage and Allocation.
For details see 8–5. SLIDE: Media pool properties – Allocation 1/2 earlier in this module.
If Loose allocation is selected, the otherwise grayed-out option Allocate unformatted media first
becomes available for selection. If this option is checked, Data Protector prefers unformatted
media, such as new media that were added to the library repository, before it picks up any free
Data Protector media for planned backup, copy, or consolidation operations. Data Protector's
Media Management supports the creation of a Free Pool, a special pool that collects all free Data
Protector media of the same media type and serves free media to production pools on demand.
Check the option Use free pool if this pool should participate in that service. As an optional
feature, all media with expired protection can be automatically moved to the specified free pool
with the option Move free media to free pool.
On the last page of the wizard, Media Condition Factors can be configured that define how long a
medium is reliable for backup, copy, or consolidation operations. The parameter Valid for (Months)
defines how long media should be used in production after the media was formatted for Data
Protector.
If the protection of a medium expires, the medium can be re-used for new backup, copy, or
consolidation tasks. The existing data on the medium will be overwritten. How often such media
can be re-used is defined by the Maximum overwrites parameter.
Within the global file, DP maintains default settings for each media type – see the MC_x entry.
Pressing the Set to Default button will overwrite the current settings with the entries from the
global file.
Example:
If only LTO4 technology is used and company policies allow only 120 overwrites and a maximum
media age of 24 months, the default global entries for LTO (Media Type 13) can be updated
accordingly.
SLIDE: Free pool – media from Pool-1 and Pool-2 are de-allocated to the shared Free Pool and
allocated back on demand. Default de-allocation frequency: once per day (configurable via the
global option parameter FreePoolDeallocFreq).
Data Protector supports the use of a shared Free Pool of unprotected media. These “free” tapes
may be newly formatted or have expired backups on them. Each media pool with the “Use the free
pool” option enabled and pool selected will allocate media from the designated free pool as
needed for backup operations. Media will move from the free pool to the pool associated with the
device used for backup.
The media pool allocation and usage policies will be established by properties of the regular media
pools, as free pools do not have such policies available. Tapes that exist in the free pools are not
used for backup until allocated and moved to a regular media pool.
When the protection of the data on a tape expires, Data Protector may automatically de-allocate
the tape and return it to the free pool. This feature is controlled by a second media pool property in
the GUI called "Move free media to free pool." If this option is not enabled, unprotected media may
be manually moved to the free pool.
Automatic De-allocation
The de-allocation process occurs periodically during the day. The frequency of the de-allocation is
controlled by the “FreePoolDeallocFreq” parameter in the global file. The default frequency is
once per day and occurs at 00:00 (midnight). The parameter “FreePoolDeallocFreq” is set to one
by default, but may be set as high as 96 to produce a 15-minute de-allocation frequency. When the
frequency is set greater than one, the first de-allocation occurs at 00:00, and then the day is
divided according to the frequency specified. As an example, a frequency of 3 causes de-allocation
at 00:00, 08:00 and 16:00.
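The relationship between the frequency value and the resulting run times can be sketched with a small shell loop (an illustrative helper only, not part of Data Protector):

```shell
#!/bin/sh
# Print the daily de-allocation times implied by a given
# FreePoolDeallocFreq value: runs are evenly spread over 24 hours,
# starting at 00:00.
freq=3                              # FreePoolDeallocFreq value
interval=$((24 * 60 / freq))        # minutes between de-allocation runs
i=0
while [ "$i" -lt "$freq" ]; do
  m=$((i * interval))
  printf '%02d:%02d\n' $((m / 60)) $((m % 60))
  i=$((i + 1))
done
```

With freq=3 this prints 00:00, 08:00 and 16:00, matching the example above.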
Manual De-allocation
The de-allocation of expired (non-protected) media may be accomplished at any time by using the
omnidbutil command.
NOTE The omnidbutil command is available only on the Cell Manager, as it is not part of
the command line interface of the Cell Console. On a UNIX Cell Manager the command
is in the DP_HOME/sbin directory; on a Windows Cell Manager the command is in
the DP_HOME\bin directory.
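A typical manual de-allocation call is shown below; the -free_pool_update option name is quoted from memory, so verify it against the omnidbutil reference before use:
Example:
omnidbutil -free_pool_update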
Free pools are designed to allow media sharing between pools of the same media type.
It is recommended to put media of the same physical type into a sharing arrangement, such as
LTO-5 media shared between two pools that are associated with LTO-5 tape drives.
In addition, it is recommended that each physical library gets its own free pool, to ensure
that all requests for free media from local production pools are served by local media that are
physically located in the same library.
Note:
A protected medium can never be part of, or be moved to, a free pool, not even by a manual
operation!
16 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
A free pool:
• cannot be deleted if the free pool is linked with a regular media pool. If you try to remove
a free pool that is still in use by other pools, Data Protector reports an error.
To remove the free pool, determine which regular media pools of the particular medium type
are still using this free pool. You will be able to delete the free pool after de-linking it from
each of the (compatible) regular media pools which it was servicing.
• cannot be deleted if the free pool is not empty. On trying to delete a free pool that still
contains at least one medium, DP reports an error.
After moving all media out of the free pool, it can be deleted.
• is different from a regular pool in that it cannot be used for allocation because it cannot hold
protected media. Consequently, the allocation and usage policy options (Strict/Loose,
Appendable/Non-appendable) are superfluous and therefore not available.
Tapes that exist in the free pool are not used for backup until allocated and moved to a
regular media pool.
Free pool imposes its media condition factors on regular media pools
All media pools configured to use a particular free pool surrender their individual media
condition factors in favor of a uniform set of media condition factors imposed by the free pool.
This ensures consistency across all regular media pools from a media condition point of view. In
other words, all regular media pools that share a set of tapes will use the same condition factors
for age and overwrites; this is best practice for media pools of the same type regardless of the use
of a free pool.
If a free pool contains media with different data format types, Data Protector automatically
reformats allocated media if necessary. For example, NDMP media, which use a different data
format, may be reformatted as Data Protector media. This automatic behavior is controlled by the
global option ReformatFreePoolMedia; its default value is 1 (enabled), so setting it to 0
disables the feature.
To move a protected medium to a free pool, first remove medium protection, i.e., recycle the
medium.
CLI:
omnimm -create_free_pool <name> <type> <policy> <age> <overwrites> <options> ...
* Not shown here, because it is identical to the previously shown setup of a regular media pool
There are two ways to create the free pool within the Data Protector GUI:
• Automatically during the initial request to use the pool (along with the regular pool).
When the “use free pool” option is specified, simply enter a name for a new free pool in
the adjacent field.
• Manually, in advance of requesting to use the free pool
The Add Free Pool option is used to manually create a Free Pool as required. The manual
creation of a Free Pool is very similar to the creation of a regular media pool. For the manual
creation, the following steps are required:
1. Create the Free Pool with the "Add Free Pool" option, which will start the Free Pool
configuration wizard.
2. Set the Free Pool general options, such as Pool Name, Description and Media Type.
3. Set the Free Pool conditions, such as maximum age and number of overwrites.
In order to create a Free Pool from the command line, omnimm can be used with the
-create_free_pool option shown above.
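Following the synopsis shown above, a free pool creation from the CLI might look like the line below; the pool name and the condition factor values (36 months, 250 overwrites) are illustrative, and the exact argument list should be verified against the omnimm reference:
Example:
omnimm -create_free_pool MSL04_FreeLTO LTO-Ultrium 36 250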
SLIDE: Medium properties – General, Info, Usage and Objects tabs. A medium is internally
identified by its Medium ID; externally, the Description/Label can also be used. The Objects tab
lists the backed up objects stored on the medium.
Medium Properties
Medium properties can be accessed in a few ways. The easiest way is to right-click the medium and
click Properties, as shown in the left-hand section.
NOTE Media can only be exported if the sessions they contain are no longer protected or a
recycle has been performed to remove the protection.
Data Protector stores information about each medium based on the Medium ID.
In addition to the GUI, there are several queries that may be issued from the command line to
obtain the media details shown above. The primary command for media information is the omnimm
command.
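For example, pool and medium details can be queried as shown below; the option names are quoted from memory and the pool/medium names are illustrative, so verify them against the omnimm reference:
Example:
omnimm -list_pools
omnimm -media_info DailyBkps_001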
Media locations are used by Data Protector to assist in tracking media, whether or not they
currently reside within a tape library. Media locations should be created by the Administrator (as
shown above) and should reflect the available media locations in the company. Created locations
are stored in the vault_locations configuration file, an ASCII file located directly in the
<DP_CONFIG> directory.
Defined location entries are shown in the Locations folder within the Devices & Media context,
above Media Pools. The <EMPTY> location is used for media that are not assigned to a specific
location. Each medium should have the location property adjusted whenever the medium is moved.
Use the "Change Location" action after selecting the desired medium.
Restore Priority
Each location can be assigned a location priority. If a media location priority is set, Data Protector
will use the media set with the highest priority (priority 1 is the highest, priority None is the lowest)
if more than one media set equally matches the conditions of the media set selection algorithm
(e.g. the original media set and copy media sets of the same data exist). The location-priority-based
automated media set selection (AMSS) can be manually overridden at restore time.
SLIDE: Media pool actions – media are added to pools by the following actions: format, import,
copy medium, move to pool.
Data Protector provides the following Media Management actions for media pools:
Format Initialize a new medium, prepare it for Data Protector use by writing a
header to the tape and registering it in the MMDB.
Import Read the header and detail catalog information from a tape. The tape may
be from a different cell or may have been exported from the current cell.
Delete Removes an empty media pool. Delete media in pool first. This is useful for
removing the Default pools that are not needed.
Data Protector provides the following Media Management actions for media within media pools:
Export Delete an unprotected medium from the media management database. Tape
may need recycling to remove protection. The contents of the tape are
unaffected.
Change Alter the vaulting location string associated with a tape. The tape does not
Location need to be in a device for this operation.
Recycle Remove all of the protection from the data that is backed up on the selected
tape. The tape does not need to be in a device for this operation.
Move to pool Change the pool that a particular tape(s) is assigned to. The tape does not
need to be in a device for this operation.
Copy Replicate a tape. Two devices of the same type and a blank tape are required.
This uses the omnimcopy functionality for duplicating a single tape.
Verify Read the tape header and verify that it is written in Data Protector format.
The data may also be verified if the tape contains CRC blocks. Note that this
operation checks the whole media, while Object verification under Object
Operation performs validations checks on an Object level.
Import Catalog Recover the detail catalogs from a tape that is still in the database but
without its detail catalog. The detail catalog is automatically purged from the
database when it expires. Protection levels are assigned by the backup
specification.
Media Format
• each Data Protector medium (physical or virtual tape based) must be formatted before backup
• auto-format is possible if the loose policy is used and the InitOnLoosePolicy global option is set to 1
• File Library / Backup-to-Disk media use auto-format as a default
• initialize each medium only once to write the tape header; media overwrites do not touch the header
SLIDE: Media format – initialization parameters: Label/Description, Location, Logical Device,
Capacity, <force>; the tape header with the Medium ID is written to the medium and registered in
the MMDB.
CLI: omniminit -init <Logical Device> <Options>
Before any medium can be used by Data Protector, it must undergo an initialization process called
formatting. The media format process is performed within the media pool to which the formatted
medium is to be added. Media can be formatted from within the GUI in the Devices & Media context
or from the command line using the omniminit command.
NOTE Data Protector File Library and Backup-to-Disk media are automatically created
and formatted. It is not possible to run manual format operations on file library
media.
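A format operation from the CLI might look like the line below; the Logical Device name and medium label are illustrative, and -force overwrites recognized media, so use it with care:
Example:
omniminit -init MSL04_Drive1 DailyBkps_001 -force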
The Data Protector Media Management engine requires a unique Medium ID for each tape or file
used as a backup medium. A unique ID is generated when the media is initialized. This ID is written
to the media header and to the Media Management Database (MMDB). Data Protector uses this
header to distinguish one medium from another. Each time a medium is accessed, the header
information is read to ensure that the correct medium is being used. It is also possible to manually
read media header information by using the Scan operation after selecting a Logical Device or
library slot, or from the command line.
Location (optional)
The physical location of the medium may be manually entered or selected from a list of
preconfigured vaulting locations. The location can consist of up to 32 characters. It is suggested
that the administrator preconfigure the possible locations before formatting media to create
consistency.
Logical Device
The logical device used to perform the media initialization. Within the GUI, only logical devices that
match the media type of the media pool are selectable during media formatting. Plan in advance
the block size that is to be used for the media and configure the logical device appropriately. After
a medium is initialized, its block size may not be altered, and the medium may not be appended to
by a logical device with a different block size.
Capacity
• Specify allows the user to input a specific capacity in megabytes that the medium is
expected to hold. The capacity is used only for statistical purposes and does not set a
hard limit on the amount of data that any medium can hold. Typically the compressed
capacity of the medium is used, set to two times the native tape capacity.
Initialized Size
The size that a tape medium is initialized to will not ultimately affect the amount of data that can
be written to it. Data Protector writes to the tape until the device reports early end of tape (EOT)
warning. If a tape is formatted to a smaller size than the physical size of the tape, Data Protector
will write to the end of tape, and then update the Media Management (MM) database.
The recorded tape size will be reset to the value of the physical tape size. Each time Data Protector
writes to a newly formatted media, the media capacity value may be updated to reflect the largest
amount of data that has ever been written to the tape. Commonly the formatted size differs from
actual medium capacity.
The same thing applies to media that are initialized to a very large size. Once the tape has been
filled with data, the size will be reset in the MMDB.
Note When using media type File, the specified size will limit the size of the file medium;
the Data Protector default size is 100 MB. This may be permanently altered by
modifying the global option FileMediumCapacity, or simply by typing a higher value
in the GUI.
Force Option
During initialization, Data Protector checks the medium to see if it already contains data in a
recognizable format. If the format is recognized then, by default, Data Protector does not initialize
the medium, because it may contain valuable data. This behavior can be changed by checking the
Force option.
In addition to Data Protector media, media written by tar, cpio and fbackup are also recognized.
SLIDE: Export and Import between DP Cell 1 and DP Cell 2.
Export (right-click the medium and select Export)
• removes information about a medium and its contents from the local IDB
• the data on the medium remains unchanged
• only unprotected media can be exported (use the Recycle function to remove protection)
Import (right-click on a pool and select Import)
• adds Data Protector foreign media to the cell
• information about backed up data on the media is read from catalog segments on tape and
written into the IDB, and can then be browsed for restore
22 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
The media export/import process allows Data Protector media to be exchanged between Data
Protector cells.
In order to export a medium, right-click the medium you wish to export and select Export from the
menu. Media export is an IDB-only operation: all information about the medium is identified and
removed from the IDB; the medium itself is not required, and all data on it remains unchanged. By
definition only unprotected media can be exported, so use the Recycle function in Data Protector to
remove any existing protection.
After export, physically move the media to the new cell that manages a second data center or
department, put them in its tape library, and run a barcode scan in Data Protector to make them
known. Now right-click a pool, or mark the discovered “unknown” media directly under Slots, and
select Import from the menu. You need to select a drive that will be used to read the media. The
import process reads the data on the media (in detail, it searches for and reads only the catalog
segments on the media) and writes the information into the IDB. After import, the new objects
show up in the Restore GUI and can be selected for restore.
Note: At import time, immediately set a new protection for the imported data; otherwise the
imported data will be removed from the IDB during daily maintenance.
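The export and import steps also have CLI equivalents. The sketch below is a dry run built on the omnimm command; the option spellings (-recycle, -export, -import) should be verified against the HP Data Protector CLI Guide for your version, and the medium label and drive name are placeholders.

```shell
#!/bin/sh
# Dry-run sketch of moving a medium between two cells with omnimm.
# RUN=echo prints each command instead of executing it; clear it on a live cell.
RUN=${RUN:-echo}

MEDIUM="SAP_ARCH_SCS_023"   # medium label (placeholder)
DRIVE="LTO5_drive3"         # logical drive in the destination cell (placeholder)

# Cell 1: only unprotected media can be exported, so recycle first.
$RUN omnimm -recycle "$MEDIUM"
$RUN omnimm -export "$MEDIUM"    # IDB-only operation; data on tape stays untouched

# Cell 2: after the tape has arrived and a barcode scan has been run:
$RUN omnimm -import "$DRIVE"     # reads catalog segments from tape into the IDB
```

Running the sketch as-is only prints the command sequence, so it can be reviewed before being executed against a live cell.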
[Slide figure: tape plus MCFs transferred from DP Cell 1 to DP Cell 2]
Data Protector supports a special file-based media export/import method. Unlike the method
described above, it does not require physically reading the catalog segments on tape; it reads them
directly from a Media Catalog File (MCF). This feature allows much faster bulk transfer of media
information to another cell; the MCF import can even be started while the tapes are still on the way
to the new data center.
UNIX : DP_CONFIG/export/mcf
WINDOWS : DP_CONFIG\export\mcf
is listed as the default in the window, but can be customized, e.g. to put the file on a network share
right away. Each medium has its own MCF, identifiable by the medium ID in the file name.
Note MCF creation does not export a medium from the cell. If a medium is no longer
required, it must additionally be exported to force an IDB cleanup.
UNIX : DP_CONFIG/import/mcf
WINDOWS : DP_CONFIG\import\mcf
Within Data Protector, click Pools (do not click a specific pool) and select Import Catalog
from MCF File. The Results pane shows the default import location and allows the selection of the
copied MCF to import.
After the MCF import the media content is known to DP and can be browsed and selected for restore.
NOTE: After MCF import the media location is still unknown to Data Protector. To prevent
mount requests, put the media in your tape library and run a barcode scan.
LIMITATION: Data Protector File Library media support neither the physical export/import
nor the MCF-based export/import. Use a Data Protector File Jukebox instead.
In the GUI, select a medium to modify its description and location, or to move it to a new pool.
CLI:
omnimm -modify_medium <medium> <NewLabel> <NewLocation>
omnimm -move_medium <medium> <NewPool>
The process of “vaulting" media is essentially a form of protection. Tapes are typically packed up
and sent to an off-site safe storage facility. Tape rotation typically involves moving the tapes off
site, keeping them there for a few weeks or months, and bringing them back on site after the
medium protection expires.
Data Protector supports the following features to facilitate tape rotations and vaulting:
Multiple media pools may serve as media repositories when media are to be taken out of a device
repository or taken offsite.
• Active_pool: the set of media available within a device repository (library)
• On-site_vault: tapes here are out of the device, but not yet off site
• Off-site_vault: tapes are physically at a remote location
• Free Pool: holding area for expired media, prior to moving to the active pool
Data Protector provides both the GUI and the command line to allow you to move media from one
pool to another of the same type. The command line can be used in conjunction with an automation
script to make media management simpler. Consider using the command line to automate
vaulting operations and media management.
The following example demonstrates the command-line method of moving an existing medium to a
different location (you may change the label at the same time):
Example:
omnimm -move_medium "SAP_ARCH_SCS_023" Offsite_vault
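A minimal automation sketch along these lines is shown below, using the omnimm options described on this page. The media list and the location string are hypothetical; in practice they would come from a report or an operator's list.

```shell
#!/bin/sh
# Dry-run sketch: update the location of expired media and move them to the
# vault pool. RUN=echo prints the commands instead of executing them.
RUN=${RUN:-echo}

VAULT_POOL="Offsite_vault"
VAULT_LOCATION="Remote vault, shelf 12"   # hypothetical location string

for medium in SAP_ARCH_SCS_023 SAP_ARCH_SCS_024; do
  # keep the label, update the location, then move the medium to the vault pool
  $RUN omnimm -modify_medium "$medium" "$medium" "$VAULT_LOCATION"
  $RUN omnimm -move_medium "$medium" "$VAULT_POOL"
done
```

Such a script could be scheduled to run after the daily backups complete, so vaulting follows the same rotation rhythm as the backups themselves.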
Contents
Module 9 - Backup 1
9–3. SLIDE: Backup, high level view .................................................................................................. 2
9–4. SLIDE: Backup specification execution ..................................................................................... 4
9–5. SLIDE: Backup Specification Content ........................................................................................ 5
9–6. SLIDE: Creating backup specification ........................................................................................ 6
9–7. SLIDE: Backup context / Group view ......................................................................................... 8
9–8. SLIDE: Creating backup specification ........................................................................................ 9
9–9. SLIDE: Creating backup specification: Wizards ....................................................................... 11
9-10. SLIDE: Creating backup specification: Sources ...................................................................... 12
9-11. SLIDE: Creating backup specification: Destination ................................................................ 14
9-12. SLIDE: Dynamic device allocation 1/2 .................................................................................... 16
9-13. SLIDE: Dynamic device allocation 2/2 .................................................................................... 17
9-14. SLIDE: Static device allocation ............................................................................................... 20
9-15. SLIDE: Object mirroring 1/2.................................................................................................... 21
9-16. SLIDE: Object mirroring 2/2.................................................................................................... 23
9-17. SLIDE: Creating backup specification: Options ...................................................................... 25
9-18. SLIDE: Creating backup specification: Filesystem options 1/2 ............................................. 26
9-19. SLIDE: Creating backup specification: Filesystem options 2/2 ............................................. 29
9-20. SLIDE: Scheduler Overview..................................................................................................... 31
9-21. SLIDE: Scheduler – Feature Comparison ................................................................................ 33
9-22. SLIDE: Using the Legacy Scheduler 1/2.................................................................................. 34
9-23. SLIDE: Using the Legacy Scheduler 2/2.................................................................................. 36
9-24. SLIDE: Using the Advanced Scheduler 1/2 ............................................................................. 37
9-25. SLIDE: Using the Advanced Scheduler 2/2 ............................................................................. 39
9-26. SLIDE: Using an incremental backup chain ............................................................................ 43
9-27. SLIDE: Protection of a backup chain ...................................................................................... 45
9-28. SLIDE: Creating Backup Spec: Backup Object Summary ........................................................ 46
9-29. SLIDE: Backup Object Summary – Object Properties 1/2 ...................................................... 47
9-30. SLIDE: Backup Object Summary – Object Properties 2/2 ...................................................... 49
9-31. SLIDE: Preview backup session .............................................................................................. 50
9-32. SLIDE: Pre- and post- execution ............................................................................................ 51
9-33. SLIDE: Performing backups .................................................................................................... 52
9-34. SLIDE: Backup session message output ................................................................................ 53
9-35. SLIDE: Resume/Restart failed Backup sessions .................................................................... 54
9-36. SLIDE: Missed job executions ................................................................................................. 56
9-37. SLIDE: Reconnect broken connections ................................................................................... 57
Module 9
Backup
A backup is a process that creates a copy of data on backup media. This copy is stored and kept for
future use in case the original is destroyed or corrupted.
The following is a brief outline of the steps required to configure a backup in Data Protector.
2. Select the target backup devices, which are connected to Media Agent clients.
Data Protector supports backup to tape, from a single DDS drive up to tape libraries with 40 or more
drives and thousands of cartridges. Backed up data can also be stored on magneto-optical media or
on disk-based file libraries.
7. Create reports
A single report or a set of reports can be started manually or scheduled to run daily or weekly. This
optional feature provides important statistics.
• You need to have a Disk Agent installed on every system that is to be backed up, unless
you use NFS (on UNIX) or Network Share Backup (on Windows) for backing up these
systems.
• You need to have at least one backup device configured in the Data Protector cell.
• You need to have media prepared for your backup.
• You need to have appropriate user rights for performing a backup
• You need to have the required licenses to perform a backup
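The prerequisites above can be checked from the Cell Manager command line. The sketch below uses standard Data Protector CLI tools (omnicellinfo, omnidownload, omnicc); treat the exact option spellings as assumptions to verify against the CLI guide for your version.

```shell
#!/bin/sh
# Dry-run sketch of pre-backup checks on the Cell Manager.
# RUN=echo prints the commands; clear it to actually run them.
RUN=${RUN:-echo}

$RUN omnicellinfo -cell           # clients in the cell (is the Disk Agent installed?)
$RUN omnidownload -list_devices   # configured backup devices
$RUN omnicc -check_licenses       # installed licenses
```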
[Slide figure: Disk Agents on Host1 (/data1, /data2), Host2 (E:\), Host3 (C:\) backed up via Drive1 through Drive4]
On the Data Protector Cell Manager system, a dedicated Backup Session Manager (BSM) reads the
content of the backup specification and starts all necessary agents: the Disk Agent (DA) for
reading the data and the Media Agent (MA) for writing the data to the backup media.
The backup specification contains the list of objects to be backed up and defines the devices to be
used. Optionally, the backup specification may contain various options and parameters that
override default settings for the devices and the backed up data, such as the definition of a specific
media pool that will keep the media used by that session, filter settings for the backed up data,
pre- and post-exec commands to be executed, or security settings such as a session ownership
definition or encryption settings.
A backup specification can be as simple as backing up a single mount point or drive letter to a
locally attached standalone tape device, or as complex as backing up 40 large servers to a SAN-
attached tape library with 30 drives.
A running backup is called a backup session and can be started interactively from the DP GUI or
CLI, via the DP built-in scheduler, or by executing an external script. During the backup session, the
configured Data Protector Disk Agents read the data within the specified backup objects and
transfer it to the configured backup devices, which write the data to the media residing in these
devices.
Each backup session is assigned a unique session ID that allows you to track the session status
while the session is running, or to query the IDB for session details after the session is completed.
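For example, a saved backup specification can be started and then tracked by its session ID from the CLI. The sketch below assumes the standard omnib, omnistat, and omnidb commands; the datalist name and the session ID shown are placeholders.

```shell
#!/bin/sh
# Dry-run sketch: start a saved filesystem backup specification and inspect
# the session. RUN=echo prints the commands instead of executing them.
RUN=${RUN:-echo}

$RUN omnib -datalist system01_all    # start the session interactively
$RUN omnistat                        # list currently running sessions
$RUN omnidb -session 2014/12/01-1    # query the IDB for that session's objects
```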
Backup specification
Barlists are used to back up applications and databases online. These are integration-specific agents:
• Oracle
• SAP
• MS Exchange
• DP Internal Database*
• VMware, … and many others …
* … before Data Protector 8.00 backed up as a filesystem, in later versions as an online database backup
Datalist: The Datalist is used for classical file system backups. Filesystem backups include
entire system backups or just selected mount points (UNIX) or drive letters (Windows). Special
filesystem backups are Configuration backups (Windows only) or disk and partition backups as
raw disk backups. Up to DP version 7.03 it was possible to back up the Internal Database as a
filesystem; starting with version 8.00 the IDB is backed up as an online integration backup.
Datalists are stored as ASCII files on the Cell Manager under:
DP_CONFIG\datalists
Barlist: A Barlist is used to back up a database or application as a true online backup. It contains
the backup object definitions in the language the database or application understands, such as an
RMAN script for an Oracle database backup or the brbackup call for an SAP backup. These
integration backups are not part of this Essentials training. Dedicated integration trainings are
offered; see Module 1 for more information about these courses.
Barlists are stored as ASCII files on the Cell Manager under:
DP_CONFIG\barlists\<integration name>
Using the GUI, two primary methods can be used to create a backup specification:
• by starting with a template (even a blank template)
• by starting with one of the backup wizards
The list shown above illustrates the typical sequence (as guided by the GUI) used when defining a
backup specification. The process for each of the methods is very similar, and both allow the
backup to be started interactively or saved to a backup specification file.
The following slides show step by step, the creation of a backup specification.
A backup specification defines the client systems, drives, directories, and files to be backed up, the
devices or drives to be used, the number of additional backup copies (mirrors), the backup options
for all objects in the specification, and the days and times when you want backups to be performed.
You can easily create multiple backup specifications by copying an existing specification and then
modifying one of the copies. Data Protector provides default options that are suitable for most
cases. To customize the behavior, use Data Protector backup options.
Keep the following key points in mind when you run a backup session:
• The backup type (full or incremental) is the same for the whole backup session. All data
in a group is backed up using the same backup type.
• A backup object can be added to multiple backup specifications. For example, you may
have one backup specification for full backups, one for incremental backups, one for a
departmental backup, and one for the archive backup. You can give a description for
each object. It is important to choose the description carefully, because this lets
you differentiate among various backups from the same filesystem.
• Objects or clients can be grouped into one backup specification if the media and the
backups are managed in the same way, or if media are put into a vault.
• If many backup specifications exist or are planned, you should structure them in groups
of backup specifications. If the groups are structured along common option settings
(how to back up), then you can apply the backup templates efficiently.
omnicreatedl
In most cases a backup specification is configured using the Data Protector user interface, either
from scratch or by copying an already created backup specification and modifying the copy. To
create backup specifications within scripts, the command omnicreatedl can be used from the
command line.
Example 1
Create a datalist containing the entire file systems of a single host. The backup specification name
will be system01_all, the backed up client is system01, and the logical device name is LTO5_drive3.
Example 2
Create a datalist containing the entire file systems of two hosts. The backup specification name
will be CAD_all, the backup clients are system23 and system24, and the logical device name is drv4.
For further details about omnicreatedl and the available options, see the HP Data Protector CLI Guide.
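The two examples might read as follows on the command line. The option spellings (-datalist, -host, -device) are assumptions based on the CLI guide and should be verified for your version; the dry-run wrapper only prints the commands.

```shell
#!/bin/sh
# Dry-run sketches of the two omnicreatedl examples above. The option
# spellings (-datalist, -host, -device) are assumed; verify them in the CLI guide.
RUN=${RUN:-echo}

# Example 1: all file systems of system01 -> datalist system01_all on LTO5_drive3
$RUN omnicreatedl -datalist system01_all -host system01 -device LTO5_drive3

# Example 2: all file systems of system23 and system24 -> datalist CAD_all on drv4
$RUN omnicreatedl -datalist CAD_all -host system23 system24 -device drv4
```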
[Slide screenshot: the Backup context selected via the “View” pull-down menu, showing the backup specifications of the group “cleanup” and the list of backup specification groups]
Backup Context
A right-click opens the context menu, where, among other items, the user can select backup items.
Backup Group
A large environment can have hundreds of backup specifications. Data Protector allows assigning
every backup specification to a group. Users can create as many groups as needed, at any time,
and can move backup specifications from one group to another.
You can apply common options settings (for example, for devices) from a template to a group of
backup specifications. Select all the backup specifications within the group (click on the name of
the group and then CTRL+A), right-click a target group, and then click Apply Template.
Templates: Often used backup specification and schedule characteristics can be saved in a
template. The template can then be used to generate new backup specifications.
Default: Local or LAN backup; statically or dynamically assigned devices.
Data Protector backup templates are a powerful tool that can help you simplify your backup
configuration. A template has a set of clearly specified options for a backup specification, which
you can use as a base for creating and modifying backup specifications. Data Protector enables you
to apply a group of options offered by the template.
Backup templates are created and modified similarly to backup specifications, except that objects
and the backup application configuration are not selected within the backup template.
In blank backup templates, such as Blank Filesystem Backup, Blank Informix Backup, and so on,
there are no objects or devices selected.
To apply a template to a backup specification, right-click the backup specification and click
Apply Template. The Apply Template window appears, in which you select the desired options.
Once you have applied the template options, you can still modify your backup specification and
change any setting.
If the groups are structured along common option settings (how to back up), then you can apply the
backup templates efficiently.
For detailed steps, refer to the online Help index keyword “templates”.
[Slide: backup wizards: Load balanced, Non-load balanced, and DR preparation backup for the CM]
When interactively defining the objects to be added to a backup specification, you may use a “task
wizard” to create the specification. Within the Backup context of the GUI, select “Tasks” at the
bottom of the Scoping Pane; there you will find the wizards. Select either the load balanced or the
non-load balanced wizard.
You will not be able to change the load balancing selection later on.
Editing the Datalist file with a text editor is not supported.
Data Protector uses the term backup object for a backup unit that contains all items selected for
backup from one disk volume (logical disk or mount point). The selected items can be any number
of files, directories, or the entire disk or mount point.
Additionally, a backup object can be a database entity or a disk image (rawdisk).
• Client name: a hostname of the Data Protector client where the backup object resides.
• Mount point: an access point in a directory structure (drive on Windows and mount point
on UNIX) on the client where the backup object is located.
• Description: uniquely defines backup objects with identical client name and mount point.
• Type: backup object type, for example filesystem.
Understanding the way in which a backup object is defined is important for understanding how
incremental backups are done. For example, if the description of a backup object changes, it is
considered a new backup object, and a full backup will automatically be performed instead of an
incremental one.
As you select objects to be backed up, you may select the check box in front of a host to include the
“host” object, or you may expand the host object and select file systems individually. The coloring
of the check marks in front of the objects indicates whether the items were selected directly
(blue) or indirectly (black) because of another selection.
The lighter colors (cyan and gray) indicate partial primary and secondary selections, respectively.
How to restore?
Data Protector provides essentially two methods for data restore: object-based and session-based.
With session-based restore, Data Protector can restore all objects from a single backup session at
once. The backup session is stored in the Data Protector database and may be selected for restore.
This makes restoring a complete system very simple, but may change the way you define your
backup specifications. With object restore, you may select to restore an entire object version, or
any subset of it, down to the file level. The next module of this training explains how restore works.
[Slide: destination page with the selected devices; up to a maximum of 6 drives are used in parallel]
CRC Check
Set this option to have Data Protector calculate the CRC (Cyclic Redundancy Check) when a backup
runs. The CRC is an enhanced checksum function that lets you later confirm, using the Verify
option, whether the data has been written correctly to the medium. This option can be specified for
backup and object copy operations. The default value is OFF.
Concurrency
Concurrency allows more than one Disk Agent to write to one backup device. Data Protector can
then keep the devices streaming if data can be accepted faster than a Disk Agent can send it. The
maximum concurrency value is 32. Data Protector provides default values for all supported
devices. This option can be specified for backup and object copy operations.
Media Pool
This option selects the media pool with the media you will use for a backup. If not defined, a default
pool, which is a part of device specification, is used.
This option can be specified for backup and object copy operations.
Prealloc List
The Prealloc List is a subset of media in the media pool used for a backup. It specifies the order in
which the media will be used. When using the Prealloc List and the Strict media allocation policy
with the backup device, Data Protector expects the sequence of the media in the device to
correspond with that specified in the Prealloc List. If the media are not available in this sequence,
Data Protector issues a mount request. If no media are specified in this list, then the Data Protector
allocation procedure is used to allocate media.
This option can be specified for backup and object copy operations.
Rescan
If this option is ON, Data Protector updates the repository information before starting your backup.
This is useful when you manually change the media order in the slots or enter and eject media.
[Slide: 17 backup objects from the backup specification; assuming drive #2 is the next available drive, object “5” will be the next stored on drive #2]
By default, Data Protector automatically balances the usage of backup devices specified for
backup. This is also called load balancing, and it ensures equal usage of the devices. When you run
backup with the Load Balancing option, Data Protector uses devices in the order they are specified
in the load balanced backup specification.
The backup specification above contains 17 objects and four logical devices. The options for the
backup specification include: “Load Balanced, Min 1, Max 3.” Also, note that the backup objects are
in a specific order, that is to say they have an order within the object list.
When a backup specification is configured as load balanced, the device field for each object that
normally shows the name of the logical device the object is targeted at, now shows “Load
Balanced.”
At run time, media agents are started for the minimum number of logical devices specified in the
backup specification and these devices are locked by the session manager. At most, Data Protector
will start the number of media agents defined in the MAX parameter, in this case “3.”
The media agents that are started depend on the order defined in the backup specification.
How the automatically created order can be changed, is described under Backup Object Summary,
later in this module.
Once the media agents have started, the disk agents are started. The number of disk agents started
equals the combined concurrency values of the running devices; in this case, the total is eight. The
concurrency of each logical device will be satisfied before another available media agent is
started.
NOTE: As it is not known in advance which objects will be written to which device, it makes sense to
use a common media pool for all devices that are part of a load balanced backup.
[Slide figure: objects 1 to 3 assigned to Drive #1, objects 4 to 6 to Drive #2; Drive #3 and Drive #4 in reserve]
Once the backup of a particular object is done, the next pending object is started and assigned to
the device that has fewer than three concurrent objects being backed up.
Load balancing ensures that the two devices run in parallel as long as there are pending objects to
be backed up. If a device fails during the backup, one of the two devices in reserve is used. The
objects that were being backed up to the failed device are aborted, while the next three pending
objects are assigned to the new device. This means that each device failure can cause a maximum
of three objects to be aborted, provided that other devices are available for the backup session to
continue.
[Slide figure: 11 objects backed up in four parallel streams; objects 1 to 3 queued to Drive #1, objects 4 to 6 to Drive #2, and object 11 to Drive #4]
Note: If you disable the Load Balancing option, you have to select the backup device used
to back up each object in the backup specification. If a device becomes
unavailable, the objects that should be backed up to it will not be backed up.
[Slide figure: object mirroring; objects 1 and 2 are written to media in Drive #1 and mirrored to Drive #3]
An object mirror is an additional copy of a backup object created during a backup session. When
creating a backup specification, you can choose to create one or several mirrors of specific objects.
The use of object mirroring improves the fault tolerance of backups and enables multi-site
vaulting. However, object mirroring during a backup session increases the time needed for backup.
Let us take Object 3 in the figure as an example. The Disk Agent reads a block of data from the disk
and sends it to the Media Agent that is responsible for the backup of the object. This Media Agent
then writes the data to the medium in Drive 2 and forwards it to the Media Agent that is responsible
for mirror 1. This Media Agent writes the data to the medium in Drive 4 and forwards it to the Media
Agent that is responsible for mirror 2. This Media Agent writes the data to the medium in Drive 5. At
the end of the session, Object 3 is available on three media.
Limitations
• It is not possible to mirror objects backed up using the ZDB-to-disk or NDMP backup
functionality.
• It is not possible to mirror an object to the same device more than once in a single session.
• Block size of the devices must not decrease within a mirror chain. This means the following:
The devices used for writing mirror 1 must have the same or a larger block size than the
devices used for backup. The devices used for writing mirror 2 must have the same or a
larger block size than the devices used for writing mirror 1, and so on.
[Slide: mirror device selection; already configured devices are greyed out, load balancing options can be set for the backup devices and each mirror, and available devices are listed]
Specify separate devices for the backup and for each mirror. When a backup session with object
mirroring starts, Data Protector selects the devices from those you specified in the backup
specification. To avoid impact on performance, it is recommended that the devices have the same
block size and are connected to the same system or to a SAN environment. The minimum number
of devices required for mirroring SAP DB, DB2 UDB, or Microsoft SQL Server integration objects
equals the number of devices used for backup.
Selection of devices
Object mirroring is load balanced by default. Data Protector makes optimum use of the available
devices by utilizing as many devices as possible. Devices are selected according to the following
criteria in the order of priority:
• devices of the same block size are selected, if available
• locally attached devices are selected before network attached devices
When you perform an object mirror operation from the command line, load balancing is not
available.
Backup performance
Object mirroring has an impact on backup performance. On the Cell Manager and Media Agent
clients, the impact of writing mirrors is the same as if additional objects were backed up. On these
systems, the backup performance will decrease depending on the number of mirrors.
On the Disk Agent clients, there is no impact caused by mirroring, as backup objects are read only
once.
Backup performance also depends on factors such as device block sizes and the connection of
devices. If the devices used for backup and object mirroring have different block sizes, the mirrored
data will be repackaged during the session, which takes additional time and resources. If the data is
transferred over the network, there will be additional network load and time consumption.
[Slide: backup options, including failover handling for a clustered Cell Manager]
Data Protector offers a comprehensive set of backup options to help you fine-tune your backups.
All options have default values that are appropriate for most cases. The availability of backup
options depends on the type of data being backed up. For example, not all backup options available
for a file system backup are available for a disk image backup. Common and specific application
options for integrations such as Exchange, SQL, and so on are described in the specific integration
guide.
Additional options apply for all marked backup objects, for example software encryption or data
compression.
Based on your company's data protection policies, you must specify how long your backed-up data
is kept on the medium. For example, you may decide that data is out of date after three weeks and
can be overwritten during a subsequent backup. The Protection option can be specified for backup
and object copy operations.
The Catalog Protection option can be specified for backup and object copy operations.
The default value for catalog protection is “Same as data protection“. This means that you can
browse and select files or directories as long as the media are available for restore.
Note: If data protection expires, the catalog protection is cancelled. When the data
protection ends and a medium is overwritten, the catalogs for the objects are
removed regardless of the catalog protection. Even when catalog protection
expires, you are still able to restore, but you must specify filenames manually.
Be aware that catalog protection, together with logging level, has a very big impact on the growth
of the IDB. Therefore, it is very important to define a catalog protection policy appropriate to your
environment. Refer to the IDB section in the HP Data Protector Concepts Guide for more
information on catalog protection and usage recommendations.
Logging level can be specified for backup and object copy operations. Data Protector provides the
following four logging levels:
Log All
This is the default logging level. All detailed information about backed up files and directories
(names, versions, and attributes) is logged to the IDB.
You can browse directories and files before restoring and in addition look at file attributes. Data
Protector can fast position on the tape when restoring a specific file or directory.
Log Files
When this logging level is selected, detailed information about backed up files and directories
(names and versions) is logged to the IDB. You can browse directories and files before restoring,
and Data Protector can fast position on the tape when restoring a specific file or directory. The
information does not occupy much space, since not all file details (file attributes) are logged to the
database.
Log Directories
When this logging level is selected, all detailed information about backed up directories (names,
versions, and attributes) is logged to the IDB. You can browse only directories before restoring.
However, during the restore Data Protector still performs fast positioning because a file is located
on the tape near the directory where it actually resides.
No Log
When this logging level is selected, no information about backed up files and directories is logged
to the IDB. You will not be able to search and browse files and directories before restoring. The
different logging level settings influence the IDB growth, backup speed, and the convenience of
browsing data for restore.
Refer to the HP Data Protector Concepts Guide for more information on logging level.
Further options:
(to get a detailed description search with “filesystem options” in the OLH)
• Report level
• Backup files of size
• Backup POSIX hard links as files
• Do not preserve access time attributes
• Enhanced incremental backup
• Use native FS Change Log Provider
• Software compression
• Display statistical info
• Lock files during backup
• Copy full DR image to disk
• Security: AES 256-bit encryption, Encode
The Time out value is the amount of time, in seconds, during which Data Protector waits before
retrying to back up an open or busy file.
Scheduler - Overview
What is a scheduler?
Important:
Starting with Data Protector 8.10 there are two schedulers available:
- Legacy Scheduler
- Advanced Scheduler
Scheduler overview
A scheduler triggers the execution of a configured backup at a predefined date and time and is an
essential part of your backup specification configuration. It is, of course, possible to complete the
backup specification without scheduling it and to configure or update the schedule data at a later
time.
To ensure effective utilization of the available backup infrastructure, backups should run fully
unattended and around the clock (24x7). This requires careful planning, and each backup
specification run needs to be scheduled individually.
In addition, the scheduler allows the configuration of a recurrent execution, so the configured
backup specification can run several times a day, week, or month.
Each backup specification can have multiple schedules configured. For each schedule, the backup
type (full, incremental, incremental1-9) and the backup protection type (default or a newly
specified protection time) can be set. This allows recurrent runs of a single backup specification
with different parameters, for example a daily incremental with 2 weeks of protection and a weekly
full with 1 month of protection.
Data Protector ships with its own built-in scheduler. In addition, it is possible to use an external
scheduler, which uses the Data Protector omnib command to start a backup at a predefined date
and time. See the Data Protector CLI Reference Guide for the available omnib options.
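For the external-scheduler approach, a cron entry on a UNIX Cell Manager could invoke omnib directly. This is a sketch only: the datalist name FS_Daily is hypothetical, and the -datalist/-mode options are assumptions to be verified against the CLI Reference Guide.

```shell
# Illustrative crontab entries (option names are assumptions, verify with
# `omnib -help`). /opt/omni/bin is the usual Data Protector path on UNIX.
#
# Full backup of the hypothetical datalist "FS_Daily" every day at 21:00:
0 21 * * *   /opt/omni/bin/omnib -datalist FS_Daily -mode full
#
# Alternatively, a weekly-full pattern: full on Friday, Incr1 Monday-Thursday:
0 21 * * 5   /opt/omni/bin/omnib -datalist FS_Daily -mode full
0 21 * * 1-4 /opt/omni/bin/omnib -datalist FS_Daily -mode incr1
```

Note that an external scheduler bypasses the built-in schedulers entirely, so features such as missed-job discovery do not apply to backups started this way.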
Important: Starting with Data Protector 8.10 there are two schedulers available:
- Legacy Scheduler
- Advanced Scheduler
The Legacy Scheduler is the existing Data Protector scheduler; it was left unchanged and allows
you to use Data Protector as in previous Data Protector versions.
Newly introduced is the Advanced Scheduler, with a new look and feel and added functionality.
Both schedulers can be used for scheduling backups and are explained on the following
pages.
Legacy Scheduler
Pro:
• Easy to use
• Access to the scheduler configuration file
• Global deactivation possible
• Holiday file and template support
Con:
• No job prioritization
• Missed executions not discovered
• Limited scheduler options

Advanced Scheduler
Pro:
• Priority-based job execution
• Large set of recurrence options
• Missed job execution discovery
• MS Outlook-like scheduler look and feel
Con:
• No scheduler configuration file
• No global deactivation possible
• No holiday file and template support
Important: Both schedulers work fully independently of each other.
The slide above lists the pros and cons of each scheduler.
Both schedulers are fully independent of each other, so it is possible to schedule the backup of
one backup specification using both schedulers. There is no scheduler hierarchy and no scheduler
deactivation if backups are configured in both schedulers.
The Legacy Scheduler is the default scheduler in Data Protector. During an initial backup
specification configuration it appears after the Backup Options window. For a saved backup
specification, click the Schedule tab to access it.
A calendar shows the configured schedules for a 9-month timeframe. The timeframe can be moved
forward or backward by clicking the arrow icons in the upper left and lower right parts of the
calendar window. A color code indicates the backup type. If you click a specific
date, the schedule for that date is shown in a frame below the calendar.
Clearing a schedule
To eliminate your schedules that you have already set up, click Reset in the Schedule property
page.
Disabling a schedule
To disable a backup schedule, select the Disable Schedule option in the Schedule property page.
The backup will not be performed until you deselect this option. Disabling backup schedules does
not influence currently running backup sessions.
Daily intensive
Data Protector runs a full backup at midnight and two additional incremental backups at
12:00 (noon) and 18:00 (6 p.m.) every day. This backup type is intended for database
transaction servers and other environments with intensive backup requirements.
Daily full
Data Protector runs a full backup every day at 21:00 (9 p.m.). This is intended for backups
of single workstations or servers.
Weekly full
Data Protector runs a full backup every Friday and Incr1 backups every day from Monday to
Friday at 21:00 (9 p.m.). This is intended for small environments.
Fortnight full
Data Protector runs a full backup every second Friday. Between these backups, Data
Protector runs Incr1 backups every Monday to Thursday, all at 21:00 (9 p.m.).
Monthly full
Data Protector runs a full backup on the first of every month, an Incr1 backup every week,
and an incremental backup every other day. This is intended for relatively static
environments.
Saved schedules of a backup specification are stored as ASCII files on the Cell Manager under:
Datalists: DP_CONFIG\schedules
Barlists: DP_CONFIG\barschedules\<integration name>
First select the Time options, such as Start Time and initial Start Date. Next select the Recurring
mode: None, Daily, Weekly, or Monthly. Depending on your selection, more recurring options become
available, as shown above.
• Network Load:
High, Medium and Low
• Backup protection:
Default (use the setting from Backup Options),
None (not recommended), Until <date>,
Number of weeks, Number of days,
or Permanent Protection (not recommended)
The slide above summarizes the main features of the Advanced Scheduler.
The Advanced Scheduler can be opened in two ways: (1) from the Legacy Scheduler window, or
(2) from the Actions menu.
In the right part of the window, all configured backup specifications are listed, grouped by type.
Click a backup specification to see its existing schedules.
Click the symbols above the listed schedules to add, edit, or delete one schedule, or to delete all
schedules. How to add a new schedule is explained on the next slide. The missed execution feature
is explained at the end of this module.
• Recurrence Pattern
If you click Add or Edit, the Schedule Options screen shown above appears.
The following options already exist in the Legacy Scheduler, so there is no need to explain them
again:
• Backup type
• Retention
• Start date and end date/time
• Network load
Priority (4)
Starting with Data Protector 8.10, it is possible to assign a priority to a schedule.
The priority range is 1 to 6000; a backup with priority 1 has the highest priority, and one with
priority 6000 has the lowest.
If multiple jobs are started at the same time, backups are executed according to their assigned
priorities. Assigning priorities to schedules removes the need to stagger backups by start time;
instead, configure a number of backups with the same start time and control the execution order
by setting different priorities.
Backup jobs without a defined priority receive a default priority of 3000 (Medium).
Note: If more than one backup starts at the same time, the first backup is selected randomly.
All following backups are started according to their priorities.
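The effect of priority-based selection can be illustrated with a small shell sketch (this is not Data Protector code; the job names are invented): jobs sharing a start time are ordered by their numeric priority, lowest number first.

```shell
# Three hypothetical jobs with the same start time, listed as "priority name".
# Sorting numerically mimics the execution order (1 = highest, 6000 = lowest).
jobs='3000 FS_Daily
1 DB_Critical
6000 Archive_Monthly'
printf '%s\n' "$jobs" | sort -n
# Prints DB_Critical first, then FS_Daily, then Archive_Monthly.
```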
The Advanced Scheduler allows scheduling backups in intervals from every minute to once a year.
Some examples:
• The last day of every second month
• Every last Sunday of the month
• Every 30 minutes between 6 a.m. and 10 a.m.
• Every 2nd of January
There are 7 sub menus:
• Once:
Backup is executed just one time, no additional options
• Every minute:
If Every minute is selected, a submenu allows configuring an interval of 1, 2, 3, 4, 5, 6, 10, 15,
or 30 minutes.
• Hourly:
Similar to Every minute, but the job can be started every 1, 2, 3, 4, 6, 8, or 12 hours.
• Daily:
The user can choose every day or every weekday. Weekdays are Monday to Friday.
• Weekly:
Select the days (Monday to Sunday) to which the schedule should apply.
• Monthly:
If Monthly is selected, the user can choose between two submenus:
The first submenu allows selecting a day regardless of its weekday. The selection is: Day (1-31)
of every (1, 2, 3, 4, 6, 12) month(s).
The second submenu allows selecting dedicated days of the week, e.g. Monday or Saturday. The
selection is: The (first, second, third, fourth, last) (Monday to Sunday, weekday, day) of every
(1, 2, 3, 4, 6, 12) month(s).
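A pattern such as "every last Sunday of the month" always resolves to a concrete date. The shell sketch below (not Data Protector code; GNU date is assumed) shows how such a recurrence can be computed:

```shell
# last_sunday YEAR MONTH -> prints the date of the last Sunday of that month.
last_sunday() {
    y=$1; m=$2
    # Last day of the month: first day of the next month minus one day.
    last=$(date -d "$y-$m-01 +1 month -1 day" +%F)
    dow=$(date -d "$last" +%u)           # ISO weekday: 1 = Monday ... 7 = Sunday
    date -d "$last -$(( dow % 7 )) days" +%F
}

last_sunday 2014 12    # -> 2014-12-28
```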
The rows in the table shown above are independent of each other and show different situations.
The age of the backups increases from right to left, so that the far left is the oldest and the far
right is the most recent backup.
The Full and Incr<x> entries represent still-protected objects of the same owner. Any existing
Incr<x> that is not protected can be used for restore, but is not considered for referencing on
subsequent backup runs.
Examples
1. In the second row, there is a full, still-protected backup and an Incr2 is running. There is no
Incr1, so the backup is executed as an Incr1.
2. In the fifth row, there is a full backup, an Incr1, and another incremental is running. Data
Protector references the currently running backup against the previous incremental, that is, the Incr.
3. In the eighth row, the Incr3 is executed as an Incr2, and in the eleventh row, the Incr3 is
executed as an Incr1.
Incr1 backup
This backup type refers to the most recent still protected full backup with the same ownership. It
does not depend on any previous incremental backups. The files that have changed since the most
recent still protected full backup are included in the backup.
Incr1-9
(Available incremental levels are different for specific integrations.)
Incr1-9, also called leveled incremental backup, backs up only changes made since the last
protected backup of the next lower level. For example, an Incr1 backup saves all changes since the
last full backup, and an Incr5 backup saves all changes since the last Incr4 backup. An Incr1-9
backup never references an existing Incr backup. If there is no protected full backup, Data Protector
starts a full backup instead.
The advantage of an incremental backup is that it takes less time to complete (it backs up smaller
quantities of data) and occupies less space on media and in the IDB.
The disadvantage is that a restore is more complicated, as you usually need all the media used
since the last full backup.
[Timeline diagram: a full backup followed by incremental backups and a new full backup. The first
full backup has expired while its incremental backups are still protected (7 days) at time t1, so
those incremental backups are invalid for restore.]
If the protection of a full backup expires, Data Protector does not check whether there are still
dependent incremental backups. The user has to choose an appropriate protection time window in
order to prevent this case. In such a configuration, the corresponding incremental backups, even
though still protected, become useless, since the required full backup has expired.
The final step in creating a backup specification is to review, and possibly change the objects and
options selected for the backup. Here you may also change the order of the objects in the list. The
order will affect the execution sequence and pairings for concurrency. The object list order along
with the algorithm for load balancing will determine the backup sequence.
Notice that you may select the column headings in the summary within the Results Area to change
the sorting preference for the list; such as the “Order” of the objects.
Each object that is part of a mirror chain may have a specific device set for it. By default, devices used
for writing mirrors are selected automatically.
To change the device for a mirror, select the object in the list, then select “Change Mirror.”
From the Mirror options dialog, highlight the mirror, and select a device from the drop-down list. By
default all mirrors are set to <Automatic>.
You may add any additional objects to the backup specification at this point. Each selection starts
an Add Object wizard. The object types that may be added are: UNIX filesystem; Windows
filesystem; NetWare filesystem; Client System; Internal Database (IDB); Microsoft network share;
disk image object (commonly called raw disk).
If a disk image object is added, use the <- Back button to set the raw disk image properties.
The slide above illustrates where to find General and Trees object properties where you may fine-
tune the scope of the backup.
From here, you may select the parts of a filesystem object to back up, instead of the entire tree,
which is the default. The Trees list is essentially an include-only list for the backup. The Exclude
list allows you to specify the absolute paths of the files or directories to exclude from the backup.
When the lists are empty, the entire object (file system) will be backed up.
Use the Filter… button to specify a wildcard list of names to include or exclude. The “Onlys” list is
used for include, and the “Skips” list is used for exclude. In both cases, the list represents a filter
for the entire file system. Whenever a match for the filter is found, the item is either skipped or
included in the backup.
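The filter behavior can be sketched with shell wildcard matching (an illustration of the semantics only, not Data Protector code):

```shell
# matches NAME PATTERN -> succeeds if NAME matches the shell wildcard PATTERN,
# just as a Skips/Onlys entry is matched against every item in the file system.
matches() { case "$1" in $2) return 0 ;; *) return 1 ;; esac; }

# A Skips filter "*.tmp" skips every matching file, wherever it appears:
matches "scratch.tmp" "*.tmp" && echo "skipped"
# Files that do not match the filter stay in the backup:
matches "report.txt" "*.tmp" || echo "included"
```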
From this object summary screen you may also modify any object options individually.
The options from this summary screen allow individual filesystems to have specific rather than
general properties applied. For example, the pre-post exec script may be modified for each object,
rather than having one script for all filesystems.
Object Qualifiers
The data that is to be backed up requires a more detailed qualification for Data Protector.
Configuring Data Protector to back up a file system is not sufficient information. A complete
description of the object, such as where it resides, which parts of it are to be backed up, and so
on, must be specified.
Specialized backup specifications, such as Oracle, Informix, MS Exchange, etc., have other
qualifiers, such as the instance or SID name. However, we will not be detailing these options, but
focusing on the backup specification datalist and options instead.
Note Data Protector uses four key qualifiers to identify file system objects in the
database: Hostname, Mount Point/Drive Letter, Description and Owner. These
object names are used for restore and reporting.
The following list details the most commonly used qualifiers used with the backup specifications:
• Hostname: Specifies the particular system in the cell that the object resides on.
Example: vindaloo.uk.hp.com is a fully qualified hostname.
• Mount Point: Specifies the file system mount point on a UNIX type system, or the drive
letter on a Windows or Novell system
Example: /opt a UNIX file system mount point
/ a UNIX file system mount point for the root file system
C:\ a Windows drive letter (internally converted to C:/)
The description can be customized to distinguish between this particular backup of the object and
another. This description is stored in the Database as the object description.
This slide shows the remaining tabs: Options, Other, and WinFS Options.
There are no differences compared with the windows already shown in the backup creation part of
this training module. The user can fine-tune here, object by object, without stepping backward.
These options allow the user to fine-tune and change options in a similar way as already shown in
the Options tab.
You can preview a backup to verify your choices. Previewing does not read data from disk(s)
selected for backup, nor does it write data to the media in the device configured for the backup.
However, it checks the communication through the used infrastructure and determines the size of
data and the availability of media at the destination.
You can start an existing (configured and saved) backup after you have given Data Protector all the
information for the backup.
Limitation
Preview is not supported for some integrations and for ZDB.
Steps
1. Select the backup specification that you want to start or preview.
2. In the Actions menu, click Preview Backup.
3. In the Preview or Start Backup dialog box, select the backup type (Full or Incremental; some
other backup types are available for specific integrations) and the Network load.
4. In the case of ZDB to disk+tape or ZDB to disk (instant recovery enabled), specify the Split
mirror/snapshot backup option.
5. Click OK to preview or to start the backup.
[Diagram: pre- and post-exec commands at two levels. Backup specification level pre/post-exec
commands can be executed on any system in the cell; object-level pre/post-exec commands are
always executed on the system where the object resides.]
Before a backup or restore session begins, an additional action is sometimes necessary. For
example, you may want to check the number of files to back up, stop some transaction processing,
or shut down a database. Such actions are performed using pre- and post-exec commands. Pre-
and post-exec commands are not supplied by Data Protector. Depending on your needs, you have
to write your own executable to perform the required actions. For backup, pre- and post-exec
commands can be configured on two levels:
Backup Specification
The pre-exec command is executed before the backup session starts. The post-exec command is
executed when the backup session stops. You specify these commands as backup options for the
entire backup specification. By default, pre- and post-exec commands for the session are
executed on the Cell Manager, but you can choose another system.
Backup Object
The pre-exec command for a specific backup object starts before the object is backed up. The post-
exec command for the backup object is executed after the object is backed up. You specify these
commands as backup options that apply for all objects, or for individual objects. Pre- and post-
exec commands for the object are executed on the system where the Disk Agent that backs up the
object is running.
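Since you must supply such executables yourself, the following is a minimal, hypothetical object-level pre-exec sketch in shell. Data Protector evaluates the exit status: 0 lets the backup of the object continue, non-zero reports a failure. The quiesce command shown is illustrative only.

```shell
#!/bin/sh
# Hypothetical pre-exec sketch: quiesce an application before its object is
# backed up. The actual quiesce command is passed as arguments; in real use it
# might be something like "/etc/init.d/myapp stop" (an invented example).
pre_exec() {
    if "$@"; then
        echo "pre-exec: quiesce OK"
        return 0
    else
        echo "pre-exec: quiesce FAILED" >&2
        return 1
    fi
}

pre_exec true    # "true" stands in for a real quiesce command here
```

A matching post-exec would reverse the action (for example, restart the application) after the object has been backed up.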
Performing backups
Data Protector offers three ways to start a backup session interactively: 1) from the GUI, manually;
2) from the GUI, scheduled; 3) from the CLI, using the omnib command.
Shown above is a sample backup session. Three sections in the results area convey current status
information.
• Object status (running, pending, completed, completed/errors, failed, aborted)
• Device status (inactive, running, inactive/finished)
• Session messages: (auto-scroll is on; may be saved, printed, copied to clipboard)
While a backup session is executing, the GUI does not need to continue monitoring the backup; if
the GUI is exited, the session will continue.
With a right-click in the session output window, the user can stop the auto-scrolling of the
session output. Other options are to copy part or all of the message output, to clear the output,
to find a string, and more.
Resume Session
All objects in status Failed will be
backed up from the point of failure
CLI:
Restart: omnib -restart SessionID
Resume: omnib -resume SessionID
Typically, a backup session does not back up just one system. A backup specification might have
50 or 100 systems configured that will be backed up together within one backup session. Such large
jobs typically run into various issues caused by the environment, and one or more systems will not
be backed up correctly. Data Protector helps to isolate the failed systems and offers two different
ways of restarting only the failed objects:
• Restart Failed Objects
• Resume Session
Resume Session
This option was introduced in Data Protector 8.10. Simply put, it allows the backup to continue at
the position where the problem occurred; failed objects are not restarted from the beginning.
Based on IDB checkpoints, Data Protector knows the last file that was backed up successfully and
continues the backup at that position.
The resumed session runs like an "append" to the original backup session and uses the original
backup session ID.
Both features are available through the Data Protector GUI. Change to the Internal Database
context, expand Sessions, select the failed session, and choose either Restart Failed Objects or
Resume Session from the displayed menu. Click Yes in the popup window to trigger the
restart/resume.
CLI:
Restart: omnib -restart <SessionID>
Resume: omnib -resume <SessionID>
Limitation: The Resume Session and Restart Failed Objects options are only available for
filesystem backups and Oracle integration backups.
Columns are customizable.
If the Cell Manager services were down, Missed Job Execution lists all backups that could not run
during that time. The DP administrator now has the option to start those missed backups
selectively from this GUI.
The Missed Job Execution window can be opened from the Actions menu of the Internal Database
context, as shown on the slide above.
Restarting jobs
Select a job and click Run Now to immediately start that missed job. The restarted session and all
other entries are not deleted automatically; this has to be done manually by clicking the Delete or
Delete All button.
[Diagram: network backup. The Disk Agent, the Backup Session Manager on the Cell Manager, and
the Media Agent communicate over TCP/IP.]
For activation, go to Options - Backup Specification Options - Advanced and check the option
Reconnect broken connections for your backup specification.
When this option is enabled, a more advanced protocol is used for agent communication and data
transfer. This protocol has a performance overhead and should therefore be used only if link
reliability is a problem.
If the BSM loses communication with the Disk or Media Agents, the BSM and the agents will both
try to re-establish communication. This behavior can be fine-tuned with an omnirc option.
Contents
Module 10 — Restore 1
10–3. SLIDE: What is Restore? .......................................................................................................... 2
10–4. SLIDE: Restore methods ......................................................................................................... 3
10–5. SLIDE: Restore prerequisites .................................................................................................. 4
10–6. SLIDE: Concept of Parallel Restore ......................................................................................... 6
10–7. SLIDE: Restore – Sequence ..................................................................................................... 8
10–8. SLIDE: Restore – Objects ....................................................................................................... 10
10-9. SLIDE: Restore – Session ....................................................................................................... 12
10-10. SLIDE: Restore – Source ....................................................................................................... 14
10-11. SLIDE: Restore – Object properties ...................................................................................... 16
10-12. SLIDE: Restore – Destination ............................................................................................... 18
10-13. SLIDE: Restore – Options ...................................................................................................... 20
10-14. SLIDE: Restore – Devices ...................................................................................................... 22
10-15. SLIDE: Restore – Media......................................................................................................... 24
10-16. SLIDE: Restore – Media/object copies .................................................................................. 27
10-17. SLIDE: Restore – Summary................................................................................................... 29
10-18. SLIDE: Restore – Single or parallel? ..................................................................................... 31
10-19. SLIDE: Restore – Point in time restore ................................................................................. 32
10-20. SLIDE: Restore – By query name or location........................................................................ 33
10-21. SLIDE: Restore – By query backup or modification time ..................................................... 34
10-22. SLIDE: Resume failed Restore sessions ............................................................................... 35
Module 10
Restore
What is “Restore” ?
Restore is the process of recreating the original data from a backup copy back
to the original or to a newly specified location.
Note:
Depending on the platform, the way you specify these features and the available options can vary!
What is Restore?
A restore is a process that recreates the original data from a backup copy to a disk. This process
consists of the preparation and actual restore of data and, optionally, some post-restore actions
that make that data ready for use.
The Data Protector Internal Database (IDB) keeps track of data such as: which files from which
system are kept on a particular medium.
Depending on the platform, the way you specify these features and available options can vary. For
information on how to restore with application integrations see the HP Data Protector Integration
Guides.
Restore methods
Restore methods
Data Protector offers several methods of restoring data interactively. Three methods (object,
session, and query) are accessible through the GUI, and one uses the command-line interface (the
omnir command).
In general, restore tasks are infrequent events that are rarely performed twice in the same manner.
As such, there is no need for the equivalent of a backup specification for restores. omnir is the
CLI counterpart of the GUI.
Data Protector restore definitions are done on the backup session or object level. Within one
restore session, one or more backup objects may be selected. For each object, files and versions
may be selected. Options may be set on the restore session level, as well as on object level.
When the restore is started, the Restore Session Manager (RSM) is executed on the Cell Manager
and a restore session ID is assigned to the restore session. The restore session is stored in the IDB
in a similar manner to backup sessions. These sessions may be removed at the discretion of the
administrator. While backup data in the Data Protector database is necessary for restore, restore
session data is only necessary for auditing and reporting purposes.
Restore prerequisites
• Appropriate user rights to perform a restore task
(These rights are defined according to the user group)
(Slide diagram: a valid restore chain of a protected backup, a full backup plus incrementals up to
the desired point in time t+n, contrasted with a broken restore chain in which one incremental
backup in the middle has expired or been deleted.)
Restore prerequisites
To perform a restore you need to have the appropriate user rights. These rights are defined
according to the user group.
The Restore context in the graphical user interface allows you to select a backup session (for
example, an incremental backup from a specific backup specification), then browse all objects that
were backed up in this session, and all versions of this backup chain.
The backup chain is based on the level of the incremental backups used (Incr, Incr1, Incr2, and so
on); simple or rather complex dependencies of leveled incremental backups on previous
incremental backups can exist. The backup chain is all backups, starting from the full backup, plus
all the dependent incremental backups up to the desired point in time.
When performing a restore, the backup chain is used to reconstruct the data set selected for
restore. If one of the backups within the chain is missing, an incomplete restore may be
performed and a warning is generated. The warning indicates that the restore chain has been
broken.
As a consequence, the user cannot rebuild the data structure up to the desired point in time.
In the example below, a depot file representing an incremental backup in a Data Protector file
library was missing: the media mount request in the restore task was manually aborted. However,
the data could be restored up to the point of the missing incremental backup.
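The chain logic can be sketched as follows. This is an illustrative sketch with invented names, not Data Protector's actual algorithm: it shows why a missing member limits the restore to an earlier point in time.

```python
# Illustrative sketch, not Data Protector's actual algorithm: a restore
# chain is the most recent full backup plus all dependent incrementals
# up to the desired point in time. If a member of the chain is missing
# (expired or deleted), the chain is broken and data can be rebuilt
# only up to the last available member.

def restore_chain(sessions, point_in_time):
    """sessions: list of (time, kind, available) tuples, oldest first."""
    chain = []
    for time, kind, available in sessions:
        if time > point_in_time:
            break
        if kind == "full":
            chain = []              # a new full backup starts a new chain
        chain.append((time, kind, available))
    usable = []
    for time, kind, available in chain:
        if not available:
            return usable, True     # broken chain: partial restore only
        usable.append((time, kind, available))
    return usable, False

sessions = [(1, "full", True), (2, "incr", True),
            (3, "incr", False),    # this incremental has expired
            (4, "incr", True)]
chain, broken = restore_chain(sessions, point_in_time=4)
print(broken, [t for t, _, _ in chain])   # True [1, 2]
```

Here the expired incremental at time 3 breaks the chain, so only the backups at times 1 and 2 can be used.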
(Slide diagram: a parallel restore session. Host A runs two Disk Agents, one for Disk1 and one for
Disk2; Host B runs one Disk Agent for Disk1. All three backup objects are restored through a single
Media Agent. MA = Media Agent, DA = Disk Agent.)
A parallel restore allows you to restore data concurrently from multiple objects to multiple disks or
file systems while reading the media only once, thus improving the speed of the restore.
For the parallel restore to work, the data from the different objects must have been sent to the
same backup device using a concurrency of 2 or more. During a parallel restore, the data for
multiple objects selected for restore is read from the multiplexed media, thereby improving restore
performance.
Prerequisite
At backup time, the data from the different objects must have been sent to the same device using a
concurrency of 2 or more.
Limitation
You cannot restore the same object in parallel. For example, if you select for the same restore an
object under Restore Objects and then select the session that includes the same object under
Restore Sessions, the object will be restored only once and a warning will be displayed.
A parallel restore requires only one pass of a media in order to extract all the selected objects from
it; in other words, a sort of reverse concurrency. A sequential restore only allows the selection of a
single object at a time; thus, multiple passes of the media are required if more than one object from
the concurrent backup is selected for the restore.
When sequential restore is necessary, it is wise to de-multiplex the backup media as a way of
organizing the objects (object copy) for the fastest restore. See the “Media Management and
Replication” module for details on object copy.
• Objects that were backed up in parallel to the same medium are capable of being restored in
parallel.
• Objects that were backed up to separate devices using different media are capable of being
restored in parallel.
• Objects that exist on the same medium in different tape segments are restored sequentially,
even if configured for parallel restore.
Note: A parallel restore may launch multiple DA processes for a single MA, just the
reverse compared to concurrent backup. In addition, Data Protector may start
multiple DA processes for a single object if the data was backed up in that
manner using the trees options.
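The single-pass idea can be sketched as follows. This is an illustrative sketch with an assumed data layout (not Data Protector's media format): a parallel restore routes each interleaved segment to its object in one pass, while a sequential restore needs one pass per object.

```python
# Illustrative sketch with an assumed data layout (not DP's media
# format): a multiplexed medium holds interleaved segments from several
# backup objects. A parallel restore routes each segment to its object
# in a single pass; a sequential restore needs one pass per object.

def parallel_restore(medium, selected):
    """medium: list of (object_name, segment) in tape order."""
    restored = {name: [] for name in selected}
    for name, segment in medium:
        if name in restored:
            restored[name].append(segment)
    return restored, 1                       # one pass over the medium

def sequential_restore(medium, selected):
    restored, passes = {}, 0
    for name in selected:                    # one pass per object
        passes += 1
        restored[name] = [seg for obj, seg in medium if obj == name]
    return restored, passes

medium = [("objA", "a1"), ("objB", "b1"), ("objA", "a2"), ("objB", "b2")]
par, p1 = parallel_restore(medium, {"objA", "objB"})
seq, p2 = sequential_restore(medium, ["objA", "objB"])
print(p1, p2)                                # 1 2
```

Both approaches recover the same data; the difference is the number of media passes, which is what makes the parallel restore faster.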
(Slide: restore by Object, by Session, or by Query; in each case you then specify the Object
Version, Destination, Restore Options, Devices, and Media.)
Restore – Sequence
A standard restore procedure consists of several phases. You have to select the data to be
restored, find the necessary media, and start the restore session. Other settings are predefined
according to the backup process, but can be modified.
Prerequisite
To perform a restore you need to have the appropriate user rights.
Tasks:
• Selecting the data to restore
• Selecting a specific backup version
• Selecting the restore location
• Setting restore options
• Handling file conflicts
• Selecting a device to restore from
• Finding media needed to restore
• Previewing and starting a restore
• Viewing finished sessions
• Resuming failed sessions
• Specifying the restore location for individual files and directories
The restore process is conceptually similar to backup, but you may start the restore at any point;
you do not have to walk through all of the option screens if you want to accept any or all of the
restore defaults.
For a detailed description, search the online Help (OLH) for "Advanced restore tasks".
Restore Options
Data Protector offers a set of comprehensive restore options that allow fine-tuning of a restore. All
these options have default values which are appropriate in most cases.
The following list of options can be set on a per-object basis. The restore options are available
according to the type of data being restored.
(Slide: the Restore context Scoping Pane shows Restore Objects (e.g. Filesystem, Internal
Database, Integrations), Restore Sessions, and Restore by Query. Integration objects only show up
if at least one client has the corresponding Integration Agent installed, e.g. MS SQL Server or
Oracle Server.)
Restore – Objects
Depending on the types of backups performed, different object types may be available for restore.
Each object has restore options that are specific to the individual object type. There are also restore
options that are common to all object types.
Restore Objects
If this item is selected in the Scoping Pane, the types of data backed up (e.g.: Filesystem, Internal
Database, and so on) are listed in the Results Area. You can right-click Restore Objects and select
“List From Media” to restore directly from media.
By default, when you select a whole directory, only directories and/or files from the last backup
session are selected for restore. Directories and files in the same tree structure that have not been
backed up in the same backup session are shaded. If you want to restore the data from any other
backup session, right-click the selected directory and click Restore Version. In the Backup version
drop-down list, select the backup version that you want to restore from.
Object Types
Disk Image
From a raw disk object backup it is possible to restore the entire raw disk image copy. No
individual file or directory objects may be selected with this object type.
Filesystem
Filesystem objects include UNIX, Windows, NetWare, and other platforms. From these objects, it is
possible to restore a file, directory, or complete file system. In addition, from the winfs object type,
it is possible to restore the Windows Registry, which is part of the CONFIGURATION backup.
Internal Database
From the omnidb object, the Data Protector internal database can be recovered, including the
<DPCONFIG> directories. This topic will be addressed in much more detail later in this course.
Databases
Data Protector supports integrated online backup for several enterprise database and application
solutions. For example: RMAN for Oracle, sapdba for SAP, or onbar for Informix. How to back up and
restore such integrations is explained in the dedicated Data Protector integration guides.
Note: By default, the entire restore chain is restored (Show full chain is selected).
Restore – Session
If this item is selected in the Scoping Pane, filesystem sessions and their attributes are displayed in
the Results Area. You can right-click Restore Sessions and set the interval for limiting the displayed
sessions.
Restore Sessions will contain a list of backup sessions with all objects backed up in these sessions.
You can browse all objects that were backed up in this session (like any disk drive from all clients
named in the backup specification), and all versions of this restore chain.
Prerequisite
In order to browse objects and select directories or specific files, the corresponding backups must
have been done using a logging level of directory, filenames, or log all.
Note: By default, the entire restore chain is restored (Show full chain is selected).
To restore only data from this session, select Show this session only.
In some cases, the restore of an entire system is necessary; normally this would be part of a
disaster recovery. A disaster recovery of a system probably includes some out-of-date files or data.
Data Protector restore, in conjunction with disaster recovery tools, allows for easy recovery of your
system and data from the most current backup session.
The session restore capability within Data Protector is based upon the specific backup sessions that
have been completed. Data backed up within a single session, usually from a backup specification
(Datalist), may be restored in parallel.
While selecting a session to restore, Data Protector provides individual object selection, so you are
not limited to an all or nothing restore. By selecting a backup session for restore purposes you are
able to restore all of the data that was a part of the backup.
The Data Protector internal database plays a key role in making the session and object data
available for restore. Within each session, you will be able to browse the object trees and select
down to the file level if a partial rather than a full restore is necessary.
Restore - Source
• Select from Restore Objects or
• Select from Restore Sessions
Restore – Source
You can browse for data to restore in two possible ways: either from the list of the backed up
objects or from the list of sessions. The difference is in the scope of directories and files presented
for restore:
• Restore Objects provides a list of backed-up objects, classified by client systems in the cell and
by data types (such as Filesystem, Disk Image, Internal Database, and so on). You can browse all
the directories, files, and versions that were backed up and are still available for restore.
• Restore Sessions provides a list of backup sessions with all objects backed up in these sessions.
You can choose to view only sessions from the last year, last month, or last week. You can browse
all objects that were backed up in a session (such as any drives from all clients named in the backup
specification), and all versions of this restore chain. By default, the entire restore chain of the
selected directories or files is restored, but you can also restore data from a single session only.
Prerequisite
In order to browse objects and select directories or specific files, the corresponding backups must
have been done using a logging level of directory, filenames, or log all.
Required steps for selecting the data from the list of the backed up objects
1. In the Context List, click Restore
2. In the Scoping Pane, under Restore Objects, expand the appropriate data type (for
example, File system).
3. Expand the client system with the data you want to restore and then click the object
(mount point on UNIX, drive on Windows systems) that has the data.
4. In the Source property page, expand the object and then select directories or files that
you want to restore.
By default, when you select a whole directory, only directories and/or files from the last backup
session are selected for restore. Directories and files in the same tree structure that have not
been backed up in the same backup session are shaded. If you want to restore the data from
any other backup session, right-click the selected directory and click Restore Version. In the
Backup version drop-down list, select the backup version that you want to restore from.
Note: If you repeat the steps above and select data under more than one object
(mount point or drive), you can perform a parallel restore.
The amount of file and directory details available for browsing depends upon the log level option
that was used for the backup session.
Data Protector allows each object selected for restore to be “fine-tuned” to meet specific
requirements. You can select the backup version of the object as well as the destination for the
object to be restored.
On the "Restore Only" option tab, you may specify wildcard matches to filter the object content and
restore only certain files or file types. For example, to restore just Word documents, use *.doc. To
exclude certain files from the restore, enter them in the "Skip" option tab; for example, to exclude
any file whose name contains "core", use patterns such as *core or core.*.
Within the Properties GUI shown above, the Destination tab contains options that allow the
destination of the object to be changed. You may alter the name of the object or place it into a new
directory. Setting the destination here allows for an override of the destination defaults.
Options are:
• Backup version
− Select a backup version of the file or directory. By default, the most recent backup
version is selected for restore.
• Restore
− To default destination
The file or directory will be restored to the destination specified under Default
destination in the Destination property page. If you leave the default there, the
destination is the original directory on the original client system.
− As
The path from the backup will be replaced with the new location specified below. The
destination path can be a new directory or an existing one. You can rename the files
and directories that you want to restore.
− Into
The path from the backup will be appended to the new location selected below. The
new location has to be an existing directory.
• Location
Enter a new path for the file or directory.
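The As/Into semantics above can be sketched as follows. This is an illustrative sketch (the helper names are hypothetical, and POSIX-style paths are used for brevity): "As" replaces the backed-up path with the new location, while "Into" appends the backed-up path to it.

```python
# Illustrative sketch of the documented path semantics (helper names
# are hypothetical): "As" replaces the backed-up path with the new
# location; "Into" appends the backed-up path to the new location.
import posixpath

def restore_as(backup_path, new_location):
    return new_location                      # path replaced outright

def restore_into(backup_path, new_location):
    return posixpath.join(new_location, backup_path.lstrip("/"))

print(restore_as("/sound/songs", "/users/bing"))    # /users/bing
print(restore_into("/sound/songs", "/users/bing"))  # /users/bing/sound/songs
```

This mirrors the example used later in this module: restoring C:\sound\songs into \users\bing produces C:\users\bing\sound\songs.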
Restore – Destination
• Default destination
• Original location on the same client, or
• Any DA client of the cell
Restore – Destination
After selecting the data that you want to restore, you can select the location to restore the data to
and the file conflict handling. You can restore the data to another Data Protector client system and
change the directory path. This applies to the entire object to be restored.
Steps:
1. Click the Destination tab and then, in the Target client drop-down list, select the client
system that you want to restore to. By default, Data Protector uses the original directory
structure for the restore: if the data was backed up from the C:\temp directory on system A,
it restores the data to the C:\temp directory on system B.
2. You can change the directory path for your restore by selecting the Restore to new location
option and then entering or browsing for a new anchor directory. The directory path at
backup time is appended to the new anchor directory: if data was backed up from the
C:\sound\songs directory and you enter \users\bing as a new path, the data is restored to
the C:\users\bing\sound\songs directory.
Default destination
Some of these settings can also be set for each individual file or directory in the Source page. If you
change the properties there, they will override the settings made in this page.
• Target client
By default, you restore to the same client system from which the data was backed up. You can
select another system in your cell from the drop-down list. The Disk Agent is started on the
selected client system and the data is restored there.
You need to have the Restore to other clients user right to be able to restore to another client
system.
• Restore to original location
By default, you restore your data to the same directory in which it was located when the backup
was performed. It can be on the original client system or on some other client system you have
selected.
• Restore to new location
This option enables you to restore your data to another directory. Specify the path to the
directory to which you want to restore the data. You can browse for it if you are using the GUI
on a Windows system. If you restore to a Windows system, you could select a directory on
another system, but this is not recommended.
Restore – Options
DP offers a set of comprehensive restore options, which are applied on a per-object basis. Two of
them deserve special attention:
Omit deleted files: When using the Restore As or Restore Into functionality with this option
enabled, be careful when selecting the new location to prevent accidental deletion of existing files.
The time on the CM and the clients must be synchronized for this option to function properly.
Move busy file: On UNIX, a file that is in use is restored under a modified name (a # is prepended to
the filename). The application will keep using the busy file until it closes the file. Subsequently, the
restored file is used. On Windows, the file is restored as filename.001. All applications keep using
the old file. When the system is rebooted, the old file is replaced with the restored file. On Linux,
this option is not supported.
Restore – Devices
• Automatic device selection, or
• Fixed original device
Restore – Devices
When restoring data, Data Protector will choose the same Logical Device that was used during the
backup for the object. In most cases, this is desirable, especially if the needed tape is still within the
repository of the tape or file library.
Steps
1. Click the Devices tab to open the Devices property page.
The devices that were used during backup are listed here.
2. To restore your data with an alternative device, select the original device and click
Change. In the Select New Device dialog box, select the alternative device and click OK.
The name of the new device appears under Device Status. The new device will be used
only for this session.
For more information on a device, right-click the device and click Info.
Specify what Data Protector should do if the selected devices are not available during restore (for
example, if they are disabled or already in use). Select either Automatic device selection or Original
device selection.
By default, Data Protector attempts to use the original device first. If the original device is not
selected for a restore or an object copy, then a global variable is considered. To use alternative
devices first, or to prevent the use of the original device altogether, modify the global variable
AutomaticDeviceSelectionOrder.
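The fallback behavior can be sketched as follows. This is an illustrative sketch of the described logic only (assumed behavior, not Data Protector code): the original device is preferred by default, and the preference order can be reversed.

```python
# Illustrative sketch of the described fallback behavior (assumed
# logic, not DP code): the original device is tried first by default;
# the AutomaticDeviceSelectionOrder global can reverse that preference.

def pick_device(original, alternatives, original_first=True):
    """Return the name of the first available device in preference order."""
    order = [original] + alternatives if original_first else alternatives + [original]
    for dev in order:
        if dev["available"]:
            return dev["name"]
    return None                              # nothing available: mount request

original = {"name": "lto_1", "available": False}
alternatives = [{"name": "lto_2", "available": False},
                {"name": "lto_3", "available": True}]
print(pick_device(original, alternatives))   # lto_3
```

With the original device available, it would be chosen; because it is down here, the first available alternative is used instead.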
If the device no longer exists, a permanent change would be required. Use the omnidbutil
command with the -change_bdev option to permanently change a device to another within the
Data Protector database. The omnidbutil command will be discussed in more detail within the
“Database” module.
Restore – Media
• List the media required for the restore, and ensure that the listed media are available during
the restore
• Non-resident media: media kept outside a library
Restore – Media
After selecting the data that you want to restore, you need to get a list of media containing the
data. This is essential if you use standalone devices or if you keep media outside of libraries, so
that you can verify their availability.
In the example below, one medium required for the restore task is stored at a different location;
if the session is started, this creates a mount request.
If an object version that you want to restore exists on more than one media set, you can influence
the selection of the media set that will be used for the restore by setting the media location
priority, or manually select the media set that will be used.
If you use synthetic backup, there is often more than one restore chain of the same point in time of
an object. By default, Data Protector selects the most convenient restore chain and the most
appropriate media within the selected restore chain.
Note: Copies obtained using the media copy functionality are not listed as needed media.
A medium copy is used only if the original medium (the medium that was used as
a source for copying) is unavailable or unusable.
Limitations
• With some integrations, it is not possible to set the media location priority in the Restore
context. The GUI does not display the Media tab for these integrations.
• You cannot manually select the media set when restoring integration objects.
Steps
1. Click the Media tab to open the Media property page. The needed media are listed.
For more information on a medium, right-click it and click Info.
If an object version that you want to restore exists on more than one media set, all
media that contain the object version are listed. The selection of the media set
depends on the Data Protector internal media set selection algorithm combined
with the media location priority setting.
To override the media location priority setting, select a location and click Change
priority. Select a different priority for the location and click OK.
To manually select the media set from which you want to restore, click the Copies
tab. In the Copies property page, select the desired object version and click
Properties. Select the Select source copy manually option, select the desired copy
from the drop-down list, and click OK.
2. If necessary, insert the media into the device.
Note: You can also list the media needed for restore, including media containing object
copies of the selected objects, by clicking Needed media in the Start Restore Session
dialog box. This dialog box appears when you start the restore.
Label
Labels help you identify media. They can have a maximum of 80 characters, including any keyboard
character or space.
Location
If the medium is in a library device, this shows the slot location of the medium (enclosed in
brackets) and, if provided, the location of the medium when it is not in a device.
Media ID
A unique identifier assigned to a medium by Data Protector.
Location priority
The order in which media are selected for restore, object copying, object consolidation, or object
verification when copies of the same object version exist in more than one location.
By default, Data Protector automatically selects the most appropriate media set. Media location
priority is considered if more than one media set equally matches the conditions of the media set
selection algorithm.
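The tie-breaking by location priority can be sketched as follows. This is an illustrative sketch of the tie-break only (not Data Protector's full media set selection algorithm), assuming all candidates already match the selection conditions equally.

```python
# Illustrative sketch (assumed tie-breaking only, not DP's full media
# set selection algorithm): when several media sets equally match the
# selection conditions, the set whose location has the highest
# priority (lowest number) is chosen.

def select_media_set(candidates):
    """candidates: equally matching media sets with a location priority."""
    return min(candidates, key=lambda m: m["priority"])["location"]

candidates = [{"location": "offsite vault", "priority": 3},
              {"location": "tape library", "priority": 1}]
print(select_media_set(candidates))          # tape library
```

Lowering the priority number of a location (Change priority in the Media page) therefore makes copies stored there more likely to be used for the restore.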
Location
Media location information helps you find the medium. You should enter the location when you
initialize media, and update it whenever you move media (for example, to off-site storage). The
location information is written on the media and in the IDB.
Data Protector allows you to create a list of predefined locations to simplify vaulting and archiving.
Number of media
The number of media present in a location.
Note: Using the Copies tab to manually select object copies overrides Data Protector’s Automatic Media Set Selection – Handle with care!
By default, Data Protector restores data from the original media set. However, if the original media
set is not available, but a copy is available, the copy is used for the restore.
If neither the original nor a copy is available in the backup device during restore, Data Protector
issues a mount request, displaying both the original and the copy as the media required for restore.
You can use any one of these.
If you perform a restore using a standalone backup device, you can choose to restore from the copy
rather than from the original. To do this, insert the copy in the device that will be used for the
restore, or select the device containing the copy. However, if you perform a restore using a library
device and the original is in the library, Data Protector will use it for the restore.
Properties
If an object version that you want to restore exists on more than one media set, you can manually
select the media set that will be used. Select the desired object version and click the Properties
button.
Version properties
The version properties allow a change from automatic media-set selection to a manual source copy
selection based on required media for specific copy shown in media list. From the “source copy
created” list you can select which object version to use based on the shown creation data/time for
all point-in-time object versions. Each time a new object version is selected, the media list below
will be updated.
Automatic media-set selection:
By default, Data Protector automatically selects the most appropriate media set. Media location
priority is considered if more than one media set equally matches the conditions of the media set
selection algorithm.
Restore – Summary
• Allow last-minute changes
• Add a file or directory to the restore manually
Restore – Summary
The Restore Summary screen allows for last-minute changes to the object list. Data Protector
allows the addition or removal of objects for the restore session. The properties for each object
may be changed by selecting the object and using the pop-up menu (right mouse button) to select
its properties. The properties include Version, Destination, Restore Only, and Filters, to allow
fine-tuning for each object.
From the pop-up menu, an additional choice of version selection by time allows a file version to be
chosen from "best available". You can specify an acceptable time range for an alternate version if
your preferred version is not available. Your selection may range from seconds to hours around a
given date and time.
Resumable restore sessions:
• Filesystem restore sessions
• IDB restore sessions
• Oracle Server integration restore sessions
The actual restore process can be launched from the Restore Summary page by clicking the restore
button or the restore icon on the toolbar. It is advisable to perform a preview restore task prior to
the "real" restore to test the availability of object versions, tape media, and restore devices.
Configure Parallel Restore
When you have selected multiple objects for restore, Data Protector will prompt you with a
notification screen, and choice of performing single or parallel restore. In many cases your choice
of multiple objects was deliberate, but in case sequential restore is needed, you can choose
individual objects for single restore one at a time without losing the configuration specified up to
this point.
Single Restore
When single restore is chosen, you will be prompted for the object to restore. After that object
completes, chose the “start restore” icon from the Tool Bar, and select another of the configured
objects to restore. Repeat this process until all of your objects are restored.
Resuming Failed Sessions
A failed restore session can be resumed. DP maintains checkpoint files in which the progress of the
restore is logged. See the OLH under "Resuming failed sessions".
To recover data to a certain point in time, all backups that are necessary for a restore of the
backup object to that point in time are required. A restore chain consists of a full backup of
the object and any number of related incremental backups.
To restore to a particular point in time (the date/time of a particular backup), multiple
restores must be performed.
Example: We assume that a weekly full backup is performed, followed by daily multi-level
incremental backups (Monday through Friday). To recover a directory to the state it was in on
Tuesday, the following restores (the restore chain) must be performed:
1. Restore directory from last full backup.
2. Restore directory from Monday’s incremental1 backup.
3. Restore directory from Tuesday’s incremental backup.
Data Protector takes care of this by building the restore session automatically, including the objects
and the order in which they are to be restored. Data Protector will issue mount requests for media in
the correct order as needed for the restore if the media are not already in the device.
With this type of restore, it is also possible to omit files that were deleted between backups, as well
as to omit unneeded incremental backups from the restore chain.
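The chain that Data Protector builds for the example above can be sketched as follows. This is a simplified, illustrative sketch of leveled-incremental dependencies (invented names, not DP code): a full backup has level 0, an "Incr N" depends on the most recent backup with a lower level, and a plain "Incr" depends on the most recent backup of any kind.

```python
# Illustrative sketch of leveled-incremental chain construction
# (simplified): a full backup has level 0, an "Incr N" depends on the
# most recent backup with a lower level, and a plain "Incr" (level
# None here) depends on the most recent backup of any kind.

def restore_chain(backups, target):
    """backups: list of (name, level), oldest first; target: index to restore."""
    chain = [backups[target]]
    level = backups[target][1]
    for name, lvl in reversed(backups[:target]):
        if level is None or lvl < level:     # next dependency in the chain
            chain.append((name, lvl))
            level = lvl
        if lvl == 0:                         # reached the full backup
            break
    return [name for name, _ in reversed(chain)]

backups = [("Sun full", 0), ("Mon Incr1", 1), ("Tue Incr", None)]
print(restore_chain(backups, target=2))   # ['Sun full', 'Mon Incr1', 'Tue Incr']
```

For the Tuesday example this yields exactly the three restores listed above: the full backup, Monday's Incr1, and Tuesday's incremental.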
The Restore by Query task, located at the bottom of the Scoping Pane in the Restore context, helps
locate the files needed for restore by allowing a search of the Data Protector internal database
(IDB). The files must reside within the current Data Protector catalog database to be located by
the search.
You can search for files and directories if you know at least a part of the file name. You can use
wildcards, for example, to get a list of all .docx Word files.
For a description of all the options you can choose, search the OLH with the keywords "restore by
query".
In the second window of the Restore by Query wizard, enter the backup search interval of the
query.
In addition, it is possible to see all modifications of the files you want to restore, or to limit the
restore to files that were modified within the specified modification time interval (in days or
months).
The result window provides the same look and feel as the Restore source window and allows you
to mark the shown files for restore and proceed as in a regular restore.
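A query of this kind can be sketched as follows. This is an illustrative sketch only (the real query runs against the IDB catalog): it matches catalog entries by a filename wildcard and, optionally, by a modification-time interval.

```python
# Illustrative sketch of a restore-by-query style search (the real
# query runs against the IDB catalog): match catalog entries by a
# filename wildcard and, optionally, by a modification-time interval.
import fnmatch

def query(catalog, pattern, modified_after=None, modified_before=None):
    """catalog: list of (filename, modification_time) entries."""
    hits = []
    for name, mtime in catalog:
        if not fnmatch.fnmatch(name, pattern):
            continue
        if modified_after is not None and mtime < modified_after:
            continue
        if modified_before is not None and mtime > modified_before:
            continue
        hits.append(name)
    return hits

catalog = [("report.docx", 10), ("notes.txt", 12), ("old.docx", 3)]
print(query(catalog, "*.docx"))                    # ['report.docx', 'old.docx']
print(query(catalog, "*.docx", modified_after=5))  # ['report.docx']
```

Adding the modification-time bounds narrows the wildcard hits to files changed in the interval, just as the second wizard page does.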
Resume Session
• All objects in status Failed will be restored
from the point of failure
• The feature is activated by default.
• Checkpoint files are created during restore
and used during Resume Session.
Note:
Option Restart Failed Objects is not available for
restore
CLI:
Resume: omnir -resume <Session-ID>
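As a sketch of how the resume command might be driven from a script: the listing below is a simplified, hypothetical stand-in for real `omnidb -session` output (the actual columns differ), used only to show the parsing idea.

```shell
#!/bin/sh
# Sketch: pick the most recent failed session from a session listing and
# print the matching resume command. The three-line listing is invented
# sample data in a simplified layout, not real omnidb output.
listing='2014/12/01-8  Restore Completed
2014/12/01-12 Restore Failed
2014/12/01-9  Backup  Completed'

failed=$(printf '%s\n' "$listing" | awk '$3=="Failed" {print $1}' | tail -1)
[ -n "$failed" ] && echo "omnir -resume $failed"
```

The printed command is the documented CLI form; a real script would run it only after confirming the session is a filesystem or Oracle integration restore, per the limitations above.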
Restores of file servers with hundreds of mount points are complex and time consuming. In
case of a failure, it is difficult to analyze the status in order to configure a manual restart of the
restore. Data Protector supports such situations with the Resume Session feature.
Resume Session
This option was introduced in Data Protector 6.20. Simply put, it allows the restore to continue at
the position where the problem occurred. Based on IDB checkpoints, Data Protector knows the last
file that was restored successfully and resumes the restore at that position.
The resumed session gets a new session ID.
Similar to a backup session restart, this feature is available through the Data Protector GUI. Change
to the Internal Database context, expand Sessions, mark the failed session and select Resume
Session from the context menu. Click Yes in the pop-up window to trigger the resume.
Limitations: The Resume Session option is only available for filesystem restores and
Oracle integration restores.
The Restart Failed Objects option is not available for restore.
Contents
Module 11 Monitoring, Reporting, Notification 1
11–3. SLIDE: Monitoring, Reporting, Notification overview ............................................................. 2
11–4. SLIDE: Monitoring current sessions ........................................................................................ 4
11–5. SLIDE: Viewing previous session details ................................................................................. 5
11–6. SLIDE: Reporting possibilities ................................................................................................. 6
11–7. SLIDE: Reports and report categories..................................................................................... 7
11–8. SLIDE: Reporting overview ...................................................................................................... 8
11–9. SLIDE: Interactive Reports .................................................................................................... 10
11-10. SLIDE: Scheduled Reports .................................................................................................... 11
11-11. SLIDE: Scheduled Reports cont. ........................................................................................... 12
11-12. SLIDE: Notification overview ................................................................................................ 13
11-13. SLIDE: Default notification ................................................................................................... 14
11-14. SLIDE: Adding a notification ................................................................................................. 15
11-15. SLIDE: Web Reporting........................................................................................................... 16
Module 11
Monitoring, Reporting and Notifications
Data Protector provides a set of tools and features to enable the administrator to manage the Data
Protector environment effectively. Monitoring, reporting and notifications are available through
the following Data Protector tools:
Monitoring allows the administrator to view and manage current cell activity and view previous cell
activity. All currently running sessions can be seen in the Monitor context in the Data Protector GUI.
Completed or aborted sessions can be viewed in the Internal Database context. Monitoring can be
used to check the status of currently running backups, restores, copy jobs etc. and check for
outstanding mount requests. Individual sessions can also be monitored through the CLI.
Reporting provides information on various aspects of the Data Protector environment. For
example, the status of the last backup, object copy, object consolidation or object verification,
check which systems in the cell are not configured for backup, check the consumption of media in
media pools; check the status of device and much more.
Reports can be configured using the Data Protector GUI, any web browser with Java support using
Web Reporting and through the CLI.
Reports can be run individually and interactively, or scheduled as part of a Report Group. The creation of Report
Groups allows reports to be scheduled for specific times or to be run when triggered by a
particular notification.
Notification enables the administrator to be alerted to predefined events such as a mount request
or a device error. Notifications can be sent in various forms such as email and SNMP.
External reporting tools, such as the recently announced HP Backup Navigator (see Module 2), the
Data Protector Reporter, or 3rd-party solutions like Aptare, can be used to extend the built-in
reporting features of Data Protector.
These applications typically install their own agent either directly on the Data Protector Cell
Manager system or only on a Data Protector client system with the Cell Console module installed
and run regular queries on the Data Protector IDB. The results are loaded into their own database
and based on the data various customized reports can be run.
Currently running
Sessions
CLI Monitoring
Monitoring allows the management of running sessions and allows the user to respond to mount
requests. The status of the session is displayed, as is session type, owner, the session ID, the time
the session started and the name of the corresponding backup specification.
The currently running sessions can be seen in the Data Protector GUI Monitor context. The status of
current sessions is displayed in the Results pane. The sessions can be sorted by status, type,
owner, etc. by clicking the corresponding column header. Double click on the session to be viewed.
The results pane will then provide detailed session information (the objects, the devices, the
status, session messages, etc.).
When an interactive session is started (from their respective Context), a monitor window opens
showing the objects, backup devices and the messages generated during the session.
A mount request for a new medium appears in the Scoping Pane; it can be confirmed, or the session
can be aborted.
To view previous sessions, select Internal Database in the Context List. In the Results Pane all the
sessions stored in the IDB are displayed. It is also possible to view the sessions in the Scoping Pane.
The sessions are stored by date. Each session is identified by a session ID consisting of the date in
YY/MM/DD format and a unique number.
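The session-ID scheme can be illustrated with a one-liner; the sequence number 42 is an arbitrary example, as real IDs get their unique number from the Cell Manager.

```shell
#!/bin/sh
# Sketch: a session ID combines the date (YY/MM/DD) with a unique
# sequence number. The number 42 here is an arbitrary example.
SESSION_ID="$(date +%y/%m/%d)-42"
echo "$SESSION_ID"
```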
On selecting an individual session, details on each backup object are given. The full object description
is provided, including the client system, the mount point, description, object type, backup status,
and size of each object, the number of errors or warnings, and the devices used. The session status
can be Completed, Failed, or Running; it is a summary of the status of all objects in the session plus
the completion status of the pre- and post-exec commands.
Right click the session and select Properties to view details on a specific session.
Right-clicking a failed backup or restore session allows you to restart or resume the session.
Example:
omnidb –session Obtain a full listing of previous sessions
omnidb -session <ID> -detail Obtain object level detail from a particular session
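A listing like the one `omnidb -session` produces can be post-processed with standard tools. The three-line listing below is hypothetical sample data in a simplified layout, used only to show the summarizing idea.

```shell
#!/bin/sh
# Sketch: summarize a session listing by status with awk.
# The listing is invented sample data, not real omnidb output.
listing='2014/12/01-1 Backup  Completed
2014/12/01-2 Backup  Failed
2014/12/01-3 Restore Completed'

summary=$(printf '%s\n' "$listing" | awk '{c[$3]++} END {for (s in c) print s, c[s]}' | sort)
printf '%s\n' "$summary"
```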
Reporting possibilities
• Interactive Reports
(GUI & CLI)
• Report Groups
• Event Triggered
• Post Exec Script
• Web Reporting
Reporting possibilities
Reporting can be used to gather information about the Data Protector environment. For example, it
is possible to check the status of the last backup, object copy, or object verification; check which
systems in the environment are not configured for backup; check the consumption of media in
media pools; or check the status of devices.
Reports can be customized with parameters allowing multiple options to be configured. Reports
can be started interactively using the GUI, the CLI and Web Reporting.
In addition, through the use of Report Groups, reports can be started by the Data Protector
scheduler or through a notification event. The Report Group allows for easier management of
reports; various reports can be included in a Report Group, which can then be scheduled or
triggered by a notification. It is also possible to start a Report Group interactively through the GUI
and CLI. To configure a Report Group, the reports to include must be provided, along with the
format and recipients of each report.
Reports can also be started by a post-exec script that includes a Data Protector CLI command that
starts the report.
• Configuration
• Internal Database
• Pools and Media
• Session Specifications
• Session in Timeframe
• Single Session
Data Protector provides a rich set of predefined detailed reports covering the typical
information that the administrator may need to assist with day-to-day Data Protector tasks. All are
available via the Reporting context in the Data Protector GUI and via Web Reporting.
Session Specifications: Average Backup Object Sizes, Filesystem or Objects Not Configured for
Backup, Session Specifications
Sessions in Timeframe: statistics about clients, devices, list of sessions, used media, and more.
In a Manager-of-Managers (MOM) environment, reports can be configured at the MOM level, so they
include information from all client cells.
Reporting overview
Interactive Report:
• Choose Reporting or Tool
• Add Report Group & Add Report to Group
• Choose Report Format
• Choose Report Content
Reporting overview
Data Protector provides a large number of predefined reports. Reports can be generated
interactively via GUI or command line, in a selected format, such as ASCII, HTML, etc. Reports can
also be used within notifications, such as email, broadcast, etc., and can be scheduled to provide
regular information.
Reporting Tools
Data Protector provides the following mechanisms for defining and running reports:
• Data Protector GUI (Reporting context)
• Data Protector Reporting CLI command - omnirpt
• Web Reporting
• Other Data Protector CLI commands (omnidb, omnistat, omnicellinfo, omnimm,
omnidbutil) allow integration with 3rd-party reporting tools
Report Content
Depending on the information about the environment that is required, various types of reports can
be generated:
• Configuration reports
• IDB reports
• Pools and media reports
• Session specification reports
• Sessions in timeframe reports
• Single session reports
Report Formats
It is possible to generate Data Protector reports in various formats.
If a report is started interactively, each report runs individually and is displayed in the Data
Protector Manager, so it is not required to choose a report format.
If reports are gathered into a Report Group, it is possible to specify the format and the recipients
of each report.
You can choose from the following report formats:
• ASCII - A report is generated as plain text
• HTML - A report is generated in HTML format. This format is useful for viewing using a
web browser
• Short - A report is generated as plain text, in summary form showing only the most
important information. This is the suggested format for broadcast messages
• Tab - A report is generated with fields separated by tabs. This format is useful if you
plan to import the reports into other applications or scripts for further analysis, such as
Microsoft Excel
The actual output of a report varies depending on the selected format. Only the Tab format
displays all fields for all reports, other formats may sometimes display only selected fields.
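As a sketch of the "further analysis" use case above: converting a Tab-format report into CSV for a spreadsheet import. The two report lines are invented sample data, not the output of an actual Data Protector report.

```shell
#!/bin/sh
# Sketch: turn a Tab-format report into CSV with awk.
# The report content below is invented sample data.
report="$(printf 'Specification\tStatus\tSize\nFS_Backup\tCompleted\t1024')"

# Reassigning $1 forces awk to rebuild each line with the new separator.
csv=$(printf '%s\n' "$report" | awk -F'\t' -v OFS=',' '{$1=$1; print}')
printf '%s\n' "$csv"
```

The same pattern works for any tab-separated report saved via the Log delivery method.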
Delivery Methods
Reports may be delivered using the following methods:
• Broadcast - Displays a pop-up window within the Microsoft Windows environment
• Email - Sends the report as Email, requires a mail sending capability to be available on
the Cell Manager
• External - Executes a program external to the Data Protector product. The report data
is sent to this executable as command line parameters
• Log - Logs the report data to a file on the Cell Manager
• SNMP - Sends the report data to an SNMP manager, such as NNM or Operations Manager
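The External method can be illustrated with a stand-in handler. A real external program would be a separate executable configured in Data Protector; here a shell function plays its role, receiving the report data as command-line parameters and logging it.

```shell
#!/bin/sh
# Sketch of an "External" delivery handler: the report data arrives as
# command-line parameters. The handler below is a hypothetical stand-in
# (a shell function) for what would really be a separate executable.
log=$(mktemp)
handle_report() {
    printf '%s\n' "$*" >> "$log"
}

handle_report "Device Error" "dev_lto4" "Critical"
cat "$log"
```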
Interactive Reports
Interactive Reports
The Data Protector GUI provides an easy method of generating and viewing reports online. Through
the use of wizards, the Data Protector GUI can be used to define, generate and schedule reports on
both Windows and UNIX Cell Managers. Through the GUI, it is possible to run individual reports
interactively or group reports into Report Groups and run all the reports in the Report Group
together. The Mount Request Report and Device Error Report can only be used in a Report Group
and are not available as interactive reports.
Running an individual report interactively takes only a few clicks, as shown above.
To run all the reports in a Report Group together:
1. In the Context List, select Reporting
2. In the Scoping Pane, browse for and right-click the report group you want to start and
then click Start
3. Click Yes to confirm
The creation of a Report Group, adding reports to a Report Group and the scheduling of a Report
Group will be described later in this training.
The Report Group defines a collection of reports that are executed together. Unlike individual
reports, a Report Group can be scheduled, and it can also be triggered by a notification event.
Report Types
The Report Group is used to create a report collection that may be scheduled and executed
together. The Report Group is, conceptually, a folder or container for report definitions. A report
group can contain multiple individual reports that are executed together when the group is run;
Data Protector allows several reports to be added to a single Report Group.
Once a Report Group is created, report definitions may be added, as shown in the slide above, to
form the collection. Once defined, the properties of the Report Group and of the reports within the
group may be modified.
Notification overview
HP Data Protector contains a built-in, event-driven notification service.
Events (triggers):
• Device Error
• End of Session
• Low Database Space
• Mount Request
• License Warning
• Mail Slots Full
• ...
Notification methods:
• Broadcast
• Email - OS Based
• Email - SMTP
• External
• Logfile
• SNMP
• Report Group
Notification overview
Data Protector allows notifications to be sent from the Cell Manager when specific events occur.
For example, when a backup, object copy, object consolidation, or object verification session is
completed, an e-mail with the status of the session can be sent. It is possible to set up a
notification that triggers a report. It is possible to configure notifications using the Data Protector
GUI or any web browser with Java support.
To get a complete list, search the OLH for “notification type” and “notification send method”.
Default notifications
Default notification
Shown above are all the pre-configured notifications that are included with the Data Protector Cell
Manager installation. There are two main types: notifications that are scheduled and started by
Data Protector’s checking and maintenance mechanism, and notifications that are triggered when
a specific event occurs.
Each of the default notifications sends alerts to the Event Log. Many of these notifications send
their alerts based upon pre-configured thresholds that may be modified. The thresholds and
parameters may be viewed using the GUI, as partly shown above.
Adding a notification
Adding a notification
It is recommended to add additional notifications (to the default set that writes to the Event Log)
instead of altering the default set. Each event type may be configured multiple times to trigger the
desired types of notification. To configure a notification, a name for the notification, a type of
notification, a message level, a send method, and a recipient are required.
All other input parameters depend on the type of the notification. How to add a notification is
shown in the slide above. After selecting an event type (in the slide it is End of Session), the
subsequent drop-down boxes adapt accordingly.
The Level option refers to the severity level at which the notification will be triggered by a
particular event. The severity increases as follows:
Normal < Warning < Minor < Major < Critical
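The trigger rule (fire at or above the configured level) can be sketched as follows; the numeric ranks are illustrative, not Data Protector internals.

```shell
#!/bin/sh
# Sketch: a notification fires when the event severity is at or above the
# configured level. The numeric ranks mirror the order in the text and
# are illustrative only.
sev_rank() {
    case "$1" in
        Normal) echo 1 ;; Warning) echo 2 ;; Minor) echo 3 ;;
        Major) echo 4 ;; Critical) echo 5 ;; *) echo 0 ;;
    esac
}

configured=Minor
event=Major
if [ "$(sev_rank "$event")" -ge "$(sev_rank "$configured")" ]; then
    echo "notify"
fi
```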
Once configured, the notification will be sent using the specified send method when the specified
event occurs.
To trigger a report group by a notification, configure a report group and then configure the
notification to use the Use Report Group send method.
Parameters may be viewed using the GUI, as shown above.
Web Reporting
Web Reporting
Data Protector provides a Java applet Web-based online reporting capability that lets you
configure, run, and print all the Data Protector built-in reports of the omnirpt command
interactively. During reporting operations, Data Protector's Java applet directly accesses the Cell
Manager to retrieve current data.
The Java reporting interface is installed as a component of the cell console, which means that it is
available on any client system that supports the cell console user interface.
The Java applet requires a web browser (e.g. Microsoft Internet Explorer).
The Java interface is started from a web browser with the following URL:
URL up to version DP8.0: file:/opt/omni/java/bin/webreporting.html
URL in DP8.1 and newer: https://<cell_manager>:7116/webreporting/WebReporting.html
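Constructing the newer URL for a given Cell Manager is straightforward; the hostname below is a placeholder, not a real system.

```shell
#!/bin/sh
# Sketch: building the Web Reporting URL for DP 8.1 and newer.
# The Cell Manager hostname is a placeholder example.
CELL_MANAGER="cellmgr.example.com"
URL="https://${CELL_MANAGER}:7116/webreporting/WebReporting.html"
echo "$URL"
```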
Contents
Module 12 — Media and Object Copy and Verification...................................................................... 1
12–3. SLIDE: Overview ...................................................................................................................... 2
12–4. SLIDE: Media Copy ................................................................................................................... 3
12–5. SLIDE: Interactive media copy................................................................................................. 5
12–6. SLIDE: Automated Media Operation 1/2 ................................................................................. 8
12–7. SLIDE: Automated Media Operation 2/2 ............................................................................... 11
12–8. SLIDE: Object Copy ................................................................................................................ 15
12–9. SLIDE: Object Copy – Example 1............................................................................................ 17
12-10. SLIDE: Object Copy – Example 2........................................................................................... 18
12-11. SLIDE: Object Copy GUI/CLI ................................................................................................... 19
12-12. SLIDE: Interactive Object Copy 1/3 ...................................................................................... 20
12-13. SLIDE: Interactive Object Copy 2/3 ...................................................................................... 21
12-14. SLIDE: Interactive Object Copy 3/3 ...................................................................................... 22
12-15. SLIDE: Automated Object Copy 1/2 ...................................................................................... 23
12-16. SLIDE: Automated Object Copy 2/2 ...................................................................................... 24
12-17. SLIDE: Object Copy wizard – Filter 1/2 ................................................................................. 25
12-18. SLIDE: Object Copy wizard – Filter 2/2 ................................................................................. 27
12-19. SLIDE: Object Copy wizard – Devices ................................................................................... 28
12-20. SLIDE: Object Copy wizard – Options ................................................................................... 30
12-21. SLIDE: Summary ................................................................................................................... 32
12-22. SLIDE: Media and Object verification ................................................................................... 33
12-23. SLIDE: Media verification ...................................................................................................... 34
12-24. SLIDE: Object Verification GUI/CLI ........................................................................................ 36
12-25. SLIDE: Interactive Object verification 1/4 ............................................................................ 37
12-26. SLIDE: Interactive Object verification 2/4 ............................................................................ 38
12-27. SLIDE: Interactive Object verification 3/4 ............................................................................ 39
12-28. SLIDE: Interactive Object verification 4/4 ............................................................................ 40
Module 12
Media and Object Copy and Verification
Overview
Media Copy
Object Copy
Overview
There are multiple reasons to generate duplicate sets of media from the same source data.
Disaster recovery preparation, security considerations, or regulations might still require a
classical media copy, but there are many more reasons to consider. In particular, Data Protector's
Object Copy functionality opens the door to advanced backup concepts that include fast
multiplexed backups to disk, followed by scheduled, de-multiplexed object copy sessions to
physical tapes that run fully independently of the source backup sessions.
Methods
Data Protector offers two methods for data replication:
• Media copy
• Object copy
Media Copy
Source Copy
Media Copy
Media Copy allows the creation of exact 1:1 copies of a physical medium containing a backup. After
creation, either the source or the copy medium can be removed from the library to a safe place for
vaulting purposes, while the other medium is kept in the library for restore requests.
To create a media copy, a source and a target device of the same tape type are required:
LTO media can only be copied to other LTO media with the same or higher capacity; it is not
possible, for example, to copy old SDLT media to new LTO media. It is possible to create multiple
replicas from the same source medium, but it is not possible to create a second copy from a copy.
Note: Media copy is not supported for File Libraries or file devices; the only exception is a File
Jukebox. Use Object Copy to create replicas from file-based media.
Source and copy are identical, so how can you tell whether a medium is a source or a copy?
In the Data Protector GUI, double-click a medium and, in the shown media properties, select Info.
Under Statistics on the Info page, check the Type entry: the source medium has a HASCOPY flag,
while the copy has an ISCOPY flag. In addition, the source medium contains a Copies tab
that lists all copies created from the selected source medium.
In order to see the list of copies from a given source media run:
• Interactive
• Scheduled
• Post backup operation
Block Padding
There are slight variations in the overall capacity of individual tapes. This can pose a significant
challenge when attempting to make an exact copy from a tape that is slightly larger than the
destination tape. Planning for this eventual issue must be done before the media are initialized.
A local tuning parameter called OB2BLKPADDING may be configured for the Media Agent.
This parameter is placed in the omnirc file on each system whose connected devices are
used as source devices, and it specifies the number of empty blocks to add after the
tape header when a Data Protector medium is initialized.
This additional padding allows tapes of the same type to be duplicated without problems even if
they vary slightly in capacity, because the empty blocks from the source media are not copied.
Tape padding is configured in block units. Normally, the empty blocks should take up
approximately one percent of the length of the entire tape.
For more information about OB2BLKPADDING see the description in the omnirc file.
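The one-percent rule of thumb can be turned into a quick estimate. The capacity and block size below are example values only, and the omnirc file description remains the authoritative reference for setting OB2BLKPADDING.

```shell
#!/bin/sh
# Sketch: estimating OB2BLKPADDING as roughly one percent of tape
# capacity, expressed in blocks. Both values are illustrative examples,
# not recommendations.
capacity_bytes=800000000000   # e.g. a native 800 GB cartridge
block_size=262144             # 256 KB device block size
padding_blocks=$(( capacity_bytes / 100 / block_size ))
echo "$padding_blocks"
```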
4. Follow the Media copy wizard
An interactive media copy operation can be initiated through the Data Protector GUI from the Devices
and Media context, or from the command line interface via the omnimcopy command.
The source and destination devices are logical devices. They may be located
anywhere in the Data Protector cell, but must be of the same media type.
Note: Media Copy is also listed under Object Operations. Note that this is a placeholder entry
without any function other than pointing the user to Media Copy in the
Devices and Media context.
To start a media copy, select the source medium in a media pool (2) and select Copy from the menu
(3). A wizard guides you through the media copy configuration steps and starts the media copy at the
end.
Wizard 1/5 - Source Device Wizard 2/5 - Target Device and Media
Wizard 3/5 - Target Pool Wizard 4/5 - Target Media label and location
As shown on these screenshots, the target medium is initialized during the copy. In case the target
medium already contains data, make sure that the Force operation option is checked (see Wizard 5/5).
The protection of the copy can be set independently of the source protection.
If the source medium is overwritten after the copy, the copy medium becomes the new source medium.
Automated Media Operations (AMO) is a feature that facilitates automated copying of media
containing backups and is located in the Devices & Media context.
• Post-backup: enables an automatic media copy at the end of a backup session, which copies all
media used in that particular session.
• Scheduled: schedules an automatic copy of media used for backups at a specified point in
time. Media used in various backup specifications can be copied in the scope of a single
scheduled AMO session. Appropriate device and media pairs must be available during
scheduled copying; the copy session aborts if either the device or the medium is not available
(for example, locked in backup mode).
Wizard 1/3 – Select Backup Spec Wizard 2/3 - Select Source and Target Device
A post-backup Automated Media Copy specification needs to be created for each backup
specification to copy. The source and target devices need to be of the same type and need to
be located in a library; standalone devices are not supported and will not be listed as devices
in the wizard (Wizard 2/3). In addition, it is not possible to select the same device as both source
and target device.
It is possible to create up to 5 copies of one source medium within one post-backup job.
The post-backup Automated Media Copy specification is stored in the following directory:
WINDOWS : DP_CONFIG\amo
UNIX : DP_CONFIG/amo
The specification has the same name as the selected backup specification, with the file extension
suffix “amc”; e.g., if the backup specification is named Backup_MSL, the created Automated
Media Copy specification is named Backup_MSL.amc.
The scheduled Automated Media Operation (AMO) is the process of duplicating media used in one
or more backup sessions at a scheduled time. Scheduled media copy seeks backup sessions that
started and completed within a user-defined timeframe. Once the sessions are known, AMO
copies all of the media that belong to those backup sessions in a single AMO session.
The media will be copied simultaneously if enough devices are available; otherwise, they will be
copied sequentially. Load balancing in AMO strives to simultaneously use the maximum number of
media during the copy process.
By default, the omnitrig process polls every minute to see if there are any scheduled jobs
(including AMO jobs, backups, or reports) to be processed.
Wizard 1/7 - Specify AMO name Wizard 2/7 – Select Source and Target device
Wizard 3/7 - Specify Time Frame Wizard 4/7 – Select Backup specs that
are included in copy job
Wizard 5/7 – Filter source media Wizard 6/7 – Specify Copy option
The main advantage over a post-backup AMO job is the ability to include multiple backups in one
copy job, as well as the ability to schedule this copy for a time when no or fewer backups are running.
A filter (see Wizard 5/7) allows poor or fair media to be excluded from copying and
ensures that only protected data is copied, though the administrator can tweak this behavior in
case of a concrete business need.
A new feature is the definition of a timeframe that is checked for all or only selected backups (see
Wizard 3/7). It is possible to select an absolute or a relative timeframe.
Relative
The relative time option apportions a timeframe based on the two input values, namely Started
Within (hours) and Duration (hours). Started Within establishes the beginning of the timeframe,
while Duration sets the actual duration of the time frame. This defines a so-called window of
opportunity, starting some number of hours before the actual AMO start time.
For example, an AMO is scheduled at 10:00 PM; using the relative time option, we may choose
Started Within = 24 hours and Duration = 10 hours. AMO then seeks all media associated with
backup sessions that started between 10:00 PM the night before and 8:00 AM the following morning,
and attempts to copy them.
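The window arithmetic from this example can be reproduced in shell; the AMO start time is an example value, and GNU date is required for the `-d` option.

```shell
#!/bin/sh
# Sketch of the relative-timeframe arithmetic: Started Within fixes the
# beginning of the window, Duration its length. Requires GNU date.
TZ=UTC; export TZ

amo_start="2014-12-02 22:00"   # scheduled AMO start time (example value)
started_within=24              # hours before the AMO start time
duration=10                    # hours

start_epoch=$(date -d "$amo_start" +%s)
win_start=$(date -d "@$((start_epoch - started_within * 3600))" +'%F %H:%M')
win_end=$(date -d "@$((start_epoch - started_within * 3600 + duration * 3600))" +'%F %H:%M')
echo "window: $win_start .. $win_end"
```

With these inputs the window runs from 10:00 PM the night before to 8:00 AM the following morning, matching the example above.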
A conflict can be anticipated in case one or more backup sessions that were started within the AMO
time frame were still running beyond this time frame, and simultaneously AMO was attempting to
copy the media that this particular backup specification would produce.
In such situations, AMO will not be able to copy media that are related to that particular backup
specification because they are still locked by the BSM. The AMO session displays the following error
message:
Source medium <medium ID> could not be locked and will not be copied
in this session.
Absolute
You set the scope in terms of absolute days to search for backup sessions. The drop-down arrows
open a calendar. This option would typically be used for one-time vaulting purposes, or to
vault media from one point in time to another.
Object Copy
Freeing Media
De-Multiplexing
Data Migration
Vaulting
8 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Object Copy
The object copy functionality enables the replication of object versions to a specific media set.
Object versions are selected from one or several backup sessions. During the object copy session,
Data Protector reads the data from the source media, transfers the data, and writes it to the target
media. The source media for the copy may be an original backup medium or a copy; a copy of a
copy may be made as necessary. This process is similar to running a restore that is connected to a
subsequent backup.
The result of an object copy session is a media set that contains the selected object versions in the
sequence specified.
Object copy uses the Copy and Consolidation Session Manager (CSM), which reads the copy
specification and controls the object copy operation. The session is visible to the user and can be
monitored like a backup session via the Data Protector GUI/CLI, e.g. viewing messages, session
status, objects, and devices (read devices are listed before write devices, and each has one of two
states: reading or writing).
The object copy session details and object version data are stored in the IDB. Additionally, the detail
catalog information for each new copy object version is stored in the DCBF files.
The amount of detail stored depends on the logging level selected for the session, which for
filesystem backups can differ from the logging level of the backup session.
Freeing media: To keep only protected object versions on media, you can copy such object
versions and then leave the medium for overwriting. For further details, see the
example on one of the following pages.
De-multiplexing of media: You can copy objects to eliminate interleaving of data. For
further details, see the example on one of the following pages.
Consolidating a restore chain: You can copy a restore chain (all backups that are necessary
for a restore) of an object version to a new media set. A restore from such a media set is
faster and more convenient, as there is no need to load several media and search for the
needed object versions.
Support of disk staging: Administrators may use a high-speed disk backup for the initial
backup, and then replicate (migrate) the data to tape for offsite storage.
Migration to another media type: You can migrate backed-up data to another media type.
For example, you can copy objects from file devices to LTO devices or from DLT devices to
LTO devices.
Vaulting: Administrators may create copies of backed-up objects and keep them in several
locations. Vaulting is the process of storing media in a safe place (often called a vault), where
they are kept for a specific period of time. It is recommended to keep a copy of the backed-up
data on site for restore purposes. To obtain additional copies, you can use the object
copy, object mirror, or media copy functionality, depending on your needs.
The object copy functionality is the base function and key enabler for other important Data
Protector features such as Backup Device Mirroring (creation of multiple backup copies at backup
time), Object Consolidation (merging of a restore chain into a synthetic full backup), and Object
Verification (restoring an object into memory only, to verify the consistency of a backup).
It allows handling backup data on a purely logical layer and removes the dependency on the
underlying storage layer.
Example 1:
SAP archive logs are backed up several times a day to a File Library and
accumulated within one Object Copy session onto a physical tape
The customer needs to back up one or more SAP databases. While the SAP database backup goes
directly to tape, the SAP archive logs are backed up to disk (e.g. a File Library) to avoid continual
tape load and unload as well as tape forward/backward positioning operations.
Once a day, all backed-up archive logs are copied from the disk-based backup location to a physical
tape library in one large object copy job. It is possible to copy several hundred up to several
thousand objects within one copy session; the default maximum is 500. If more objects are
expected, adjust the global parameter CopyAutomatedMaxObjects.
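A minimal sketch of such a tuning change, assuming the standard location of the Data Protector global options file (the exact path varies by platform and version, and the value 1000 is only an example):

```
# In the Cell Manager's global options file, e.g.
#   UNIX:    /etc/opt/omni/server/options/global
#   Windows: <Data_Protector_program_data>\Config\Server\Options\global
# raise the per-session object limit for automated copy jobs:
CopyAutomatedMaxObjects=1000
```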
Example 2:
Fast filesystem backup of multiple systems is de-multiplexed by an
Object copy session to several tapes for fast restore
To keep the backup window short, all backups in this sample setup run with a high device
concurrency to fully utilize the available bandwidth of the LAN or SAN infrastructure. In such a
case the backed-up data is multiplexed on tape. If only one or a few objects need to be restored,
this results in much longer restore times compared to the backup time, because the data is
fragmented on tape. Using the object copy functionality, it is possible to de-multiplex the data by
copying the related data of either all backed-up objects or only the important ones to dedicated
tapes, using a dramatically reduced concurrency (level of multiplexing) setting on the configured
target devices.
It is also possible to copy all the backed-up data to just one target device with a concurrency of 1
for complete de-multiplexing. Although this is a fully offline operation (the original source systems
are not contacted for the copy), it takes much longer than the backup, so it is recommended only
for objects with a strong need for short restore times.
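The effect of de-multiplexing can be illustrated with a small Python sketch (purely illustrative; the block labels and the concurrency-3 layout are invented, not Data Protector's actual tape format):

```python
# With a device concurrency of 3, blocks from three backup objects (A, B, C)
# arrive interleaved on the backup tape:
multiplexed_tape = ["A1", "B1", "C1", "A2", "C2", "B2", "A3", "B3", "C3"]

def demultiplex(blocks):
    """Group the blocks per object, as an object copy to a target device with
    concurrency 1 would lay them out contiguously on the copy tape."""
    per_object = {}
    for block in blocks:
        per_object.setdefault(block[0], []).append(block)
    return per_object

copies = demultiplex(multiplexed_tape)
print(copies["A"])  # ['A1', 'A2', 'A3'] -- contiguous, so a single-object restore is fast
```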
1. Automated
• Post Backup
• Scheduled
2. Interactive
• Media view
• Objects view
• Sessions view
Within the Data Protector GUI, Object Copy is part of the “Object Operations” context. This context
contains three main sections: Copy, Consolidation, and Verification. As the context name indicates,
all of these sections operate on Data Protector backup objects, with Object Copy as the base
functionality that Consolidation and Verification extend and utilize. The two main items
under Copy are Media copy and Object copy. As mentioned in the previous chapter, Media copy
simply directs you to the “Devices & Media” context for manual and automated operations that
perform a bit-for-bit copy of a medium. The second item is Object copy, and there are two
methods for object copy:
Automated and Interactive. Both methods and their features are explained on the following pages.
Additionally, there are the Objects and Tasks tabs. The Objects tab allows the definition of a copy
specification. The Tasks tab starts interactive session wizards to perform various object copy tasks
and create copy specifications. There is a high degree of overlap between the two tabs; they are
simply multiple ways to access the same functionality.
Note:
• It is possible to select one or more media pools, one or more media, or individual objects to copy
• Select Restore Chain feature not available
• Possible to disable copy of unprotected objects (Check option: Enable selection of protected objects only)
There are multiple ways to start an interactive object copy session, sorted by Media, Objects, or
Sessions. Depending on the use case, the situation, and the available backup information, it might
be more convenient to start with Media, Objects, or Sessions.
If an interactive Object Copy by Media is selected, the result pane starts by showing all configured
pools (without free pools); expanding a pool shows all media of that pool, and expanding a medium
shows all objects, with their session IDs, that are stored on that medium. The grouping and sorting
rule is:
1. by media pool
2. by medium
3. by object + description + session ID (and, if applicable, “+copy”)
You can select a pool, a medium, or one or more objects on one or more media, but only completed
object versions. The object can be either the original object version or a copy object version. A
failed object can NOT be copied. “Select restore chain” is not available for this starting point.
It is possible to automatically exclude all unprotected objects from the copy by setting the option
“Enable selection of protected objects only”.
If this is selected, only objects that have data protection can be selected for copying. The check
boxes of objects without data protection are shaded.
4. Optional:
Right-click a marked object and click “Select Restore Chain ..”
Note:
• Possible to mix object types in one copy job (Filesystem, IDB backup or Integration objects)
• Select Restore Chain feature is available and will select the full backup with all required incremental backups
• Possible to disable copy of unprotected objects (Check option: Enable selection of protected objects only)
The starting point Object Copy by Objects shows all backed-up object types, similar to the restore
context.
All object versions are shown, including copies. Selecting a particular original or copy does not
automatically mean that the selected object version will be used as the source during the copy.
Data Protector may substitute the selected object version with one of its copies, or with the
original. You can avoid this so-called Automatic Media-Set Selection (AMSS) by changing from
automatic to manual media-set selection. This option is available in the Summary window (near
the end of the wizard) by selecting the object name and then modifying its properties.
To copy a restore chain (all backups that are necessary for a restore) of an object version, right-
click the object version and click the “Select Restore Chain” option. All required objects (the full
backup and all required incremental backups) become selected. The selection of a restore chain is
not available for integration objects.
As in Object Copy by Media, it is possible to filter out all unprotected objects by checking the
option “Enable selection of protected objects only”, and it is possible to select different object
types within one copy session.
4. Optional:
Right-click an object and click “Select Restore Chain ..”
Note:
• Possible to mix sessions with different object types in one copy job
• Select Restore Chain feature is available and will auto-select the full backup with all required incremental backups
• Possible to disable copy of unprotected objects (Check option: Enable selection of protected objects only)
The starting point Object Copy by Sessions lists all performed sessions in the result pane, with the
latest session ID on top. If you expand a session ID, all objects that were backed up within that
particular session are listed.
Similar to Object Copy by Objects, this starting point supports selecting different object types,
initially backed up in different sessions, for one copy session. The listed sessions are not just
backup sessions; copy and consolidation sessions are also listed and can be selected for copying
within one copy session.
Object Copy by Sessions supports excluding unprotected objects via the “Enable selection of
protected objects only” option and auto-selecting a whole restore chain via the “Select Restore
Chain” option. Data Protector’s built-in Automatic Media-Set Selection feature checks for the best
fit among the available media according to its rules and auto-selects the appropriate sessions after
“Select Restore Chain” is checked.
4. In the right area, mark the Backup, Copy, or Consolidation specifications for copy, click Next, and complete the wizard
Note:
• Possible to add Backup, Copy and Consolidation specifications for automated copy in one copy job
• Copy job will run any time one of the marked specifications has run (post-backup, post-copy, and post-consolidation)
• Source object filters allow control over the copied objects
There are two types of automated object copy in addition to interactive copy:
UNIX: DP_CONFIG/server/copylists/afterbackup
WINDOWS: DP_CONFIG\server\copylists\afterbackup
Note:
• Possible to add Backup, Copy and Consolidation specifications for automated copy in one copy job
• A configured scheduled copy job can be started interactively (right-click the saved specification and select “Start Copy”)
UNIX: DP_CONFIG/server/copylists/scheduler
WINDOWS: DP_CONFIG\server\copylists\scheduler
Object Filter
In this page you specify the criteria for object selection. Only the objects matching the specified
criteria will be copied. The available options are as follows:
Include objects backed up in timeframe (available only for scheduled object copy)
This option defines the timeframe within which Data Protector will search for sessions.
• Relative time
Select this option to set a relative period of time, and then specify the timeframe. The first
number specifies the beginning of the timeframe, and the second number the duration of
the timeframe.
For example, if you specify 24 in the first field and 22 in the second field, and the operation
is scheduled today at 10 PM, Data Protector will copy objects from the sessions that took
place between 10 PM yesterday (24 hours ago) and 8 PM today (22 hours after the beginning
of the timeframe). This time window concept is the same as for the AMO discussed previously
in this module.
• Absolute time
Select this option to set an absolute period of time. Specify the starting and the end date of
the period. Click the drop-down arrows to display the calendar.
Library Filter
• All libraries: No filter on media location
In this page you can specify the library filter for object selection. Only objects residing on media in
the specified libraries will be copied.
Selected libraries
Select this option to include only specific libraries. Objects that reside on media used for the
selected backups within the time window, but outside the selected libraries, will not be copied.
2. Optional:
Replace the original device with another device
Note:
Automatic device selection is influenced by the logical device’s
Device Policy setting and by the global variable
AutomaticDeviceSelectionOrder (0, 1, 2)
1. Source Devices
Device Policy
In addition, it is possible to mark and replace source devices with other devices in the same library
by pressing the Change button and performing the replacement in the popup window.
For large copy jobs involving many devices, this feature allows a dramatic reduction in the number
of devices involved.
2. Destination Devices
Show all
Select this option to display all configured devices, including those that will be used for source
reading. At this stage Data Protector does not yet know which devices will be used for reading if
automated media-set selection is used; with automated media-set selection, device availability is
known only when the copy session starts.
Show selected
Select this option to display only the selected devices. The user can select any number of devices,
but not more than the number of selected objects.
Properties
To display the properties of a device, select the device and click this button. This brings up the
same dialog as in backup, where the user can change the media pool, define a pre-allocation
list, etc.
Min(-imum)/Max(-imum) devices
Specify the minimum and maximum number of available devices, similar to the Load Balancing
feature in backups.
Note: Data Protector will lock the maximum number of devices, so those devices cannot be
used for other tasks, even if they are not used in this copy job.
In this page you can specify options for the object copy operation.
Use replication
This Object Copy wizard can be used to configure data replication between two B2D devices, e.g.
two StoreOnce B6500 hardware stores. To activate this special replication, with or without data
dehydration, check this option. For a normal object copy, do not check this option.
“Recycle data and catalog protection of failed source objects after successful copy”
Select this option to remove the data and catalog protection of failed objects on the source media.
The failed objects will not be copied in the object copy session, so consider the use of this option
carefully.
Using this option automatically frees space on the source data location and allows the space to be
reused for backup.
• Protection
The data protection of the objects on the target media is by default the same as the
protection of the source objects. To specify a different protection period, deselect the
“same as source” option and select one of the following: days, none, permanent, until, or
weeks.
• Catalog Protection
The catalog protection defines how long information about the objects (such as filenames
and detail catalog information) is kept in the IDB.
By default, the catalog protection of the objects on the target media is the same as the
catalog protection of the source objects.
• Logging
The logging level determines the volume of detail about files and directories that is written
to the IDB during backup or object copy sessions. The logging level of a copy session can
differ from that of the backup session; it can even be higher than the level defined at backup
time, because the full catalog is always written to tape. The logging option applies only to
filesystem backups. The logging level cannot be set for integration backups; all information
required for restore/recovery is stored automatically in the IDB at backup and at copy time.
Ownership
The user who starts a session in Data Protector is stored as the session owner in the IDB. Session
ownership is a security feature: only users with appropriate permissions can see and browse a
session. Object copy allows copying sessions initially performed by different owners, so incorrect
usage might result in users being unable to see their session and data after the source data
protection expires. This option allows overriding the default session owner with a specified user.
Summary

                        Object Copy             Object Mirror         Media Copy
What is duplicated      Any combination of      A set of objects      An entire medium
                        object versions from    from a backup
                        one or several          session
                        backup sessions
Time of duplication     Any time after the      During backup         Any time after the
                        completion of the                             completion of the
                        backup                                        backup
Media type of source    Can be different        Can be different      Must be the same
and target media
Size of source and      Can be different        Can be different      Must be the same
target media
Appendability of        Yes                     Yes                   No
target media
Result of the           Media containing the    Media containing      Media identical to
operation               selected object         the selected          the source media
                        versions                object versions
This table provides an overview of the available methods of duplicating backed up data:
• Media copy
• Object copy
• Object mirror (covered in detail in the “Backup” module)
• Media verification
• Object verification
Note: Media verification is located under Devices and Media, similar to Media Copy. The Media
Verification entry under Object Operations is just a placeholder that forwards users to the right context.
Media and object verification allow verifying performed backups, copies, and consolidations
without physically restoring the data to any system.
Analogous to media and object copy, media verification checks a whole medium, while object
verification works at the object level.
Under normal conditions a verification should not be required. Data Protector checks the correct
function of its Media and Disk Agents and reports any problems encountered in the session report,
so the backup administrator can determine the media and object backup status by checking the
session reports, running reports, or querying the IDB.
But in special situations, such as critical restores, vaulting of media with long retention for legal
purposes, or sending media to another datacenter for disaster recovery preparation, a media and
object verification might be required to ensure data consistency.
Media Verification
• Under Devices & Media, expand Pools and expand your media pool
• Mark the medium and select Verify from the context menu
• Select the backup device that should be used for the verification and click Finish
• Check the status of the started media session to see the result
Media verification
As stated on the previous slide, the media verification feature is found in the Devices & Media
context.
Usage
Expand Pools, then expand the media pool holding the medium to verify, and mark that medium.
Select Verify from the context menu. In the Options window, select the logical device that should
be used to read the medium and hit Finish to start the verification.
What is verified?
• Checks the Data Protector headers that have information about the medium
(Medium-ID, Description and Location).
• Reads all blocks on the medium and verifies block format.
• If the CRC check option was used during backup, recalculates the CRC and compares it to
the one stored on the medium. In this case, the backup data itself is consistent within
each block. This level of check has a high level of reliability.
If the CRC check option was not used and the verify operation passed, all the data on the medium
has been read. The medium did not cause a read error, so the hardware status of the tape is fine.
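The principle of the per-block CRC check can be sketched in Python (a simplified model using CRC-32; Data Protector's actual block format and checksum details are not shown here):

```python
import zlib

def write_block(data: bytes) -> dict:
    """At backup time (CRC check option enabled), store each block with its CRC."""
    return {"data": data, "crc": zlib.crc32(data)}

def verify_block(block: dict) -> bool:
    """At verification time, recalculate the CRC and compare it to the stored one."""
    return zlib.crc32(block["data"]) == block["crc"]

block = write_block(b"backup payload")
print(verify_block(block))   # True  -- an intact block passes
block["data"] = b"bit-rotted payload"
print(verify_block(block))   # False -- corruption is detected
```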
1. Automated
• Post Backup
• Scheduled
2. Interactive
• Media view
• Objects view
• Sessions view
Object Verification is part of the Object Operations context. There are two methods for object
verification: Automated and Interactive. As an example, an interactive object verification is
explained on the following pages.
Very similar to object copy, there are the Objects and Tasks tabs. The Objects tab allows the
definition of a verification specification. The Tasks tab starts interactive session wizards to perform
various object verification tasks and create verification specifications. There is a high degree of
overlap between the two tabs, so there are multiple ways to access the same functionality.
1. Change to Object Operations
Object verification offers the same look and feel as configuring an object copy, so the entry points
for creating an automated or interactive object verification are the same. As an example, the
creation of an interactive object verification is used to illustrate the usage of the feature.
Change to the Object Operations context in the DP GUI. Under Verification, expand Object
verification – Interactive and select Sessions to verify objects created in a particular session.
In the result window on the right, the three IDB sessions are listed. Expand the specific session,
here 2014/07/30-4, to see the backed-up objects from that session.
Note:
For a SAN-based verification, select Verify on
media agent host; otherwise data is sent
over the LAN to the selected host for verification
In the Source Devices window, select the device(s) that will be used for reading the data from the
selected session. The device handling is again very similar to object copy. By default the original
devices are used for the verification. If they are not available and Automatic device selection is
used, other suitable devices are used instead. If Original device selection is used, the verification
aborts if the originally used backup devices are not available. Hit Next to continue.
In the Verification target window, select the system that is used to perform the verification. The
normal Restore Disk Agent (VRDA) performs the verification, and this process can be started on
any Data Protector client system with a Disk Agent installed. The Restore Disk Agent does not
restore any data; it just checks whether a restore would be possible, i.e. whether the data is
readable and complete (no data block is missing). Therefore the verification does not have to run
on the original source system.
Note: For a SAN-based verification, select Verify on media agent host; otherwise data is sent over
the LAN to the selected host for verification
Note: It is possible to select copies of selected objects for verification. Mark the Object and click on Properties
to get to the selection menu.
In the Media window, check the required media and ensure that the media are available.
If you click Non-resident media, you will see all media that are currently not accessible online;
these media will issue a mount request.
The last window of the wizard, the Summary window, lists all objects selected for verification. It is
still possible to delete objects from the list; adding objects is not possible here.
Contents
Module 13 — Object Consolidation ................................................................................................... 1
13–3. SLIDE: Object Consolidation - Motivation ............................................................................... 2
13–4. SLIDE: Synthetic Full Backup – How it Works......................................................................... 3
13–5. SLIDE: Synthetic Full Backup – Requirement 1/2 .................................................................. 4
13–6. SLIDE: Synthetic Full Backup – Requirement 2/2 ................................................................... 5
13–7. SLIDE: Object Consolidation GUI/CLI ....................................................................................... 7
13–8. SLIDE: Interactive Object Consolidation 1/3 ........................................................................... 8
13–9. SLIDE: Interactive Object Consolidation 2/3 ........................................................................... 9
13-10. SLIDE: Interactive Object Consolidation 3/3 ........................................................................ 10
13-11. SLIDE: Virtual Full Backup – How it Works ........................................................................... 12
13-12. SLIDE: Virtual Full Backup – Requirements......................................................................... 13
13-13. SLIDE: Synthetic Full Backup vs. Virtual Full Backup 1/2 .................................................... 14
13-14. SLIDE: Synthetic Full Backup vs. Virtual Full Backup 2/2 .................................................... 15
13-15. SLIDE: Object Consolidation – Backup Types ....................................................................... 16
13-16. SLIDE: Restore Considerations ............................................................................................. 17
13-17. SLIDE: Restore with Consolidation and Copies .................................................................... 19
13-18. SLIDE: Limitations ................................................................................................................ 20
Module 13
Object Consolidation
Solution:
• Run an initial full backup and afterwards incremental backups only
• Consolidate the full and incremental backups into one synthetic full backup
• Continue running incremental backups only and consolidate these backups with
the last synthetic full into a new synthetic full backup
Consolidation Types:
• Two different Object Consolidation types are supported:
Synthetic Full Backup
Virtual Full backup
Data Protector object consolidation is another component of Data Protector’s Object Operations.
It is based on the unique object copy feature and, like all object operations, is a fully offline
feature that requires no access to the application system. In addition, it requires no special license
to use; only the configured logical devices need to be licensed.
Data Protector object consolidation dramatically reduces the need for frequent full backups. In
large enterprise environments, full backups are nearly impossible to run because of the amount of
data to back up and the given backup time windows. So customers run incremental backups,
which results in a long restore chain and many media to load.
The object consolidation functionality allows consolidating all incremental backups with an
existing full backup. The result is a newly updated full backup that can be used for restore. This
newly created full backup is called a synthetic full backup.
Afterwards, continue running incremental backups and consolidate these new incremental backups
with the last synthetic full backup. There is then no actual need to run real full backups again.
As stated on the previous slide, there are two different object consolidation types. Let’s start with
the most commonly used type: synthetic full backup.
Data Protector synthetic full backup merges a full backup and any number of incremental backups
into a new, synthetic full backup.
This synthetic full backup is fully independent of the previous full backup and can be used like a
normal full backup. Object copy or further object consolidation sessions with newly created
incremental backups are possible.
As shown in this high-level overview, the created synthetic full backup contains only the set of
data you would get after restoring the full and all incremental backups.
If a file was deleted, the deleted version is not part of the synthetic full backup, and for
modified files, only the latest version of each file is part of the synthetic full backup.
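The merge logic can be sketched as a dictionary update in Python (a deliberately simplified model: file versions are strings and deletions are passed in explicitly, whereas Data Protector derives all of this from the backup data itself):

```python
# Full backup plus two incrementals; each maps filename -> version.
full = {"a.txt": "v1", "b.txt": "v1", "c.txt": "v1"}
inc1 = {"b.txt": "v2"}                  # b.txt modified
inc2 = {"b.txt": "v3", "d.txt": "v1"}   # b.txt modified again, d.txt created

def consolidate(full_backup, *incrementals, deleted=()):
    """Build a synthetic full: the latest version of each file wins,
    and deleted files are not carried forward."""
    synthetic = dict(full_backup)
    for inc in incrementals:
        synthetic.update(inc)           # keep only the latest version of each file
    for name in deleted:
        synthetic.pop(name, None)       # drop files deleted since the full backup
    return synthetic

synthetic_full = consolidate(full, inc1, inc2, deleted=["c.txt"])
print(synthetic_full)  # {'a.txt': 'v1', 'b.txt': 'v3', 'd.txt': 'v1'}
```

The result is exactly the file set a restore of the full plus both incrementals would produce.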
The synthetic full backup can be written to any supported disk or tape device. There is no
dependency on the location of the consolidated full and incremental backups.
However, there are requirements for the location of the backups to be consolidated.
Note: As with object copy, the target device needs to have the same or a higher block size than
the source devices. Other parameters, such as the backup device type (disk or tape), the
number of source or target devices, or the concurrency used for the source or consolidated
data, can differ.
With synthetic full backups, only incremental backups are performed on the backup client
systems. To ensure that all changes on these systems are detected, an enhanced incremental
backup has to be performed.
Enhanced incremental backup uses an enhanced mechanism to detect all changes on the
system, including renamed or moved files, as well as changes to file attributes only.
To activate it, open your backup specification, change to the Options tab, and click Filesystem
Options. In the popup window, select the Other tab
and check the option Enhanced incremental backup,
as shown on the slide above.
1. Automated
• Post Backup
• Scheduled
2. Interactive
• Objects view
• Sessions view
Within the Data Protector GUI, Object Consolidation is part of the Object Operations context.
Copy and Verification were covered in the previous chapter, so now let's focus on Consolidation.
Similar to Copy and Verification, two methods are available for Object Consolidation:
Automated and Interactive. The differences compared to the already covered Object Copy and Object
Verification features are explained using an interactive Object Consolidation configuration on the
following pages.
Additionally, there are the Objects and Tasks tabs. The Objects tab allows the definition of a
Consolidation specification. The Tasks tab starts interactive session wizards to perform various
object consolidation tasks and create consolidation specifications.
Note:
• Objects that cannot be consolidated are grayed out (e.g. every regular or synthetic full backup)
• Direct selections are marked in red; the dependent restore chain is automatically marked in black
• Only filesystem objects can be consolidated
There are two ways to start an interactive Object Consolidation session: by Objects or by
Sessions. Depending on the use case and the available backup information, it might be more
convenient to start with Objects or with Sessions.
If an interactive Object Consolidation by Object is selected, the result pane shows only
backed-up filesystem objects (WinFS for Windows and Filesystem for UNIX), grouped by
system name. Expand a system to see all backed-up objects on that system (drive letters, mount
points). Expand an object (e.g. a drive letter like C:\ <Description>) to see all backup
sessions performed on that object.
Note: Only objects that are backed up using the enhanced incremental backup option are shown
Mark the latest enhanced incremental backup session you want to consolidate. This session will be
marked in red, all dependent backups from the restore chain are automatically marked in black.
Note:
• By default, the original source device(s) are used as read devices for full backups; they can be replaced via the Change button
• Read device(s) for the incremental backups are not pre-allocated; user input is required
• Destination devices support load balancing (Min/Max) and more options via the Properties button (Media Pool, Concurrency, ...)
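The chain marking described above can be sketched as follows (an illustration, not DP internals): each session records which backup it depends on, and selecting a session pulls in everything back to the full.

```python
def restore_chain(depends_on, selected):
    """depends_on maps a session id to the session it depends on
    (None for a full backup). Returns the chain, oldest first."""
    chain = []
    current = selected
    while current is not None:
        chain.append(current)
        current = depends_on[current]
    return list(reversed(chain))

depends_on = {"full-1": None, "incr-1": "full-1", "incr-2": "incr-1"}
chain = restore_chain(depends_on, "incr-2")
```

Marking "incr-2" (red) automatically pulls in "incr-1" and "full-1" (black), just as the GUI does.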
Object Consolidation consolidates full and incremental backups, so two sets of reading devices
are required: one set of reading devices that reads the full backup and one set that reads all
the incremental backups.
This selection is made in the Source window shown above. The devices required to read the
full backup are pre-allocated: the original source devices. If you need to replace these devices,
because they are currently used for backups, just mark the device and click the activated
Change button to select a new device. The devices/writers to read the incremental backups are not
pre-allocated. The File Library that keeps all incrementals (a listed requirement for Object
Consolidation) is shown, and you need to pick one or more writers to read the incremental
backups. Hit Next to continue.
Note: Similar to Object Copy/Verification, a Restore Media Agent (RMA) gets started for each device
to read the data. Unlike a normal restore, each device requires a valid license.
In the Destination Device window select the device(s) used to keep the consolidated data. It is
possible to select disk or tape devices and it is possible to select properties like Media Pool,
Concurrency or Load Balancing option similar to a normal backup specification configuration.
The Options window shows options similar to Object Copy. So it is possible to define the logging
level and protection for the consolidated data, and to define the handling of the source data after a
successful consolidation. For service providers that back up data of different departments or
customers, it is possible to filter objects by owner. Hit Next to continue.
The following window (not shown) lists the required media for this consolidation, grouped by non-
resident media and all media.
In case of an interactive Object Consolidation, the session is automatically shown within the GUI.
For a scheduled or post-backup consolidation, open the Internal Database - Sessions context
within the GUI to see the session details.
Note: Similar to Object Copy, the Copy and Consolidation Session Manager (CSM) controls the
consolidation session.
An interactive Object Consolidation specification is executed directly and not saved for recurrent
execution.
The schedule information for this specification is saved into a separate file under:
DP_CONFIG\consolidationlists\scheduled\schedules
Note: After saving a scheduled Object Consolidation specification, the Advanced Scheduler
can be used to schedule this consolidation specification.
Data Protector File Library with DFMF
A special type of Object Consolidation is the Virtual Full Backup. Data Protector automatically
performs a Virtual Full Backup if the required full and all incremental backups are located within
the same Data Protector File Library and Distributed File Media Format (DFMF) is used in this library.
In such a setup, all required data is already available somewhere within this File Library. By using
the Distributed File Media Format, a Virtual Full Backup is created as a set of pointers only,
which point to the current location of the backed-up files and directories within the File Library
that are identified as the content of this Virtual Full Backup.
The main benefit is that no data needs to be copied within the File Library to create this new
Virtual Full Backup. This operation is a very fast and space-efficient way to create full backups.
Note: Virtual Full Backup is only supported for Data Protector File Libraries using DFMF.
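The pointer idea can be sketched like this (a simplified model, not the DFMF on-disk format): the virtual full stores references into the file library, so a restore resolves a pointer instead of reading duplicated data.

```python
# Data blocks already stored in the file library by earlier sessions.
file_library = {
    "blk-1": b"a.txt (from the full backup)",
    "blk-2": b"a.txt (newer, from an incremental)",
    "blk-3": b"b.txt (from the full backup)",
}

# The virtual full is only a set of pointers to the latest versions;
# no data is copied when it is created.
virtual_full = {"C:/a.txt": "blk-2", "C:/b.txt": "blk-3"}

def restore(path):
    # Restoring resolves the pointer to the data's current location.
    return file_library[virtual_full[path]]

data = restore("C:/a.txt")
```

Creating the virtual full cost only the pointer table, which is why the operation is so fast and space-efficient.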
Internally, the data protection of the file media within the File Library is managed differently
than for normal Data Protector media. Even if the user configures data protection for the full and
the incremental backups, the data might still be required to restore the created Virtual Full Backups.
So Data Protector needs to keep data from the initial full backup and from those incremental
backups that are still part of protected Virtual Full Backups, even though these sessions are no
longer available for restore in the Data Protector GUI.
• The full backup and all incremental backups need to be located in the same File Library
• The Enhanced Incremental Backup option is used
• Distributed File Media Format (DFMF) is activated for the File Library
To activate DFMF for a File Library, change to the Devices & Media context within the GUI,
expand Devices, and open the Properties of the File Library. Under Settings, activate the option:
Important: It is possible to activate DFMF for an existing File Library that was used for backups
before. It is also possible to deactivate DFMF for a File Library. After DFMF
deactivation, Data Protector will create Synthetic Full Backups instead of Virtual Full
Backups.
13-13. SLIDE: Synthetic Full Backup vs. Virtual Full Backup 1/2
Data Protector always performs a Synthetic Full Backup, only in case the
requirements for a Virtual Full Backup are fulfilled it automatically performs a
Virtual Full Backup.
The Object Consolidation wizard provides no option to select whether a Synthetic or a Virtual Full
Backup should be performed during Object Consolidation. There is also no CLI option available.
Data Protector automatically decides whether a Synthetic or a Virtual Full Backup is performed. If
all requirements for a Virtual Full Backup are fulfilled, a Virtual Full Backup is automatically
created; otherwise, a Synthetic Full Backup is created.
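The automatic decision can be summarized as a simple rule (a sketch of the documented behavior, not DP source code):

```python
def consolidation_type(same_file_library, enhanced_incremental, dfmf_enabled):
    """A virtual full is performed only when all requirements are met;
    otherwise Data Protector falls back to a synthetic full."""
    if same_file_library and enhanced_incremental and dfmf_enabled:
        return "virtual full"
    return "synthetic full"

t1 = consolidation_type(True, True, True)
t2 = consolidation_type(True, True, False)   # DFMF off -> synthetic full
```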
13-14. SLIDE: Synthetic Full Backup vs. Virtual Full Backup 2/2
Example 2: Full Backup performed to Tape, one Incremental Backup to Disk, one to Tape
• Consolidation not possible
• Requirements for Synthetic and Virtual Full not fulfilled
The session property Backup Type indicates whether a backup was performed with the
Enhanced Incremental Backup option (full and incremental). In this case, "(enhanced)" is
added to the regular backup type full or incr.:
In addition, it is possible to identify regular full backups, synthetic full backups, and virtual full
backups:
• full ...................................... regular full backup
• full (synthetic, enhanced) .... Synthetic Full Backup
• full (virtual, enhanced) ........ Virtual Full Backup
If an Object Copy or a second Object Consolidation of the same object was performed, "copy" is
added to the Backup Type:
• full (synthetic, enhanced, copy) .. Object Copy of the Synthetic Full Backup
• full (virtual, enhanced, copy) ..... Object Copy of the Virtual Full Backup
Restore Considerations
Scenario:
• 1 full and 5 incremental backups are performed
• after each even incremental, a synthetic full is done
• each synthetic full is copied (Object Copy)
5 restore chains are available. Data Protector automatically picks the shortest restore chain via
AMSS (Automated Media Set Selection).
Restore Considerations
Using Object Copy and Object Consolidation adds a lot of complexity on the restore side as well.
Suppose a problem occurs and the backed-up data needs to be restored. There are five restore
chains available:
1. Restore the full backup and all 5 incrementals
2. Restore the first Synthetic Full and 3 incrementals
3. Restore the Object Copy of the first Synthetic Full and 3 incrementals
4. Restore the second Synthetic Full and 1 incremental
5. Restore the Object Copy of the second Synthetic Full and 1 incremental
The question now is: which backup chain will Data Protector restore, and how is this configured?
Data Protector uses an internal function called Automated Media Set Selection (AMSS) to
identify the best restore chain out of a complex tree of possibilities. In this case, the shortest
chain is selected, which points to Restore Chain 4 or 5.
Which of these two is finally selected depends on the configured location priority.
If disk-based restore has a higher priority than tape-based restore, Restore Chain 4
will be used.
It is possible to switch off AMSS and manually select the copy you want to restore. This is shown on
the next slide.
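AMSS's selection rule, as described above, can be sketched as picking the shortest chain and breaking ties by location priority (here a lower number means higher priority, e.g. disk before tape; the scoring is illustrative, not DP's actual algorithm):

```python
def pick_chain(chains):
    """chains: list of (sessions, location_priority) pairs.
    The shortest chain wins; location priority breaks ties."""
    return min(chains, key=lambda c: (len(c[0]), c[1]))

chains = [
    (["full", "i1", "i2", "i3", "i4", "i5"], 2),  # chain 1 (tape)
    (["synth1", "i3", "i4", "i5"], 1),            # chain 2 (disk)
    (["synth1-copy", "i3", "i4", "i5"], 2),       # chain 3 (tape)
    (["synth2", "i5"], 1),                        # chain 4 (disk)
    (["synth2-copy", "i5"], 2),                   # chain 5 (tape)
]
best, priority = pick_chain(chains)
```

Chains 4 and 5 are equally short; with disk given higher priority, chain 4 is selected, matching the scenario above.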
For the restore, mark the data in the Source window and select the session you want to restore,
e.g. the 5th incremental backup, based on the example from the previous slide.
Use the Destination, Options, and Media windows in the same way as during a normal restore.
In the Copies window, all object versions are listed. For the full backup, the synthetic full object
was automatically selected.
If you want to override this default selection, mark that object version and click Properties.
In the Version Properties window, check Select source copy manually and pick the version from
the pull-down menu. In the example above, the Object Copy is selected.
Limitations
• Synthetic Full Backup requires all incremental backups within the same
Data Protector File Library
• Virtual Full Backup requires full and all incremental backups within the
same Data Protector File Library
• Virtual Full Backup is only supported for Data Protector File Library
Limitations
• Synthetic Full Backup requires all incremental backups to be located within the same Data
Protector File Library. The full backup can be located on any supported disk or tape device.
• Virtual Full Backup requires the full and all incremental backups to be located within the same
Data Protector File Library. Only in this configuration can Data Protector use DFMF to create
the set of pointers to the physical location of the data within this File Library.
• Virtual Full Backup is only supported for the Data Protector File Library. No Backup-to-Disk
device, File Jukebox, or Virtual Tape Library is supported.
• Consolidation of AES 256 encrypted data is not supported. If the full backup, the incremental
backups, or any of them used Data Protector AES 256 encryption, Object Consolidation is
not supported.
Contents
Module 14 — Internal Database 1
14–3. SLIDE: Concept 1/4 – Embedded database system ............................................................... 2
14–4. SLIDE: Concept 2/4 – DP IDB as embedded database............................................................ 3
14–5. SLIDE: Concept 3/4 – What is the IDB used for ...................................................................... 4
14–6. SLIDE: Concept 4/4 – PostgreSQL .......................................................................................... 5
14–7. SLIDE: Architecture 1/7 – Overview ....................................................................................... 6
14–8. SLIDE: Architecture 2/7 – Catalog database .......................................................................... 7
14–9. SLIDE: Architecture 3/7 – Media Management database ...................................................... 9
14-10. SLIDE: Architecture 4/7 – Detail Catalog Binary Files .......................................................... 10
14-11. SLIDE: Architecture 5/7 – Session Message Binary Files ..................................................... 13
14-12. SLIDE: Architecture 6/7 – Serverless Integration Binary Files ............................................ 14
14-13. SLIDE: Architecture 7/7 – Encryption Keystore ................................................................... 16
14-14. SLIDE: IDB Directory structure ............................................................................................. 18
14-15. SLIDE: IDB related Data Protector Services ......................................................................... 21
14-16. SLIDE: Internal Database Size Limits ................................................................................... 23
14-17. SLIDE: Administration tasks ................................................................................................. 25
14-18. SLIDE: Manage IDB grow 1/2 ................................................................................................ 26
14-19. SLIDE: Manage IDB grow 2/2 ................................................................................................ 28
14-20. SLIDE: IDB Maintenance 1/3 ................................................................................................. 30
14-21. SLIDE: IDB Maintenance 2/3 ................................................................................................. 31
14-22. SLIDE: IDB Maintenance 3/3 ................................................................................................. 33
14-23. SLIDE: Maintenance Mode 1/2.............................................................................................. 34
14-24. SLIDE: Maintenance Mode 2/2.............................................................................................. 37
14-25. SLIDE: IDB Backup ................................................................................................................ 38
14-26. SLIDE: IDB Backup objects .................................................................................................... 41
14-27. SLIDE: IDB Incremental Backups .......................................................................................... 42
14-28. SLIDE: IDB Restore - Overview ............................................................................................. 43
14-29. SLIDE: IDB Online Restore 1/4 .............................................................................................. 44
14-30. SLIDE: IDB Online Restore 2/4 .............................................................................................. 46
14-31. SLIDE: IDB Online Restore 3/4 .............................................................................................. 48
14-32. SLIDE: IDB Online Restore 4/4 .............................................................................................. 49
14-33. SLIDE: IDB Offline Restore.................................................................................................... 50
14-34. SLIDE: IDB Restore during Disaster Recovery ...................................................................... 52
14-35. SLIDE: IDB reports ................................................................................................................ 54
14-36. SLIDE: Notifications .............................................................................................................. 56
Module 14
Internal Database
Concept 1/4
This module deals with HP Data Protector's Internal Database (IDB). Before we start, let's have a
quick overview of databases and the specifics of the model DP uses.
A database management system (DBMS) is a program that lets one or more users create and
access data in a database. A DBMS can be thought of as a file manager that manages data in
databases rather than files in file systems. Such systems are very powerful, provide a lot of base
and extended functionality, and are sold as separate products. However, many applications need
to store and manage some data while hiding the complexity of a DBMS from their end users. A
DBMS with such a reduced feature set is called an embedded database system:
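SQLite (not used by Data Protector, which embeds PostgreSQL) is a familiar example of the embedded-database idea: the database engine runs inside the application process, so the end user never installs or administers a separate DBMS.

```python
import sqlite3

# The whole database lives inside this process -- no server to manage.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE media (id TEXT, label TEXT)")
con.execute("INSERT INTO media VALUES ('m1', 'weekly full')")
rows = con.execute("SELECT label FROM media").fetchall()
con.close()
```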
Concept 2/4
File Properties
Used Media
The Data Protector Internal Database (IDB) is an embedded database, located on the Cell Manager,
which keeps information regarding what data is backed up, on which media it resides, the result of
backup, restore, object copy, object consolidation, object verification, and media management
sessions, and what devices and libraries are configured.
The Data Protector IDB, as an embedded database, is fully transparent to the end user, which
means:
Data Protector 8.00 and higher use a PostgreSQL database as the Internal Database.
The details of the PostgreSQL database are explained later in this module.
Concept 3/4
Fast and convenient restore:
Browse and select the files and directories to be restored. Required list of media
and restore devices will be provided
Media management:
Stores information about all media used in backup, copy, and consolidation
sessions, manages protection of stored data, and tracks the location of backed-up data on
media for fast restore as well as the location of media in tape libraries
Encryption/decryption management
In case of encrypted backups operation encryption keys are stored in the IDB and
retrieved in case of a restore
Fast and convenient restore: Browse and select the files and directories to be restored. The
required list of media and restore devices will be provided.
Backup management: The information stored in the IDB enables you to check the details of
performed backup, copy, consolidation, and restore sessions.
Media management: Stores information about all media used in backup, copy, and consolidation
sessions, manages protection of stored data, and tracks the location of backed-up data on media
for fast restore as well as the location of media in tape libraries.
Reporting and auditing: Supports various reports about Data Protector operations as well as
auditing of performed configuration changes.
Concept 4/4
PostgreSQL (pronounced "postgres Q L")
• Developer(s): PostgreSQL Global Development Group
• Type: Object-relational database management system (ORDBMS)
• License: free, open source software
• Website: www.postgresql.org
• DP Support: Data Protector 8.0X and 8.1X and higher
• Used Version: PostgreSQL 9.1 (on all supported Data Protector Cell Manager OS)
• Replaced IDB: Raima Database System (RDS) version 6.0 (used until DP 7.X)
PostgreSQL is available for all common instruction set architectures and many platforms,
including Linux, HP-UX, FreeBSD, Solaris, Microsoft Windows, and Mac OS X.
It implements the majority of the SQL:2008 standard, is ACID-compliant and fully transactional
(including all DDL statements), has extensible data types, operators, index methods, functions,
aggregates, and procedural languages, and provides multi-threading support for highly parallel
operations.
More information about PostgreSQL is available on Wikipedia; the documentation is at
http://www.postgresql.org/docs/
Architecture 1/7
Overview
► Catalog Database
► Encryption Keystore
• Catalog Database
• Media Management Database
• Detail Catalog Binary Files
• Session Messages Binary Files
• Serverless Integration Binary Files
• Encryption Keystore
Each of the IDB components stores certain specific Data Protector information (records),
influences IDB size and growth in different ways, and is located either in datafiles that belong to
the PostgreSQL server or outside, in flat files on the DP Cell Manager. The default location of the
IDB with its components is:
UNIX : /var/opt/omni/server/db80
Windows: C:\ProgramData\Omniback\server\db80
Architecture 2/7
UNIX : DP_IDB/idb
Windows: DP_IDB\idb
Each Data Protector client system with its drive letters (e.g. C:\, D:\, ...) on Windows or mount
points (e.g. /opt, /var, /) on UNIX creates, together with contributing properties like client system
name, label/description, and session ownership, a unique object entry in the IDB (for details see
the Backup module). Each backup of these objects creates a new object version, which is also
stored in the IDB.
The position of each backed-up component on the backup media is stored in the catalog segment
on the backup media and in the CDB. This information allows very fast single-file restores, because
DP knows exactly where the data is stored on tape.
Every DP operation (backup, copy, consolidation, and restore sessions) is tracked under a unique
session ID. This session ID, e.g. 2014/01/30-234, is stored in the CDB as well, together with
additional metadata like session owner and timestamp.
Note: Detailed information about backed-up files and their attributes is not stored in the
CDB part of the IDB, but in the so-called Detail Catalog Binary Files (DCBF). These
DCBF are covered later in this chapter.
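The benefit of storing positions in the CDB can be sketched as a lookup table (illustrative names and structure, not the real CDB schema): a single-file restore can seek straight to the recorded position instead of scanning the whole medium.

```python
# object version -> {file path: (medium id, segment, offset)}
catalog = {
    ("winsrv1:/C", "2014/01/30-234"): {
        "C:/data/report.doc": ("medium-0a11", 3, 1482),
    },
}

def locate(object_version, path):
    """Return the recorded media position of a backed-up file."""
    return catalog[object_version][path]

pos = locate(("winsrv1:/C", "2014/01/30-234"), "C:/data/report.doc")
```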
Architecture 3/7
The default location of the MMDB datafiles is the same as for the CDB datafiles:
UNIX : DP_IDB/idb
Windows: DP_IDB\idb
Note: The MMDB and the Objects/Object versions from the CDB are also referred to as the IDB
Core component.
Architecture 4/7
• are grouped into configurable DCBF directories, which are either filled up
with DCBF sequentially or in parallel (global parameter)
• contain all metadata information about backed up files, such as file name,
size, modification time, attributes/protection, ..
• are created for each medium, will be deleted after medium is exported and
will be recreated if medium is overwritten
The DCBF are grouped by DCBF directories. The default location of the DCBF directories is:
UNIX : DP_IDB/dcbf
Windows : DP_IDB\dcbf
The size of such a DCBF directory is nearly unlimited; the currently documented limit is 2 PB. Each
created DCBF can also grow almost without limit in space; its size is limited by the OS and depends
on the number of files to be backed up.
Using the DP GUI/CLI, the user is able to create and modify DCBF directories. A maximum of 100
DCBF directories can be configured, even outside of the default db80 area, on separate mount
points or drive letters for better IDB performance.
There are two global parameters that have an impact on the DCBF behavior:
# MaxDCDirs
(default = 50, Min=1, Max= 100)
This option specifies the maximum number of configured DCBF directories.
# DCDirAllocation
(default = 1)
This global option controls which algorithm is used to select the
directory for the creation of a new DCBF file:
0 - Fill in sequence
1 - Balance size
2 - Balance number
The global parameter DCDirAllocation is especially important with regard to IDB
performance. Setting it to 1 or 2 utilizes all configured DCBF directories, which yields much
better performance than filling up the DCBF directories in sequence.
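The three allocation values can be sketched as follows (a simplified model of the documented behavior; the real selection logic may differ, e.g. "fill in sequence" moves to the next directory only when the current one is full):

```python
def choose_dcbf_dir(dirs, dcdir_allocation):
    """dirs: DCBF directories in configuration order, each a dict with
    'name', 'size_mb', and 'num_files'. Returns the directory that
    should receive the next DCBF file."""
    if dcdir_allocation == 0:                        # 0 - fill in sequence
        return dirs[0]
    if dcdir_allocation == 1:                        # 1 - balance size
        return min(dirs, key=lambda d: d["size_mb"])
    if dcdir_allocation == 2:                        # 2 - balance number
        return min(dirs, key=lambda d: d["num_files"])
    raise ValueError("DCDirAllocation must be 0, 1, or 2")

dirs = [
    {"name": "dcbf",  "size_mb": 900, "num_files": 40},
    {"name": "dcbf2", "size_mb": 200, "num_files": 70},
]
by_size = choose_dcbf_dir(dirs, 1)["name"]    # smallest directory wins
by_count = choose_dcbf_dir(dirs, 2)["name"]   # fewest DCBF files wins
```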
Each medium used in a backup gets its own DCBF. If you run an object or media copy, a new DCBF
file is also created for each newly created medium. If you run an object consolidation session,
new DCBF are likewise created for the medium that contains the consolidated data. If a backup
appends to a medium, the corresponding DCBF is appended as well. If a medium is overwritten,
the corresponding DCBF is first deleted, and a new DCBF with the same medium ID but a different
encoded timestamp suffix is created.
The name of a DCBF consists of the medium ID of the medium that contains the backed-up data
and an encoded timestamp suffix, with a .dat extension.
To identify the Data Protector medium ID from a DCBF name, do the following:
Afterwards, run:
omnimm -media_info -detail
Example:
Expand the listed Pool and locate the medium within the pool.
Note: DCBF should not be created, modified, or deleted manually. DCBF and DCBF directories
are linked into the CDB part, and external manipulation will result in a corrupted
database. Use the appropriate IDB management tools (GUI/CLI) to maintain these
files and directories.
Architecture 5/7
UNIX : DP_IDB/msg
Windows : DP_IDB\msg
The SMBF contain only message codes, timestamps, and system details, which supports localized
output of session messages on DP clients based on their localization setting.
Architecture 6/7
NDMP backups require a special NDMP Media Agent (MA), because the format of the data on the
media follows the NDMP standards and not the DP tape definitions; e.g., the catalog information
on an NDMP medium is stored at the end of the medium and not in the regular catalog segments.
Catalog information in the IDB is handled in the same way as for normal filesystem backups.
For a successful NDMP backup or restore, certain environment variables need to be set to support
specific NDMP devices or to activate filer-specific functions. For details about these environment
settings, consult the documentation of the configured NDMP filer.
Within DP, these environment parameters can be entered in the NDMP environment window of
the DP GUI, as shown below.
These environment settings are stored into the SIBF during the backup. The SIBF are located on the
Cell Manager under:
UNIX : DP_IDB/meta
Windows : DP_IDB\meta
Example:
TAPE_NAME netapp1
FILESYSTEM /vol/imtic_bbn_vol1
SMTAPE_DELETE_SNAPSHOT N
SMTAPE_ALL_SNAPSHOTS N
SMTAPE_MODE DR
UPDATE Y
SMTAPE_BREAK_MIRROR Y
NDMP_VERSION 4
HIST Y
TYPE smtape
SMTAPE_NO_DISK_WRITE N
DEBUG N
LEVEL 0
These environment variables are required for NDMP restore as well and will automatically be read
and applied from the corresponding SIBF at restore time.
Architecture 7/7
Encryption Keystore
Encryption Keystore:
Data Protector currently supports software (AES 256-bit) encryption and drive-based
encryption. All automatically or manually created keys are centrally stored in the IDB keystore.
These keys are also used for object copy, object verification, and restore sessions of encrypted
objects. In the case of software encryption, the key identifiers (each consisting of a KeyID and a
StoreID) are mapped to the encrypted object versions.
This mapping is stored in the CDB. It is required because different objects on a backup medium can
result from different backups or clients and can therefore have different software encryption keys.
In the case of hardware encryption, Data Protector activates the feature on the backup device and
manages the keys. The key identifiers are mapped to the medium ID, and these mappings are stored
in a keystore catalog file (not related to the CDB).
This file contains the information required to allow an encrypted medium to be exported to another
cell. Refer to omnikeytool in the OLH to learn how to export keys to a CSV file and how to import them.
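The two mappings can be sketched side by side (illustrative names, not DP's actual keystore or CDB schema):

```python
# Software encryption: key identifier (KeyID, StoreID) -> object versions,
# stored in the CDB.
software_key_map = {
    ("key-17", "store-1"): ["winsrv1:/C 2014/01/30-5"],
}

# Hardware encryption: medium ID -> key identifier, stored in the
# keystore catalog file.
hardware_key_map = {
    "medium-0a11": ("key-21", "store-1"),
}

def key_for_medium(medium_id):
    """Look up the key needed to read a hardware-encrypted medium."""
    return hardware_key_map[medium_id]

key = key_for_medium("medium-0a11")
```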
Keystore location
UNIX : DP_IDB/keystore
Windows : DP_IDB\keystore
UNIX : DP_IDB/keystore/catalog
Windows : DP_IDB\keystore\catalog
Note: IDB Directories required for storing ZDB/IR information are not shown here
The slide shows the full IDB directory structure. All the IDB components that were discussed on the
previous slides are listed here. In addition, it illustrates which of the IDB components are stored
within the PostgreSQL database:
DP_IDB\idb
• Location of the CDB and MMDB
DP_IDB\jce
• Job Control Engine, used by Advanced Scheduler and Application server
DP_IDB\pg
• Location of the PostgreSQL system database, IDB configuration files
and Transaction Logs
DP_IDB\dcbf
• (default) location of the DCBF (can be modified via DP GUI or CLI)
DP_IDB\meta
• location of the SIBF
DP_IDB\msg
• location of the SMBF
DP_IDB\keystore
• location of the Encryption keystore
As mentioned on the slide, the directories for Zero Downtime Backup (ZDB) and Instant Recovery
(IR) are not shown in the directory structure. Under the db80\ directory there are directories like
smisdb, vssdb, or xpdb, which contain configuration information about specific DP-supported disk
array integrations, or ZDB/IR-related data about created and kept replica volumes; these are not
explained in this module.
UNIX : DP_IDB/logfiles/rlog
Windows : DP_IDB\logfiles\rlog
See the IDB Offline Restore chapter in this module for more information.
UNIX : DP_IDB/pg
Windows : DP_IDB\pg
For daily operations it is not required to modify any of these files. Wrong entries could have a significant impact on the stability of the IDB, so apply changes only if HP Support instructs you to do so in case of an IDB issue.
The only exception is an MMDB merge into a CMMDB. In this situation the pg_hba.conf file needs to allow access from the CMMDB server. See the OLH for details.
IDB Logfiles
In case of IDB issues it might be required to check the PostgreSQL logfiles for more information.
These PostgreSQL logfiles are located under:
UNIX : DP_IDB/pg/pg_log
Windows : DP_IDB\pg\pg_log
UNIX : DP_IDB/pg/pg_xlog
Windows : DP_IDB\pg\pg_xlog
By default, backed-up transaction logs are truncated after the backup. This option can be switched off so that transaction logs are archived and kept on disk for some time, allowing more complex backup scenarios. It also allows faster recovery if the archived logs are still available on disk and can be used for the recovery. The archived transaction logs are stored under:
UNIX : DP_IDB/pg/pg_xlog_archive
Windows : DP_IDB\pg\pg_xlog_archive
The next IDB backup performed with the truncation option enabled empties this archive folder after the backup completes successfully.
Reportdb
In addition there is a directory named reportdb, which was not covered in the previous chapter. This folder stores information about performed restore sessions, which can be used to build up specific statistics on restores.
The feature is not enabled by default. To activate restore tracking and reporting, the undocumented global parameter EnableRestoreReportStats needs to be set to “1”. The reports can then be generated from the DP CLI.
IDB Service
• name: hpdp-idb
• default port: 7112
• The IDB Service is the service account for the PostgreSQL database, so it starts and stops
the PostgreSQL server
IDB Connection Pooler
• name: hpdp-idb-cp
• default port: 7113
• The IDB Connection Pooler controls IDB access. All Data Protector processes that want to write into or read from the IDB, e.g. all Session Managers, connect to the IDB Connection Pooler to get a server connection to the IDB assigned
Application Server
• name: hpdp-as
• default port: 7116
• The JBoss Application Server provides web services for Data Protector components. It uses the secure certificate-based HTTPS protocol for communication
Since Data Protector version 8.00, the proper function of the Internal Database requires three Data Protector services to run:
• IDB Service hpdp-idb
• IDB Connection Pooler hpdp-idb-cp
• Application Server hpdp-as
IDB service – The service starts and stops the internal database
Default port number: 7112
Usually this service is accessed locally by processes on the Cell Manager system. The port is only accessed remotely while information is transferred from the IDB on the Cell Manager to a central MoM Server, for example when merging the MMDB into a central MoM Server MMDB database (i.e. while the administrative commands omnidbutil -mergemmdb or -cdbsync are executed to merge or synchronize MMDB data from the IDB into a CMMDB on a MoM server). During execution of these commands this port needs to be open in a firewall for access by the processes running on the MoM Server system.
IDB Connection Pooler – The Internal Database Connection Pool service controls connections to the
Data Protector Internal Database:
Default port number: 7113
Session Managers that need to write information into the IDB or query it for information cannot connect to the IDB directly. The IDB Connection Pooler manages the access and ensures that all IDB access requests are served and no request gets lost or times out, even under heavy load. Only local processes on the CM access this service. There is no need to open this port in a firewall configuration for external access.
Application Server – The Application Server service provides web services for components used by
Data Protector
Default port number: 7116
Dedicated Data Protector components, like the DP GUI based output filtering or the Advanced Scheduler with priority-based scheduling, are powered by a JBoss Application Server running on the DP Cell Manager.
All communication uses the secure HTTPS protocol. To allow GUI clients to connect to this port, it has to be opened for external access in a firewall configuration.
All listed IDB services are started automatically during system boot and can be managed together with the other Data Protector services via omnisv -start/-stop/-status
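The three services, their default ports, and when each port must be reachable externally can be summarized in a small lookup table. The data mirrors the text above; the table layout and the helper name port_of are our own sketch, not a DP API:

```python
# Summary of the IDB-related DP services described above.
# "external_access" notes when the port must be opened in a firewall.
IDB_SERVICES = {
    "hpdp-idb": {
        "port": 7112,
        "external_access": "only during omnidbutil -mergemmdb / -cdbsync from a MoM server",
    },
    "hpdp-idb-cp": {
        "port": 7113,
        "external_access": "never (local Cell Manager processes only)",
    },
    "hpdp-as": {
        "port": 7116,
        "external_access": "required for DP GUI clients (HTTPS)",
    },
}

def port_of(service):
    """Return the default port of an IDB-related DP service."""
    return IDB_SERVICES[service]["port"]

print(port_of("hpdp-as"))  # 7116
```

A table like this is a handy checklist when planning firewall rules for a Cell Manager.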
With the evolution of Data Protector, more and more IDB limits have gone away, so the majority of IDB components can grow without limit, restricted only by the underlying operating system. However, some limits still exist and have to be considered during DP cell layout planning. While some of the documented limits are just soft limits to guarantee proper function of the IDB, there are also limits that cannot be exceeded at any time. It is therefore suggested to always operate within the documented limitations to prevent IDB problems like performance issues or IDB corruption.
MMDB Limitations
Typically the MMDB part is very small in size compared to the CDB and DCBF.
CDB Limitations
DCBF Limitations
The DCBF part is the biggest part of the IDB. The following limitations apply:
There are nearly no size limitations for the DCBF anymore, so overall DCBF sizes of more than 1 TB are possible and supported; they need to be considered in DP Cell Manager system sizing calculations.
Administration Tasks
IDB Maintenance
Administration tasks
In general, the DP IDB is implemented as an embedded database with processes in place to monitor its health and perform automated daily maintenance operations.
However, there are still administration tasks the DP administrator needs to perform on a regular basis.
During DP Cell Manager installation, 5 DCBF directories with an overall maximum size of 1 TB are created. Normally there should be no immediate need to create new DCBF directories.
To prepare for future growth, it may be advisable to create more DCBF directories.
• via DP GUI
Open the DP GUI and switch to the “Internal Database” section. Expand “Usage”, right-click “Detail Catalog Binary File” and select “Add Detail Catalog Directory ..”
In the new window select the allocation sequence (defines in which order DCBF directories are used), select the path (it needs to differ from any existing DCBF directory), select the maximum DCBF directory size, the maximum number of DCBF files within the DCBF directory, and at which size a “low space” notification should be sent out. If the specified low space limit is hit, the IDB switches to “No Log” mode. At the end, click Finish.
Note: The Low Space limits of all DCBF directories are consolidated and the consolidated value is checked, so with 5 DCBF directories, each with a Low Space limit of 2 GB, an overall 10 GB of free space is required on the IDB partition. It is possible to modify the limit of existing DCBF directories as shown below.
If a modification of an existing DCBF directory is required, just double-click the listed DCBF directory in the “Detail Catalog Binary File” window and modify the parameters like “Maximum Size” or “Low Space”.
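The consolidation of the per-directory Low Space limits described in the note above is simple arithmetic; a minimal sketch (the function name is our own):

```python
def required_free_space_gb(dcbf_low_space_limits_gb):
    """The Low Space limits of all DCBF directories are summed up;
    the consolidated value must be free on the IDB partition."""
    return sum(dcbf_low_space_limits_gb)

# 5 DCBF directories, each with a 2 GB Low Space limit -> 10 GB required
print(required_free_space_gb([2, 2, 2, 2, 2]))  # 10
```

Adding a sixth directory with a 2 GB limit would raise the requirement to 12 GB, which is worth checking before extending the DCBF layout.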
1. Logging Level
2. Catalog Protection
Note:
In DP versions before 8.00, the missing execution of offline IDB maintenance jobs was another important IDB growth factor. These offline IDB maintenance jobs are no longer required with the currently used IDB.
The Internal Database will continue to grow, as more sessions are executed and as more data is
backed up within the cell. Data Protector stores all the details of successful as well as failed
sessions for a later restore query or reporting.
The major contributors to the growth and size of the Internal Database are:
• Logging Level
• Catalog protection
• Growing Backup environment
Logging Level
The number of files and directories backed up and the level of details held in the
database to describe them.
• Log File
all details beside file attributes are stored in IDB, single file browsing and restore is
possible
• Log Directory
only directory details are stored in IDB, no file browsing possible, only directories listed
The most significant influence on the growth of the IDB is the addition of new clients and new files,
as well as the amount of detail logged for each. During the initial configuration of the Data
Protector cell, the growth and dynamics of the data will be very high. Over time, however, the dynamics may average around 3-5% per client. The selection of “Log File” or “Log Directory” will
prevent the unnecessary storage of file version information for dynamic files. The files will be
recoverable from tape, but their details will not need to be stored in the database, as they are
unlikely to be requested individually. Typically restore by object, or restore by directory is used to
put back the files onto the system.
Catalog protection
Defines how long detail information about backed-up data is kept in the database (default is permanent protection).
Data Protector allows you to set protection for data backed up and backup catalog
information independently. This allows the physical data protection of backup objects on media to
be different from the related catalog information for the same objects stored in the Internal
Database. Setting the catalog retention time to a period lower than the physical protection time
can be useful. For example, if media is required to be kept for a long time span, but realistically, will
not be required for restore (archives, etc), the catalog can be kept for only one month, while the
data on tape is protected for 3 years.
If the catalog protection is set equal to the media protection, the IDB will continue to grow rapidly. Setting catalog protection to a higher value than data protection is possible but makes no sense: once data protection expires and the tape is overwritten, the catalog data is removed from the IDB regardless of the catalog protection set. When a tape is overwritten with new data, the IDB stores only the new catalog data and does not keep any obsolete catalog data about the reused media.
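The rule above, that catalog data never outlives the data protection, amounts to taking the minimum of the two retention periods. A small sketch (the function and parameter names are our own, not DP terminology):

```python
def effective_catalog_retention_days(catalog_protection_days, data_protection_days):
    """Catalog entries are removed once data protection expires and the
    tape is overwritten, regardless of the catalog protection set, so
    the effective catalog retention is the minimum of the two values."""
    return min(catalog_protection_days, data_protection_days)

# Archive scenario from the text: tape protected for 3 years,
# catalog kept for only one month.
print(effective_catalog_retention_days(30, 3 * 365))   # 30
# Catalog protection higher than data protection has no extra effect:
print(effective_catalog_retention_days(3 * 365, 30))   # 30
```

This is why lowering catalog protection below data protection is the practical lever for limiting IDB growth.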
The Data Protector Internal Database requires maintenance to ensure the proper function of all DP
processes. There are two types of maintenance:
Both maintenance types are explained with more details on the following slides.
Daily Maintenance
• is running in the background and requires no IDB downtime
• controlled by global parameter DailyMaintenanceTime
• default start time: 12:00PM
• executes a set of purge operations: purge sessions, purge messages, purge dcbf, purge mpos
• In addition, it finds free unprotected media and de-allocates them into the corresponding free pool
Daily Healthcheck
• performs a list of checks that can trigger Data Protector notifications
• controlled by global parameter DailyCheckTime
• default start time: 12:30PM
• executed checks: IDB Space Low, IDB Limits, IDB Backup Needed, IDB Reorganization Needed (only every Monday), Not Enough Free Media, Health Check Failed, User Check Failed, Unexpected Events, License Warning, License Will Expire
• By default, any triggered notification is sent to the Data Protector Event Log
Note:
In DP 8.00 and higher versions, IDB downtime for maintenance operations like Filename Purge or IDB Export/Import is no longer required as it was in previous DP versions. All maintenance operations can be performed while the IDB is online.
Daily Maintenance
Daily Maintenance is running in the background and requires no IDB downtime. It is controlled by
global parameter DailyMaintenanceTime:
# DailyMaintenanceTime=HH:MM
# default: 12:00 (HH:MM)
# This option is used for starting daily maintenance tasks at
# first omnitrig run after the specified time each day. Valid
# values are hour:minute, using the twenty-four hour clock notation.
As listed, the default start time is 12:00 PM. During Daily Maintenance a set of purge operations is executed, which typically lasts a few seconds. Mainly obsolete DCBF and SMBF files are removed for media with expired protection.
• -sessions
• -messages
• -dcbf
• -mpos
These purges are executed via the CLI command:
The data removed by the “-sessions” option is determined by the setting of the KeepObsoleteSessions global variable, the data cleaned up by the “-messages” option by the KeepMessages global variable, and the records purged by the “-mpos” option are identified by the QuickMediaFormat global variable.
In addition, Daily Maintenance identifies free unprotected media and de-allocates these media into the corresponding free pools. The CLI command is:
For details, see the omnidbutil man page or the HP Data Protector Command Line Interface
Reference.
Daily Healthcheck
By default, Data Protector starts a check for a set of notifications once a day. Any triggered notification is sent by default to the Data Protector Event Log. The check is controlled by the global parameter DailyCheckTime:
# DailyCheckTime=HH:MM
# default: 12:30 (HH:MM)
# This option is used for starting daily checks at first
# omnitrig run after the specified time each day. Valid values
# are hour:minute, using the twenty-four hour clock notation
# Specifying 'None' disables starting daily checks.
Note: To switch off Daily Maintenance or the Daily Healthcheck, enter “None” instead of an HH:MM value.
Problem: The IDB is running out of space
Notification: IDB Space Low
Actions:
• Check for free disk space on the IDB partition
• Extend the DCBF directory size
• Reduce the IDB growth by tuning the logging level
• Reduce the current IDB size by changing the catalog protection
Problem: The IDB does not work properly (might be corrupted)
Notification: IDB Corrupted
Actions:
• Check the IDB consistency (omnidbcheck [-extended])
• See the OLH or contact HP Support for information on how to fix the corruption
While the majority of the IDB maintenance tasks in Data Protector are automated, there are still tasks that require manual maintenance.
Two of these manual maintenance tasks are listed as an example on the slide, with the notification name and the required action:
More information about possible IDB related Notification messages and how to resolve these
issues are listed in the Data Protector OLH (“Notification Types - Events that Trigger Notifications”).
Note:
Stopping Maintenance Mode is also possible from the DP GUI:
Clients context - select Actions from the menu bar - click Stop Maintenance Mode
Purpose
Special Data Protector cell maintenance tasks, like a Cell Manager patch installation or fixing an IDB corruption, require a running Data Protector environment, but beside the maintenance activities no other tasks like backups or IDB queries should run. This special mode is called Maintenance Mode.
While the Data Protector cell is running in Maintenance Mode, the Cell Request Service (CRS) no longer serves Session Manager startup requests, like the startup of a BSM for a backup or of a DBSM for a report or a DP GUI based IDB query. Instead, the request is rejected but logged into a special maintenance.log file for processing after leaving Maintenance Mode.
Example: DP_CONFIG\log\maintenance.log
********************** vm9.dpdom.com **********************
28-Feb-14 9:22:44 AM CRS.10008.3976 ["/cs/mcrs/sessions.c $Rev: 39487 $ $Date:: 2013-10-14 16:26:07":141]
A.08.10 b154
CRS is in maintenance mode - session rejected
session id: R-2014/02/28-1
session type: bsm
session desc: Backup
start date: 2014-02-28 09:22:44
owned by: DPDOM\ADMINISTRATOR pid=0
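A rejected-session entry like the one above follows a simple "key: value" layout, so it can be picked apart with a few lines of parsing. This is a sketch against the sample format shown, not a DP-provided parser; the function name is our own:

```python
import re

# Sample entry modeled on the maintenance.log excerpt above.
SAMPLE = """\
CRS is in maintenance mode - session rejected
session id: R-2014/02/28-1
session type: bsm
session desc: Backup
start date: 2014-02-28 09:22:44
owned by: DPDOM\\ADMINISTRATOR pid=0
"""

def parse_rejected_session(text):
    """Collect the 'key: value' lines of one rejected-session entry
    from maintenance.log into a dict."""
    entry = {}
    for line in text.splitlines():
        m = re.match(
            r"(session id|session type|session desc|start date|owned by):\s*(.+)",
            line,
        )
        if m:
            entry[m.group(1)] = m.group(2)
    return entry

print(parse_rejected_session(SAMPLE)["session type"])  # bsm
```

Parsing the log this way makes it easy to list which sessions were rejected and need to be re-run after leaving Maintenance Mode.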
Getting into this single-user mode was difficult to achieve in older versions of Data Protector and required a lot of manual steps, up to renaming DP binaries and directories on the Cell Manager, to ensure that no interactive or scheduled job could be started during the planned maintenance time.
Option “-GracefulTime” allows specifying a timeout window before entering Maintenance Mode. Details are explained on the following page. In a MoM environment it is possible to put the whole MoM with all client cells into DP Maintenance Mode by using the “-mom” option.
omnisv -maintenance 20
The omnisv -status output will change and show that CRS is in Maintenance Mode:
omnisv -status
If Maintenance Mode is active, open a CMD window with administrative rights and run:
CLI: omnisv -maintenance -stop / -mom_stop
Graceful Period
• Defined by global variable MaintenanceModeGracefulTime (default: 5min)
• Can be overwritten by omnisv -maintenance <gracefultime>
• No new sessions are allowed to start, but running sessions have predefined time to finish
Shutdown Period
• Defined by global variable MaintenanceModeShutdownTime (default: 5min)
• No omnisv option available
• IDB queries from CLI/GUI still possible
• Running sessions will be aborted and have predefined time to finish before getting cleaned up
Starting Maintenance Mode is not an abrupt mode switch. In order to allow ongoing sessions to complete but prevent new sessions from starting, the transition into Maintenance Mode takes two phases:
# MaintenanceModeGracefulTime=WaitForInMinutes
# default: 5 minutes
# Defines the length of Phase 1 Graceful Period
# MaintenanceModeShutdownTime=WaitForInMinutes
# default: 5 minutes
# Defines the length of Phase 2 Shutdown Period
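The two-phase transition can be sketched as a tiny state machine driven by the two global parameters above. This is an illustrative model only; the class and method names are our own, not DP internals:

```python
from dataclasses import dataclass

@dataclass
class MaintenanceTransition:
    graceful_min: int = 5   # MaintenanceModeGracefulTime default (minutes)
    shutdown_min: int = 5   # MaintenanceModeShutdownTime default (minutes)

    def phase(self, minutes_elapsed):
        """Which phase the cell is in, given minutes since omnisv -maintenance."""
        if minutes_elapsed < self.graceful_min:
            # Phase 1: no new sessions, running sessions may finish
            return "graceful"
        if minutes_elapsed < self.graceful_min + self.shutdown_min:
            # Phase 2: running sessions are aborted and cleaned up;
            # IDB queries from CLI/GUI are still possible
            return "shutdown"
        # Transition complete: CRS rejects all session requests
        return "maintenance"

t = MaintenanceTransition()
print(t.phase(0), t.phase(5), t.phase(10))  # graceful shutdown maintenance
```

With omnisv -maintenance 20, the graceful phase would instead last 20 minutes before the shutdown phase begins.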
IDB Backup
The IDB is a critical component and must be protected, therefore regular
IDB backups are essential! DP IDB backups are full online backups.
CLI:
omnib -idb_list <name> -barmode <full/incr>
IDB Backup
The Data Protector IDB is a very critical component and must be protected by regular backups like
your production data. Therefore regular IDB backups need to be performed.
• In addition it includes Auditing and Log files, DP Event Logs and the ZDB/IR databases
The embedded PostgreSQL database supports a hot backup mode, which allows performing an IDB backup while other backups and restores are still running, so no downtime is required while the IDB is backed up. Full Data Protector functionality is available while the IDB backup is running.
The Data Protector IDB backup is a separate backup type, so it cannot be mixed with the backup of
any other object. The creation of a separate Internal Database backup specification is required.
To create it, switch to the Backup context, click Internal Database, and select Add Backup to start the backup specification configuration wizard. In the initial window use the Blank IDB Backup template and click Next to see the Application Configuration window. Everything is grayed out, so there is nothing to configure for an IDB backup; hit Next again to get to the Source window.
Select the Backup devices in the next window. It is recommended to configure a separate pool for
IDB backups to keep these media separated from other media for easy access in case of an IDB
issue.
Select the regular backup options like Data and Catalog Protection or Encryption.
In addition, it is possible to configure an IDB consistency check before the IDB is backed up. If this check finds any issues with the IDB, the session is aborted. The executed command is:
A good rule is to back up the IDB twice, once with and once without the check. If any IDB corruption is found, it is better to have at least a backup of the (corrupted) IDB, which might be restored and fixed, than to have no backup at all.
A second IDB backup option triggers the truncation of backed-up IDB transaction log files.
Let’s have a deeper look at the IDB transaction log handling to understand this option.
These logs are Write Ahead Logs, so logs are written before changes are committed to the database. Data Protector backs up these logs and deletes them. In addition, it updates some metafiles to track which log belongs to the last full backup (*.backup file) and which are the last 3 backed-up log files, under: DP_IDB\pg\pg_xlog\archive_status
Transaction logs can be archived automatically, so a copy of each transaction log is available under: DP_IDB\pg\pg_xlog_archive
Note:
Depending on the available overall device concurrency, all IDB objects beside ConfigurationFiles are backed up in multiple streams (:0, :1, … :n).
Depending on the overall available backup device concurrency, all objects beside the Configuration Files are backed up in multiple backup streams.
Example: If two File Library writers are used, each with a concurrency of 3, an overall concurrency of 6 is available. So each object is backed up in 6 parallel streams, seen by the :0, :1, :2, :3, :4, :5 suffix on each object, e.g. vm9.dpdom.com:DPSPECs:0, … vm9.dpdom.com:DPSPECs:5
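The stream-suffix naming in the example can be generated mechanically from the writer count and per-writer concurrency; a sketch (the function name is our own):

```python
def stream_object_names(host, obj, writers, concurrency_per_writer):
    """Each IDB object (beside ConfigurationFiles) is split into as many
    parallel streams as the overall device concurrency, with :0..:n-1
    suffixes appended to the object name."""
    overall = writers * concurrency_per_writer  # e.g. 2 * 3 = 6
    return [f"{host}:{obj}:{i}" for i in range(overall)]

# Two File Library writers with concurrency 3 each -> 6 streams
names = stream_object_names("vm9.dpdom.com", "DPSPECs",
                            writers=2, concurrency_per_writer=3)
print(names[0], names[-1])  # vm9.dpdom.com:DPSPECs:0 vm9.dpdom.com:DPSPECs:5
```

The same arithmetic explains why changing device concurrency changes the number of :n objects you see in the session output.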
The DP IDB can grow to much more than 100 GB in size, so an incremental backup may be recommended as the everyday backup, with a full backup only once a week.
As mentioned, the DCBF form the biggest portion of the IDB, so backing up just the changes makes a big difference compared to backing up 100 GB or more in every backup.
During an offline or online IDB restore, the full backup chain is always restored, even during Disaster Recovery.
The slide shows the three possible ways to restore an IDB backup. All these ways are explained
with more details on the following pages.
The IDB Online Restore can be performed like any other restore by using the DP GUI. It is possible to restore all IDB components within one restore session, online, by using the GUI.
The Internal Database window lets you configure the restore of all IDB components besides the Configuration Files. Configuration Files are backed up as part of the IDB backup but do not actually belong to the IDB, so they are kept separate here.
1. Ensure that Restore Internal Database is checked (Default: Yes) and specify a Restore port
used for IDB Recovery only (Default: 7114). This port is reserved for “Production” restores
only, so in case you do not want to use this IDB later as new IDB (see point 4) you need to
specify another port than 7114. The restored IDB will continue to use the ports 7112, 7113
and 7116, so as mentioned this port is used for IDB recovery only.
2. The IDB restore cannot overwrite the existing running IDB, so the restore needs to run into a new location. Specify this location here, but be careful: this location might be used later as the location for the new IDB (see point 4), so C:\temp might not be an adequate place. All PostgreSQL databases (pg, idb and jce) will be restored into this location. It is possible to specify another mount point or drive letter, but as mentioned, only the PostgreSQL databases are restored into this location.
3. Before the IDB can be started, it needs to be recovered. Trigger the IDB recovery here (Default: Yes)
4. Here a decision needs to be made whether the restored IDB should become the new IDB. If this option is checked (Default: Yes), the currently running IDB is shut down after the restore and the restored database is used, so ensure that the specified location meets the requirements. The PostgreSQL configuration file is automatically updated to reflect the new location.
Note: Restore Internal Database will restore the PostgreSQL datafiles and logfiles, the SMBF and all data that belongs to the DPSPECs object (e.g. SIBF, DP Event Logs, ..)
5. The DCBF can be restored together with the other IDB parts (Default: Yes).
Note: The DCBF location is NOT updated in the IDB if a restore to a new location is specified. Ensure that the DCBF are still available in their ORIGINAL location after the restore, or modify the DCBF locations by using omnidbutil.
7. At the end it is possible to specify the restore point in time (Default: Latest possible state). Based on this selection, DP will decide which IDB backup session is restored.
Important: An IDB Online Restore temporarily requires twice the system resources and disk space. Depending on the restored components and options, it can be very demanding (especially in case of a DCBF restore into a new location).
1. Include configuration file restore in the IDB restore (Default: Yes)
2. Use the same IDB backup as used for the database restore, or manually specify the IDB backup to restore (Default: same)
3. If required, deselect directories and folders (Default: all components selected)
The restore of the Configuration Files can be performed together with the IDB restore or without
the IDB part. Even in case of a combined restore a separate backup session can be selected for the
Configuration File restore.
1. First decide whether the configuration files should be restored together with the IDB restore (Default: Yes)
2. Next, select the IDB backup session you want to restore. Either use the same session as used for the IDB restore or manually specify the IDB backup session to restore (Default: same as the IDB restore)
4. It is possible to select a new location for the restore (Default: original). Similar to the restore of other IDB components, DP will not update any internal pointer if a new directory is specified, so the new IDB still expects the configuration files in the original location.
5. Restored files and directories might already exist in the restore location. Similar to a normal filesystem restore, specify how DP should handle this situation (Default: Keep most recent)
Now you can start the restore by clicking Restore, or check the other tabs (Options, Media and Devices) for restore tuning.
Component 2: Detail Catalog Binary Files
Component 3: Configuration Files
It is possible to restore all 6 IDB backup objects within one restore session.
If you want to restore only part of the IDB, consider the following. The DP GUI based restore allows
only a separate restore of:
The other IDB components can only be restored together under Restore Internal Database:
• PostgreSQL Datafiles
• PostgreSQL Logfiles
• DPSPECs
• SMBF
During an IDB restore it is possible to specify a new location for the restored IDB components. While the data is restored into these locations, it does not automatically become part of the restored and activated IDB.
The slide provides an overview of which restored IDB components are actively used after an IDB restore and which location is used.
As shown only the PostgreSQL databases with logs can be restored into a new location and will be
used as new IDB after restore.
All other components will only be restored into the new location, but the new IDB is still expecting
the data in the original directories.
So either copy the restored data back into the original folders or create links from the old folders to the new location. DCBF locations can be modified by the omnidbutil -modify_dcdir command, but such a command does not exist for the other IDB components.
Note: Using the IDB Online Restore it is not possible to fully relocate an IDB. Only the PostgreSQL DB and all DCBF folders can be relocated by an IDB Online Restore.
Prerequisites
Ensure that file obrindex.dat is available. Configure global variable RecoveryIndexDir to
get a second copy of that file created during IDB backup outside of the db80 folder.
Restore (standard)
• Trial run without restore to check available IDB backups:
omniofflr -idb -autorecover [-session ses-ID] -skiprestore
• Restore:
omniofflr -idb -autorecover [-session ses-ID] -force
Restore (advanced)
• Trial run without restore, with output redirected into a file (here parm.txt):
omniofflr -idb -autorecover [-session ses-ID] -skiprestore -save C:\tmp\parm.txt
• Modify the output file (e.g. change the restore device)
• Start the restore using the modified output file:
omniofflr -idb -force -read C:\tmp\parm.txt
The IDB Online Restore requires a running IDB to function properly. If the IDB is no longer usable or the DP services are down, only the IDB Offline Restore is possible. The IDB Offline Restore can only be done from the CLI via omniofflr.
For easy use, all information required to perform an IDB Offline Restore is collected during an IDB backup and stored outside of the IDB in a flat file named obrindex.dat
The default location of that file is:
UNIX : DP_IDB/logfiles/rlog
Windows : DP_IDB\logfiles\rlog
In case of a disk failure that impacts the IDB, this file will also be lost. Using the global variable RecoveryIndexDir it is possible to specify a path in which a second copy of that file is created.
# RecoveryIndexDir=FullPathToTheBackupDir
# default: none
# This option sets backup directory for the obrindex.dat file.
# If this directory pathname is writable, the recovery index
# will be created/appended into this directory in addition to
# the default recovery index file.
Note: If the Media Agent is running locally on the Cell Manager, no DP service is required to perform the IDB Offline Restore. If a LAN restore is performed, the DP Inet service needs to be up and running on the Cell Manager.
The command omniofflr can be used in various restore scenarios that cannot all be discussed here. The starting point is a trial run that saves the used parameters in an output file:
Now modify the output file (e.g. change the restore device)
Note: There is no omniofflr option to specify the current obrindex.dat location. In case of
problems recreate the original path and copy obrindex.dat into this location.
Option “-read” only works with the converted obrindex.dat file created using the “-save” option.
For more information refer to the OLH or contact HP Support for assistance.
Backup
• Both (File system & IDB backup) sessions are needed
for DR image creation
• In case of incremental backups similar to a filesystem restore the full chain is restored
• IDB needs to contain info of an earlier full file system backup
Restore/Recovery
• After file system restore is done, IDB restore sessions starts
• Configuration Files are always restored only once & only from the last session in the
chain
DR Preparation
For the complete recovery of the Cell Server, all critical objects have to be backed up in order to ensure that the disaster recovery (DR) procedure is valid. This includes a complete file system backup and an IDB backup. It is important that the IDB backup session given to the DR Wizard contains the session of the earlier full file system backup. It can be an incremental or a full backup session. In this way the IDB contains information about that restored file system backup.
DR Wizard
During Disaster Recovery, first the file system is restored (including the full and all incremental
backups) and then the IDB is restored with all IDB components, again with the full restore chain,
so all full and incremental backups are restored.
IDB components that are backed up in full mode during an incremental backup are restored from
the last incremental session only.
IDB Reports
IDB reports
The IDB Size Report provides a detailed overview of the current size and the maximum limit of
all IDB components:
The IDB Size Report can be executed from the CLI via omnirpt:
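For example (a sketch; db_size is the report name documented for omnirpt, and -tab selects tabular output - see omnirpt -help for all report types and formats):

```shell
# Generate the IDB Size Report on the Cell Manager in tabular format.
omnirpt -report db_size -tab
```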
• IDB Limits
Reaching the limit of any of the MMDB or CDB parts
• IDB Corrupted
One or more IDB parts are reporting a corruption
Notifications
Data Protector allows the configuration of notifications, which will be send out if the
configured event occurred, like a start of a backup session or a backup failed.
As a default all IDB notification are sent to the DP Event Log in case an event triggered this
notification. So make sure to check the DP Event Log on a regular base for such events.
It is possible to tune these notifications. Within the DP GUI switch to the Reporting context, expand
Notifications and click on the notification you want to modify. Perform the modification and hit
Apply to save your changes.
The following checks are performed once a day as part of the Daily Healthcheck:
The IDB Corrupted check is built into the omnidbcheck binary, so any time it is executed and a
problem is found, a notification will be sent out, e.g. during an IDB backup if IDB consistency is
checked before the backup.
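A manual consistency check can be started from the CLI at any time (a sketch; the -extended option is the most thorough check documented for omnidbcheck, see omnidbcheck -help):

```shell
# Run an extended consistency check of the Internal Database manually.
omnidbcheck -extended
```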
Contents
Module 15 Deduplication 1
15–3. SLIDE: Deduplication technology ............................................................................................ 2
15–4. SLIDE: How Deduplication works ............................................................................................ 3
15–5. SLIDE: Supported Deduplication Configurations .................................................................... 5
15–6. SLIDE: Target side Deduplication ............................................................................................ 6
15–7. SLIDE: Source side Deduplication ........................................................................................... 8
15–8. SLIDE: Server side Deduplication ............................................................................................ 9
15–9. SLIDE: Multi side Deduplication ............................................................................................ 10
15-10. SLIDE: Backup-to-Disk (B2D) devices .................................................................................. 11
15-11. SLIDE: Configure a Backup to Disk device 1/6 ..................................................................... 13
15-12. SLIDE: Configure a Backup to Disk device 2/6 ..................................................... 14
15-13. SLIDE: Configure a Backup to Disk device 3/6 ..................................................................... 15
15-14. SLIDE: Configure a Backup to Disk device 4/6 ..................................................................... 16
15-15. SLIDE: Configure a Backup to Disk device 5/6 ..................................................................... 17
15-16. SLIDE: Configure a Backup to Disk device 6/6 ..................................................................... 18
15-17. SLIDE: Gateway Configuration for Source Side Deduplication ............................................ 19
15-18. SLIDE: Gateway Configuration for Target Side Deduplication............................................. 20
15-19. SLIDE: Gateway Configuration for Server Side Deduplication ............................................. 21
15-20. SLIDE: Creating a backup specification ................................................................................ 22
15-21. SLIDE: Running Backup with Data Deduplication ................................................................ 26
15-22. SLIDE: Creating an Object Replication specification ............................................................ 27
Module 15
Deduplication
Deduplication technology
Deduplication …
identifies and eliminates redundancy by writing only unique data to storage
HP StoreOnce Backup …
• is HP’s federated Deduplication storage solution
• consists of an expanding range of physical and virtual
deduplication appliances (HP StoreOnce 4500, 6500, VSA, ...)
• supports the HP StoreOnce Catalyst Software Interface
HP StoreOnce Catalyst …
• allows the control of StoreOnce Deduplication functions
through the backup application for a single point of control
• HP StoreOnce Catalyst can be used with HW based solutions
(4500, 6500, VSA,..) or SW based solutions (HP Data Protector)
• supports Data Replication to other StoreOnce devices without
the process of data rehydration
Deduplication technology
Data Deduplication is a special compression algorithm that identifies and eliminates duplicate
data. When this technology is used for backups, only unique data is written to the storage device.
Besides reducing cost, it also simplifies the backup concept for branch and remote offices,
because only unique data blocks need to be sent to the central data center.
HP StoreOnce Backup is a world-leading federated deduplication storage solution. The flexible,
federated and very scalable architecture offers backup solutions for virtualized environments
and remote offices up to whole data centers. But it is more than hardware. The main advantage
over other solutions is the built-in HP StoreOnce Catalyst Software Interface.
Using the HP StoreOnce Catalyst interface, Data Protector is able to take control of all
deduplication functions during backup and restore and for data replication between different
HP StoreOnce Backup solutions.
In addition, the HP StoreOnce Catalyst library was integrated into a Data Protector software
deduplication solution that allows running deduplication backups without using StoreOnce
appliance systems.
More information about HP StoreOnce Backup solutions can be found on HP’s webpages under:
http://hp.com/go/storeonce
Component 1: Chunking and Hash-Key Calculation
Component 2: Hash-Index and Chunk Store
1. The incoming data stream is split into chunks and a hash-key is calculated for each chunk.
2. The Hash-Index is queried: is this hash-key already known?
3. Yes: store only the pointer. No: store the new hash-key in the Hash-Index and the data chunk in the Chunk Store.
The location of Component 1 depends on the deduplication configuration; the Chunk Store exists as a HW/SW store.
This slide provides a high-level overview of how the HP StoreOnce deduplication solution works.
The backed up data shows up as a data stream on the left side. The stream of data is independent
of file boundaries or file types; it is a continuous stream of data provided by the Disk Agent. This
stream is divided into chunks of a variable size, between 2 KB and 10 KB with an average size of ~4 KB.
The process is more complex than shown on this slide and highly optimized, e.g. a set of compressed
hash-keys is sent to the Hash-Index for comparison, instead of sending single items.
Component 1: The system that performs the Data chunking and hash-key calculation
Component 2: The system that stores the Hash-Index Database and unique Data Chunks
Between these two components only a low-bandwidth connection is required. The amount of data
is typically not high, because only unique data and hash-keys are transferred.
Components 1 and 2 can be located on the same or on different systems. If a StoreOnce appliance
system is used, it is possible to send all the data to the appliance system and perform the
deduplication there, so both components are located on one system.
Starting with Data Protector 6.21, the functionality of Component 1 is part of the Data Protector
Media Agent.
The supported configurations are explained with more details on the following pages.
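The chunk/hash flow described above can be sketched in a few lines of Python. This is a simplified illustration, not HP code: real StoreOnce uses variable-size chunking (~2-10 KB, average ~4 KB) and an optimized hash-index protocol, while this sketch uses fixed-size chunks and an in-memory dictionary.

```python
import hashlib

def dedup_store(stream: bytes, chunk_size: int = 4096):
    """Simplified sketch of hash-based deduplication (fixed-size chunks)."""
    hash_index = {}   # hash-key -> position in chunk store (Component 2)
    chunk_store = []  # unique data chunks only
    pointers = []     # per-chunk references that describe the full stream

    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()  # Component 1: hash-key calculation
        if key not in hash_index:                # unknown key: store key + data chunk
            hash_index[key] = len(chunk_store)
            chunk_store.append(chunk)
        pointers.append(hash_index[key])         # known key: store only the pointer
    return chunk_store, pointers

# A stream with heavy redundancy deduplicates well: 10 chunks referenced,
# but only 2 unique chunks physically stored.
data = b"A" * 4096 * 8 + b"B" * 4096 * 2
store, ptrs = dedup_store(data)
print(len(ptrs), "chunks referenced,", len(store), "chunks stored")
```

Restoring the stream is simply a matter of following the pointers back into the chunk store, which is why only unique data ever needs to cross a low-bandwidth link.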
The main separator between these configurations is the place where the deduplication occurs:
• Source Side Deduplication - directly on the source or application system
• Server Side Deduplication - on a separate system, called the Gateway system
• Target Side Deduplication - directly on the StoreOnce appliance or DP SW Store
The process of data deduplication is CPU intensive and has an impact on running applications.
This impact should be considered when selecting the configuration.
Another important separator is the network load. Depending on the configuration, the backed up
data is sent deduplicated or non-deduplicated over the network. This is indicated by the arrow
icons on the slide: a thin arrow means deduplicated data transfer, a broad arrow means the data
is sent non-deduplicated over the network.
Note: If a Data Protector SW Store is used, the Media Agent and the StoreOnce SW
Deduplication Agent can run on one and the same system.
The easiest way to use the advantages of deduplication without major changes to your backup
concept is Target side Deduplication: use a StoreOnce Backup appliance system or configure a
Data Protector client system running a StoreOnce Software Store. The slide illustrates both
possibilities.
The deduplication takes place on the target, the StoreOnce appliance or DP SW Store, so a high-
bandwidth connection is required to forward the data from the source/application system to a
Gateway system, which transfers it again to the StoreOnce appliance or DP SW Store system.
Only in this setup, the Gateway system that runs the Data Protector Media Agent just forwards
the data to the appliance system without performing the chunking and hashing operations.
Disk and Media Agent can run together on the same system. In contrast to a Source side
Deduplication configuration, here the Media Agent just forwards the data without performing
deduplication.
Media Agent and Data Protector StoreOnce Software Deduplication Agent can run on the same
system.
Both mentioned configurations eliminate the need for a dedicated Gateway system and avoid
sending non-deduplicated data twice over the network (from the source system to the Gateway
system and from the Gateway system to the StoreOnce HW/SW Store).
Note: It is not possible to install any DP agent on a StoreOnce appliance system, so the Media
Agent needs to run on a separate Gateway system or on the source system.
(Slide diagram: Source side deduplication via the StoreOnce Catalyst interface - the DP StoreOnce
Software Deduplication Agent runs next to the Data Protector agent on the client; the Low
Bandwidth link carries deduplicated data, the High Bandwidth link non-deduplicated data.)
Source side Deduplication is used when the application system is running the Disk Agent and the
Media Agent and the deduplication is performed locally on that system.
In this setup only deduplicated data is forwarded to the StoreOnce appliance or DP SW Store, so a
backup over a low-bandwidth connection is possible.
The deduplication will have an impact on the source system, so be sure to include this impact in
your server sizing.
Source side Deduplication requires the configuration of a specific Gateway in the Data Protector
B2D device. This Gateway is using localhost as client name and can be used as generic gateway for
all clients that perform Source side Deduplication. All other Gateways that are used for Target and
Server side Deduplication need to have a valid hostname configured as Client name within your B2D
device.
Important: A direct restore from the StoreOnce appliance or DP SW Store would require a
high-bandwidth connection. Consider the use of Object Copy, Replication or
Consolidation to be prepared for restores.
Server side Deduplication is the most commonly used deduplication configuration. In this
configuration, multiple source systems forward their data to a dedicated Gateway system that
deduplicates the data and forwards it over a low-bandwidth connection to the StoreOnce appliance
or DP SW Store.
In this setup the deduplication load is offloaded to a separate server and has no impact on the
application system.
This setup is often used in branch office or remote office backup concepts.
As in the example given for Target side Deduplication, the Gateway system can forward the
deduplicated data to a local small/medium-sized StoreOnce Backup appliance or a virtualized
StoreOnce Virtual Storage Appliance (VSA). That appliance then transfers the backed up objects
over a low-bandwidth connection via Object Replication to other StoreOnce appliances for DR
preparation, or directly to a remote StoreOnce appliance located in the central data center.
Note: Deduplication is done locally in the branch offices and replicated to Central Datacenter using Low Bandwidth Replication
The slide is just an example that illustrates the backup concepts from the previous pages.
Office 1 uses Server side Deduplication. Multiple application/source systems send their data to a
dedicated Gateway system, which deduplicates the data and forwards it via a low-bandwidth link
to the StoreOnce appliance system in the main data center.
Office 2 is a larger office that also uses Server side Deduplication. Multiple application/source
systems send their data to dedicated Gateway systems, which deduplicate the data and forward it
to a local StoreOnce appliance system. The backed up data is replicated via Object Replication
into a larger StoreOnce appliance system located in the main data center.
In addition, a second Object Replication replicates company-critical data from the main data
center into a standby DC for disaster recovery preparation.
There are many other possibilities, each with its advantages and disadvantages.
Check the web under http://hp.com/go/storeonce or http://hp.com/go/dp for updated information
about StoreOnce and Data Protector.
Support of StoreOnce deduplication backups required the introduction of a new logical device
type in Data Protector - the Backup to Disk (B2D) device.
A B2D device backs up data to disk-based devices only. In this module we put the focus on the
StoreOnce deduplication integration, so the usable disk storage is either managed by a DP SW
Store or by an HP StoreOnce Backup appliance. Refer to the Platform and Integration Matrix to
get the list of supported systems.
A gateway system is a system with a Data Protector Media Agent installed, which performs the
deduplication and forwards the data to the store. Each host that is acting as gateway system for
Target and Server side Deduplication requires a dedicated gateway configuration for this host
within the B2D device.
In case Source side Deduplication is used only one gateway needs to be configured for each B2D
device. This implicit gateway is using localhost as client name and can be used for Source side
Deduplication on any host with a Media Agent installed.
Note: It is not possible to configure the same store within two different B2D devices.
Licensing
B2D devices are licensed via capacity-based licensing, similar to file library licensing.
B2D devices require a capacity-based Advanced Backup to Disk LTU, based on the usage of
deduplicated data on disk.
- B7038AA – 1 TB
- B7038BA – 10 TB
- B7038CA – 100 TB
For more information about capacity-based licensing, see the HP Data Protector Installation and
Licensing Guide.
In the Devices & Media context, right click on Devices.
Select Backup to Disk as Device Type.
Click Next to continue.
The following slides will explain the configuration steps of a Backup to Disk (B2D) device with the
focus on the StoreOnce Integration.
• In order to configure a B2D device start the Data Protector GUI and change to the Devices
and Media context. Right click on Devices and select Add Device
• Under Device Name specify a descriptive name for the B2D device and add a description
that helps to identify this device
• Select Device Type : Backup to Disk
• Select the appropriate Interface type:
StoreOnce software Deduplication… Interface to a DP SW Store
StoreOnce Backup system… Interface to a HP StoreOnce Backup appliance
• Click Next to continue.
Store:
Configure the Store of a HP
StoreOnce Backup Appliance or
DP StoreOnce Software
Deduplication Client
Gateways:
Configure one or more
Gateway systems and their
Deduplication method (Source,
Server, Target Deduplication)
This is the most important window. It offers the creation/selection of the Data Store in the Store
section and the configuration of the client gateways in the Gateways section.
The B2D Store configuration will focus on the configuration wizard for a HP StoreOnce Backup
appliance, as it offers more complexity.
• Under Deduplication system enter the FQDN or IP address of the appliance system
• Enter Client ID and Password, if this was configured for your Backup appliance. As a default
no information is required in these two fields.
• Select the Select/Create Store button to open the Select store window
• Within the Select store window it is possible to create a new store or browse the existing
stores. The table inside displays existing stores and the B2D device they belong to. Either
select an unallocated store or create a new one.
Filter settings help to narrow the selection.
Available filters are:
- Encryption: list all encrypted stores
- Federation: list all federated stores
(new feature in DP 9.00)
If Create new store is selected, specify the new store name and check Encrypted store
if the new store should be encrypted on the StoreOnce Backup appliance level.
The B2D Gateway configuration section allows the configuration of the implicit Source side
Deduplication gateway and the hostname dependent gateways used for Target and Server side
Deduplication. The detailed Gateway setup of each Gateway type is explained later in this module.
The Add Gateway window will show up after clicking the Add Button. It looks pretty similar to a
device property window and allows the configuration of a Gateway Name and Description, but also
options like Block Size or Media Pool.
Optional:
Limit the Number of Media Agents
that can connect to this Store
Optional:
Define Soft Quota (not enforced) for
Backup Size and Store Size
Optional:
Define a size for the virtual medium
within the Store (Default: Unlimited)
Optional:
Select to enable one object per
virtual Store medium
Max Numbers of Connections per Store limits the number of Media Agents that can connect to that
store. The default is unlimited. An object consolidation of one full and 8 incrementals stored in
that store would require 9 connections, plus one connection for the synthetic full, i.e. 10 in
total; if a limit is set, ensure that it can support the planned backup concept.
Backup Size Soft Quota and Store Size Soft Quota are soft limits that trigger a message in the
session report if the specified quota value is exceeded. These settings are not enforced, so the
session will continue.
Store Media Size Threshold limits the size of the created store-internal media to the specified
setting. The default setting is unlimited.
Single Object per Store Media causes the creation of a separate medium per object, e.g. C-drive,
D-drive or Configuration. It is similar to a non-appendable medium on an object level.
Note: All listed options in the B2D Settings window are optional, no input is required.
Note: A B2D Device can only manage one Store. In case multiple stores exist,
a B2D device is required for each configured store.
The B2D Summary window lists the configured store; no configuration is possible here.
The large table accidentally suggests that multiple stores can be configured for a B2D device.
This is not possible: only one store can be configured as the Data Store for a B2D device,
regardless of whether a DP SW Store or an HP StoreOnce Backup appliance is used.
If multiple stores exist and should be used for deduplication backups, multiple B2D devices
need to be configured.
Source side Deduplication requires only the configuration of one gateway. This is why only a
check box exists that enables a Source side Deduplication gateway.
The Source side Deduplication properties window allows the configuration of the Gateway name
and Description. Gateway system is set to localhost. This option is grayed out and cannot be
changed.
As stated before, this gateway can be used for all clients and will cause Source side Deduplication
on all selected backup objects.
Note: Mark each configured Gateway and click on Check to validate the connection.
The Gateway status needs to be listed as OK.
Target and Server side Deduplication require the configuration of explicit gateways. The selection
box lists all DP client systems with a Media Agent installed. Select the client system that should
act as a Gateway within a Server or Target side Deduplication configuration and click on Add to
configure it.
Target side Deduplication is the default gateway configuration. There are no additional
configuration steps required. The gateway shows up as Server side deduplication=No in the
gateway table.
Check the connection to that gateway: mark the gateway and click on Check to validate the
connection. If it shows up with Status=OK in the table, the check succeeded.
Note: Server Side Deduplication option can be switched on/off within a Backup
specification under Device Properties
After the configuration of all gateways for Target side Deduplication, identify the gateways that
should be used for Server side Deduplication. Mark these gateways and select Server side
deduplication from the menu.
To complete the configuration, proceed with the other gateways in the same way.
Creating a backup specification that will use deduplication devices is very similar to the
configuration of a normal backup specification, besides the fact that gateways need to be used
instead of tape drives or writers.
1. Repeat steps 1-6 from Target side Deduplication, just ensure that the Server-side
deduplication option is checked.
4. Under Source select the objects for backup and click Next.
5. Under Destination select the implicit Source side Deduplication Gateway that
belongs to the B2D device you want to use for backup. Note that all other gateways
are grayed out and under Properties the Server side deduplication option is
deactivated.
Note: If the checkbox Source side Deduplication was missed in the Create new
Backup window, the option can still be checked in the Options window under
Backup to Disk Device options as shown below.
Start the backup as usual and check the session report. The main difference compared to a normal
backup is the gateways used, with their gateway IDs. To keep a good overview, it is highly
recommended to use "speaking" gateway names for identification that contain details like the
B2D device used, the configured store and the gateway hostname.
The most important thing for a deduplication backup is the deduplication ratio, the ratio between
backed up data and stored data. The deduplication ratio is listed in the session report:
• for each object and
• for the whole session.
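The ratio is simply backed up data divided by physically stored data; the figures below are hypothetical example values, not from a real session report:

```python
def dedup_ratio(backed_up_gb: float, stored_gb: float) -> float:
    """Deduplication ratio = amount of backed up data / amount of stored data."""
    return backed_up_gb / stored_gb

# Example: a 500 GB backup that added only 25 GB of new unique data
# to the store achieved a 20:1 deduplication ratio.
print(f"{dedup_ratio(500, 25):.0f}:1")
```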
Depending on the backup concept used, there might be a need to replicate the backed up data into
another StoreOnce Backup appliance, for example in an environment where backed up data from a
remote office should be replicated into the central data center. This feature is called Object
Replication and can be configured using the Object Copy configuration wizard. The main benefit
of Object Replication over a normal Object Copy operation is that the data does not need to be
dehydrated/un-deduplicated during transfer.
For the configuration of an Object Replication specification change to the Object Operations context
and click on Add on any of the available automated or interactive Object Copy types.
Follow the wizard as for a normal Object Copy specification configuration. In the Destination
Devices window it is possible to show only devices capable of replication, as Object Replication
only works between StoreOnce appliances. Under Options activate the option Use Replication as
shown on the slide above. This triggers the start of an Object Replication instead of an Object
Copy. Notice the difference in the session report sample on the slide.
Contents
Module 16 — Access control and Security 1
16–3. SLIDE: Access control and security levels .............................................................................. 2
16–4. SLIDE: Access Control ............................................................................................................. 3
16–5. SLIDE: User Rights .................................................................................................................. 4
16–6. SLIDE: User Groups ................................................................................................................. 7
16–7. SLIDE: The Admin Group ......................................................................................................... 8
16–8. SLIDE: The Operator Group .................................................................................................... 9
16–9. SLIDE: The User Group .......................................................................................................... 10
16-10. SLIDE: Custom Groups .......................................................................................................... 11
16-11. SLIDE: Default group permissions ....................................................................................... 12
16-12. SLIDE: Add User Group ......................................................................................................... 13
16-13. SLIDE: Add Users .................................................................................................................. 15
16-14. SLIDE: User Restrictions ....................................................................................................... 16
16-15. SLIDE: LDAP user integration ............................................................................................... 19
16-16. SLIDE: Client and Cell security .............................................................................................. 21
16-17. SLIDE: Certificate based DP GUI connection ........................................................................ 23
16-18. SLIDE: Network Access—INET (HP-UX) ............................................................................... 24
16-19. SLIDE: Network Access—INET (Windows) ........................................................................... 25
16-20. SLIDE: Inet User Impersonation ........................................................................................... 26
16-21. SLIDE: Creating Impersonator Users .................................................................................... 28
16-22. SLIDE: Changing the Web Password .................................................................................... 32
Module 16
Access control and Security
User Restriction
Out of the box, Data Protector is both secure and insecure from different perspectives. By default,
the only user who is able to use the Data Protector GUI and CLI is the user who installed the
product - typically root on UNIX and a Domain or Local Administrator on Windows. All other users
need to be configured into Data Protector groups that own certain privileges to be able to perform
designated operations in Data Protector.
On the other hand, the installed Data Protector agents (Disk Agent, Media Agent, ...) are configured
to respond to any Session Manager that attempts to connect to them. What appears to be a huge
security hole can be easily closed or restricted so that only the current Cell Manager or a set
of defined systems is able to access these clients within a defined disaster recovery, site failover
or load balancing concept.
Access Control
Access to Data Protector’s functional areas, such as Client Installation, Device Configuration,
Backup, and Restore, is strictly controlled by the allocation of specific permissions to Data
Protector User Groups.
Dedicated operating system users, such as root, Administrator or Oracle DBA accounts, who
either directly start sessions or own sessions need to be configured as members of a Data
Protector User Group.
The Data Protector operations that the users are able to perform depend on the capabilities
assigned to the User Group to which they belong.
You can complement the user security layer provided by Data Protector user groups with
restrictions of certain user actions to certain systems of the cell. Such restrictions can be
configured in the user_restrictions file. They apply only to members of Data Protector custom user
groups other than the admin and operator group.
User Rights
[Slide: selected user rights include Reporting and notifications, Start backup, Start backup
specification, Save backup specification, Switch session ownership, Back up as root, Monitor, and
Abort]
User Rights
Data Protector provides a rich set of user rights to implement advanced security functionality. The
following user rights are available:
• Clients configuration: This user right allows the user to install and update Data
Protector software on client systems.
• User configuration: This user right allows the user to add, delete, and modify users
and user groups. Note that this is a powerful right!
• Device configuration: This user right allows the user to create, configure, delete,
modify, and rename devices. This includes the ability to add a mount request script to a
logical device.
• Media configuration: This user right allows the user to manage media pools and the
media in the pools, and to work with media in libraries, including ejecting and entering
media.
• Reporting and notifications: This user right allows the user to create Data Protector
reports. To use Web Reporting you also need a Java user under the applet domain in
the admin user group.
• Start backup: This user right allows users to back up their own data as well as monitor
and abort their own sessions.
• Start backup specification: This user right allows the user to perform a backup using a
backup specification, so the user can back up objects listed in any backup specification
and can also modify existing backups.
• Save backup specification: This user right allows the user to create, schedule, modify,
and save any backup specification.
• Back up as root: This user right allows the user to back up any object with the rights of
the root login on UNIX clients. This user right is effective only for UNIX clients. Backups
on Windows always run in the context of the Data Protector Inet service. See the
section Inet Impersonation for how the DP Inet service can run under a different
user's credentials.
• Switch session ownership: This user right allows the user to specify the owner of the
backup specification under which the backup is started. By default, the owner is the
user who started the backup. Scheduled backups are started as root on a UNIX Cell
Manager and under the Cell Manager account on Windows systems. This user right is
appropriate if the Start backup specification user right is enabled.
• Monitor: This user right allows the user to view information about any active session in
the cell and to access the IDB to view past sessions.
• Abort: This user right allows the user to abort any active session in the cell.
• Mount request: This user right allows the user to respond to mount requests for any
active session in the cell.
• Start restore: This user right allows users to restore their own data as well as monitor
and abort their own restore sessions. Users that have this user right are able to view
their own and public objects on the Cell Manager.
• Restore to other clients: This user right allows the user to restore an object to a
system other than the one from where the object was backed up.
• Restore from other users: This user right allows the user to restore objects belonging
to another user. It is effective only for UNIX clients.
• Restore as root: This user right allows the user to restore objects with the rights of the
root UNIX user. Restores on Windows always run in the context of the Data Protector
Inet service. See the section Inet Impersonation for how the DP Inet service can run
under a different user's credentials.
• See private objects: This user right allows the user to view and restore objects that
were backed up as private.
• KMS Key generation: This user right is required to manage the KMS Key database via
the omnikeytool binary.
User Groups
User Groups
A Data Protector User Group is a set of access rights that permit execution of certain portions of
Data Protector functionality. Each Data Protector user is a member of a User Group, and each User
Group has a set of user rights that are granted to every user in that group. Any number of User
Groups with their associated user rights can be defined as desired.
Data Protector provides three default user groups that provide the typical level of delegation and
control required by most customers:
• Admin
• Operator
• User
In addition it is possible to create custom user groups that own sets of privileges that perfectly fit
the needs of the managed environment.
The Admin Group is all-powerful. Members of this group have complete control of all Data
Protector Operations on the whole cell. When Data Protector is installed, the user account that was
used for installation is added to the Admin group.
Overall, four accounts are configured in the admin group on Windows after installation:
• "Initial cell administrator" (Account used at installation time)
• "CRS service account" (can be specified at Installation time, default account is the
same account that is configured as "Initial cell administrator")
• "Local System account on the Cell Manager" (SYSTEM user)
• "WebReporting"
Overall, two accounts are configured in the admin group on UNIX after installation:
• "root" (Cell Manager root account)
• "WebReporting"
If you require other users to have full control of the Data Protector Cell, they must be added to the
Admin group.
The Admin group can neither be modified in any way nor deleted, as it must always have full
control. It is not possible to apply any user restrictions to members of the admin group.
The Operator group has fewer capabilities than the Admin group. The members of the Operator
group are prevented from executing the following operations:
• Client system installation
• User Configuration
• Logical Device configuration
• Reporting and Notification configuration
Through Backup and Restore, Operators have privileges similar to those of admin group users!
The main purpose of the Operator group is to provide operators the ability to perform the day-to-
day operation of the Data Protector Cell. This is why the Operator group does not have any
Configuration permissions, as these are functions typically performed by the system
administrator.
By default, there are no users configured into the Data Protector Operator Group.
The permissions of the default Operator user group can be modified if required.
The User Group has permission only to initiate a restore of the user's own data. Those responsible
for backup must assign ownership of the backup job to allow a member of the user group
permission to see the data available for restore within the restore GUI.
Any media requests that accompany the restore session must be satisfied by the members of the
Operator or Admin groups.
Giving users the ability to restore their own data may be desirable in environments where users
have access to their own tape drives or libraries, or where data is always available, such as when
stored on large disk arrays.
No intervention on the part of the Admin or Operator group members is required to satisfy mount
requests, if the correct media is loaded in the device specified by the restoring user. By default,
there are no users configured into the Data Protector User Group.
Custom Groups
Custom Groups
In addition to the predefined default groups, Data Protector allows you to create your own groups.
You may choose to create custom groups that match the structure and requirements of your IT
department.
Example
The default Operator user group has all access rights except Clients configuration, User
configuration, Device configuration, and Reporting and notifications. The default User group
has only the Start restore access right.
An IT organization may require some sort of a hybrid solution where more senior users can format
tapes (media configuration), monitor, start backups, start backup specifications, mount prompt,
abort, and restore. In this case, a custom group can be created to satisfy this requirement. The
relevant users are then added to (or modified in) this group.
There are two ways to create custom user groups, allowing more flexibility.
The above slide displays the complete set of Data Protector permissions, as mapped to each of the
default user groups offered. Many of the permissions indirectly grant super-user capability and are
considered very powerful rights (e.g. Restore as root).
For complete definitions of each of the above user rights, see the chapter “User rights”.
Users and Groups can be created, modified, and deleted from the Data Protector GUI.
Modifications made directly to the users file are NOT supported. If no GUI is available, use the
omniusers command to add, modify, or remove users.
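For CLI-only setups, a sketch of the omniusers command is shown below. The option names follow the syntax documented for Data Protector 9.x, and the user and host names are invented for illustration; verify the exact options against the HP Data Protector Command Line Interface Reference for your patch level.

```shell
# Add a Windows domain user to the Data Protector admin group
# (names are illustrative; option syntax per the DP 9.x CLI reference)
omniusers -add -type W -name backupadm -usergroup admin \
          -group COMPANY -client cm.company.com

# List all configured Data Protector users
omniusers -list
```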
The “*” is displayed in the GUI as <Any>; this is a wildcard that may be used in any of the first 4
fields.
Example:
DP_CONFIG/users/UserList
The <DP_CONFIG>/users/ClassSpec file is somewhat more complex, and therefore, it is best not to
modify it manually. The ClassSpec contains the user rights assigned to each Data Protector group.
Each user right is assigned a numeric value. The sum of the numeric values for all user rights
granted to a group is stored, along with the group name, in the ClassSpec file.
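The additive encoding can be illustrated as follows. The numeric values below are invented for the example; the real values used by ClassSpec are internal to Data Protector and not documented here.

```shell
# Hypothetical numeric values for a few user rights -- NOT the real ClassSpec values
START_BACKUP=1
START_RESTORE=2
MONITOR=4
ABORT=8

# A custom group granted Start backup, Monitor, and Abort would be stored
# as the sum of the individual right values:
group_value=$((START_BACKUP + MONITOR + ABORT))
echo "$group_value"    # prints 13
```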
Additionally, the integrations with third party databases, such as Oracle, typically require that a
special user be added to the admin group or operator group to allow the backups to be performed
by the database administrator's user id. This will require that the backup specifications are "owned"
by that user as well.
Add Users
To add a user from the GUI:
1. Select the Users context.
2. Right-click the desired User Group.
3. Select Add/Delete Users.
4. Enter Type, Name, Group/Domain, Description, and Client.
Add Users
All operations in Data Protector are permitted only if a Data Protector user account exists for the
logged-in OS user account, and the Data Protector group this account belongs to owns the privilege
to perform the operation. By default, only members of the Data Protector admin group are
eligible to create new Data Protector users for existing OS accounts.
The easiest way to configure a user for Data Protector is to use the Data Protector GUI.
Note: All GUI input fields accept <Any> as a wildcard to ease user configuration in
large environments, but remember that using wildcards might compromise your
cell security.
User Restrictions
Example:
#user_restrictions file
#Users from the following user groups are allowed
#to access the hosts listed below as long as they belong
#to the same system group
group1: core_group
group2: net_group mail_group dba_group
User Restrictions
You can complement the user security layer provided by Data Protector user groups with
restrictions of certain user actions to certain systems of the cell. Such restrictions can be
configured in the user_restrictions file. They apply only to members of the Data Protector user
groups other than the admin and operator group. When configured, the file is located at the
following location on the Cell Manager:
UNIX: DP_CONFIG/cell/user_restrictions
WINDOWS: DP_CONFIG\cell\user_restrictions
By default, the file does not exist and must first be created as a template file in the mentioned
directory from the command line.
The created file is just a template. It contains one system group that owns all configured Data
Protector client systems and all Data Protector groups. This setting ensures that there is no
unexpected problem after activation. For activation, rename the file to user_restrictions.
If the user_restrictions file does not exist, all users are allowed to perform all actions for which
they have user rights. If an empty user_restrictions file exists, nobody from the user groups is
allowed to perform any action. Only users from the admin and operator user groups are always
allowed to perform actions for which they have user rights.
If the user_restrictions file exists, a requested action is executed only if the system on
which the action is to be performed and the Data Protector user group to which the user belongs
are assigned to the same system group. In order to restrict users from certain systems, remove the
client system from the system group the user belongs to.
By default there is no restriction to certain clients for configured Data Protector users, so if a
user owns the Start backup specification right, all backup specifications could be started. A
restriction file allows limiting this right to certain systems only.
In the example below, four Data Protector groups are assigned to two system groups (group1,
group2). All Data Protector users who belong to the Data Protector group core_group are now
restricted to accessing only the Data Protector client system w28kdev21.vm2.com.
Example:
#user_restrictions file
group1: core_group
group2: net_group mail_group dba_group
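A fuller sketch of such a file is shown below, with hostnames added to indicate where the client systems of each system group would be listed. This is one plausible layout consistent with the example above; w28kdev21.vm2.com comes from the text, while the other hostnames are invented for illustration.

```
#user_restrictions file
#system group 1: its client systems, then the DP user groups allowed on them
group1: w28kdev21.vm2.com
group1: core_group
#system group 2: its client systems and their allowed DP user groups
group2: mail01.vm2.com db01.vm2.com
group2: net_group mail_group dba_group
```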
After activation, the restrictions are enforced for the following user rights, called BASIC rights:
• Start Backup
• Start Backup Specification
• Start Restore
• Restore as Root
Additionally, if the global option CheckAdditionalUserRestrictions is set to the value 1, Data
Protector checks for restrictions of user actions covered by the following user rights (referred to in
the global options file as additional user rights) in addition to the BASIC ones:
• Monitor
• Abort
• Restore to other client
• See private objects
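To enable these additional checks, the option is set in the global options file on the Cell Manager. The path placeholder follows the conventions already used in this guide:

```
# <DP_CONFIG>/options/global
CheckAdditionalUserRestrictions=1
```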
# CheckAdditionalUserRestrictions=0 or 1
# default: 0
# This option enables (1) or disables (0) the check for restrictions of user
# actions covered by additional user rights. The effect of this option is as
# follows:
# a. If the option is set to the value 0:
# - If the user_restrictions file does not exist, user actions are not
# restricted at all.
# - If the user_restrictions file is properly configured, only user actions
# covered by basic user rights are checked for potential restrictions.
# - If the user_restrictions file is empty, only user actions covered by
# basic user rights are unconditionally restricted, and user actions
# covered by additional user rights are allowed.
# b. If the option is set to the value 1:
# - If the user_restrictions file does not exist, user actions are not
# restricted at all.
# - If the user_restrictions file is properly configured, user actions
# covered by basic and additional user rights are checked for potential
# restrictions.
# - If the user_restrictions file is empty, user actions covered by basic
# and additional user rights are unconditionally restricted.
# For information which user rights are basic and which ones are
# additional (this distinction is used only for the purpose of the
# CheckAdditionalUserRestrictions option), see the Data Protector
# online Help index: "user restrictions".
Introduced with Data Protector 8.10, it is now possible to configure existing Windows Active
Directory based users and groups as Data Protector users using the LDAP protocol. The Data
Protector LDAP Integration does not exist after installation and requires configuration against an
existing LDAP server. See the Data Protector Installation Guide or refer to the OLH for details on
how to configure the integration.
Once configured, it is possible to add LDAP users and groups within the DP GUI. As described
before, run Add User and select User Type LDAP and User Entity LDAP Group or LDAP User. The
LDAP user or group name needs to be entered using its Distinguished Name, which can, for
example, be copied from the MS ADSI Edit tool.
If an LDAP group is configured as a member of the DP Operators group, all members of this
group will own Operators group privileges in DP. If users are added to or removed from this group
within Active Directory, no changes are required within DP.
As a member of a configured LDAP group you will be prompted for Username/Password during DP
GUI startup.
If the default LDAP Integration was used for configuration, the User Principal Name of the user is
expected under User Name, together with the user password, for user authentication.
Important: DP GUI Authentication window always shows up, if the user cannot be found in
DP user configuration, regardless if LDAP Integration is configured or not.
The current implementation provides LDAP user authentication support only for the DP GUI, not for
the DP CLI.
[Slide: two securing options are shown: Secure a particular Client, and Secure all Clients in the
Cell]
The Data Protector default installation allows any (foreign) Cell Manager, Installation Server, or
Session Manager to access any Disk Agent and Media Agent, even those that are not members of
the same cell.
This designed-in feature allows a remote recovery of data from one cell to another. Although this
may be a very valuable feature for some environments, it is also a security risk!
DP Cell Administrators may want to totally prevent access by external systems to members of a
Data Protector Cell, or at least regulate access to only known, trusted systems. This may be
accomplished by configuring external systems that are authorized to access specific Data Protector
Clients of the cell by using the Secure feature shown above.
1. Data Protector Client Level: You can select one or more DP Clients at a time
(Shift/Control) in the GUI and secure them by selecting the Secure option
2. Data Protector Cell Level: You can select all DP Clients of a cell and secure all of them in
one convenient operation by selecting the Cell Secure option
Using the Data Protector GUI, enter the names of the external systems (foreign Cell Manager or
Installation Server etc) that you want to authorize to connect to specific or all Data Protector
Clients of your Data Protector Cell. The resulting list of configured (external) hosts is stored in the
following file, on the cell manager:
UNIX /etc/opt/omni/client/allow_hosts
WINDOWS DP_HOME\config\client\allow_hosts
This file is created on the cell manager, and distributed to each cell client automatically when the
entire Data Protector Cell is secured.
The allow_hosts file is not created on the cell manager if specific Data Protector Client/s are
secured! In this case, the allow_hosts file is only created on specific Data Protector Client/s!
Alternatively, each DP Client may be independently secured. By default the Cell Manager is always
able to access the client, as the Cell Manager is registered in the /etc/opt/omni/client/cell_server
file on Unix and in the registry on a Windows system.
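The allow_hosts file itself simply lists the systems permitted to contact the client, one fully qualified name per line. A sketch is shown below at the UNIX path given above; the hostnames are invented for illustration, and whether comment lines are accepted is not documented here, so the fragment contains only hostnames:

```
cm.company.com
standby-cm.company.com
installsrv.company.com
```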
The list of external system/s that will be afforded exclusive access to a specific DP Client may
include remote managers, such as Standby or Recovery Site Cell Managers.
To remove exclusive access by external system/s to a particular Data Protector Client, either
modify the allow_hosts file on the affected Data Protector Client, or alternatively use the Data
Protector GUI to unsecure the cell or the affected DP Client.
A confirmation dialog is displayed when you attempt to unsecure a specific Data Protector Client.
Note:
Use omnigencert.pl for Certificate Management,
located on the DP Cell Manager in DP_HOME\bin
Any time a DP GUI is started on a client system, a secure, certificate-based SSL communication
channel is established with the DP Cell Manager. The Cell Manager acts as a standalone root
Certificate Authority (CA), which is installed by default during the Cell Manager installation.
When a DP GUI is started for the first time on a client system, a CA certificate is installed on the
client system for each user that starts the GUI. Using the SSL protocol with public/private keys, the
communication with the DP Cell Manager is now encrypted.
If the DP administrator wants to replace the certificates, the omnigencert.pl command can be
used. The command is located only on the Cell Manager, under:
DP_HOME\bin
Refer to the Data Protector Installation Guide for details on how to use the omnigencert.pl
command.
Note: The encrypted SSL communication via certificates is activated by default and cannot
be switched off.
[Slide: how inetd starts the Data Protector inet agent on a UNIX client]
/etc/services:
omni 5555/tcp
/etc/inetd.conf:
omni stream tcp nowait root /opt/omni/lbin/inet inet -log /var/opt/omni/log/inet.log
A DP request arriving on port 5555 is received by inetd (started via /sbin/init.d/inetd), which
looks up the omni service entry and launches /opt/omni/lbin/inet, logging to
/var/opt/omni/log/inet.log.
Any Data Protector service request to a UNIX system is handled by the inet daemon (inetd). This
daemon is started at boot time and waits for incoming connections.
Default: Local System account
When Data Protector is installed on Windows systems, the Data Protector Inet service (also
referred to as omniinet) is configured to start automatically when the system starts. The service
intercepts requests from the Cell Manager on port 5555. These requests are used to start the
local Disk, Media, or Integration Agents.
By default, the Data Protector Inet service logs on as the Local System account, which is sufficient
to run filesystem backups. For Data Protector integration backups, such as Oracle database
backups or Microsoft Exchange online backups, and for Windows Server 2008 and higher backups,
the Local System account is not sufficient; here the Data Protector Inet service needs to run under
a specific user account to start a session. See the next slide on how to configure Data Protector
Inet service user impersonation.
Required for Windows 2008 backups and selected Integration backups like SharePoint or SAP
Impersonation is a security concept that allows a server application to temporarily "be" the client in
terms of access to secure objects. To employ impersonation, you need to know the password of the
user you want to impersonate. Impersonation is like logging in on a Windows machine. On Windows
systems, backup and restore sessions are started by the Data Protector Inet service, which by
default runs under the Local System account, also called SYSTEM. Consequently, a backup or
restore session is performed using the same user account. The majority of Data Protector
integrations on Windows require the Data Protector Inet service to run under a different
username/Windows domain account instead of the SYSTEM account. Examples are:
- MS SharePoint
- MS SQL
- Oracle/SAP
- Informix
Lotus, DB2, and Exchange (Server and single-mailbox) and others do not require running the Data
Protector Inet service under a special account.
In Windows Server 2003 this can be achieved by simply restarting the Data Protector Inet service
under a different user account. For other supported Windows operating systems, such as Windows
Server 2008, this is no longer allowed!
Windows 2008 recommends that Inet run under the SYSTEM account at all times!
Running it under a username with insufficient privileges for the various tasks the Inet process
performs may cause problems. Therefore, Data Protector uses an alternative
concept: user impersonation. This means that, although the Data Protector Inet service runs under
the Windows local user account SYSTEM, the service can impersonate a Windows domain user
account and can consequently start the integration agent under that user account.
Although impersonation could be applied to several OS versions, Data Protector supports it only on
Windows 2008 or higher.
To enable Data Protector Inet service impersonation, a Windows domain user account must be
specified in the backup specification or in the restore wizard, and the user account (including its
password) must be saved, prior to the backup or restore, under a hidden Windows Registry key in
encrypted form. The user account must be configured, and hence saved locally in the registry, on
each client where it will be invoked by the Data Protector Inet service for impersonation during
backup or restore operations.
GUI:
• Change to Clients context
• Right click on client
• Select Add Impersonation
• Follow the wizard
User account names that will participate in impersonation in backup or restore operations must be
created before deploying them. They can be configured in two ways: using the GUI or the CLI.
Data Protector Inet service needs to have the following information in order to impersonate a user:
• Username
• Domain
• Password
The passwords are encrypted and stored in the Registry of the client systems! If user passwords
are changed (e.g. due to policies) then Data Protector Inet service user configuration on clients
must be changed accordingly.
Moreover, the user account that will participate in impersonation must have the following
properties:
4. In the Select Client Systems page, select the client systems for which you want to
configure the Data Protector Inet service user impersonation and click Next.
5. In the Add, delete or modify impersonation page, add a new user account, or modify or
delete an existing one, and click Finish.
A confirmation message appears, confirming that the user account that will take part in potential
impersonations has been added to the registry.
Verify that the user has been created by issuing the omniinetpasswd command on the CLI.
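A sketch of the verification and local configuration is shown below, assuming the omniinetpasswd options documented for DP 9.x; the user name and password are invented for illustration, so verify the exact syntax in the CLI reference:

```shell
# Run locally on the client (from DP_HOME\bin): list users configured for
# Inet impersonation (passwords themselves are never displayed)
omniinetpasswd -list

# Add a user to the local Inet impersonation configuration
# (user@domain and password are illustrative)
omniinetpasswd -add backupadm@COMPANY MySecretPass
```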
The user account that will potentially be used to impersonate can be created locally on the client
where it will be deployed.
Besides the local configuration directly on the client, there is also a way to set up a user account
for user impersonation on multiple Data Protector clients at once: the omnicc command. Log in to
the Cell Manager and run it from the DP_HOME\bin directory.
For details on the omniinetpasswd and omnicc commands, see the HP Data Protector Command
Line Interface Reference.
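A hedged sketch of the centralized variant follows, using the omnicc -impersonation syntax as documented for DP 9.x; the user, password, and hostnames are invented for illustration, and the exact option names should be verified against the CLI reference:

```shell
# From DP_HOME\bin on the Cell Manager: register the impersonation user
# on two clients in one call (all names illustrative)
omnicc -impersonation -add_user -user backupadm@COMPANY \
       -host client1.company.com -host client2.company.com -passwd MySecretPass
```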
Note: There is no old password by default.
When Data Protector is installed on the Cell Manager, a web user called java is inserted into
the Admin group. By default, no password is required to access the Web Reporting applet! To
provide more security, it is recommended to password-protect the web functionality. The
protection requirement is largely due to the fact that through the web interface, notifications and
report groups may be modified, and cell data is available. To set the web user password, open
the Data Protector GUI, change to the Users context, and select Set Web User Password from the
Actions menu at the top. If you are configuring the password for the first time, leave the Old
password field empty and just specify the new password. Press OK to confirm.
The password is stored in the following file on the Cell Manager:
WINDOWS : DP_CONFIG\users\WebAccess
UNIX : DP_CONFIG/users/WebAccess
This file exists as an empty file by default! Removal of the file will prevent a new password from
being created, so create a new empty file in case the old password is lost.
Contents
Module 17 — Auditing 1
17–3. SLIDE: Auditing overview ........................................................................................................ 2
17–4. SLIDE: Backup session auditing .............................................................................................. 3
17–5. SLIDE: Enhanced Event logging .............................................................................................. 7
Module 17
Auditing
Auditing overview
Auditing overview
Backup session auditing was mainly driven by administrative concerns and the Sarbanes-
Oxley regulation (see: http://en.wikipedia.org/wiki/Sarbanes-Oxley_Act).
A need arose for Data Protector to store information about all backup tasks performed over
extended periods for the whole cell backup environment, and to provide this information on
demand to members of the DP admin group.
Enhanced event logging is a security feature that tracks all GUI based modifications on
specifications (backup, copy and consolidation), on devices and media, user configurations and
tracks all client agent installations/upgrades/removals in the Data Protector Event Logs.
Note: Both methods are deactivated after a Data Protector Cell Manager installation and
need to be manually activated by setting global file parameters, as explained on
the next pages.
Backup session auditing stores information about performed backup sessions, used media, and
backed up objects outside of the internal database, in encoded files. These files are tamper-proof
to prevent direct alteration of the stored information and are backed up as part of a Data
Protector IDB backup.
To activate backup session auditing set the parameter AuditLogEnable in the global file to 1:
# AuditLogEnable=0 or 1
# default: 0
# This option enables or disables the logging of auditing
# information. By default, the auditing information is not logged.
# If the value is set to 1, the auditing information is logged.
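In practice, activation means adding (or uncommenting) a single line in the global options file on the Cell Manager, using the path placeholder conventions of this guide:

```
# <DP_CONFIG>/options/global
AuditLogEnable=1
```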
After activation Data Protector starts logging information about performed backups in encrypted
format into special files named:
Windows : DP_VAR\log\server\auditing
UNIX : DP_VAR/log/server/auditing
These files are backed up as part of the IDB backup without any special configuration.
The default retention time of these audit files is 90 months. The retention time can be configured
via the global file parameter AuditLogRetention:
# AuditLogRetention=0 or NumberOfMonths
# default: 90
# Specifies how long (number of months) audit log files are kept
# before being purged. Audit logs are purged on a monthly
# basis, meaning that the session information for an entire month
# is removed after the specified number of months.
# By default, the audit log is retained for 7.5 years (90 months).
# If the value is set to 0, audit log purging is disabled.
In order to create a backup audit report, open the DP GUI, change to the Internal Database
context, and expand Auditing. Choose a Search Interval from the drop-down menu, or select
Interval and define your own Start and End date.
After search interval selection click on the Update button to get the list of performed backup
sessions. Click on a session to see the used media and backed up objects of that session.
Audit report generation can also be triggered from the CLI via the omnidb -auditing command,
which can be used for scripting or 3rd party tool integration.
Example:
omnidb -auditing -last 10 -detail
2013/10/16-3 Quick Start2 Completed full 2012-10-16 13:50 2012-10-16 13:50
VMW39201\ADMINISTRATOR@vmw39201.deu.hp.com c9272e10:507bf7dc:09f4:0005
Default File_68 7488 WinFS vmw39201.deu.hp.com:/C "C:" Completed
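Output in this textual format lends itself to post-processing with standard tools. A minimal sketch, using a captured sample line instead of a live omnidb call (the field positions are taken from the example above and may differ between Data Protector versions):

```shell
# Sketch: extract the session ID and status from one line of
# omnidb -auditing output. A captured sample line stands in for
# a live call, so the snippet runs without a Cell Manager.
line='2013/10/16-3 Quick Start2 Completed full 2012-10-16 13:50 2012-10-16 13:50'

session_id=$(echo "$line" | awk '{print $1}')   # first field: session ID
status=$(echo "$line" | awk '{print $4}')       # fourth field here: status

echo "session=$session_id status=$status"
# session=2013/10/16-3 status=Completed
```
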
The Data Protector Event Log represents a centralized event management mechanism, dealing
with specific events that occur during Data Protector operation. This event logging can be
enhanced to track all GUI-based user operations as Data Protector events.
To enable the enhanced Data Protector Event Logging set the following global parameter to 1:
# EventLogAudit=0 or 1
# default: 0
# This option enables (1) or disables (0) logging of GUI-related user actions
# into the Data Protector event log. Logging is disabled by default.
Note: A restart of the Data Protector services is required after the global parameter
EventLogAudit is changed.
The data being logged in case of an event of type User Operation is:
• the name of the operation, or multiple operations separated by commas
• username and host from which the GUI user triggered the operation
• description of performed operation
Example1:
Backup specification deleted by administrator
Example2:
Device FJ_Drive5 deleted by administrator
Any of the following specification modifications will cause an event in the Data Protector Event Log:
Event Logs are stored on the Cell Manager in the following location:
Windows : DP_VAR\log\server
UNIX : DP_VAR/log/server
Note: Data Protector Event Logs are not backed up as part of the IDB backup.
To get a backup of the Data Protector Event Logs, include the log directory
in the Data Protector Cell Manager filesystem backup.
Contents
Module 18 Disaster Recovery 1
18–3. SLIDE: Overview Disaster Recovery ....................................................................................... 2
18–4. SLIDE: Disaster Recovery Phases ........................................................................................... 4
18–5. SLIDE: DRM for EADR and OBDR ............................................................................................. 5
18–6. SLIDE: Phase 0 a: Perform a full backup ................................................................................ 6
18–7. SLIDE: Phase 0 b: Create DR Image 1/4.................................................................................. 7
18–8. SLIDE: Phase 0 b: Create DR Image 2/4.................................................................................. 9
18–9. SLIDE: Phase 0 b: Create DR Image 3/4................................................................................ 10
18-10. SLIDE: Phase 0 b: Create DR Image 4/4 ............................................................................... 11
18-11. SLIDE: Phase 1: Booting the recovery image ....................................................................... 13
18-12. SLIDE: Recovery Method/options GUI .................................................................................. 14
18-13. SLIDE: Recovery progress monitor GUI ................................................................................ 16
18-14. SLIDE: DR on Dissimilar hardware........................................................................................ 17
Module 18
Disaster Recovery
Disaster recovery methods:
• Enhanced Automated DR (EADR) / One Button DR (OBDR) (1)
• Assisted Manual DR (AMDR)
• Automated System Recovery (ASR) (2)
• Disk Delivery DR (DDDR) (3)
1 … preferred method
2 … supported on older Windows versions only
3 … supported on UNIX only
3 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
A disaster is any situation in which a system does not function properly, whether due to human
error, hardware failure, or natural disaster. In these cases, the root (boot) partition of the system is
not available, and the environment needs to be recovered before the normal restore operation can
begin. This includes a hardware recovery, followed by re-partitioning and re-formatting the boot
partition. Afterwards, the operating system must be recovered with all the configuration
information that defines the environment. This step must be complete before any user or
application data can be recovered.
There are three components of the Data Protector architecture that may require recovery:
• Client System
Recovery of a client system may be necessary because of hardware failure, or corruption or
loss of critical system software or configuration.
• The Data Protector Database
It may be necessary to recover the Data Protector database if it becomes corrupted and
beyond repair with normal database maintenance tools. The database must also be
recovered as a part of the cell manager recovery procedures if the cell manager fails.
(The IDB recovery is covered in the Internal Database module of this course)
Note This module focuses mainly on EADR as the current preferred DR method that is
supported on Windows and Linux OS.
Phase 0 (Preparation): Full client backup and IDB backup (CM only). Prepare and update
the System Recovery Data file. Prepare the DR OS image.
Phase 1 (Configuration): Boot the system from the DR CD, over the network, or from a
USB drive, and select the scope of recovery.
4 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Phase 0: Preparation
• Perform full client backups and an IDB backup (Cell Manager only)
• The backup creates a System Recovery Data (SRD) file that contains information about the used
backup device and media. In case of a CM recovery the IDB is not available and the SRD file is
used. Make sure you have a copy of the SRD file outside of the CM for use if the CM is down.
• Update the DR OS image after hardware/software changes
• In case AES encryption is used, it is necessary to export the encryption key to removable
media, so that it is available during the DR process.
Phase 1: Boot the DR OS
• Replace any faulty hardware and boot the system from the DR Image
• Select the scope of the recovery
Phase 2: OS configured and Data Protector installed
• Critical volumes are automatically restored
• (including the boot partition, OS, and the partition containing Data Protector)
Phase 3: Restore missing data
• Restore any data not restored in Phases 1 and 2 using Data Protector
5 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
To prepare the system for disaster recovery you need to install a special Data Protector
component on the client, the Automatic Disaster Recovery Module (DRM). The module supports
EADR and OBDR. The component can be push-installed via the DP GUI together with the other
modules.
The module contains configuration files that allow changing the recovery process:
• drm.cfg: configuration file with settings for the recovery process
• kb.cfg: configuration file that allows specifying additional drivers for DR
Both files are located on the client that has the DRM module installed, under:
Windows: DP_HOME\bin\drim\config
These files do not exist on Linux, although OBDR and EADR are supported on Linux as well.
While the kb.cfg file is available on the client after DRM installation, the drm.cfg file only exists as a
template file (drm.cfg.tmpl) and needs to be copied/renamed to drm.cfg to become active.
Phase 0a:
1. Run a full backup of your DP client that
includes all Mount points/Drive letters
2. Specify option:
Backup share information for directories
6 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
In Phase 0 the Data Protector EADR process collects all relevant environment data automatically at
the time of backup. During a full client backup the data required for temporary DR OS setup and
configuration is packed in a single large DR OS image file.
The DR OS image file contains all of the necessary information and files to install a minimal
operating system which is later used for the full restore session. The included information covers
partition types and sizes, all operating system boot files, and the necessary driver files.
This information may be stored on the Cell Manager in the DP_CONFIG\dr\p1s directory and is also
stored on the backup tape or disk device.
• Select EADR
• Select Source for DR Media Set
7 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
b. Within the DP GUI change to the Restore context and click on Tasks at the bottom to start the
Disaster Recovery wizard. The wizard will guide you through the process of creating a bootable DR
ISO image that can be used for recovery.
Note: The Disaster Recovery wizard can be used for DR image creation of any DP client after the
disaster occurs. The client does not have to be available for a successful image creation.
The client needs to fulfill the following requirements before the wizard can be used:
• The client has/had the DP Disaster Recovery (DR) Module installed
• A full backup (plus incremental backups) was performed after the DP DR Module was
installed
On Windows Vista/7, Windows Server 2008/2012 (incl. R2 versions) systems, you can create a
bootable network image or a bootable USB drive version instead of a bootable CD.
If the full client backup was encrypted, the encryption key has to be stored on a removable
medium. For details see the HP Data Protector Disaster Recovery Guide.
8 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Depending on the selection made on the initial page of the wizard, you either need to select a
session that will be restored on the failed client (option Backup session) or select a set of
sessions to restore if client objects were backed up in different sessions (option Volume List).
9 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
In the next window make the selection about how Data Protector should attempt to access
the Disaster Recovery information that was collected during the last CONFIGURATION object
backup of that client. A session sample from such a CONFIGURATION backup is shown below:
The information was backed up together with the system data on your selected target device. If
you specified the backup option Copy Recovery Set to disk in the backup specification used, the
information was also copied to the Data Protector Cell Manager under DP_CONFIG\dr\p1s.
In this window specify whether Data Protector should extract the Recovery Set from the
CONFIGURATION backup (this selection triggers a restore!) or read it from the specified location.
Hit Next to continue.
10 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
In the last DR wizard window specify the Image Format and Destination Folder of the bootable
image. The created image file is named recovery.iso.
Note: Data Protector only creates the bootable ISO file. Afterwards you need to manually burn it
to a CD or copy it to a USB drive or network share.
To create a DR OS image for Windows Vista and later releases, you must install the appropriate
version of Windows Automated Installation Kit (AIK) or Assessment and Deployment Kit (ADK)
on the selected creation system:
Insert Drivers
This field contains a list of drivers the user selected to be injected into the DR MiniOS. The user
can specify missing drivers so that they are injected into the DR MiniOS.
Note: Click on Inject to include the default drivers in the bootable image.
Starting with Data Protector 9.00, default drivers are automatically included.
The .iso file is created in the specified destination folder (approx. 200-300 MB).
11 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
The new BMDR GUI is invoked in Phases 1 and 2a of the Disaster Recovery process. By showing up
during these DR phases, it effectively replaces the Command Line Interface that was used until
DP 6.2.
The screenshot above shows the first GUI dialog window that is presented right after successfully
loading DR MiniOS.
Select the appropriate:
• Recovery Method and
• Recovery Options
Dissimilar HW allows restoring with different hardware than the original system had.
To start recovery of the target client, click Finish.
12 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
After the target DP client system has been initialized, the Recovery Method as well as the
Recovery Options must be selected in the Recovery Options GUI dialog box, as shown above. Click
Finish after selecting both options.
The following is a list of what the various Recovery Method options mean:
• Default Recovery: Recover the boot, system and Data Protector volumes
• Minimal Recovery: Recover only the boot and system volumes
• Full Recovery: Recover all volumes in the Restore Set
• Full with Shared Volumes: The option is displayed only for cluster nodes. In that case all
volumes in the Restore Set, including cluster shared volumes, will be recovered. If any other
option is selected, shared volume restore is excluded.
• Restore DAT (checkbox): Restore VSS writer files (COM+, License, Registry, Profile and
WMI). By selecting this option the user can choose if and when the writer files should be
restored.
• Restore BCD (checkbox): Restore Boot Configuration Data (BCD). By selecting this
option the user can choose whether the BCD data should be restored or not.
• Dissimilar Hardware: To enable dissimilar hardware functionality the user has to select one of
the available methods from the list: “Generic” or “Unattended”. It is recommended to use
“Generic” only if “Unattended” fails.
• Manual Map Cluster Volumes: Enabled only for cluster environments. Enables the user to
manually map cluster volumes.
Note:
You can follow the restore process in the “Monitor” context in the DP GUI.
13 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
After selecting the Recovery Method as well as the Recovery Options and hitting Finish, the
recovery process commences and its progress can be monitored in the GUI window, as shown
above.
Data Protector GUI during the recovery process
DR on Dissimilar Hardware
DR on Dissimilar Hardware support enables restore of a system backup to
a partially or completely different type of hardware
• physical to virtual, or virtual to physical, are supported
• DISSHW restore does NOT have any chip-manufacturer dependencies (Intel or AMD), thus enabling
cross-platform restores such as:
X86 (AMD or Intel) to X86/X64 (AMD or Intel)
X64 (AMD or Intel) to X64 (AMD or Intel)
14 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
DR on Dissimilar hardware
When a system is in a non-bootable state for any reason (e.g. hardware failure) or when the user
wants to restore the system to dissimilar hardware, Disaster Recovery to Dissimilar Hardware
helps the user to recover the corrupted/moved system.
DR on Dissimilar HW is supported in both directions, i.e., from physical to virtual and from virtual
to physical. Moreover, DP 6.2 DR on Dissimilar HW is independent of chip-manufacturer (AMD or
Intel) technology. This facilitates straightforward cross-platform restores as in the following cases:
The following brief descriptions provide some common use cases of dissimilar hardware restore:
Hardware failure
In cases where some boot-critical hardware (e.g. storage controller, processor, motherboard)
fails and must be replaced with non-identical hardware, the only restore option is a restore
to dissimilar hardware.
Disaster
In total machine disaster scenarios where:
• no matching machine hardware can be found (because of limited budget, the crashed
machine’s age, or other causes)
• downtime cannot be afforded, and the system must be up and running immediately
In these situations the only restore option would be to resort to dissimilar hardware recovery.
Deploying dissimilar hardware restore would probably also mean lower cost, since no exact
clones of the original systems are needed.
Migration
Migration scenarios applicable are:
• moving to another machine (e.g. to faster or newer hardware) where OS reinstallation and
reconfiguration is not an option
• moving to or from a virtual environment. Dissimilar hardware restore would be the only
option in cases where the user would like to move a physical machine to a virtual environment
or vice versa. One of the reasons why a user would move to a virtual environment is the cost
benefit, whereas migrating to a physical environment is primarily done for reasons of
performance
From DP’s point of view, a virtual environment is just another hardware platform, for which the
correct critical drivers must be provided in order to restore a system backup taken on some other
virtual or physical platform.
Contents
Module 19 — Patching 1
19–3. SLIDE: Data Protector Enhancements and Fixes .................................................................... 2
19–4. SLIDE: How to download Fixes and Enhancements................................................................ 4
19–5. SLIDE: Download from Software Support Online (SSO) ......................................................... 5
19–6. SLIDE: GR Patch Installation ................................................................................................... 8
19–7. SLIDE: Step 1: Update the Installation Server (IS) .................................................................. 9
19–8. SLIDE: Step 2: Update the Client ........................................................................................... 10
19–9. SLIDE: List installed Data Protector Patches ........................................................................ 11
Module 19
Patching
• General Release (GR) Patches: scheduled cumulative fix bundle, built for each
Data Protector agent on all supported Cell Managers/Installation Servers
• Site Specific Patches (SSP): tested patch pre-release to address urgent issues
in a short timeframe
• Test Modules: result of a support call to fix an isolated customer problem
3 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP Data Protector is a powerful product that offers a lot of functionality on nearly all the available
operating systems. Before release, each version of Data Protector is extensively tested in large and
complex test environments, but it is not possible to validate all possible combinations of supported
hardware and software components. If you discover any malfunctions, contact HP Support or
search the Software Support Online portal for existing fixes or solutions.
A Test Module is generated in the Product Support labs as a result of a Support call to fix an
isolated Data Protector issue that was reported by the customer. This fix is created for a specific
Operating System, Processor type and Data Protector version to address the customer needs.
Typically it consists of one or more binaries that need to be manually installed on the customer
system. Once the customer has validated the fix in their environment, the fix is ported to other Data
Protector versions and OS releases if they are impacted by the reported problem as well.
Together with other Test Modules that were built for other customers to address different product
issues, these fixes are released within General Release Patches.
Besides fixes, these GR Patches are also used to introduce new product features like the support of
new backup devices, operating system or database integration versions. GR Patches are created
for each Data Protector module (e.g. Disk Agent Patch, Media Agent Patch, SAP Integration Agent
Patch ...), for each supported Data Protector version, and always contain a complete Data Protector
agent. All GR Patches are cumulative, so every GR Patch contains all the fixes from the
previous GR Patches and there is no need to apply older versions of the patch first. They are built as
packages including an installation routine and installation instructions. Some GR Patches have
dependencies on other Data Protector GR Patches. Check the patch installation instructions for
details.
GR Patches have committed release dates and are typically rolled out 3-4 times a year. In order to
provide a fast solution for critical Data Protector issues before the rollout date of the GR Patch, a
Site Specific Patch can be created by HP Support. A Site Specific Patch contains validated Test
Module(s) with installation instructions and documentation, and is considered an official fix for
reported problems that impact a large number of customers. Similar to Test Modules, Site Specific
Patches are also consolidated into GR Patches. Which Site Specific Patches and Test Modules are
consolidated within a certain GR Patch is listed in the respective patch documentation.
GR Patches can be bundled as a Patch Bundle for a one-step installation. Data Protector Patch
Bundle BDL901 is the most recent example of such a bundle. In contrast to regular GR Patches,
the Patch Bundle has to be installed in the environment before newer GR Patches, Site Specific
Patches or Test Modules can be applied to the Data Protector cell, regardless of whether the newly
patched component was part of the Patch Bundle or not.
4 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Test Modules (TM) and Site Specific Patches (SSP) cannot be directly downloaded. If such fixes are
referred to in the Data Protector Internet forum, Knowledgebase articles or Security Bulletins,
always contact the HP Support Line. First, HP Support has to verify whether the found fix applies to
the concrete issue found by the customer. In addition, they are able to query for updated fix
versions and provide assistance if the SSP or TM is required for the same problem but a different
OS, architecture (32-bit/64-bit) or Data Protector version.
General Release (GR) Patches can be directly downloaded from the HP Software Support Online
(SSO) portal. The details are explained on the next page.
5 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Software Support Online (SSO) is the central place for all support related information around Data
Protector. It provides access to a searchable Knowledge base with technical solutions, product
manuals and whitepapers, support matrices and GR Patches.
Note: The SSO Dashboard page provides access to support information about all products that
are part of the configured SAID Support identifier in your HP Passport user profile.
1. Under My Products select “data protector”, select your Data Protector version and OS and
click on View
2. The Software Patch table in the SSO Dashboard now lists all Data Protector patches for the
selection made above.
Click on the Data Protector patch/patch bundle you want to install to get access to patch
description and download link
3. The new window contains the Patch details and a direct download link.
Click on the highlighted link next to Download Patch to initiate the download
GR Patch Installation
2-Step Installation
6 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
GR Patch Installation
7 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Before patch installation on the Installation Server, run a system backup. Afterwards install all Data
Protector patches by executing them one by one, following the installation instructions that are
bundled with the patch. On Windows simply start the <Patch>.exe file, e.g. execute the patch
file DPWIN_00614.exe. On Linux or HP-UX use the OS-bundled installation utilities rpm and
swinstall.
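The per-OS installation step can be wrapped in a small dispatch helper. The sketch below is a dry run that only prints the command it would execute; the .rpm and .depot file names are hypothetical, and real installations must follow the instructions bundled with each patch:

```shell
# Sketch: dry-run dispatcher that prints the install command for a
# given patch file, based on its package type. The .rpm and .depot
# names used below are hypothetical examples.
install_cmd() {
    case "$1" in
        *.exe)   echo "run $1" ;;            # Windows: execute the patch file
        *.rpm)   echo "rpm -Uvh $1" ;;       # Linux
        *.depot) echo "swinstall -s $1" ;;   # HP-UX
        *)       echo "unknown patch type: $1" >&2; return 1 ;;
    esac
}

install_cmd DPWIN_00614.exe     # run DPWIN_00614.exe
install_cmd DPLNX_00xxx.rpm     # rpm -Uvh DPLNX_00xxx.rpm
```
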
Note: In a Windows cluster environment the patch needs to be installed on the active
node. In a UNIX cluster environment the patch needs to be installed on the node
that owns access to /opt/omni and /etc/opt/omni/IS.
On the Installation Server the directory with the Data Protector agent packages is located under:
UNIX: /opt/omni/databases
WINDOWS: DP_CONFIG\DEPOT (by default shared as OmniBack)
In case of multiple Installation Servers repeat the procedure for all configured Installation servers.
In order to update the Data Protector clients, open the Data Protector GUI and switch to the Clients
context. Right-click a client system and select Upgrade from the context menu.
In the Upgrade Client system wizard first select the Installation Server and click on Next.
The following window shows all Data Protector clients that can be served by the chosen Installation
Server, e.g. for a Windows Installation Server only Windows clients are shown.
Select the clients you want to upgrade and click on Finish to start the upgrade.
Note: Make sure that there are no running Data Protector operations on the selected
clients during the upgrade. Such operations (backup/restore, open GUI windows)
prevent the upgrade process from replacing all components on the client systems,
so open files cannot be replaced and remain unpatched.
9 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
To get a list of installed patches on a Data Protector client start the GUI, change to the Clients
context and select the client system to check. In the client system property window click on
Patches to see the list of installed patches on that selected client system.
If you want to get the patch level of the configured Installation Servers, select the Installation
Server and, similar to the client query, click on Patches in the Installation Server property window.
If you want to script the Patch query use omnicheck from the CLI:
Each Data Protector client maintains a local list of configuration files, located in DP_HOME, one file
for each Data Protector module patch. If a GR Patch is pushed to a client, the associated
component file is updated during the upgrade. A Data Protector GR Patch query reads these
files.
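The omnicheck-based patch query mentioned above is easy to script across all clients. The sketch below only prints the omnicheck invocations (a dry run), so it also runs outside a Data Protector cell; the client names are placeholders:

```shell
# Sketch: print a patch-level query command for every client in a list.
# Dry run -- the omnicheck calls are echoed, not executed, so this also
# runs outside a Data Protector cell. Client names are placeholders.
clients="client1.example.com client2.example.com"

for host in $clients; do
    echo "omnicheck -patches -host $host"
done
```
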
Contents
Module 20 — Troubleshooting 1
20–3. SLIDE: Log files ........................................................................................................................ 2
20–4. SLIDE: Debug (Execution Tracing) ........................................................................................... 5
20–5. SLIDE: Debug Log Collector ..................................................................................................... 8
20–6. SLIDE: Message Details ......................................................................................................... 11
20–7. SLIDE: Network Connectivity................................................................................................. 12
20–8. SLIDE: Services ...................................................................................................................... 14
20–9. SLIDE: Backup Devices .......................................................................................................... 17
20-10. SLIDE: Backup and Restore .................................................................................................. 19
20-11. SLIDE: omnihealthcheck....................................................................................................... 22
20-12. SLIDE: HealthCheckConfig file .............................................................................................. 23
20-13. SLIDE: omnitrig –run_checks ............................................................................................... 24
Module 20
Troubleshooting
Log Files
Valuable troubleshooting information
can be obtained by examining the
HP Data Protector log files, located on the
Data Protector CM and Client systems
DP Log Files
DP CM only
• media.log
• omnisv.log
• sm.log
• trace.log
• HealthCheck.log
DP Client:
• debug.log
• inet.log
• ctrace.log
• oracle8.log, ...
Location of DP Log Files:
DP CM only : DP_VAR\log\server
DP Client : DP_VAR\log
3 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Log files
Data Protector operations are always flagged as business-critical operations. If a backup fails or a
database cannot be restored, the root cause needs to be quickly identified and resolved
in order to bring services back to normal operation. Therefore Data Protector needs to provide
valuable troubleshooting information (apart from the session reports) that can be analyzed in
order to identify and fix observed issues.
In case of trouble, checking the Data Protector log files is always a good approach. Data Protector
writes log files on its clients and on the Cell Manager system.
The directory in which Data Protector log files are kept depends on which operating system you are
using. The following list shows the directories where the log files can be found:
UNIX:
Windows:
The following list shows the Data Protector log files and describes their contents (not all files are
present for every version; some files have become obsolete):
debug.log Unexpected conditions are logged into this file. While some can be
meaningful to the user, it is used mainly by the HP support organization. Do
not confuse it with the DP debugging feature that is explained in this
module.
inet.log Requests made to Data Protector’s inet program (a program that starts
agents) are logged to this file. It can be useful to check the recent activity
of Data Protector on client systems.
media.log This is a very important file. Each time a medium is used for backup,
initialized, or imported, a new entry is made to this log. Media that contains
the Data Protector IDB backup is also marked. For this reason,
media.log can be used after disaster recovery to find the tape where
that database was backed up and what media were used after the last IDB
backup.
IS_install.log This file contains the trace of the remote installation and is located on the
installation server.
omnisv.log This file is updated when the Data Protector services are stopped and
started.
sm.log This log file contains errors that occur in backup and restore sessions, such
as errors in parsing the backup specifications.
HealthCheck.log Log file of the Daily Health check, a daily operation that performs certain
checks on the Cell Manager. Details about that check are explained later in
this module.
trace.log/ctrace.log Log file on the Cell Manager (trace.log) and on the client (ctrace.log) that
keeps track of Data Protector debugging sessions; used by the Debug Log
Collector, which is explained later in this module.
oracle8.log, .. Application specific logs that contain traces of Data Protector Integration
agent backups, like Oracle log backups or Exchange server backups. These
log files are always located on the application or database systems.
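Because media holding the IDB backup are marked in media.log, a simple grep is often enough to locate the right tape after a disaster. A sketch against a fabricated sample log (real entry formats vary between Data Protector versions):

```shell
# Sketch: locate the media holding the last IDB backup by searching
# media.log. A fabricated sample log is used here; real entry formats
# differ between Data Protector versions.
LOG=/tmp/media.log.demo
cat > "$LOG" <<'EOF'
2014/11/02 22:10:05 [Default File_12] backup session 2014/11/02-7
2014/11/03 02:00:11 [IDB_Pool_03] IDB backup session 2014/11/03-1
2014/11/03 04:30:42 [Default File_13] backup session 2014/11/03-2
EOF

grep 'IDB backup' "$LOG"
```
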
Logfiles of the Internal Database are not written into the default Data Protector log
directory. Any time the PostgreSQL database is started, a new logfile is created with a
timestamp in the filename. So in case of a problem, check also the previous logfiles.
These logfiles are located within the Internal Database directory under:
Windows: DP_VAR\server\db80\pg\pg_log
During installation Data Protector directories do not exist, so logfiles are written outside
of Data Protector within system temp directories, such as:
UNIX /tmp
WINDOWS %TEMP%
(e.g. C:\Users\<user>\AppData\Local\Temp)
When troubleshooting installation issues on UNIX, also check the log files of the OS-native
software installation utilities, such as swagent.log on HP-UX. Data Protector silently uses
these native tools for Cell Manager and client installation.
Activation via CLI: add the options -debug <range> <suffix> to any DP command.
Example: omnib -oracle8_list DATA1_arch -debug 1-200 RMAN_01.txt
4 © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Data Protector processes may be started in a special mode called the "debug" mode to allow for
extensive tracing of their execution. This execution tracing produces voluminous data sets which
may consume a significant amount of disk space; use with caution.
<suffix> String extension that is added to each debug file. It allows easy
identification of the generated trace files that belong to one debug set.
Debugging ends if the command ends and the command prompt returns.
trace.log/ctrace.log
Data Protector creates a log file called trace.log on the Cell Manager whenever tracing is
enabled. This trace log contains information about when and where debug traces were
generated within the cell. On each client, a ctrace.log file keeps track of all debug sessions
on that particular client. These files are used by DP debug collector tools like the GUI based
Debug collector or the CLI based omnidlc binary to identify files to be collected for a
particular debug session (session-id based) or a particular suffix string.
These trace files are located in the previously explained Data Protector log directory.
OB2DBG_<DID>_[<SID>]_<Program_Name>_<Host>_<PID>_<Postfix>
Where:
DID is the debug ID; this is the PID of the first process that accepts
the debug parameter; all debugs are “children” of this process
SID is the session id added by backup and restore agents (MA, DA)
Program Name is the program name of the Data Protector program writing the
trace
Host is the system name where the trace file is created
PID is the process ID.
Postfix is the postfix as specified in the -debug parameter.
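The naming convention above can be illustrated by splitting a sample file name into its fields. The file name below is a made-up illustration (the debug ID, session ID, host, and PID are invented), not output from a real cell:

```shell
# Split a sample Data Protector trace file name into its fields.
# Fields are separated by "_"; the last variable collects the whole
# postfix, even when the postfix itself contains underscores.
f="OB2DBG_4242_2014-12-01-7_BSM_cellmgr.example.com_4243_RMAN_01.txt"
IFS=_ read -r prefix did sid prog host pid postfix <<EOF
$f
EOF
echo "debug-id=$did session-id=$sid program=$prog host=$host pid=$pid postfix=$postfix"
```

This is why a distinctive suffix string passed to -debug makes a whole debug set easy to find later.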
Note: The command omnicc -debug 20 Global.txt creates a debug file that lists all active
global parameter settings.
To support customers in the debug collection process, Data Protector offers a CLI utility called
omnidlc, which collects debug files from the command line. The same functionality is also available
through the Data Protector GUI, which makes debug collection and management much more
comfortable. After a problem has been reproduced with debugging enabled, there are two ways to
collect the generated debug files within the Data Protector GUI:
1. From the Clients context, either right-click a client name and select
Collect debug files, or highlight a client name and click
Actions > Debug Files in the menu bar.
2. From the Internal Database context, either right-click a session and
select Collect debug files, or highlight a session ID and click
Actions > Debug Files in the menu bar.
Either of the above selections opens the Debug File Collector wizard.
The wizard offers different options and filters depending on whether it is started from the Clients
or the Internal Database context. In the latter case a session ID has already been selected, so debug
collection focuses on that particular session ID; when started from the Clients context, it is possible
to specify by session ID, debug ID, or suffix which debug files should be collected.
Based on the selection, the wizard identifies and includes all involved client systems, allows
the selection of additional or non-default debug directories to collect in the next window,
and offers the supported options and output file settings in the last window. After pressing OK, the
debug file collection starts. The output file is created on the Cell Manager system.
A Monitor window shows the overall progress and the performed actions. It also shows the full
omnidlc command string that Data Protector executes in the background.
After a successful debug file collection, the same wizard can be used to clean up the generated
debug files on all involved clients: select Delete Debug Files from the menu to perform this
action.
Because debug files can be very large, the defined target destination for the output file
might not be big enough to hold it, so it is possible to run a preview by selecting the
Calculate debug File space option from the menu.
Message Details
In case of difficulties during the operation, Data Protector provides additional information with an
interactive troubleshooting dialog. You can get a detailed explanation of messages that occur
within a running session by selecting the message ID number.
An example of the error message ID number format is: [x:y]. When displayed during a session,
the message number may be selected to reveal the troubleshooting utility dialog window. The
dialog window consists of four text fields:
Message Text You will see the message as displayed in the session.
Details A check box to view the message description and action.
Description Detailed description of the error message.
Actions Possible action(s) that may be taken to solve or avoid the problem.
All error messages are stored in an ASCII file called trouble.txt that is located in the following
folder:
UNIX : DP_HOME/help
WINDOWS : DP_HOME\help\C
Network Connectivity
[Slide diagram: the DP GUI communicating with Session Manager processes on the Cell Manager, via TCP/IP or shared memory.]
Network Connectivity
Troubleshooting DNS
In order to troubleshoot DNS problems the following command may be used to check for DNS
mismatches:
The Data Protector Inet service needs to be available on all clients in order to perform normal
backup/restore operations. By default the service listens on port 5555 on all operating systems, so
in case of connection problems, check whether it is possible to connect to port 5555 on the
problematic client; also check the reverse connection.
Example:
telnet vmw39201 5555
Trying...
Connected to vmw39201.deu.hp.com.
Escape character is '^]'.
HP Data Protector A.07.00: INET, internal build 100, built on Sunday, July 22, 2012, 7:21 PM
Connection closed by foreign host.
Services
Important: All Data Protector services must be up and running!
Services
Services are critical components on Data Protector systems. They are used for communication
between Data Protector components and for unattended system tasks such as scheduled backups.
Due to maintenance and other system tasks, it can happen that Data Protector services are stopped
or not installed on the Data Protector client you are targeting for backup. First, make sure that
name resolution is not a problem; see “Networking and Communication Problems.”
The following daemons run on the Data Protector UNIX Cell Manager system:
The Data Protector Inet program (/opt/omni/lbin/inet) does not run all the time like the Data
Protector Inet service on Windows. It is started by the system inet daemon (inetd) when an
application tries to connect to the Data Protector port, which, by default, is port 5555.
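On a UNIX client this on-demand startup is wired through the standard inetd configuration files. The entries below are an illustration of the common defaults (service name omni, port 5555, HP-UX style paths), not an authoritative template; verify the exact entries on your own platform:

```
# /etc/services : maps the omni service to the default Inet port
omni            5555/tcp
# /etc/inetd.conf : lets inetd start Data Protector's inet on demand
omni stream tcp nowait root /opt/omni/lbin/inet inet -log /var/opt/omni/log/inet.log
```

If either entry is missing or was changed, connections to port 5555 on that client will fail even though Data Protector itself is installed correctly.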
There are many possible reasons why Data Protector services may fail to start. The majority of
problems are caused by permission issues, such as insufficient privileges, incorrect users, or
expired passwords of configured users.
By default the CRS runs under a user account (administrator); in certain setups, if database
integrations like Oracle or SAP are configured, the Inet service does as well. If the user's password
was changed, make sure that the service's password was changed as well and that the service was
restarted afterwards.
Backup Devices
Backup Devices
When you encounter a device problem in DP, you may jump to the conclusion that it is a DP
problem. The best practice is to eliminate Data Protector as the source by accessing the device with
another utility, such as HP Library & Tape Tools (L&TT) or other non DP utilities, like tar, dd or
cpio. If the utility is unable to access the device, the problem is not with Data Protector. However, if
the utility can access the device, the problem may be with Data Protector, and further investigation
is required.
Can the system access the device? On HP-UX, use the ioscan -fnCtape command to verify
connectivity and device files. On Windows, use the Data Protector command devbra -devices, in
addition to verifying that the device is available in the Windows Device Manager.
Supported Devices (SCSITAB)
Data Protector provides support for devices of many types from HP and other vendors. On the Cell
Manager there is a file named “scsitab” that lists all supported models. HP periodically adds new
devices to this list. Download the most recent scsitab from HP to add support for
the newest devices. Do not modify this file manually.
Media Problems
Is the media bad, does the operation work with other media? Use the Data Protector verify function
to verify existing backups.
Library Devices
The most common error message for an improper configuration is “Cannot access exchanger
control device.” This indicates a problem with the robotic control device file. Verify the robotics
configuration and use the DP uma command to validate it.
UNIX : DP_HOME/lbin/uma
WINDOWS : DP_HOME\bin\uma.exe.
Uma can be started interactively or in batch mode. The only option that needs to be specified is the
pathname of the device file that controls the robotics of the target library:
CLI: uma -ioctl <device name>
Within uma, type help to get the list of available commands. Type stat to get the inventory
of the library with all drives and slots. For testing, you can move a tape from a slot to a drive and
back, and check with stat whether it works. Use the move command for this operation, e.g.
move S20 D1 Move a tape from slot 20 to Drive1
move D1 S20 Move tape back from Drive 1 to slot 20
This chapter deals with typical problems around backup and restore and how to resolve these
problems. Typical issues are
• Unexpected full backups
• Unexpected mount requests
• Backup did not start
• Restore failed
• Licenses available?
Make sure that the required DP licenses are available at start time and not in use by other
sessions running in parallel. To check the licenses, either click Help > About
in the GUI or run omnicc -query from the command line. See the HP Data Protector
Licensing and Installation Guide for licensing details.
crontab -l
# omnitrig entry was automatically added by Data Protector
* * * * * /opt/omni/sbin/omnitrig
• If the line does not appear, restart Data Protector (omnisv stop/start), which will rebuild
the crontab entry.
• Missing permissions
The user who runs the restore needs the appropriate DP user permissions, such as
Start Restore, Restore to other clients, Restore from other users, or Restore as root. If
private objects are configured, the user might not even see the performed backups
because of insufficient permissions; the user right See private objects is required. Make
sure the restore user belongs to a DP user group with the appropriate permissions.
omnihealthcheck
• Run as the check for the notification “Health Check Failed” as part of the daily check
• Executes the commands listed in the HealthCheckConfig file, stores all output in the
HealthCheck.log file and, in case of failure, writes to the Event Log
omnihealthcheck
The omnihealthcheck command reads the HealthCheckConfig file and executes the listed
commands from that configuration file.
The command is executed as the check for the notification Health Check Failed.
This notification is checked together with other notifications within the so-called Daily
Check (see also slide 20-13), but it can also be started directly from the CLI.
HealthCheckConfig File
HealthCheckConfig … Configuration file for omnihealthcheck command
OPTIONS:
Timeout=200
COMMANDS:
# Checks DP Services
omnisv -status
# Checks Pools and Media
omnimm -list_pools
# Checks DP Internal Database
omnidbutil -show_cell_name
Unix/Windows : DP_CONFIG
HealthCheckConfig file
The HealthCheckConfig file may be modified to include additional checks beyond the defaults
provided; operating system commands may also be used.
If you want to use non-Data Protector commands, the full path must be used to run a
command. Commands in the HealthCheckConfig file run under the administrator/root account during
omnihealthcheck command execution. On Windows, commands are executed under the system
account, which is the user associated with the Data Protector CRS service.
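As an illustration, a HealthCheckConfig extended with one OS-level check might look as follows. The added df line is a hypothetical example, not a shipped default; note the full path, as required for non-Data Protector commands:

```
OPTIONS:
Timeout=200
COMMANDS:
# Default Data Protector checks
omnisv -status
omnimm -list_pools
omnidbutil -show_cell_name
# Hypothetical extra check: free space below the DP var directory (full path required)
/usr/bin/df -k /var/opt/omni
```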
Running omnihealthcheck with the default HealthCheckConfig does not impact the
performance of the Cell Manager, and there is almost no impact on running backup/restore sessions.
A Timeout variable in the HealthCheckConfig file determines the time allowed for the execution of
each command in the file. If this time is exceeded, an error code is returned for that command and
the next command is executed. The Timeout variable is defined in seconds (default: 200 seconds).
omnitrig -run_checks
• By default executed every day at DailyCheckTime (a configurable global file parameter)
• Can also be run directly from the CLI with omnitrig -run_checks
• Starts the checks for a list of predefined notifications
• Any triggered notification is sent to the Data Protector Event Log (default)
Checked Notifications
- IDB Space Low
- Not Enough Free Media
- Unexpected Events
- Health Check Failed
- IDB Limits
- IDB Backup Needed
- IDB Reorganization Needed
- License Will Expire
- License Warning
- User Check Failed (if configured)
omnitrig -run_checks
By default, every day at 12:30 PM the command omnitrig -run_checks is executed automatically as
part of the Data Protector Daily Health Check. The start time can be changed by setting the global
file parameter DailyCheckTime. The following notifications are checked: