OST Admin Guide

OpenDedup OST Connector 2.2

The OpenDedup OST connector provides integration between NetBackup and OpenDedup. It supports the following OST features:

  • Write/Backup to OpenDedup volumes
  • Read/Restore from OpenDedup volumes
  • Accelerator support for backups to OpenDedup volumes

OpenDedup provides an open-source filesystem, SDFS, that includes inline deduplication to local or cloud storage targets. Backup-specific features of OpenDedup are as follows:

  • In-Line Deduplication to a Cloud Storage Backend – SDFS can send all of its data to AWS, Azure, Google, or any S3-compliant backend.
  • Performance – Compressed, multi-threaded upload and download of data to the cloud.
  • Local Cache – SDFS caches the most recently accessed data locally. This is configurable but set to 10 GB by default.
  • Security – All data can be encrypted using AES-CBC 256 when sent to the cloud.
  • Throttling – Upload and download speeds can be throttled.
  • Cloud Recovery/Replication – All local metadata is replicated to the cloud and can be recovered back to a local volume.
  • Glacier Support – Supports S3 lifecycle policies and retrieving data from Glacier.
  • AWS Region Support – Supports all AWS regions.

BackupExec Requirements:

Windows 2012 R2 or Windows 2016 (64-bit)

Certified SDFS Version: SDFS-3.6.0.13-Setup

Certified OST Version: 2.2.7

BackupExec 2016+ – This has been tested and developed for BackupExec 2016 FP2.

CPU : 2+ CPU System

Memory : 3GB of RAM + 400MB of RAM per TB of unique data stored

Disk :

2GB of additional local storage per TB of unique data.

2.5GB of additional storage per TB of logical storage that is used for filesystem metadata

100GB for local cache of cloud data (Configurable)

Quickstart Guide:

https://www.veritas.com/support/en_US/article.100039194.html

NetBackup Requirements :

CentOS/RHEL 6.7+ – This has been tested on CentOS 7.0 and CentOS 6.7.

NetBackup 7.6.1+ – This has been tested and developed for NBU 7.7 and has been tested back to 7.6.1.

 

CPU : 2+ CPU System

Memory : 1GB of RAM + 400MB of RAM per TB of unique data stored

Disk :

2GB of additional local storage per TB of unique data.

2.5GB of additional storage per TB of logical storage that is used for filesystem metadata

10GB for local cache of cloud data (Configurable)
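As a worked example of the sizing figures above (the 10 TB unique / 50 TB logical workload numbers are assumptions chosen for illustration, not recommendations):

```shell
# Hypothetical sizing for a media server expected to hold 10 TB of unique
# (post-dedupe) data and 50 TB of logical (pre-dedupe) data.
UNIQUE_TB=10
LOGICAL_TB=50

# RAM: 1 GB base + 400 MB per TB of unique data
RAM_GB=$((1 + UNIQUE_TB * 400 / 1000))

# Disk: 2 GB per TB of unique data, plus 2.5 GB per TB of logical data
# for filesystem metadata, plus the default 10 GB cloud cache
DISK_GB=$((UNIQUE_TB * 2 + LOGICAL_TB * 25 / 10 + 10))

echo "RAM needed:  ${RAM_GB} GB"   # 5 GB
echo "Disk needed: ${DISK_GB} GB"  # 155 GB
```

Adjust the two workload variables to your own environment before provisioning.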

NetBackup Quick Start Instructions

Step 1 – Download and install the ost package and sdfs:

On a standard media server, run:

wget http://www.opendedup.org/downloads/ost-2.2.9.tar.gz
tar -xzvf ost-2.2.9.tar.gz
cd dist
./media-install.sh
/etc/init.d/netbackup stop
/etc/init.d/netbackup start

For the appliance:

NOTE: The OpenDedupe dedupe engine cannot run on the appliance; only the OST connector can. To use OpenDedupe with an appliance, run the dedupe engine on a separate server.


Appliance Step 1 – Install opendedupe on a Separate Server:

Follow the Installation (not initialization) instructions at https://opendedup.org/odd/linux-quickstart-guide/

Appliance Step 2 – Install the OST Connector on the appliance:

Download http://www.opendedup.org/downloads/ost_appliance2.0_OST_redhat_64.tar.gz

Install the OpenDedupe OST Appliance package by following the OST plugin installation instructions in the Appliance documentation at https://www.veritas.com/content/support/en_US/58991/appliance-docs/appliance-docs-30.html

Create an Appliance user, if one does not exist.

Log in to the Appliance using the user credentials created in the previous step.

Modify the /usr/openv/ostconf/OpenDedupe/ostconfig.xml file.

Step 2 – Create an sdfs volume

Local Storage

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=256GB --backup-volume --sdfscli-disable-ssl

For an appliance setup, run this on a separate OpenDedupe server:

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=256GB --backup-volume --sdfscli-listen-addr 0.0.0.0 --sdfscli-listen-port 6442 --sdfscli-require-auth --sdfscli-disable-ssl

AWS Storage

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --aws-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>  --backup-volume  --sdfscli-disable-ssl

For an appliance setup, run this on a separate OpenDedupe server:

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --aws-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>  --backup-volume --sdfscli-listen-addr 0.0.0.0 --sdfscli-listen-port 6442 --sdfscli-require-auth --sdfscli-disable-ssl

Generic S3 Storage

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --aws-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name> --cloud-url <url> --backup-volume  --sdfscli-disable-ssl --cloud-disable-test

For an appliance setup, run this on a separate OpenDedupe server:

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --aws-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name> --simple-s3 --cloud-url <url>  --backup-volume --sdfscli-listen-addr 0.0.0.0 --sdfscli-listen-port 6442 --sdfscli-require-auth --sdfscli-disable-ssl --cloud-disable-test

Azure Storage

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --azure-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name> --backup-volume --sdfscli-disable-ssl

For an appliance setup, run this on a separate OpenDedupe server:

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --azure-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>  --backup-volume --sdfscli-listen-addr 0.0.0.0 --sdfscli-listen-port 6442 --sdfscli-require-auth --sdfscli-disable-ssl

Google Storage

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --google-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>  --backup-volume  --sdfscli-disable-ssl

For an appliance setup, run this on a separate OpenDedupe server:

sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --google-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>  --backup-volume --sdfscli-listen-addr 0.0.0.0 --sdfscli-listen-port 6442 --sdfscli-require-auth --sdfscli-disable-ssl

Create an OST Disk Pool and STU in the NetBackup Console.

 

  1. Select “Configure Disk Storage Servers” from the Wizard page.
  2. Select the “OpenStorage” option under “Select the type of disk storage that you want to configure.”
  3. Add the following options to the Storage Server Details:
    1. Storage server type: OpenDedupe
    2. Storage server name: the name in the <NAME></NAME> tag in /etc/sdfs/ostconfig.xml. This is “local” by default.
    3. Username: anything can go in here. It is not used.
    4. Password/Confirm Password: anything can go in here as well.
  4. Finish the storage configuration wizard and make sure “Create a disk pool using the storage server that you just created” is selected.
  5. Select the storage pool that was just created.
  6. Add a disk pool name.
  7. Finish the wizard and select “Create a storage unit using the disk pool that you just created”.
  8. In the storage unit creation page, select “Only use the selected media servers” and select the media server that the storage was created on. For maximum concurrent jobs, select “8”.

 

Setting up multiple media servers in the same domain

 

To set up the OST connector on multiple media servers in the same domain, additional steps must be taken on each media server before adding the storage pools in NetBackup.

Step 1: Follow “Setting up the OST Connector” instructions outlined in the document on each media server that will use the OST Connector.

Step 2: Edit /etc/sdfs/ostconfig.xml and change the <NAME> tag to something unique in the NetBackup domain, such as the host name with an incremented number, e.g. <NAME>hostname-0</NAME>

Step 3: Follow the “Create an OST Disk Pool and STU in the NetBackup Console” instructions and use the name in the <NAME> tag as the Storage Server name.

 

Setting up multiple SDFS volumes on a media server

 

The OST connector supports multiple SDFS volumes on the same media server, but additional steps are required to support this configuration.

Step 1: Follow “Setting up the OST Connector” instructions outlined in the document on each media server that will use the OST Connector.

Step 2: Run the mkfs.sdfs command for each additional SDFS volume, e.g.:

sudo mkfs.sdfs --volume-name=pool1 --volume-capacity=1TB --aws-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>

 

Step 3: Create a mount point for each additional volume under /opendedupe/volumes/, e.g.:

mkdir /opendedupe/volumes/pool1

mount -t sdfs pool1 /opendedupe/volumes/pool1

 

Step 4: Mount the new volume and get the control port number of the additional volume. The port number is appended to the filesystem column when running df -h. For example, pool0 might have a TCP control port of 6442 and pool1 a control port of 6443.
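The exact df output layout varies between systems; as a sketch, assuming the port is appended to the first (filesystem) column after a colon, the control port could be extracted like this (the sample df line is fabricated for illustration):

```shell
# Fabricated example of a df -h line for an SDFS mount, with the control
# port appended to the filesystem column after a colon.
sample='pool1:6443  1.0T  12G  1.0T  2% /opendedupe/volumes/pool1'

# Take the filesystem column, then keep everything after the last ':'
port=$(printf '%s\n' "$sample" | awk '{print $1}' | awk -F: '{print $NF}')
echo "$port"   # 6443
```

On a live media server you would replace the sample line with the real output of `df -h /opendedupe/volumes/pool1`.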

Step 5: Edit /etc/sdfs/ostconfig.xml and add a new <CONNECTION> tag inside the <CONNECTIONS> tag for the new volume. In the new <CONNECTION> tag, add the port identified in Step 4 to the <URL> tag (https://localhost:6443/), add a unique name in the <NAME> tag, and specify the new volume name in the <LSU_NAME> tag (pool1). Below is a complete example of an ostconfig.xml with two volumes.

 

<!-- This is the config file for the OST connector for OpenDedup and NetBackup -->

<CONNECTIONS>

<CONNECTION>

<!-- NAME is the local server name that you will reference within NetBackup -->

<NAME>

local

</NAME>

<LSU_NAME>

pool0

</LSU_NAME>

<URL>

https://localhost:6442/

</URL>

<!-- PASSWD – The password of the volume, if one is required for this sdfs volume -->

<PASSWD>passwd</PASSWD>

<!--

<SERVER_SHARE_PATH>

A_SUBDIRECTORY_UNDER_THE_MOUNT_PATH

</SERVER_SHARE_PATH>

-->

</CONNECTION>

<!-- Below is the new volume -->

<CONNECTION>

<!-- NAME is the local server name that you will reference within NetBackup -->

<NAME>

hostname0

</NAME>

<LSU_NAME>

pool1

</LSU_NAME>

<URL>

https://localhost:6443/

</URL>

<!-- PASSWD – The password of the volume, if one is required for this sdfs volume -->

<PASSWD>passwd</PASSWD>

<!--

<SERVER_SHARE_PATH>

A_SUBDIRECTORY_UNDER_THE_MOUNT_PATH

</SERVER_SHARE_PATH>

-->

</CONNECTION>

</CONNECTIONS>
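After editing ostconfig.xml, a quick well-formedness check helps catch a missing closing tag before restarting services. This sketch writes a sample <CONNECTION> entry (values taken from this guide) to a temporary file and verifies that each tag is balanced; the temp-file path and the tag list are illustrative:

```shell
# Write a sample <CONNECTION> entry (values from this guide) to a temp file.
cfg=/tmp/ostconfig-connection-sample.xml
cat > "$cfg" <<'EOF'
<CONNECTION>
<NAME>hostname0</NAME>
<LSU_NAME>pool1</LSU_NAME>
<URL>https://localhost:6443/</URL>
<PASSWD>passwd</PASSWD>
</CONNECTION>
EOF

# Verify every opening tag has a matching closing tag.
ok=1
for t in CONNECTION NAME LSU_NAME URL PASSWD; do
  opens=$(grep -c "<$t>" "$cfg")
  closes=$(grep -c "</$t>" "$cfg")
  [ "$opens" -eq "$closes" ] || { echo "tag mismatch: $t"; ok=0; }
done
[ "$ok" -eq 1 ] && echo "ostconfig fragment looks balanced"
```

Point `cfg` at /etc/sdfs/ostconfig.xml to run the same check against the live configuration.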

 

Detailed OST XML Configuration setup

 

The OST driver is configured from the XML file located at /etc/sdfs/ostconfig.xml. It contains all of the configuration parameters required to allow SDFS to communicate with NetBackup over OST.

 

Below is a typical ostconfig.xml:

 

<!-- This is the config file for the OST connector for OpenDedup and NetBackup -->

<CONNECTIONS>

<CONNECTION>

<!-- NAME is the local server name that you will reference within NetBackup -->

<NAME>

local

</NAME>

<!-- LSU_NAME is the name of the volume to be mounted and maps to the path under

/opendedupe/volumes/ where the volume will be mounted. As an example, if the volume

name is pool0, the LSU_NAME would be pool0 and the volume would be mounted at

/opendedupe/volumes/pool0 -->

<LSU_NAME>

pool0

</LSU_NAME>

<!-- URL – this is the url that the plugin will use to communicate with the sdfs volume.

If the volume is local and the only one, the url should be set to https://localhost:6442.

Otherwise, do a df -h on the host containing the volume; the mount point for the volume

will contain the port used to connect to the volume. Use this port plus the host name

as the url: https://<server-name>:<volume-tcp-port>/ -->

<URL>

https://localhost:6442/

</URL>

<!-- PASSWD – The password of the volume, if one is required for this sdfs volume -->

<PASSWD>passwd</PASSWD>

<!-- SERVER_SHARE_PATH – This is the subdirectory under the mount path that corresponds

to the LSU_NAME. This is required if you are mounting the volume remotely via nfs and

exporting a subdirectory or are using a subdirectory under a local sdfs mount as the volume path.

An NFS example is if the sdfs volume is mounted on the remote server to /media/pool0 and the folder

“nfs” is being exported under this mount (/media/pool0/nfs). The export would be mounted locally

to /opendedupe/volumes/pool0 and the SERVER_SHARE_PATH would be set to “nfs”.

A local example would be that you mount an sdfs volume to /opendedupe/volumes/ and create a

subdirectory, where the backups will be stored, under the mount called “pool0”. In this example,

the SERVER_SHARE_PATH would be set to the subdirectory name of “pool0”.

-->

<!--

<SERVER_SHARE_PATH>

A_SUBDIRECTORY_UNDER_THE_MOUNT_PATH

</SERVER_SHARE_PATH>

-->

</CONNECTION>

</CONNECTIONS>

 

Troubleshooting (Logs)

 

For help in troubleshooting issues associated with SDFS or the OST plugin, you can email the SDFS forum. The forum can be found at:

http://groups.google.com/group/dedupfilesystem-sdfs-user-discuss?pli=1

SDFS Logs:

SDFS creates logs under /var/log/sdfs/<volume-name>-volume-cfg.xml.log. Errors can be identified in this log file.

OST Plugin Logs:

The OpenDedupe OST plugin log can be found at /tmp/logs/opendedup.log.

NetBackup Logs:

Pertinent OST-related errors and logging are captured in the bptm log. NetBackup logging for bptm can be enabled by creating the bptm logging directory:

mkdir /usr/openv/netbackup/logs/bptm
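When reporting an issue, it helps to gather recent lines from all three log locations named above into one file. A sketch, using the paths from this guide (the pool0 volume name and the output path are assumptions; missing logs are simply skipped):

```shell
# Collect recent lines from each log location named in this guide.
# Missing files are skipped rather than treated as errors.
out=/tmp/opendedup-support-logs.txt
: > "$out"
for f in /var/log/sdfs/pool0-volume-cfg.xml.log \
         /tmp/logs/opendedup.log \
         /usr/openv/netbackup/logs/bptm/*.log; do
  [ -f "$f" ] && { echo "==== $f ===="; tail -n 200 "$f"; } >> "$out"
done
echo "wrote $out"
```

Attach the resulting file when posting to the SDFS forum.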

 

Certified Regions

AWS S3

us-east-1
us-east-2
us-west-1
us-west-2
ca-central-1
eu-central-1
eu-west-1
eu-west-2
eu-west-3
ap-northeast-1
ap-northeast-2
ap-northeast-3
ap-southeast-1
ap-southeast-2
ap-south-1
sa-east-1

Azure

East US
East US 2
Central US
North Central US
South Central US
West Central US
West US
West US 2
Canada East
Canada Central
Brazil South
North Europe
West Europe
France Central
France South
UK West
UK South
Germany Central
Germany Northeast
Southeast Asia
East Asia
Australia East
Australia Southeast
Australia Central
Australia Central 2
Central India
West India
South India
Japan East
Japan West
Korea Central
Korea South

Google

Region name – Location

North America
northamerica-northeast1 – Montréal
us-central1 – Iowa
us-east1 – South Carolina
us-east4 – Northern Virginia
us-west1 – Oregon

South America
southamerica-east1 – São Paulo

Europe
europe-west1 – Belgium
europe-west2 – London
europe-west3 – Frankfurt
europe-west4 – Netherlands

Asia
asia-east1 – Taiwan
asia-northeast1 – Tokyo
asia-south1 – Mumbai
asia-southeast1 – Singapore

Australia
australia-southeast1 – Sydney