N+1 DS-System
Summary
This section contains details about configuring an N+1 DS-System. It is reproduced from the corresponding section in the DS-System Installation Guides (Windows / Linux).
1. The N+1 formation of the DS-System (N+1 DS-System) is designed to improve scalability and increase backup availability: the N+1 DS-System can survive the failure of some of its nodes without interrupting the backup service.
2. Configuring an N+1 DS-System means that several DS-Systems will work together to provide backup and restore services to the same DS-Clients. Any of those DS-Systems is able to provide any DS-Client with the same service (backup, restore, delete, synchronization, admin, etc.).
3. To ensure that each individual DS-System's activities are synchronized (i.e. no two activities will conflict), the N+1 DS-System will select one DS-System to act as a synchronization point between all the DS-Systems (called the "DS-Director"). Any DS-System that is not the DS-Director is called a "Node" (or Leaf).
4. To make sure that only one DS-Director is active at any time, a DS-Director will only be elected if it is connected to at least "n/2" Nodes (i.e. a majority of the "n" DS-Systems, counting itself). This means an N+1 formation can exist as long as at least "n/2+1" of the DS-System instances in the N+1 are running and can connect to each other.
- If fewer than "n/2+1" DS-Systems are running, the N+1 switches to stand-by mode (no DS-Director; none of the DS-Systems accept DS-Operator or DS-Client connections) until enough DS-Systems start.
This gives an upper bound on how many crashes an N+1 configuration can survive:
- An N+1 made up of 3 DS-Systems can survive the crash of any 1 DS-System.
- An N+1 made up of 17 DS-Systems can survive the crash of any 8 DS-Systems, etc.
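The bound above is simple integer arithmetic. As an illustrative sketch (the variable names are mine, not part of the product), the quorum and survivable-crash count for a formation of n DS-Systems can be computed as:

```shell
# Majority quorum for an N+1 of n DS-Systems (illustrative arithmetic only).
n=17
quorum=$(( n / 2 + 1 ))        # minimum DS-Systems that must stay running: 9
survivable=$(( n - quorum ))   # crashes the formation can survive: 8
echo "n=$n quorum=$quorum survivable=$survivable"
```

For n=3 the same arithmetic gives a quorum of 2, so the formation survives 1 crash, matching the first example above.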
5. When "n/2+1" DS-Systems are active, they elect a DS-Director and the N+1 configuration forms. Once the DS-Director has been elected, the N+1 DS-System starts: all the DS-Systems (including the DS-Director) accept incoming DS-Client connections, and the DS-Director also accepts incoming DS-Operator connections.
6. You must synchronize the time on each DS-System that makes up the N+1. You can use UTC (Coordinated Universal Time) via an NTP (Network Time Protocol) server, or any other third-party utility that can keep the times synchronized on all DS-Systems in the N+1.
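As one illustration of this step (assuming chrony as the time daemon on Linux nodes; other NTP clients work equally well), the following commands enable time synchronization and verify the clock offset on a node:

```shell
# Illustrative only: keep all N+1 nodes on a common clock (chrony assumed here).
sudo systemctl enable --now chronyd   # start the chrony time daemon at boot
chronyc tracking                      # show the current offset from the NTP source
timedatectl status                    # confirm "System clock synchronized: yes"
```

Run the equivalent check on every node; the N+1 only behaves predictably when all members agree on the time.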
Hardware and Software Setup (N+1 DS-System)
1. All DS-Systems that form the N+1 must have access to the same DS-Client data (in order for each DS-System node to be able to provide the same services to any of the DS-Clients). This means that the DS-System nodes must have access to the same backup root.
- This can be achieved by using a shared SAN (read/write access for all nodes).
- The actual implementation is up to you (the Service Provider), but you must perform your own integration and testing of the environment to ensure its stability.
2. All DS-Systems that form the N+1 must have access to the same database (containing the DS-Client, Library and other data). This means that a central database must be configured for the DS-Systems to access. It is recommended that this database be clustered.
- Windows Platform: On the database computer, you must add the Windows Component "Simple TCP/IP Services":
  - Windows 2003: enable it from Control Panel > Add/Remove Programs > Add/Remove Windows Components > Networking Services.
  - Windows 2008: enable it from Server Manager > Features: Add Features.
  - Windows 2012: enable it from Server Manager > Add roles and features: click 'Next' until the 'Features' wizard screen.
- Linux Platform: On the database computer, make sure the "echo" service is running. On each DS-System node, you must have the PostgreSQL Client Utility (pg_dump) installed (either run the PostgreSQL installation, or copy the pg_dump binary to the node). This enables the DS-System database dump feature.
3. Since each of the DS-System nodes runs on its own machine, each node will have its own configuration file specifying database connectivity, identification inside the N+1, and registration information. In addition, there will be a configuration file for the entire N+1.
4. Install and configure the database server that the N+1 DS-System will use for its dssystem database: N+1 DS-Systems use the same type of database as the standalone DS-System. For the supported database and OS combinations, refer to the file "Installation and Backup&Restore Support.pdf".
Configuring a Microsoft SQL Server database (Windows DS-System):
- Install the database on a separate machine (the database can be clustered).
- Make sure the DS-System service account is able to access this remote database. This means you must create the same username and password on the remote database computer (or you can use a Domain User for the DS-System service account).
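As an illustration of the matching-account approach (Windows cmd; the account name "dssystem" and the password are placeholders, not Asigra-mandated values), the same local user can be created on the remote database computer like this:

```
:: Illustrative only: create a local account on the database computer that
:: matches the DS-System service account, so it can authenticate remotely.
net user dssystem StrongP@ssw0rd /add
net localgroup Administrators dssystem /add
```

Using a Domain User for the DS-System service account avoids maintaining duplicate local accounts.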
Configuring a PostgreSQL database (Linux DS-System)
- Install the PostgreSQL database on a separate machine (the database can be clustered).
- Open the file "<pg_data_path>/postgresql.conf" and edit the following lines:
- In <pg_data_path>/pg_hba.conf, replace all un-commented lines (ones that do not start with #) with the following lines:
- Restart the PostgreSQL service as root user with the following command:
- After configuring pg_hba.conf, set a password for the database user "postgres" (The DS-System must use this username / password to connect to the PostgreSQL server):
- To verify the password (to test if PostgreSQL accepts a DS-System connection), switch to any of the DS-System nodes and type:
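The exact configuration lines are omitted above; purely as an illustration (the subnet, paths, service name, and password below are placeholders specific to this sketch, not Asigra-mandated values), a typical PostgreSQL remote-access setup for the steps above looks like:

```shell
# Illustrative PostgreSQL remote-access setup; adjust values to your site.
# 1. In <pg_data_path>/postgresql.conf, allow remote connections:
#      listen_addresses = '*'
# 2. In <pg_data_path>/pg_hba.conf, allow the DS-System nodes (example subnet):
#      host  all  all  192.168.1.0/24  md5
# 3. Restart the PostgreSQL service as root:
service postgresql restart
# 4. Set a password for the "postgres" database user:
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'yourpassword';"
# 5. From a DS-System node, verify that the server accepts remote connections:
psql -h <db_host> -U postgres -c "SELECT version();"
```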
5. Configure the shared storage.
The following example uses a UNC path (but the actual implementation is up to you):
The following example uses NFS (but the actual implementation is up to you):
- Create a NFS share allowing read/write access for the DS-System machines.
- Mount the NFS share on the DS-System machines (for example in /mnt/bak).
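The NFS steps above can be sketched as follows (illustrative only: the export path, subnet, and server name are placeholders, and the actual implementation remains up to you):

```shell
# On the NFS server: export the shared backup root read/write to the nodes.
echo "/exports/backup_root 192.168.1.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra                                   # re-read /etc/exports

# On each DS-System node: mount the share at the same path everywhere.
mkdir -p /mnt/bak
mount -t nfs nfs-server:/exports/backup_root /mnt/bak
```

Mounting the share at an identical path on every node keeps the backup root consistent across the N+1.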
6. Install the DS-System on each of the destination node machines:
- Run the installations one at a time (do not run them in parallel).
- The installation on the first node will create the DS-System database, while each of the following installations must re-use that same database.
- On each node, edit the DS-System configuration file (dssys.cfg).
- Add the following line to dssys.cfg:
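The line itself is not reproduced above; per the FAQ later in this section, the entry that identifies a node inside the N+1 is the "Cluster ID" line. A minimal illustrative dssys.cfg fragment (the value 1 is a placeholder):

```
Cluster ID : 1
```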
8. Configure the N+1 configuration file ("<backup_root>/cluster/config").
- This is a text file named "config". It must be located in the directory "<backup_root>/cluster" and formatted as follows:
- This file will be used by each of the DS-Systems in the N+1 to know how to reach the other DS-Systems that form the N+1.
- Sample configuration file:
Connectivity
1. The DS-Operator will only be able to connect to the DS-Director (once the N+1 is active).
2. A DS-Client can connect to any of the DS-Systems (including the DS-Director) for any activity. The DS-Client can accept multiple DS-System IP addresses to connect to (as is the case for the N+1) and will perform load-balancing of connections by itself. Another option is to use a hardware load-balancer and a single IP address.
Frequently Asked Questions (FAQ) about N+1 DS-Systems
- If a Node crashes (for example, because of hardware failure), all activities on that Node are interrupted. However, if "n/2+1" DS-Systems are still active, the rest of the N+1 continues to function and the activities running on the other DS-Systems continue to run. In addition, if the interrupted activities were scheduled, they will retry the connection after 5 minutes and will succeed, since they will connect to another DS-System in the N+1 formation.
- If the DS-Director fails, the DS-Systems lose their synchronization point; to avoid conflicts, they stop all activities and try to elect another DS-Director. Once a new DS-Director is elected, the N+1 starts again.
- Because of the importance of the DS-Director, any Node that loses connection with the DS-Director will stop activities and move to stand-by mode.
- The practical minimum N+1 is 3 DS-Systems, since it can survive the crash of 1 DS-System. An N+1 of 2 DS-Systems is theoretically possible, but it makes no sense: if 1 system crashes, "n/2+1" DS-Systems are no longer running and the N+1 moves to stand-by mode.
- There is no actual maximum. It can be anywhere from 3 to 50 DS-Systems or more. However, keep in mind that each Node must have a DS-Director connection (see the star-shaped connectivity in the "N+1 Status" dialog). This means that the more DS-Systems in the cluster, the more load the DS-Director must handle and the more communication goes to/from the DS-Director.
- No. A new N+1 license is required, obtained from the DS-License Server.
- On the DS-License Server, you only need to add the new node's IP address to the existing N+1 DS-System.
- You can add the node to the N+1 configuration (via the DS-Operator GUI). See "Add an N+1 Node".
- You must manually edit the new node's local "dssys.cfg" file to add the "Cluster ID : <number>" line, then you can start the node and it will automatically join the N+1.
- Stop the DS-System service / daemon on the node you want to remove.
- Update the license to the lower number of nodes. This is done from the DS-License Server.
- Remove the node from the N+1 Cluster Config file using DS-Operator > N+1 Menu > Status > List Tab: Delete Node. See "Delete an N+1 Node".
The information provided in this document is provided "AS IS", without warranty of any kind. ASIGRA Inc. (ASIGRA) disclaims all warranties, either express or implied. In no event shall ASIGRA or its business partners be liable for any damages whatsoever, including direct, indirect, incidental, consequential, loss of business profits or special damages, even if ASIGRA or its business partners have been advised of the possibility of such damages. © Asigra Inc. All Rights Reserved. Confidential.