High Availability Deployment (Countly Enterprise)


Gathering analytics data is just as critical as gathering a customer's information and keeping it safe, so we must make sure that all data is replicated and can be recovered in case a failure occurs. The major advantages of replica sets are business continuity through high availability, data safety through data redundancy, and read scalability through load sharing (reads).

With replica sets, MongoDB language drivers know the current primary, and all write operations go to it. If the primary goes down, the drivers automatically find the newly elected primary; this auto failover is what provides high availability. Data is written to the primary first and then replicated to the secondaries. The primary is not fixed: whenever the current primary becomes unavailable, the remaining members elect a new one.
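
For illustration, this is what a replica set connection string for such a deployment could look like; the hostnames, database name, and replica set name are placeholders consistent with the examples later in this guide:

mongodb://mongodb01.yourdomain.com:27017,mongodb02.yourdomain.com:27017/countly?replicaSet=rs0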

Typically you need 3 MongoDB instances in a replica set, each on a different server machine. You can add more secondaries for read scalability, but you only need 3 members for high availability failover. If there are three instances and one goes down, the load on each remaining instance only goes up by 50% (which is the preferred situation).

Network Diagram

Below, you can see the network diagram for this setup. There is a load balancer, which is typically a hardware load balancer deployed internally inside the enterprise; therefore this document won’t go into detail on it. This load balancer simply distributes traffic between the two Node.js servers.

Behind the load balancer, there are two Countly instances, which act as application servers. They accept connections from the Countly SDKs and then store data in the MongoDB replica set.

Server Specifications

Server specifications stated below are the minimum recommended specifications for Countly Enterprise, and might need to be adjusted based on your traffic.

Node 1 and Node 2 servers are Countly servers with the following configuration:

  • 2 CPU cores
  • 4 GB RAM
  • 20 GB root (boot) disk
  • Ports 80 and/or 443 (HTTP and HTTPS), plus port 25 for mail, need to be open to the load balancer (in/out)

The MongoDB replica set members (primary and secondary) have the following configuration:

  • 1+ CPU core(s)
  • 8 GB RAM
  • 20 GB root (boot) disk
  • 100 GB SSD disk attached (mounted) to /data; the /data directory should be owned by the mongodb user: chown -R mongodb:mongodb /data (see the sketch after this list)
  • Port 27017 needs to be open to all instances (in/out)
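
A minimal sketch of preparing the data disk, assuming the attached SSD shows up as /dev/sdb (adjust the device name, and add the mount to /etc/fstab so it survives reboots):

sudo mkfs.xfs /dev/sdb                   # format the attached SSD (XFS is recommended for MongoDB)
sudo mkdir -p /data
sudo mount /dev/sdb /data
sudo chown -R mongodb:mongodb /data      # MongoDB must own its data directory

If you keep the data under /data, also point storage.dbPath in /etc/mongod.conf at it.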

A MongoDB arbiter server is a simple server with a configuration as follows:

  • 1 CPU core
  • 2 GB RAM
  • 20 GB root (boot) disk
  • Port 27017 needs to be open to all instances (in/out)

MongoDB Replica Set Installation

As a prerequisite, you need to install MongoDB 4.4.x on all three MongoDB instances (primary, secondary, and arbiter). For installation on RedHat, follow this guide, and for Ubuntu, follow this guide; or you can run our MongoDB installation script, which also sets all required system configurations:

curl -L https://c.ly/install/mongodb | bash

You’ll need an operating system user with sudo privileges to perform the installation, the configurations stated below, and the mongod service restarts later on.
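
After installation, you can confirm that all three instances run the same 4.4.x release before continuing:

mongod --version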

1. Bind IP

By default, the MongoDB configuration sets the bindIp directive to localhost, meaning that the mongod service will not accept any connections other than those initiated from localhost. Edit the /etc/mongod.conf file on all three MongoDB instances and comment out or remove the bindIp directive.

net:
   port: 27017
   # bindIp: 127.0.0.1

Please follow our Securing MongoDB Guide to secure your MongoDB instances. By removing bindIp, your MongoDB instances will accept connections from any address, so you need to have the necessary firewall configuration in place to prevent connections from the Internet.
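
As a minimal sketch, assuming ufw is in use and that 10.10.200.0/24 (the internal subnet used in the hosts example in step 3) covers your Countly and MongoDB instances, you could restrict MongoDB traffic like this:

sudo ufw allow OpenSSH                                   # keep SSH access before enabling the firewall
sudo ufw allow from 10.10.200.0/24 to any port 27017     # replica set members and Countly app servers only
sudo ufw enable                                          # default policy drops all other incoming traffic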

2. Enable and Configure Replication

Edit the /etc/mongod.conf file on all three MongoDB instances and add the replSetName directive below under the replication section; our replica set name will be rs0 (you can define any name you want here).

...
replication:
  replSetName: rs0
...

Then, restart MongoDB on all 3 instances.

sudo systemctl restart mongod

3. Edit Hostnames

Before initiating the replica set, make sure that all three MongoDB instances can reach each other on port 27017 and that the /etc/hosts file on each of the three machines has entries such as the one below.

10.10.200.1    mongodb01.yourdomain.com mongodb01

This is to ensure that when the replica set is initiated, each server will be able to access the others in the replica set.
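
For example, with placeholder addresses (only 10.10.200.1 comes from the line above; the other two are assumptions), each machine's /etc/hosts would list all three members:

10.10.200.1    mongodb01.yourdomain.com mongodb01
10.10.200.2    mongodb02.yourdomain.com mongodb02
10.10.200.3    mongodb03.yourdomain.com mongodb03

You can then verify connectivity from each instance with, for example, mongo --host mongodb02.yourdomain.com --eval "db.runCommand({ ping: 1 })".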

4. Initiate the Replica Set

In the terminal of the primary or secondary MongoDB instance, connect to MongoDB (type in your shell):

mongo

And execute the below command to initiate the replica set.

rs.initiate({
   _id: "rs0",
   members: [
      { _id: 0, host: "mongodb01.yourdomain.com:27017" },
      { _id: 1, host: "mongodb02.yourdomain.com:27017" },
      { _id: 2, host: "mongodb03.yourdomain.com:27017", arbiterOnly: true }
   ]
})

Note that in the members array, the server that is dedicated to be the arbiter needs to have the arbiterOnly: true field present.

After initiating the replica set, if you open the mongo shell (mongo command) on all three instances, each one will be marked with its role: PRIMARY, SECONDARY, or ARBITER.
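
You can also verify the roles from any member's mongo shell with a quick rs.status() check:

rs.status().members.forEach(function (m) {
   print(m.name + " -> " + m.stateStr)   // expect one PRIMARY, one SECONDARY and one ARBITER
})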

Countly Enterprise Installation

Standalone Countly Installation

The Countly installation package contains all packages for a Countly server, including MongoDB, so you need to stop and disable the MongoDB service after installing Countly if you are going to use an external database server as a replica set.

sudo systemctl stop mongod
sudo systemctl disable mongod

On Node 1 and Node 2, you'll need to install two Countly servers as instructed here and then make MongoDB configurations as stated here.
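
As an illustrative sketch only (the configuration guide linked above is authoritative), Countly's MongoDB settings live in countly/api/config.js and countly/frontend/express/config.js; for a replica set they should point at the data-bearing members rather than localhost. The connection-string form below is an assumption:

// countly/api/config.js (apply the same change in countly/frontend/express/config.js)
mongodb: "mongodb://mongodb01.yourdomain.com:27017,mongodb02.yourdomain.com:27017/countly?replicaSet=rs0",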

Getting Health Checks

If you need a health check for Countly, you can use the https://URL/ping endpoint. It will return "success" if the Countly server is running at that URL.
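
For example, from the load balancer or a monitoring host (the hostname is a placeholder):

curl -s https://countly.yourdomain.com/ping    # prints "success" when the server is up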

If you are using an offline, full Countly Enterprise installation package, follow the instructions below (a command sketch follows the list):

  • Upload the installation package to Node 1 and Node 2
  • Extract the archive
  • Run bash countly/bin/offline_installer.sh
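
The corresponding shell commands would look roughly as follows; the archive filename and hostname are placeholders for whatever package and servers you actually use:

scp countly-enterprise-offline.tar.gz countly@node1.yourdomain.com:    # repeat for Node 2
ssh countly@node1.yourdomain.com
tar -xzf countly-enterprise-offline.tar.gz
bash countly/bin/offline_installer.sh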

In both online and offline installation modes, you need to run the installation script as an operating system user that has sudo privileges.
