Install On-Premise

SnappyData runs on UNIX-like systems (for example, Linux, Mac OS). With on-premises installation, SnappyData is installed and operated from your in-house computing infrastructure.

Single-Host Installation

This is the simplest form of deployment and can be used for testing and POCs.

Open the command prompt, go to the location of the downloaded SnappyData file, and run the following commands to extract the archive:

$ tar -xzf snappydata-<version-number>-bin.tar.gz
$ cd snappydata-<version-number>-bin/

Start a basic cluster with one data node, one lead, and one locator:

./sbin/snappy-start-all.sh

For custom configuration and to start more nodes, refer to Configuring the SnappyData Cluster.
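
To verify that all the members started, you can check the cluster status and connect with the SQL shell. The following is a minimal sketch, assuming the snappy-status-all.sh script in the same sbin directory and the default client port of 1527:

# Report the status of the locator, server, and lead members
./sbin/snappy-status-all.sh

# Connect with the interactive snappy SQL shell (default client port 1527 assumed)
./bin/snappy
snappy> connect client 'localhost:1527';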

Multi-Host Installation

For real-life use cases, you require multiple machines on which SnappyData must be deployed. You can start one or more SnappyData nodes on a single machine based on its capacity.

When multiple machines are involved, you can deploy SnappyData on:

Machines With a Shared Path

If all the machines in your cluster can share a path over NFS or a similar protocol, use the following instructions:

Prerequisites

  • Ensure that the /etc/hosts file on each machine correctly maps the host name and IP address of every SnappyData member machine. Example entries are shown after this list.

  • Ensure that SSH is supported and that every machine can be accessed over passwordless SSH. If SSH is not supported, follow the instructions in the Machines Without Passwordless SSH section. A minimal setup sketch also follows this list.
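
Both prerequisites are sketched below. The host names, IP addresses, and user name are hypothetical placeholders; replace them with the actual members of your cluster.

# Hypothetical /etc/hosts entries mapping member host names to IP addresses
192.168.1.11   locator1
192.168.1.12   server1
192.168.1.13   lead1

# Passwordless SSH from the machine that runs the cluster scripts:
# generate a key pair (if one does not already exist) and copy the
# public key to every other member machine
ssh-keygen -t rsa
ssh-copy-id user@server1
ssh-copy-id user@lead1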

To set up the cluster for machines with a shared path:

  1. Copy the downloaded binaries to the shared folder.

  2. Extract the downloaded archive file and go to the SnappyData home directory.

    $ tar -xzf snappydata-<version-number>-bin.tar.gz
    $ cd snappydata-<version-number>-bin/
    
  3. Configure the cluster as described in Configuring the Cluster.

  4. After configuring each of the members in the cluster, run the snappy-start-all.sh script:

    ./sbin/snappy-start-all.sh
    

    This creates a default folder named work and stores the artifacts of each SnappyData member separately in a sub-folder identified by the member's name.

Tip

For optimum performance, configure the -dir property to point to a local directory rather than a network directory. When the -dir property is configured for each member in the cluster, the artifacts of the respective member are created in that folder. An example configuration is sketched below.
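
For reference, here is a minimal sketch of the conf/locators, conf/servers, and conf/leads files for such a layout. The host names and directory paths are hypothetical placeholders; the exact properties you can set are described in Configuring the Cluster.

# conf/locators -- one line per locator member
locator1 -dir=/local/snappydata/work/locator1

# conf/servers -- one line per data server member
server1 -dir=/local/snappydata/work/server1
server2 -dir=/local/snappydata/work/server2

# conf/leads -- one line per lead member
lead1 -dir=/local/snappydata/work/lead1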

Machines Without a Shared Path

If the machines in your cluster do not share a path over NFS or a similar protocol, use the following instructions:

Prerequisites

  • Ensure that the /etc/hosts file on each machine correctly maps the host name and IP address of every SnappyData member machine.

  • Ensure that SSH is supported and that every machine can be accessed over passwordless SSH. If SSH is not supported, follow the instructions in the Machines Without Passwordless SSH section.

To set up the cluster for machines without a shared path:

  1. Copy and extract the downloaded binaries on each machine. Ensure that you maintain the same directory structure on all the machines. For example, if you copy the binaries to /opt/snappydata/ on the first machine, you must copy them to /opt/snappydata/ on the rest of the machines.

  2. Configure the cluster as described in Configuring the Cluster. Designate one node as the controller node, from which you configure the cluster; this is usually the lead node. On that machine, edit the servers, locators, and leads files in the $SNAPPY_HOME/conf/ directory.

  3. Create a working directory on every machine for each SnappyData member that you want to run.
    The member's working directory provides a default location for the logs, persistence, and status files of that member.
    For example, if you want to run both a locator and a server member on the same machine, create a separate directory for each member. A sketch of this layout follows this list.

  4. Run the snappy-start-all.sh script:

    ./sbin/snappy-start-all.sh
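
A minimal sketch of the working directories from step 3, assuming hypothetical paths and a machine that runs both a locator and a server; the -dir property in the conf files can then point each member to its own directory:

# On the machine that runs both a locator and a server
mkdir -p /opt/snappydata/work/locator1
mkdir -p /opt/snappydata/work/server1

# On the machine that runs the lead
mkdir -p /opt/snappydata/work/lead1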
    

Machines Without Passwordless SSH

If the machines in your cluster neither share a common path nor can be accessed over passwordless SSH, use the following instructions to deploy SnappyData:

To set up the cluster for machines without passwordless SSH:

  1. Copy and extract the downloaded binaries on each machine. The binaries can be placed in different directory structures on different machines.

  2. Configure each member separately.

    Note

    The scripts used for starting individual members of the cluster (listed in the next step) do not read the conf files, so there is no need to edit the conf files before starting the members. These scripts start a member with the default configuration properties. To override the defaults, pass the required properties as arguments to the scripts.

  3. Start the members in the cluster one at a time. Start the locator first, then the servers, and finally the leads. Use the following scripts to start the members; a usage sketch follows the list:

    • $SNAPPY_HOME/sbin/snappy-locator.sh

    • $SNAPPY_HOME/sbin/snappy-server.sh

    • $SNAPPY_HOME/sbin/snappy-lead.sh
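
    For example, the following is a minimal sketch of starting one member of each type, run on the respective machine. The host names, ports, and directories are hypothetical placeholders, and the start subcommand with the -dir, -peer-discovery-port, and -locators properties is assumed to be available as described in Configuring the Cluster:

    # On the locator machine
    ./sbin/snappy-locator.sh start -dir=/opt/snappydata/work/locator1 -peer-discovery-port=10334

    # On each server machine
    ./sbin/snappy-server.sh start -dir=/opt/snappydata/work/server1 -locators=locator1:10334

    # On the lead machine
    ./sbin/snappy-lead.sh start -dir=/opt/snappydata/work/lead1 -locators=locator1:10334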