
SaltStack High Availability Considerations

SaltStack High-Availability Architecture
SaltStack Enterprise 2.x/3.x · SaltStack Open Source 2014.1.x/2014.7.x

About this guide

SaltStack is a powerful, flexible, and widely used infrastructure automation system.  This flexibility, combined with an open-source code base, allows SaltStack (aka “Salt”) to be used for a wide variety of applications and implementation models.  The intention of this guide is to identify the core considerations for achieving a highly available (HA) solution with SaltStack.  It is not the intent of this document to identify every possible HA solution or architecture, nor to be entirely prescriptive about how high availability should be achieved within each consideration.  We recommend working closely with SaltStack Professional Services to design a high-availability approach that is suitable for your enterprise.

SaltStack Fundamentals

SaltStack includes many features that are designed to support a highly available implementation.  That being said, as of this writing there is no single configuration point for achieving high availability, nor are the existing features completely comprehensive.  For some elements you will have to implement a level of HA support external to SaltStack.  Before we go into all of that, it is important to get some fundamental concepts and definitions out of the way.

Salt-Master

A SaltStack server, or central control point, is known as the salt-master.  The salt-master contains all of the central configuration for Salt.  It publishes Salt commands to the salt-minions, receives the corresponding event data, manages minion-specific data (“Pillar”), and manages the trust relationship between master and minions.
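
For example, publishing a simple test command from the master to every accepted minion looks like the following (test.ping is a built-in Salt test function and the target '*' matches all minions):

salt '*' test.ping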

Salt-Syndic

A SaltStack Master that controls other masters is known as a Salt Syndic.  Think of it as a “master of masters”.  

Salt-Minions

A SaltStack client/agent, or managed entity, is known as a salt-minion.  The salt-minion is the workhorse in Salt.  It maintains a permanently open connection to the salt-master, listens for published commands, performs the work, and sends the results back to the master.

Salt-Keys

Minions authenticate to the salt-master using a public/private key pair.  There are no certificates in Salt.
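
Key management on the master is performed with the salt-key utility.  A brief illustration (the minion ID “web01” is hypothetical):

salt-key -L          # list accepted, unaccepted, and rejected keys
salt-key -a web01    # accept the pending key for minion "web01"
salt-key -d web01    # delete the key for minion "web01"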

Distributed Version Control Systems

SaltStack supports direct integration with a number of external distributed version control systems (DVCS).  The most commonly used among our user base in the modern enterprise are Git and Subversion.

Salt Pillars

A Pillar is a data structure in Salt used to deliver data to minions.  The typical application of pillars is to assign data to a minion for automation purposes, such as usernames, passwords, or other configuration data.
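
As a minimal, hypothetical sketch (file names and values are illustrative), native pillar data is defined in YAML files under /srv/pillar and assigned to minions through a top file:

# /srv/pillar/top.sls
base:
  '*':
    - credentials

# /srv/pillar/credentials.sls
app_user: deploy
app_password: example-secret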

HA Considerations from the Minion up

Minion Master / Multi-Master

The minion’s configuration file is typically found under /etc/salt/minion.  In this file, the hostname or IP address of the salt-master is defined (the default is “salt”).  The minion can connect simultaneously to multiple masters.  This is achieved by simply presenting the masters as a list in YAML (www.yaml.org) format, so instead of master: hostname it would look like the following example:

master:
  - master1
  - master2

Note that the indentation uses two spaces, not tabs.  There are two spaces from the margin before the “-” and a single space between the “-” and the hostname or IP string.  The minion will actively connect to all masters simultaneously.

Master Configuration

The master’s central configuration file is typically found on the master under /etc/salt/master.  This file should be replicable among all masters, as master-specific settings are not typically stored in it.  Important behavioural aspects of the salt-master, from application settings to security, are all defined in this file.

As of this writing, there are no mechanisms native to Salt for replicating this file among masters.  A possible solution is to place the file under management by Salt itself.  Since all Salt commands are forked from the salt-minion and salt-master processes, Salt can “salt” itself.
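
As a hedged sketch of that approach (the state name, source path, and file mode are assumptions rather than a prescribed layout), a state applied to each master that also runs a minion could keep /etc/salt/master in sync and restart the salt-master service when it changes:

# /srv/salt/master_config.sls (hypothetical)
salt_master_config:
  file.managed:
    - name: /etc/salt/master
    - source: salt://master_config/files/master
    - user: root
    - group: root
    - mode: 600

salt-master:
  service.running:
    - watch:
      - file: salt_master_config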

Master Minion Trust

When a minion first connects to a salt-master, it sends a copy of its public key.  Once the key has been accepted, it is stored under /etc/salt/pki/master (accepted minion keys are placed in the minions subdirectory).  If a minion is trusted by one master, it is important to replicate that trust across all masters.  If the minion’s trust is revoked on one master, that state must be replicated as well.

As of this writing, there are no mechanisms native to SaltStack for replicating keys across salt-masters.  Key management is accessible from the Salt API and will be included in a future release of the SaltStack Enterprise console.  As a workaround, an external replication mechanism such as a scheduled rsync may be the preferred option.
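
As a hedged illustration of that workaround (the hostname and schedule are assumptions), a cron entry on one master could push the accepted minion keys to a second master.  The --delete flag also propagates removals, so a key rejected on the source master is removed from the target as well:

# /etc/cron.d/sync-salt-keys (hypothetical): replicate accepted minion keys every 5 minutes
*/5 * * * * root rsync -az --delete /etc/salt/pki/master/minions/ master2:/etc/salt/pki/master/minions/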

Salt Pillars

Pillar definitions are stored either external to the salt-master as an “external pillar” or as YAML data structures on the master directly.  In the case of external pillars, the only consideration is replicating the salt-master configuration (see Master Configuration).  Native pillars are typically found in flat files under /srv/pillar.

As of this writing there are no mechanisms native to SaltStack for replicating pillars across salt-masters; however, it is possible to store pillar data external to the salt-master in a DVCS.  By placing the pillar in a DVCS, you can take advantage of the distributed architecture of that system as well as gain the additional benefits of version control.
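
As a hedged sketch (the branch and repository URL are illustrative), the git external pillar is enabled in the master configuration roughly as follows; the exact options vary between Salt releases, so consult the documentation for your version:

ext_pillar:
  - git: master https://github.com/example/pillar-data.git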

Salt States

Salt configuration data is typically stored in a YAML flat file called a Salt State.  The directory structure containing Salt States is known as the “state tree” and is typically found under /srv/salt.

As with Pillars, as of this writing there are no mechanisms native to SaltStack for replicating state files across salt-masters.  The recommended approach for most environments is to store the state files directly in the DVCS.
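
As a hedged sketch (the repository URL is illustrative), the state tree can also be served directly from Git by enabling the gitfs fileserver backend in the master configuration; pointing every master at the same repository keeps their state trees consistent:

fileserver_backend:
  - git

gitfs_remotes:
  - https://github.com/example/salt-states.git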

Salt Syndic

In any highly available SaltStack implementation you will have multiple salt-master servers, with the exception of running Salt in “masterless” mode, which is rather uncommon.  In order to maintain a central control point for all salt-masters, configuring a Salt Syndic is desirable.  The syndic is very simple to configure; see the Salt Syndic documentation at http://docs.saltstack.com/en/latest/topics/topology/syndic.html.  Multiple syndics are not recommended as of this writing.
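
As a hedged sketch of the basic wiring (the hostname is illustrative), the syndic node points its own master configuration at the higher-level master, and the higher-level master is told to expect subordinate masters; the salt-syndic daemon then runs on the syndic node alongside salt-master:

# On the syndic node, in /etc/salt/master
syndic_master: master-of-masters.example.com

# On the master of masters, in /etc/salt/master
order_masters: True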

Additional thoughts

This guide does not take into account customizations to SaltStack such as custom modules and runners, or salt-cloud profile and provider files.  Please contact us at info@saltstack.com for additional information.
