Salt is a fast, flexible configuration management and orchestration tool that handles configuration, remote execution, and automation across fleets of servers. This article walks through practical steps to get a Salt master and minions running, explains how to author basic states and pillars, and covers useful operational commands and troubleshooting. The goal is a clean, repeatable setup you can extend as your environment grows.
Prerequisites and planning
Before installing, decide which machine will be the Salt master and which machines will act as minions. Salt uses ports 4505 and 4506 for the event bus and the request/response channel, so confirm firewalls allow those connections. Keep time synchronized across systems (chrony or ntp) to avoid issues with authentication and logs. If you plan to scale, sketch out environments (base, dev, prod), a repository structure for your state files, and whether you’ll use file server backends like GitFS for version control.
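For example, if your hosts run firewalld, opening the master's two ports might look like the sketch below; adjust for ufw, nftables, or cloud security groups as appropriate.
# On the Salt master, allow the publisher (4505) and request server (4506)
sudo firewall-cmd --permanent --add-port=4505/tcp
sudo firewall-cmd --permanent --add-port=4506/tcp
sudo firewall-cmd --reload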
Installing Salt
Salt packages are available for major distributions. Below are the typical installation commands; adjust for your distribution and architecture. Using distribution packages keeps system integration simple.
Debian / Ubuntu
sudo apt-get update
sudo apt-get install -y salt-master salt-minion
RHEL / CentOS / Alma
# First install the Salt Project repository package for your release (URL varies by version)
sudo yum install -y <salt-repo-rpm-url>
sudo yum clean expire-cache
sudo yum install -y salt-master salt-minion
Notes on containers and single-node testing
For quick testing you can run salt-minion and salt-master on the same host. Use salt-call --local to test states locally without requiring a master, which is useful while developing SLS files.
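As an illustration, assuming a state file named nginx.sls exists under /srv/salt (like the one shown later in this article), a masterless test run might look like:
# Apply a single state locally, without contacting a master
sudo salt-call --local state.apply nginx
# Preview the changes without making them (dry run)
sudo salt-call --local state.apply nginx test=True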
Initial master configuration
The primary configuration file is /etc/salt/master. At minimum define where state files live, the environment structure, and any master-specific options such as external auth or git backends. A minimal file_roots setup looks like this:
# /etc/salt/master (excerpt)
file_roots:
  base:
    - /srv/salt

pillar_roots:
  base:
    - /srv/pillar

auto_accept: False
Keep auto_accept disabled in production to prevent unknown minions from being trusted automatically. If you use GitFS, configure the fileserver_backend and gitfs_remotes sections instead of local file_roots.
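A minimal GitFS sketch, assuming a hypothetical repository URL, might look like this in /etc/salt/master; note that GitFS also requires a Python git provider (pygit2 or GitPython) on the master.
# /etc/salt/master (excerpt) - GitFS instead of local file_roots
fileserver_backend:
  - gitfs
gitfs_remotes:
  - https://git.example.com/salt-states.git   # hypothetical repository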
Initial minion configuration
Edit /etc/salt/minion to point to your master. Set the master IP or hostname and optionally an id for predictable naming. You can also define grains in this file to tag minions with roles, environments, or other attributes that will be useful for targeting states.
# /etc/salt/minion (excerpt)
master: salt-master.example.com
id: webserver-01

# Optional static grains
grains:
  role: web
  environment: production
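After editing the file, restart the minion and confirm the grains locally; this check does not need the master yet.
sudo systemctl restart salt-minion
sudo salt-call --local grains.item role environment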
Start services and accept keys
Start the master and minion services and then manage keys using salt-key. When a minion first connects, it presents its public key to the master; the key must be accepted before commands will run. For safety, review keys before accepting.
sudo systemctl enable --now salt-master
sudo systemctl enable --now salt-minion
# On the master, list and accept keys
sudo salt-key -L # list accepted, unaccepted, rejected keys
sudo salt-key -a webserver-01 # accept a specific key
sudo salt-key -A # accept all pending keys (use cautiously)
After accepting a key, verify connectivity with a simple ping test: salt '*' test.ping should return True for connected minions.
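You can also target a subset to confirm connectivity, for example by grain or by an explicit list (the second minion ID below is illustrative):
sudo salt -G 'role:web' test.ping
sudo salt -L 'webserver-01,webserver-02' test.ping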
Basic state structure and applying a state
States live under the file_roots path (/srv/salt by default). Create a top.sls to map minions to the states they should receive, then add SLS files. The example below installs and ensures nginx is running on all minions with the role web.
# /srv/salt/top.sls
base:
  'role:web':
    - match: grain
    - nginx
# /srv/salt/nginx.sls
nginx:
  pkg.installed:
    - name: nginx

nginx-service:
  service.running:
    - name: nginx
    - require:
      - pkg: nginx
Apply the state with a targeted highstate. Using grain targeting ensures only intended minions receive the state.
sudo salt -G 'role:web' state.apply
# or to force all configured states on each minion:
sudo salt '*' state.highstate
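Before applying changes broadly, a dry run shows what would change without modifying anything:
sudo salt -G 'role:web' state.apply test=True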
Pillars and sensitive data
Pillars provide per-minion data that is delivered securely from the master. Use /srv/pillar/top.sls to map pillar SLS files to minions, then reference pillar data from states. Pillars are never served to unauthenticated clients; they are targeted to minion IDs or via grains.
# /srv/pillar/top.sls
base:
  'webserver-01':
    - nginx

# /srv/pillar/nginx.sls
nginx:
  port: 8080
Access pillar data from a state with {{ pillar['nginx']['port'] }} or with the salt['pillar.get']('nginx:port') lookup, which accepts a default value. Test pillars with sudo salt 'webserver-01' pillar.items.
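As a sketch, a state can consume that pillar value through Jinja; the SLS name and target file path below are hypothetical.
# /srv/salt/nginx-config.sls (illustrative sketch)
nginx-port-config:
  file.managed:
    - name: /etc/nginx/conf.d/listen.conf   # hypothetical target file
    - contents: |
        # Managed by Salt
        listen {{ salt['pillar.get']('nginx:port', 80) }};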
Grains, targeting, and environments
Grains are static facts from minions (OS, CPU, custom tags). They are ideal for targeting and for conditional logic inside states. You can set persistent grains in /etc/salt/grains or set them dynamically via salt '*' grains.setval key value. Use environments to separate development and production state trees by configuring multiple file_roots keys (base, dev, prod) and running salt with saltenv if needed.
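For instance, a state can branch on the built-in os_family grain with Jinja, which keeps one SLS file portable across distributions (a sketch; the state ID is arbitrary):
# Install the right web server package for the platform
web-pkg:
  pkg.installed:
    - name: {{ 'apache2' if grains['os_family'] == 'Debian' else 'httpd' }}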
Salt-ssh and alternative workflows
If you cannot run a minion on target hosts, salt-ssh allows Salt to operate over SSH without persistent agents. It translates Salt states to commands executed via SSH and is useful for occasional management or environments with strict policies. To use salt-ssh, create a roster and run salt-ssh 'target' state.apply.
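A minimal roster sketch (the entry name, host, and user below are hypothetical) might look like:
# /etc/salt/roster (excerpt)
web1:
  host: 192.0.2.10
  user: deploy
  sudo: True
With that entry in place, sudo salt-ssh 'web1' state.apply runs states on the host over SSH.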
Operational tips and best practices
Organize states in a predictable layout, keep them in version control, and test locally prior to pushing changes. Use salt-call --local on a representative minion for debugging. Avoid auto_accept in production; manage keys explicitly. Implement CI for your state tree so changes are validated by linters and unit tests. Consider using the Reactor to trigger actions based on events and the Scheduler for recurring jobs. For large deployments, explore the Syndic pattern or run multiple masters with a REST/API gateway for scaling.
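As one illustration of the Scheduler, a recurring highstate can be configured in the minion config (a sketch; the job name is arbitrary):
# /etc/salt/minion (excerpt) - run a highstate every 6 hours
schedule:
  periodic_highstate:
    function: state.apply
    hours: 6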
Troubleshooting common issues
If a minion does not show up, confirm network routes and that ports 4505/4506 are open between master and minion. Check /var/log/salt/master and /var/log/salt/minion for errors. Key mismatches often come from duplicated minion IDs; ensure each minion has a unique id in /etc/salt/minion or derived from the system hostname. If states do not apply, run salt-call --local state.apply on the minion to see local renderer errors; common causes are indentation or YAML syntax problems in SLS files. When using GitFS, ensure the master can access the repository and that credentials are configured if the repo is private.
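A quick triage sequence on a misbehaving minion might look like the following (stop the service first so the foreground process can bind cleanly; the nginx SLS name is the example from earlier):
# Run the minion in the foreground with debug logging to watch the connection attempt
sudo systemctl stop salt-minion
sudo salt-minion -l debug
# In another shell, render a state locally to surface YAML/Jinja errors
sudo salt-call --local state.show_sls nginx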
Security considerations
Salt uses public/private keypairs for authentication between master and minion. Keep the master secure and limit access to its API. Use eauth (external authentication) for APIs, enable TLS verification where applicable for Git backends, and avoid placing secrets in plain SLS files; use pillars or an external secrets backend instead. Review permissions and ownership of /srv/salt and /srv/pillar so the Salt master process can read them but unauthorized users cannot.
Summary
Getting Salt running involves installing the master and minion packages, configuring file_roots and pillars on the master, setting the master address and grains on minions, accepting minion keys, and authoring states and top.sls mappings. Use pillars for secret or per-minion data, grains for targeting, and salt-call for local testing. Keep your state tree in version control, test changes before applying them broadly, and monitor logs to catch issues early.
FAQs
How do I test states without affecting production?
Use salt-call --local on a non-production or test minion to run states locally and debug rendering. Maintain a separate environment (dev) in your file_roots and run states there first. Use CI pipelines to run salt-lint and unit tests on SLS files before merging changes to production branches.
What should I do if a minion keeps reconnecting or losing keys?
Check that the minion ID is unique and consistent. Inspect master and minion logs for messages about key rejections. Ensure network stability and that time synchronization is working. If a key was rotated or replaced, remove the old key on the master (salt-key -d <minion-id>) and restart the minion so its current key is presented for acceptance again.
When should I use Salt-SSH instead of a minion?
Use salt-ssh when you cannot or do not want to run a persistent minion on target hosts, such as in locked-down environments or for occasional ad-hoc management. Salt-ssh works over SSH and does not require the Salt master/minion TCP ports, but it may not support every feature available to a full minion.
How can I manage secrets like database passwords?
Store secrets in pillars and limit pillar access through top.sls mappings so only intended minions receive those values. For stronger security, integrate Salt with a secrets backend (HashiCorp Vault, AWS Secrets Manager) or use the external pillars system to pull secrets dynamically.
What is the recommended way to structure a large Salt deployment?
For larger environments, adopt multiple environments (base, prod, dev), keep state files in a version-controlled repository and use GitFS, leverage the Syndic pattern or multi-master setup for scale and redundancy, and implement CI pipelines to validate state changes. Use roles and grains for consistent targeting and monitor master health and job return statuses regularly.