Getting Started
- Last Updated 12/18/2023, 9:33:19 AM UTC
The monitoring stack comprises the following components:
- `myrmex-sd`, the monitoring and alerting server
- `myrmex-ad`, the monitoring and action agent server, which is installed on the target hosts or containers
- `myrmex-ts`, the timeseries server, which stores telemetry and alerts
- `myrmex-ui`, the user interface server
- `myrmex`, the monitoring server CLI
All server components can be installed as operating system services. Each component CLI can create and install the appropriate service configuration for its platform:
- Linux: supports the `systemd` or `upstart` service managers if they are available, and falls back to `sysvinit` if neither is available.
- FreeBSD: supports `sysvinit`.
- Windows: supports the native Windows Services Manager.
# Get the polaris components
Installations and updates for all myrmex components are performed via the `myrmex-installer` program. It downloads, verifies, and installs the latest version of each myrmex component.
All binaries are hosted as GitHub releases and are signed (SHA256 RSA PKCS1 v1.5) by Arisant.
The Arisant signing keys are held in a FIPS 140-2, Level 3-certified hardware security module (HSM).
Binaries can be verified with our public key. `myrmex-installer` always verifies a binary before installing or after downloading.
Download, verify, and install the installer for your platform, e.g. for `linux-amd64`:
export PLAT="linux-amd64" && \
curl -L -o myrmex-installer https://github.com/arisant/myrmex-pub/releases/download/latest/myrmex-installer."$PLAT" && \
curl -L -o myrmex-installer."$PLAT".sha256.sig https://github.com/arisant/myrmex-pub/releases/download/latest/myrmex-installer."$PLAT".sha256.sig && \
curl -L -o polaris.pem https://polaris.arisant.com/keys/polaris.pem && \
base64 -d myrmex-installer."$PLAT".sha256.sig > /tmp/myrmex-installer."$PLAT".sha256.sig && \
openssl dgst -sha256 -verify polaris.pem -signature /tmp/myrmex-installer."$PLAT".sha256.sig myrmex-installer && \
chmod +x myrmex-installer && \
sudo cp myrmex-installer /usr/local/bin
To download myrmex components you need:
- A GitHub account with read access to the `arisant/myrmex-dist` GitHub repo.
- A GitHub access token.
- The environment variable `GH_TOKEN` set to your GitHub token.
To create the access token, navigate to https://github.com/settings/tokens and hit Generate new token. Enter an appropriate description and select `repo` from the scopes list. Once generated, note down the token value.
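Once you have the token, export it so that `myrmex-installer` can find it; the value below is a placeholder:

```shell
# Placeholder token value; substitute the token you generated above.
export GH_TOKEN="ghp_exampletoken123"
```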
You can also use myrmex-installer to download components without installing them. The download consists of a zip package which can be delivered manually to the target hosts. You can choose to download either the full installation or an update (patch) for a component and target platform.
The available component names are:
myrmex-sd
myrmex-ui
myrmex-ad
myrmex-ts
myrmex
The available target platforms are:
linux-amd64
linux-arm64
windows-amd64
freebsd-amd64
darwin-amd64
darwin-arm64
To download a full install package:
myrmex-installer download --product component_name --platform platform_name install
To download an update package:
myrmex-installer download --product component_name --platform platform_name update
The package is downloaded into your current working directory. If you omit the `--platform` flag, the installer downloads the component for the OS and architecture that the installer is running on.
Example: download the full installation package of myrmex-ui for Linux:
myrmex-installer download --product myrmex-ui --platform linux-amd64 install
# Install the myrmex CLI
Install the CLI to a location that makes sense for you (e.g. `/usr/local/bin` or `$HOME/bin`) and add the installation directory to your `$PATH`.
sudo -E myrmex-installer install --dir /usr/local/bin cli
# Bash auto completion
The CLI supports `bash` auto completion for all its commands, subcommands, and options so that you do not have to look up usage with `--help`. To auto complete, hit `TAB` anywhere on the command line after the `myrmex` command. Enable this feature with:
sudo -- bash -c 'umask 22 && myrmex autocomplete install-bash'
# Minimum Requirements
# Operating Systems
- `RHEL`, `OL`, `CentOS`: 7 or higher for servers, 6 or higher for agents
- `Ubuntu`: 16 or higher
- `FreeBSD`: 12 or higher
- `Windows`: Windows Server 2008R2 and higher, or Windows 7 and higher
# Hardware
amd64
arm64
For the server-side components (monitoring server, timeseries server, UI server) you can provision a single server with the following properties:
- 8GB RAM or more
- 200GB SSD storage or more (this depends on the retention period of timeseries data)
- 4 CPU cores or more
# Prerequisites
- Docker
- Git server to store monitoring configuration. You can use any git SaaS provider such as Github, GitLab or host your own.
- PostgreSQL server to store the monitoring data
Git and PostgreSQL can be deployed as docker containers. Dockerfile and docker-compose files, along with setup automation scripts, are available from http://github.com/arisant/containers.git. For details on how to install and configure Docker, check out Working with Docker.
# Firewalls
Your firewall will need to allow traffic from agent nodes to the polaris server node. Additionally, you will require firewall exceptions or HTTP proxy forwarding for external services used by the polaris server. For details refer to Networking and Firewall Exceptions
# HTTP proxy configuration
If your environment uses an HTTP proxy, make sure that the environment variables `HTTP_PROXY` and/or `HTTPS_PROXY` are set accordingly. For example, in bash:
export HTTP_PROXY='http://user:password@your_http_proxy:port'
export HTTPS_PROXY='http://user:password@your_http_proxy:port'
# Environment variables
No environment variables are required by the polaris services. However, throughout this guide we will use the following environment variables for brevity:
POLARIS_HOME: the home directory of the polaris user
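For example, assuming the conventional home directory used later in this guide:

```shell
# Conventional polaris home directory; adjust if you chose a different location.
export POLARIS_HOME=/opt/polaris
```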
# Polaris user account
IMPORTANT
This user should be low privileged. Do not grant more privileges than necessary.
You will need to create polaris user accounts on the node that will host the polaris server and all nodes that will host polaris agents. You can use any username, uid, gid and home directory you want, but by convention we encourage the following setup:
username: polaris
group: polaris
uid: 8000
gid: 8000
home: /opt/polaris
shell: bash
sudo groupadd -g 8000 polaris
sudo useradd -u 8000 -g 8000 -d /opt/polaris -m -s /bin/bash polaris
# Automation
We provide ansible playbooks to automate the creation of user accounts and agent installation/updates on monitored nodes so that you do not have to do it manually. Check out Ansible Automation for details.
# Containerized Services
We provide a git repository that will help you set up the various containerized services used by polaris. Start by cloning this repository into `$POLARIS_HOME` on the node that will host the monitoring services:
cd $POLARIS_HOME
git clone http://github.com/arisant/containers.git
You can choose to build the container images yourself or pull them from the Arisant container repo.
# Build a container image
cd <container_name>
./docker-build-image.sh
# Pull a container image
Login to the Arisant container registry with your OCI account and a valid auth token:
docker login phx.ocir.io
Username: axyglfhm60ws/oracleidentitycloudservice/<your login id>
Password: <your auth token>
To pull an image
docker pull phx.ocir.io/axyglfhm60ws/<repository>:<tag>
docker tag phx.ocir.io/axyglfhm60ws/<repository>:<tag> <repository>:<tag>
The latest polaris container images are:
Repository | Tag |
---|---|
myrmex/git-server | 1.0.4 |
myrmex/tsdb | 2.0.5 |
myrmex/ui | 2.8.8 |
myrmex/ui-proxy | 1.0.5 |
# Customize containers for different environments
Polaris containerized services hold their base configuration in the file `docker-compose.yaml`. The file `docker-compose.override.yaml` in the same directory allows you to override the base configuration according to your environment.
For example, if your docker host sits behind an SSL-intercepting proxy and a container needs to reach out to the internet, it will require the proxy's certificate. Assuming that the host already has this certificate in its trust store, you can bind mount it into the container using `docker-compose.override.yaml` like so:
# IMPORTANT: do not apply selinux label :Z to host system directories (e.g. /usr and /etc)
# this will break your host node and you will need to relabel the host files manually
version: "3"
services:
myrmex-ui:
volumes:
- /etc/ssl/certs:/etc/ssl/certs:ro
- /etc/pki/ca-trust:/etc/pki/ca-trust:ro
# Local git server container
Github repo directory: git-server
# Volumes
Volume | Description |
---|---|
/var/lib/git/repos | git repositories data |
/usr/local/apache2/conf/ext | https certificates |
# Environment variables
Variable | Description |
---|---|
CONTAINER_UID | user id that the container will run as. this should usually be the value of id -u |
CONTAINER_GID | group id that the container will run as. this should usually be the value of id -g |
GITDATA_VOL | host base directory for container volumes |
DEFAULT_GIT_REPO | git repository name to create on container first start |
CONTAINER_NAME | container name. defaults to git-server |
EXP_HTTP_PORT | host port to expose git server http traffic |
EXP_HTTPS_PORT | host port to expose git server https traffic |
HTTP_SERVER_DNS_NAME | DNS name for the auto generated TLS certificate subjectAltName |
VOL_MOUNT_OPTS | if selinux is enabled in docker set to :z so that the container correctly relabels the volume mounts |
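A sketch of a git-server `.env`, assuming the conventional `/opt/polaris` layout; the repository name, ports, and DNS name are illustrative placeholders that you should adjust to your environment:

```shell
# id -u / id -g of the polaris user
CONTAINER_UID=8000
CONTAINER_GID=8000
# host base directory for the container volumes
GITDATA_VOL=/opt/polaris/container-vols/git-server
# repository created on first container start (illustrative name)
DEFAULT_GIT_REPO=private
CONTAINER_NAME=git-server
# illustrative host ports
EXP_HTTP_PORT=8080
EXP_HTTPS_PORT=8444
# illustrative DNS name for the auto generated TLS certificate
HTTP_SERVER_DNS_NAME=git.example.com
# set to :z if selinux is enabled in docker, otherwise leave empty
VOL_MOUNT_OPTS=
```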
# Setup container
Create the directory structure that will host the container services and volumes. For example:
mkdir -p ${POLARIS_HOME}/container-services/git-server
mkdir -p ${POLARIS_HOME}/container-vols/git-server/
Create container services from template:
cd ${POLARIS_HOME}/containers
./docker-service-from-template.sh -s git-server -d ${POLARIS_HOME}/container-services/git-server
Set the environment variables in ${POLARIS_HOME}/container-services/git-server/.env
Create volume directory structure:
cd ${POLARIS_HOME}/container-services/git-server
./init-service.sh
Create and start the container services:
cd ${POLARIS_HOME}/container-services/git-server
docker-compose up --no-start
docker-compose start
# HTTP authorization
Add users or modify user passwords
cd ${POLARIS_HOME}/container-services/git-server
./git-htpasswd.sh [username]
# Access git repo
# Locally (local file system)
You must be a member of the group owning the git repo directory; it does not need to be your primary group.
git clone ${GITDATA_VOL}/repos/<repo-name>.git
# SSH
You must be a member of the group owning the git repo directory, as above, and must have an SSH key known to the server hosting the git repo.
git clone ssh://<username>@<git-hostname>/${GITDATA_VOL}/repos/<repo-name>.git
# HTTPS
- By username/password created with `htpasswd`
- Add self-signed cert PEMs to a file and let git know where to find this file:
mkdir -p ~/.git-certs
cat ${GITDATA_VOL}/httpd/conf.d/server.crt >> ~/.git-certs/certs.pem
git config --global http.sslCAInfo $HOME/.git-certs/certs.pem
git clone https://<ht_user>@${HTTP_SERVER_DNS_NAME}:${EXP_HTTPS_PORT}/git/<repo-name>.git
# HTTP
- Only from localhost
- By username/password created with `htpasswd`
git clone http://<ht_user>@localhost:${EXP_HTTP_PORT}/git/<repo-name>.git
# Replicate local git server
Git does not support multi-master replication; this means that you cannot have multiple master (hot) copies of the same repository behind a load balancer. You can, however, replicate and synchronize your master repository to multiple backup sites. When the primary site fails, you can switch your master repository to any of the backup sites.
# Replicate to a backup site
On each backup site:
- Create a git server with the default repository as described above. Changes from the master repository will be synced into this repository. It will be read only until the time you need to promote it into a master repository when the primary site fails.
- Mirror clone the master site repository into a staging directory so that it can fetch changes from the master site and push them to the replica repository created in the previous step.
For the purposes of demonstrating this process, the primary repository URL is https://primary/git/private.git and the replica is accessed locally at ${GITDATA_VOL}/repos/private.git.
cd ${GITDATA_VOL}/repos/private.git
### if using self signed certificates
git config --global http.sslVerify false
### mirror clone the master repo
mkdir -p ${GITDATA_VOL}/replication/stage
cd ${GITDATA_VOL}/replication/stage
git clone --mirror https://primary/git/private.git private-replica
cd private-replica
### add the replica git repo as a remote
git remote add sync ${GITDATA_VOL}/repos/private.git
The cloned repo should now have remotes for the master repository (origin) and replica repository (sync):
cd ${GITDATA_VOL}/replication/stage/private-replica
git remote -v
To sync changes from the master repo into the replica repo:
cd ${GITDATA_VOL}/replication/stage/private-replica
git fetch -p origin
git push sync --mirror
# Automate replication
This can be done through a cron job. Before you set up the cron job you need to make sure that git can find the login credentials for the master repo without requiring user interaction:
cd ${GITDATA_VOL}/replication/stage/private-replica
git config credential.helper store
git fetch -p origin
Create a cron job on each replica site to sync from the master into the replica repo every minute:
echo "*/1 * * * * cd ${GITDATA_VOL}/replication/stage/private-replica && git fetch -p origin && git push sync --mirror" | crontab -
# Timeseries database container
Github repo location: myrmex-tsdb
# Volumes
Volume | Description |
---|---|
/var/lib/postgresql/data | postgres cluster configuration and data |
/var/lib/postgresql/conf.d | postgres configuration files included by /var/lib/postgresql/data/postgresql.conf . Custom or base setting overrides can go here |
/var/lib/pgbackrest | pgbackrest backups |
/etc/pgbackrest | pgbackrest configuration directory |
# Environment variables
Variable | Description |
---|---|
PG_BASE_VOL | host base directory for container volumes /var/lib/postgresql/data and /var/lib/postgresql/conf.d |
PG_BACKUP_BASE_VOL | host base directory for container volumes /var/lib/pgbackrest and /etc/pgbackrest |
PG_UID | user id that the container will run as. this should usually be the value of id -u |
POSTGRES_PASSWORD | the postgres db user password |
MYRMEX_PASSWORD | the myrmex db user password |
PG_ROLE | postgres server role. one of primary , standby |
PG_ARCHIVE_MODE | postgres WAL archive mode. set to on to enable archiving. Required by database backups and replication |
PGBACKREST_STANZA | the pgbackrest database backup identifier. usually set to tsdb |
VOL_MOUNT_OPTS | if selinux is enabled in docker set to :z so that the container correctly relabels the volume mounts |
# Important
- The postgres container requires the gid to be set to `root` (0).
# Setup container
Create the directory structure that will host the container services and volumes. For example:
mkdir -p ${POLARIS_HOME}/container-services/tsdb
mkdir -p ${POLARIS_HOME}/container-vols/tsdb/pg
mkdir -p ${POLARIS_HOME}/container-vols/tsdb/backup
Create container services from template:
cd ${POLARIS_HOME}/containers
./docker-service-from-template.sh -s myrmex-tsdb -d ${POLARIS_HOME}/container-services/tsdb
Set the environment variables in ${POLARIS_HOME}/container-services/tsdb/.env
Create volume directory structure:
cd ${POLARIS_HOME}/container-services/tsdb
./init-service.sh
Create and start the container services:
cd ${POLARIS_HOME}/container-services/tsdb
docker-compose up --no-start
docker-compose start
# Install myrmex-sd
Install under the home directory of the `polaris` user:
export GH_TOKEN=github_access_token
myrmex-installer install --dir $POLARIS_HOME/server server
The service is configured via the file ${POLARIS_HOME}/server/conf/myrmex-sd.conf. The configuration file contains working defaults and detailed comments for each of the parameters. At a minimum you should configure:
- log_level
- agent_listen_addr
- admin_listen_addr
- catalog (see Integrate myrmex server to git for details)
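A minimal sketch of these settings, using the key/value format shown in the agent configuration later in this guide; the listen addresses and catalog URL are illustrative placeholders (9856 is the documented admin port default, the agent port and catalog URL are assumptions):

```toml
log_level = "info"
# address agents connect to (port assumed for illustration)
agent_listen_addr = "0.0.0.0:9855"
# admin API address; 9856 is the documented default admin port
admin_listen_addr = "127.0.0.1:9856"
# git repo holding the monitoring configuration (placeholder URL)
catalog = "https://git.example.com/git/private.git"
```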
# myrmex-sd service
Install the `myrmex-sd` service.
Service control
### start
sudo $POLARIS_HOME/server/myrmex-sd service start
# OR
sudo systemctl start myrmex-sd.service
### stop
sudo $POLARIS_HOME/server/myrmex-sd service stop
# OR
sudo systemctl stop myrmex-sd.service
### restart
sudo $POLARIS_HOME/server/myrmex-sd service restart
# OR
sudo systemctl restart myrmex-sd.service
# HTTP proxy environment variables for systemd
If your environment requires an HTTP proxy to reach services such as github.com or jira.com, you must set HTTP proxy environment variables in the myrmex-sd.service systemd unit file. Edit the file /etc/systemd/system/myrmex-sd.service and set Environment entries inside the [Service] section to match your environment. For example:
[Service]
Environment='HTTP_PROXY=https://username:password@proxy.my.domain/'
Environment='NO_PROXY=172.16.0.0/12'
...
Then reload the service definition
sudo systemctl daemon-reload
# Install myrmex-ts
Install under the home directory of the `polaris` user:
myrmex-installer install --dir $POLARIS_HOME/ts ts
The service is configured via the file ${POLARIS_HOME}/ts/conf/myrmex-ts.conf. The configuration file contains working defaults and detailed comments for each of the parameters that you can configure. At a minimum you should configure:
- log_level
- myrmex_address
- db_url
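A minimal sketch of these settings; the host names, port, and connection string are illustrative assumptions (the `myrmex` database user corresponds to the tsdb container's MYRMEX_PASSWORD variable):

```toml
log_level = "info"
# address of the myrmex-sd server (placeholder host and port)
myrmex_address = "monitor.example.com:9855"
# postgres connection string for the timeseries database (placeholder)
db_url = "postgres://myrmex:secret@localhost:5432/tsdb"
```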
Create a service account for this server on `myrmex-sd`:
# $POLARIS_HOME/server/myrmex-sd admin service-acc add ts \
| sed '/-----BEGIN PUBLIC KEY-----/,/-----END PUBLIC KEY-----/d' \
> $POLARIS_HOME/ts/conf/myrmex-creds.json
# chmod 0600 $POLARIS_HOME/ts/conf/myrmex-creds.json
# myrmex-ts service
Install the `myrmex-ts` service.
Service control
### start
sudo $POLARIS_HOME/ts/myrmex-ts service start
# OR
sudo systemctl start myrmex-ts.service
### stop
sudo $POLARIS_HOME/ts/myrmex-ts service stop
# OR
sudo systemctl stop myrmex-ts.service
### restart
sudo $POLARIS_HOME/ts/myrmex-ts service restart
# OR
sudo systemctl restart myrmex-ts.service
# Polaris UI container
Github repo location: myrmex-ui
# Volumes
Volume | Description |
---|---|
/usr/local/tomcat/webapps | UI web app |
/usr/local/tomcat/logs | UI web app log files |
/usr/local/tomcat/conf/Catalina/localhost | UI web app configuration files |
/usr/local/tomcat/conf/myrmex | UI web app configuration for myrmex-sd integration |
/usr/local/tomcat/temp | temporary data |
/usr/local/myrmex-installer | UI web app installer files |
/usr/local/apache2/vhosts | UI web app proxy configuration files and TLS certs |
# Environment variables
Variable | Description |
---|---|
APP_CONTAINER_NAME | web app container name. defaults to myrmex-ui |
PROXY_CONTAINER_NAME | web app http proxy container name. defaults to myrmex-ui-proxy |
DB_USERNAME | username to connect to tsdb . defaults to myrmex |
DB_PASSWORD | password for tsdb connection |
DB_URL | connection string to tsdb |
MYMREX_SD_HOST | hostname or ip address where myrmex-sd is running |
MYMREX_SD_PORT | myrmex-sd admin port. defaults to 9856 |
GITHUB_ACCESS_TOKEN | github access token for downloading web app releases |
UIDATA_BASE_VOL | host base directory for container volumes /usr/local/tomcat/* and /usr/local/myrmex-installer |
UIPROXY_BASE_VOL | host base directory for container volumes /usr/local/apache2/vhosts |
CONTAINER_UID | user id that the container will run as. this should usually be the value of id -u |
CONTAINER_GID | group id that the container will run as. this should usually be the value of id -g |
EXP_APP_HTTP_PORT | host port to expose web app http port |
EXP_PROXY_HTTP_PORT | host port to expose web app proxy http port |
EXP_PROXY_HTTPS_PORT | host port to expose web app proxy https port |
HTTP_SERVER_DNS_NAME | DNS name for the auto generated TLS certificate subjectAltName |
CATALINA_OPTS | extra options to pass to web app catalina container |
VOL_MOUNT_OPTS | if selinux is enabled in docker set to :z so that the container correctly relabels the volume mounts |
# Setup containers
Create the directory structure that will host the container services and volumes. For example:
mkdir -p ${POLARIS_HOME}/container-services/ui
mkdir -p ${POLARIS_HOME}/container-vols/ui/app
mkdir -p ${POLARIS_HOME}/container-vols/ui/proxy
Create container services from template:
cd ${POLARIS_HOME}/containers
./docker-service-from-template.sh -s myrmex-ui -d ${POLARIS_HOME}/container-services/ui
Set the environment variables in ${POLARIS_HOME}/container-services/ui/.env
Create volume directory structure:
cd ${POLARIS_HOME}/container-services/ui
./init-service.sh
Create and start the container services:
cd ${POLARIS_HOME}/container-services/ui
docker-compose up --no-start
docker-compose start
Create a service account for the UI on `myrmex-sd` and bounce the container.
# $POLARIS_HOME/server/myrmex-sd admin service-acc add ui \
| sed '/-----BEGIN PUBLIC KEY-----/,/-----END PUBLIC KEY-----/d' \
> $UIDATA_BASE_VOL/myrmex/obno/config/myrmex-creds.json
# chmod 0600 $UIDATA_BASE_VOL/myrmex/obno/config/myrmex-creds.json
# docker-compose restart
# Install myrmex-ad
You must install the polaris agent on each target node.
Automation with Ansible
You do not have to do this manually. Check out Ansible Automation for how to automate this.
Install the agent under the home directory of the `polaris` user:
export GH_TOKEN=github_access_token
myrmex-installer install --dir $POLARIS_HOME/agent agent
Configure the agent via the file $POLARIS_HOME/agent/conf/myrmex-ad.conf. The configuration file contains working defaults and detailed comments for each of the parameters. At a minimum you should configure:
- agent_log_level
- agent_server_addr
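For a self-hosted server, a minimal sketch might look like the following; the server address is an illustrative placeholder (the SaaS case below uses its own documented value):

```toml
agent_log_level = "info"
# address of your myrmex-sd server (placeholder host and port)
agent_server_addr = "monitor.example.com:9855"
```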
# Agent integration with polaris cloud SaaS
If you are integrating with polaris cloud SaaS, you will be provided with:
- a tenant id
- a certificate chain file (`cert.pem`)
- a certificate authority file (`polaris.arisant.com-ca.pem`)
- a private key file (`key.pem`)
Store `cert.pem`, `polaris.arisant.com-ca.pem`, and `key.pem` under $POLARIS_HOME/agent/conf and make sure that the permissions for `key.pem` are set to 0600.
You must then set the following configuration parameters in $POLARIS_HOME/agent/conf/myrmex-ad.conf:
agent_server_addr = "infra.polaris.arisant.com:8443"
tenant_vhost = "_your_tenant_id_.polaris.arisant.com"
over_web_socket = true
[security]
use_ext_mtls = true
[security.tls_certs]
cert_chain = "cert.pem"
private_key = "key.pem"
trusted_ca = "polaris.arisant.com-ca.pem"
sni = "_your_tenant_id_.polaris.arisant.com"
If the agent is behind an HTTP forward proxy, you must configure myrmex-ad.service to use this proxy.
Create or modify the file /etc/sysconfig/myrmex-ad and bounce myrmex-ad.service:
$ cat /etc/sysconfig/myrmex-ad
HTTP_PROXY=http://__proxy_host__:__proxy_port__
HTTPS_PROXY=http://__proxy_host__:__proxy_port__
# Forward Proxy Configuration
To pass network traffic from an agent to the polaris SaaS controller through an HTTP proxy, the proxy MUST:
- allow the `CONNECT` method to infra.polaris.arisant.com:8443
- allow port 8443
- allow the domain infra.polaris.arisant.com
Additionally, firewall rules on the proxy hosts MUST allow egress traffic to infra.polaris.arisant.com:8443.
For example, a `squid` proxy configuration should include:
acl SSL_ports port 443 8443
acl Safe_ports port 443 8443
acl CONNECT method CONNECT
# Deny requests to unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
# polaris SaaS controller domain
acl polaris-infra dstdomain infra.polaris.arisant.com
http_access allow polaris-infra
# Test connectivity between agent and SaaS controller
Without proxy:
nc -zv infra.polaris.arisant.com 8443
With proxy:
nc -zv --proxy __proxy_host__:__proxy_port__ --proxy-type http infra.polaris.arisant.com 8443
# myrmex-ad service
Install the `myrmex-ad` service.
Service control
### start
sudo $POLARIS_HOME/agent/myrmex-ad service start
# OR
sudo systemctl start myrmex-ad.service
### stop
sudo $POLARIS_HOME/agent/myrmex-ad service stop
# OR
sudo systemctl stop myrmex-ad.service
### restart
sudo $POLARIS_HOME/agent/myrmex-ad service restart
# OR
sudo systemctl restart myrmex-ad.service
# Join agents to polaris server
Polaris cloud SaaS
Agents are joined and secured automatically. No action is required!
All communication between agents and polaris servers is performed over two-way TLS (mTLS). That is, a controller must trust the TLS certificate presented by an agent, and at the same time an agent must trust the TLS certificate presented by a controller. The issuance, validation, rotation, and blacklisting of certificates is completely automated.
Manual interaction is required only when an agent is first enrolled (post installation) with a controller cluster. This agent certificate bootstrap process is required because we need to explicitly instruct an agent to trust the controller cluster that we have configured, and at the same time we want to block unrestricted/unauthorized access to the controller. Certificate bootstrapping is performed through expirable join tokens which are issued by a controller. The bootstrap process has two steps:
- obtain agent join token from controller
- provide join token to agent
Both the agent and controller service must be running during this process.
Obtain agent join token
# myrmex-sd security tokens
agent: MRXTN-1-0-51qkdh4y2jq5zhfll9jy6erbmr6abvvt1zelu33e4oao0l792d-aaaaaaaaaaaaaaaaaaaaaaaaa
controller: MRXTN-1-0-51qkdh4y2jq5zhfll9jy6erbmr6abvvt1zelu33e4oao0l792d-ccccccccccccccccccccccccc
Provide the join token (the one tagged with agent) to the agent
# ./myrmex-ad admin join
✔ token: MRXTN-1-0-51qkdh4y2jq5zhfll9jy6erbmr6abvvt1zelu33e4oao0l792d-aaaaaaaaaaaaaaaaaaaaaaaaa
This process does not need to be repeated unless the agent certificate has been blacklisted by a controller.
# Rotating Join Tokens
If the join tokens have been leaked (e.g. posted to a git repo), you can rotate them with the following command:
myrmex-sd security rotate-tokens
# Access Control
Create users and assign roles to them from the myrmex-sd
CLI. Use the user credentials created here to login to the UI.
# myrmex-sd admin --help
NAME:
myrmex-sd admin - myrmex-sd administration
USAGE:
myrmex-sd admin command [command options] [arguments...]
COMMANDS:
users lists users
roles lists available roles
useradd add a user
userdel delete a user
passwd change a user password
roleadd add roles to a user
roledel delete roles from a user
userroles list user roles
userunlock unlock a user
service-acc service accounts admin
OPTIONS:
--help, -h show help
create a user
myrmex-sd admin useradd <user name>
delete a user
myrmex-sd admin userdel <user name>
change a user password
myrmex-sd admin passwd <user name>
assign roles to a user
Available roles are admin (full control) and monitor (view only).
myrmex-sd admin roleadd <user name> roles...
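For example, a hypothetical session creating a user `alice` and granting her view-only access (the username is illustrative):

```
# myrmex-sd admin useradd alice
# myrmex-sd admin roleadd alice monitor
# myrmex-sd admin userroles alice
```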