DeviceHive Docker containers accept many environment variables which configure persistent storage in PostgreSQL, message bus support through Apache Kafka, and scalable storage of device messages using Apache Cassandra. Docker Compose ties all containers together and provides a way to tweak the configuration for your environment.

The DeviceHive service stack starts without any configuration. All containers are configured via environment variables; Docker Compose can pass variables from its own environment to containers and read them from an `.env` file. To make configuration changes persistent, we will add parameters to the `.env` file in the current directory.
Released versions of devicehive-docker use stable DeviceHive images from the DeviceHive Docker Hub repository. But if you want to follow DeviceHive development, add the following parameters:

- `DH_TAG` - tag for the DeviceHive Frontend, Backend and Hazelcast images. Can be set to `development` to track the development version of DeviceHive.
- `DH_ADMIN_TAG` - tag for the DeviceHive Admin Console image. Can be set to `development` to track the development version of Admin Console.
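For example, an `.env` fragment that tracks the development images might look like this (a sketch; pick the tags you actually need):

```shell
# .env -- track development images (example values)
DH_TAG=development
DH_ADMIN_TAG=development
```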
These variables are used by the Frontend, Backend and PostgreSQL containers.

- `DH_POSTGRES_ADDRESS` - address of the PostgreSQL server instance. Defaults to `postgres`, which is the address of the internal PostgreSQL container.
- `DH_POSTGRES_PORT` - port of the PostgreSQL server instance, defaults to `5432` if undefined.
- `DH_POSTGRES_DB` - PostgreSQL database name for DeviceHive metadata. It is assumed that the database already exists and is either blank or has been initialized by DeviceHive. Defaults to `postgres`.
- `DH_POSTGRES_USERNAME` and `DH_POSTGRES_PASSWORD` - login/password for the DeviceHive user in PostgreSQL that has full access to `DH_POSTGRES_DB`. Defaults are `postgres` and `mysecretpassword`.
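If you point DeviceHive at an external PostgreSQL server instead of the bundled container, the corresponding `.env` entries might look like this (hostname and credentials below are placeholders, not real defaults):

```shell
# .env -- external PostgreSQL (placeholder values)
DH_POSTGRES_ADDRESS=db.example.com
DH_POSTGRES_PORT=5432
DH_POSTGRES_DB=devicehive
DH_POSTGRES_USERNAME=devicehive
DH_POSTGRES_PASSWORD=change-me
```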
To enable DeviceHive to communicate over the Apache Kafka message bus in order to scale out and interoperate with other components, such as Apache Spark, or to enable support of Apache Cassandra for fast and scalable storage of device messages, define the following environment variables:

- `DH_ZK_ADDRESS` - comma-separated list of addresses of ZooKeeper instances. Defaults to `zookeeper`, which is the address of the internal ZooKeeper container.
- `DH_ZK_PORT` - port of the ZooKeeper instances, defaults to `2181` if undefined.
- `DH_KAFKA_BOOTSTRAP_SERVERS` - comma-separated list of Kafka servers, e.g. `host1:9092,host2:9092,host3:9092`. Either this parameter or `DH_KAFKA_ADDRESS` is required to be set.
- `DH_KAFKA_ADDRESS` - address of the Apache Kafka broker node. Mutually exclusive with the `DH_KAFKA_BOOTSTRAP_SERVERS` parameter.
- `DH_KAFKA_PORT` - port of the Apache Kafka broker node, defaults to `9092` if undefined. Ignored if `DH_KAFKA_ADDRESS` is undefined.
- `DH_RPC_SERVER_REQ_CONS_THREADS` - Kafka request consumer threads in the Backend, defaults to `3` if undefined.
- `DH_RPC_SERVER_WORKER_THREADS` - server worker threads in the Backend, defaults to `3` if undefined. On a machine with many CPU cores and high load this value should be raised; for example, on a machine with 8 cores it can be set to `6`.
- `DH_RPC_CLIENT_RES_CONS_THREADS` - Kafka response consumer threads in the Frontend, defaults to `3`.
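Putting this together, an `.env` fragment for a hypothetical external Kafka/ZooKeeper cluster could look like the following (all host names are placeholders):

```shell
# .env -- external Kafka and ZooKeeper (placeholder hosts)
DH_ZK_ADDRESS=zk1.example.com,zk2.example.com,zk3.example.com
DH_ZK_PORT=2181
DH_KAFKA_BOOTSTRAP_SERVERS=kafka1.example.com:9092,kafka2.example.com:9092
# Raise worker threads on a larger machine, e.g. 6 for 8 cores
DH_RPC_SERVER_WORKER_THREADS=6
```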
DeviceHive uses JWT tokens for authentication of users and devices. For security reasons the secret value used for signing JWT tokens is generated at the first start of DeviceHive and stored in the database. If you want JWT tokens to work across DeviceHive installations, change the `JWT_SECRET` parameter:

- `JWT_SECRET` - overrides the randomly generated JWT signing secret.
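One way to set a stable secret across installations is to generate a random value once and put it in `.env`. A minimal sketch: the `JWT_SECRET` name comes from the text above, while using `openssl rand` for generation is just one option, not a DeviceHive requirement:

```shell
# Generate a random 64-character hex string to use as the JWT signing secret
JWT_SECRET=$(openssl rand -hex 32)
# Append it to .env manually or via:
echo "JWT_SECRET=${JWT_SECRET}"
```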
By default DeviceHive writes minimal logs for better performance. Two configuration parameters are supported:

- `DH_LOG_LEVEL` - log verbosity for DeviceHive Java classes. Defaults to `INFO` for both devicehive-frontend and devicehive-backend.
- `ROOT_LOG_LEVEL` - log verbosity for external dependencies. Defaults to `WARN` for devicehive-frontend and `INFO` for devicehive-backend.

Possible values are: `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`.
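For example, to get more verbose output while debugging, you could raise both levels in `.env` (example values):

```shell
# .env -- verbose logging for debugging (example values)
DH_LOG_LEVEL=DEBUG
ROOT_LOG_LEVEL=INFO
```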
You can find more configurable parameters in frontend and backend startup scripts.
In order to run DeviceHive stack in Docker containers, define environment variables as per your requirements and run:
sudo docker-compose up -d
You can now access your DeviceHive API at http://devicehive-host-url/api and Admin Console at http://devicehive-host-url/admin.
The DeviceHive Frontend service doesn't provide connection encryption and relies on an external service for that. For the Docker Compose installation we will use the Compose feature of reading configuration from multiple Compose files. The second Compose file starts an nginx reverse proxy with HTTPS support.
To configure secured HTTPS access to DeviceHive, follow these steps.
- Generate a key and certificate signing request for your domain, and sign the CSR with a Certificate Authority. The resulting certificate and key files must be in the PEM format.
- Create an `ssl` directory inside this directory and copy the certificate, key and CA certificate chain files to it.
- Generate a dhparam file for nginx:

openssl dhparam -out ssl/dhparam.pem 2048

- Create an nginx config file named `nginx-ssl-proxy.conf`. You can use the provided example and edit the certificate and key filenames:

cp nginx-ssl-proxy.conf.example nginx-ssl-proxy.conf
vi nginx-ssl-proxy.conf
Note that the `./ssl` directory is mounted to `/etc/ssl` in the container, so you only need to edit the last part of the path in the `ssl_certificate`, `ssl_certificate_key` and `ssl_trusted_certificate` parameters.
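As an illustration, the certificate paths in `nginx-ssl-proxy.conf` might end up looking like this (the filenames are placeholders for whatever you copied into `./ssl`; the `/etc/ssl` prefix comes from the volume mount described above):

```nginx
# Inside the server block of nginx-ssl-proxy.conf (placeholder filenames)
ssl_certificate         /etc/ssl/example.com.crt;
ssl_certificate_key     /etc/ssl/example.com.key;
ssl_trusted_certificate /etc/ssl/ca-chain.pem;
ssl_dhparam             /etc/ssl/dhparam.pem;
```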
- Run DeviceHive with the following command:

sudo docker-compose -f docker-compose.yml -f nginx-ssl-proxy.yml up -d

Or add the line `COMPOSE_FILE=docker-compose.yml:nginx-ssl-proxy.yml` to the `.env` file.
You can now access your DeviceHive API at https://devicehive-host-url/api and Admin Console at https://devicehive-host-url/admin.
To enable the optional DeviceHive Plugin service, run DeviceHive with the following command:

sudo docker-compose -f docker-compose.yml -f dh_plugin.yml up -d

Or add the line `COMPOSE_FILE=docker-compose.yml:dh_plugin.yml` to the `.env` file.
You can start the Kafka service with an additional Prometheus metrics exporter. The necessary parameters for the Kafka container are already configured in the `devicehive-metrics.yml` file. It will launch a JMX exporter on TCP port 7071.
Run DeviceHive with the following command:

sudo docker-compose -f docker-compose.yml -f devicehive-metrics.yml up -d

Or add the line `COMPOSE_FILE=docker-compose.yml:devicehive-metrics.yml` to the `.env` file.
The related Prometheus config for this exporter and a link to a Grafana dashboard can be found in the Monitoring Kafka with Prometheus blog post by a Prometheus developer.
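For reference, a minimal Prometheus scrape configuration for this exporter could look like the following sketch (the target host name is a placeholder; port 7071 is the exporter port mentioned above):

```yaml
# prometheus.yml fragment (placeholder target host)
scrape_configs:
  - job_name: 'kafka'
    static_configs:
      - targets: ['devicehive-host:7071']
```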
The Continuous Integration system uploads images built from every branch to the devicehiveci repository on Docker Hub.
To use these images, add `ci-images.yml` to the `COMPOSE_FILE` parameter in the `.env` file. If you don't have this parameter in the `.env` file, add it like that:

COMPOSE_FILE=docker-compose.yml:ci-images.yml
DeviceHive Frontend and Backend services can be run with remote JMX connections enabled. TCP ports 9999-10002 must be open on the firewall.

- Create `jmxremote.password` and `jmxremote.access` files in the current directory. `jmxremote.password` must be readable by the owner only. For example, if you want to grant JMX access to user 'developer' with password 'devpass', create these files like that:
echo "developer devpass" > jmxremote.password
echo "developer readwrite" > jmxremote.access
chmod 0400 jmxremote.password
- Open the `jmx-remote.yml` file and replace `<external hostname>` in the _JAVA_OPTIONS env vars with the actual hostname of the DeviceHive server.
- Run DeviceHive with the following command:

sudo docker-compose -f docker-compose.yml -f jmx-remote.yml up -d

Or add the line `COMPOSE_FILE=docker-compose.yml:jmx-remote.yml` to the `.env` file.
You can launch Management Center to monitor Hazelcast usage and health. TCP port 9980 must be open on the firewall.

- Add `hazelcast-management-center.yml` to the `COMPOSE_FILE` parameter in the `.env` file. If you don't have this parameter in the `.env` file, add it like that:

COMPOSE_FILE=docker-compose.yml:hazelcast-management-center.yml

- Run DeviceHive as usual.
- Open Hazelcast Management Center in a browser via http://devicehive-server:9980/mancenter. You'll be required to configure authentication on the first launch.
To back up the database, use the following command:
sudo docker-compose exec postgres sh -c 'pg_dump --no-owner -c -U ${POSTGRES_USER} ${POSTGRES_DB}' > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
This will create a dump_*.sql file in the current directory.
To restore the database from a SQL dump file, delete the existing database (if any), start only the postgres container, and pass the dump file contents to the psql utility in the container:
sudo docker-compose down
sudo docker volume ls -q|grep devicehive-db| xargs sudo docker volume rm
sudo docker-compose up -d postgres
cat dump_*.sql | sudo docker exec -i rdbmsimage_postgres_1 sh -c 'psql -U ${POSTGRES_USER} ${POSTGRES_DB}'
sudo docker-compose up -d
Example configuration steps for CentOS 7.3 to become a Docker host:
- Install CentOS 7.3, update it and reboot.
- Install docker-latest package:
sudo yum install -y docker-latest
-
Configure Docker to use LVM-direct storage backend. These steps are required for better disk IO performance:
- Add new disk with at least 10 GB of disk space. It will be used as physical volume for Docker volume group.
- Add the following lines to the /etc/sysconfig/docker-latest-storage-setup file. Change /dev/xvdb to your device.

VG=docker
DEVS=/dev/xvdb
- Run the storage configuration utility:
sudo docker-latest-storage-setup
-
Enable and start Docker service:
sudo systemctl enable docker-latest
sudo systemctl start docker-latest
-
Install docker-compose:
- Install and update the python-pip package manager:

sudo yum install -y python2-pip
sudo pip install -U pip
- Install docker-compose:
pip install docker-compose
Enjoy!