Linking Containers

Elasticsearch, Logstash & Kibana

Flocker provides functionality similar to Docker Container Linking. In this example you will learn how to deploy Elasticsearch, Logstash, and Kibana with Flocker, demonstrating how applications running in separate Docker containers can be linked together so that they can connect to one another, even when they are deployed on separate nodes.

The three applications are connected as follows:

  • Logstash receives logged messages and relays them to Elasticsearch.
  • Elasticsearch stores the logged messages in a database.
  • Kibana connects to Elasticsearch to retrieve the logged messages and present them in a web interface.

Create the Virtual Machines

You can reuse the Virtual Machines defined in the Vagrant configuration for the MongoDB tutorial. If you have since shut down or destroyed those VMs, boot them up again:

[email protected]:~/flocker-tutorial$ vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
==> node1: Importing base box 'clusterhq/flocker-dev'...
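
If you want to confirm that both nodes are up before continuing, you can check them from the same directory with Vagrant's status command (a quick sanity check, not part of the original tutorial):

vagrant status

Both node1 and node2 should be reported as running before you proceed.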

Download the Docker Images

The Docker images used by this example are quite large, so you should pre-fetch them to your nodes.

[email protected]:~/flocker-tutorial$ ssh -t [email protected] docker pull clusterhq/elasticsearch
...
[email protected]:~/flocker-tutorial$ ssh -t [email protected] docker pull clusterhq/logstash
...
[email protected]:~/flocker-tutorial$ ssh -t [email protected] docker pull clusterhq/kibana
...
[email protected]:~/flocker-tutorial$
[email protected]:~/flocker-tutorial$ ssh -t [email protected] docker pull clusterhq/elasticsearch
...
[email protected]:~/flocker-tutorial$ ssh -t [email protected] docker pull clusterhq/logstash
...
[email protected]:~/flocker-tutorial$ ssh -t [email protected] docker pull clusterhq/kibana
...
[email protected]:~/flocker-tutorial$
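
As an optional sanity check, reusing the same SSH access as the pull commands above, you can list the images on each node and confirm that the three images are now present locally:

ssh -t [email protected] docker images
ssh -t [email protected] docker images

Each listing should include clusterhq/elasticsearch, clusterhq/logstash, and clusterhq/kibana.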

Deploy on Node1

Download and save the following configuration files to your flocker-tutorial directory:

elk-application.yml

"version": 1
"applications":
  "elasticsearch":
    "image": "clusterhq/elasticsearch"
    "ports":
    - "internal": 9200
      "external": 9200
    "volume":
      "mountpoint": "/var/lib/elasticsearch/"
  "logstash":
    "image": "clusterhq/logstash"
    "ports":
    - "internal": 5000
      "external": 5000
    "links":
    - "local_port": 9200
      "remote_port": 9200
      "alias": "es"
  "kibana":
    "image": "clusterhq/kibana"
    "ports":
    - "internal": 8080
      "external": 80

elk-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["elasticsearch", "logstash", "kibana"]
  "172.16.255.251": []

Run flocker-deploy to start the three applications:

[email protected]:~/flocker-tutorial$ flocker-deploy elk-deployment.yml elk-application.yml
[email protected]:~/flocker-tutorial$
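
Before opening Kibana, you can optionally confirm that Elasticsearch is reachable on the externally mapped port 9200. This is just a quick check with curl, not part of the original tutorial; once Elasticsearch has finished starting, which can take a few seconds, it answers on its root URL with a small JSON status document:

curl http://172.16.255.250:9200/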

Connect to Kibana

Browse to port 80 on Node1 (http://172.16.255.250:80) with your web browser. You should see the Kibana web interface, but there won’t be any messages yet.

The Kibana web interface shows that in the last day there have been no events.

Generate Sample Log Messages

Use telnet to connect to the Logstash service running on Node1 and send it some sample JSON data.

[email protected]:~/flocker-tutorial$ telnet 172.16.255.250 5000
{"firstname": "Joe", "lastname": "Bloggs"}
{"firstname": "Fred", "lastname": "Bloggs"}
^]

telnet> quit
Connection closed.
[email protected]:~/flocker-tutorial$
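
If telnet is not available on your workstation, the same two messages can be sent with netcat instead; this is an equivalent alternative, assuming nc is installed locally (depending on your netcat variant you may need a timeout option such as -w 1 so the connection closes after the data is sent):

echo '{"firstname": "Joe", "lastname": "Bloggs"}' | nc 172.16.255.250 5000
echo '{"firstname": "Fred", "lastname": "Bloggs"}' | nc 172.16.255.250 5000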

Now refresh the Kibana web interface and you should see those messages.

The Kibana web interface shows that there have been two events in the last five minutes.
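
You can also verify that the messages were indexed by querying Elasticsearch directly, independently of the Kibana UI. The query below uses the standard Elasticsearch search API on the externally mapped port 9200 and searches across all indices, since the exact index name is chosen by the Logstash image:

curl 'http://172.16.255.250:9200/_search?q=firstname:Joe&pretty'

The response should include the JSON documents sent over telnet above.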

Move Elasticsearch to Node2

Download and save the following configuration file to the flocker-tutorial directory, overwriting the previous version:

elk-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["logstash", "kibana"]
  "172.16.255.251": ["elasticsearch"]

Then run flocker-deploy to move the Elasticsearch application along with its data to the new destination host:

[email protected]:~/flocker-tutorial$ flocker-deploy elk-deployment.yml elk-application.yml
[email protected]:~/flocker-tutorial$

Now verify that the Elasticsearch application has moved to the other VM:

[email protected]:~/flocker-tutorial$ ssh [email protected] docker ps
CONTAINER ID        IMAGE                                 COMMAND                CREATED             STATUS              PORTS                              NAMES
894d1656b74d        clusterhq/elasticsearch:latest   /bin/sh -c 'source /   2 minutes ago       Up 2 minutes        9300/tcp, 0.0.0.0:9200->9200/tcp   elasticsearch
[email protected]:~/flocker-tutorial$

And that it is no longer running on the original host:

[email protected]:~/flocker-tutorial$ ssh [email protected] docker ps
CONTAINER ID        IMAGE                            COMMAND                CREATED             STATUS              PORTS                    NAMES
abc5c08557d4        clusterhq/kibana:latest     /usr/bin/twistd -n w   45 minutes ago      Up 45 minutes       0.0.0.0:80->8080/tcp     kibana
44a4ee72d9ab        clusterhq/logstash:latest   /bin/sh -c /usr/loca   45 minutes ago      Up 45 minutes       0.0.0.0:5000->5000/tcp   logstash
[email protected]:~/flocker-tutorial$

If you refresh the Kibana web interface, you should still see the messages that were logged earlier: the Elasticsearch data has moved to Node2 along with the container.
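
As a final check that the data moved with the container, you can repeat the Elasticsearch query against Node2; port 9200 is still mapped externally by the application configuration, so the same documents should now be served from the new host (allow a few seconds for Elasticsearch to finish starting there):

curl 'http://172.16.255.251:9200/_search?q=lastname:Bloggs&pretty'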

This concludes the Elasticsearch-Logstash-Kibana example. Read more about linking containers in our Configuring Flocker documentation.