Monday, November 20, 2017

Apache Log Analysis in 5 Minutes with ELK & Docker

(ELK: Elasticsearch, Logstash & Kibana)
A simple demo to show how docker & docker-compose make it easy to run useful services.
In this demo we are going to spin up the following applications:
    • logstash
    • elasticsearch
    • kibana
    • drupal
with only a 12-line yaml file and one command!

Dependencies

    • docker
    • netcat, or one of its ilk (nc, ncat, socat)
We won't go into the details of installing docker here, though.

1. Clone the repo

    1. git clone https://github.com/fmbento/apache-elk-in-five-minutes.git
    2. cd apache-elk-in-five-minutes

2. Make & activate a virtualenv (optional but recommended)

    1. virtualenv venv
    2. source venv/bin/activate

3. Install docker-compose

Either pip install -r requirements.txt or pip install docker-compose

4. Create a .env file

Create a file called .env with the following contents:
LOGSTASH_CONFIG_URL=https://raw.githubusercontent.com/fmbento/apache-elk-in-five-minutes/master/logstash.conf
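
You can create it straight from the shell, if you prefer (same contents as above):
cat > .env <<'EOF'
LOGSTASH_CONFIG_URL=https://raw.githubusercontent.com/fmbento/apache-elk-in-five-minutes/master/logstash.conf
EOF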

5. Run the container

%> docker run -d --name elkapache --env-file=.env -p "5200:9200" -p "5601:9292" -p "3333:3333" pblittle/docker-logstash
If you just want to test it with an ad-hoc service, you can try drupal:
%> docker run -d -p "8080:80" drupal:latest
If you're going to analyse big logs, it's better to disable logstash's container logging when creating and running the container (flag: --log-driver=none):
%> docker run -d --name elkapache --log-driver=none --env-file=.env -p "8200:9200" -p "5601:9292" -p "3333:3333" pblittle/docker-logstash
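To confirm the flag took effect, you can ask docker for the container's log driver (a quick sanity check):
docker inspect --format='{{.HostConfig.LogConfig.Type}}' elkapache
It should print none when logging is disabled.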

5a. Adjust the ES port in the Kibana config if you are already running another ES on port 9200 (here we point it to the remapped port 8200):

docker exec -it elkapache /bin/bash
root@21afd475079d:/opt/logstash# sed -i 's/9200/8200/g' ./vendor/kibana/config.js
root@21afd475079d:/opt/logstash# exit
docker restart elkapache
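
The same edit also works without an interactive shell, assuming the config path shown above:
docker exec elkapache sed -i 's/9200/8200/g' /opt/logstash/vendor/kibana/config.js
docker restart elkapache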

6. Check that the services are running

docker ps will give you a list of running containers; you should see 2.
Browse to http://localhost:5601 to reach the Kibana interface.
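If you want a tighter view than the full docker ps output, the format flag narrows it to names and ports:
docker ps --format 'table {{.Names}}\t{{.Ports}}'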

7a. Just testing? Let's pipe our drupal apache logs into ELK!

    1. get the drupal container id from docker ps.
    2. run docker logs -f [container_id] 2>&1 | nc localhost 3333
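
Steps 1 and 2 can also be collapsed into one line by letting docker find the container for you (the ancestor filter matches containers started from the drupal:latest image):
docker logs -f $(docker ps -q -f ancestor=drupal:latest) 2>&1 | nc localhost 3333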

7b. Or pipe an apache log file from somewhere else into logstash

cat /var/log/apache2/access.log | nc localhost 3333
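
For a live feed rather than a one-shot import, tail works the same way:
tail -F /var/log/apache2/access.log | nc localhost 3333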

8. Kibana

You should now be able to go back and forth between drupal and kibana and see the drupal apache log events populating the default dashboard.

9. Maintenance and disk usage

If you haven't disabled logging when launching the container, check the log's size with:
docker inspect --format='{{.LogPath}}' elkapache | xargs sudo ls -lash
and, if it's too big, truncate it with:
docker inspect --format='{{.LogPath}}' elkapache | xargs sudo tee
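The tee trick above works because xargs gives the child command an empty stdin, so tee just truncates the file; if you'd rather be explicit about it (assuming GNU coreutils), truncate does the same:
sudo truncate -s 0 "$(docker inspect --format='{{.LogPath}}' elkapache)"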

:: adapted from: https://github.com/lbjay/apache-elk-in-five-minutes 

Saturday, October 21, 2017

Docker: PhpMyAdmin communicating with MySQL container with IPtables / UFW


Important:
localhost in PhpMyAdmin is really the localhost of the container, not of the server hosting docker.

Two methods:

A) Via docker NAT (internal to the NAT):


a1) Find the IP of the mysql container:

docker inspect mysql
 e.g.  "IPAddress": "172.17.0.3"
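
You can also pull just the address out with a Go template instead of reading the whole JSON (this path is valid for containers on the default bridge network):
docker inspect -f '{{ .NetworkSettings.IPAddress }}' mysql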

a2) Copy config.inc.php to the host filesystem to edit it

docker cp phpmyadmin:/etc/phpmyadmin/config.inc.php .

a3) Add the IP found in step a1 to the config

Refer to How to Add Multiple Hosts in phpMyAdmin
<https://tecadmin.net/add-multiple-hosts-in-phpmyadmin/#>

e.g.:

$i++;
$cfg['Servers'][$i]['host'] = '172.17.0.3'; //provide hostname and port if other than default
$cfg['Servers'][$i]['user'] = 'root';   //user name for your remote server
$cfg['Servers'][$i]['password'] = 'myrootpass';  //password
$cfg['Servers'][$i]['auth_type'] = 'config';       // keep it as config
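
If you'd rather script step a3 too, you can append the block from the host before copying the file back -- a sketch, assuming config.inc.php doesn't end with a closing ?> tag (the quoted EOF keeps the shell from expanding the PHP variables):
cat >> config.inc.php <<'EOF'
$i++;
$cfg['Servers'][$i]['host'] = '172.17.0.3';
$cfg['Servers'][$i]['user'] = 'root';
$cfg['Servers'][$i]['password'] = 'myrootpass';
$cfg['Servers'][$i]['auth_type'] = 'config';
EOF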

a4) Copy it back to the container

docker cp config.inc.php phpmyadmin:/etc/phpmyadmin/

a5) Restart the container

docker restart phpmyadmin

B) Routing through the host (default, easier)


[haven't tested, but this should work -- only needed if you have tightened security]

If 172.17.0.1 is the IP of docker's NAT gateway (you can also get it via docker inspect mysql):

sudo iptables -I DOCKER 7 -p tcp -s 172.17.0.1 -d 172.17.0.3 --dport 3306 -j ACCEPT 

(the 7 is the position to insert the rule at -- adjust it to however many rules the DOCKER chain already has; check with sudo iptables -S)
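
To see exactly which position to insert at, list the DOCKER chain with rule numbers:
sudo iptables -L DOCKER -n --line-numbers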

Monday, February 20, 2017

Docker: only commit to image if service URL returns HTTP code 200 (OK)

Safe backup image overwriting #2


With docker, there are several ways to back up your data or entire containers; one of them is to commit to an image, which can later be used to fire up a container with the exact same contents the source container had when the image was created.

But if for some reason the container has gone bad, you don't want to overwrite that image with its contents. If it's a service with an HTTP endpoint, one way is to check whether its base URL returns HTTP code 200 (OK); if it does, everything is fine and we can go ahead and replace the last image with one built from the current contents:

#!/bin/bash
# $1 = container name or id, $2 = base URL to check
ret=$(curl -I -s "$2" -o /dev/null -w "%{http_code}\n")
# commit only when the endpoint answers 200 OK
((ret==200)) && docker commit "$1" "${1}_fb"

In this case, I'm creating an image named after the running container plus a "_fb" suffix, to identify it as the one with my contents.

Call it via
./<script name>.sh <container name or id> <base URL>
If you want to do some basic clean-up, delete the untagged images by adding this to the script:
docker rmi $(docker images -a | grep "^<none>" | awk '{print $3}')
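To run the check nightly, a crontab entry along these lines works (the script path, container name and URL here are just placeholders -- substitute your own):
0 3 * * * /path/to/commit-if-ok.sh mycontainer http://localhost:8080/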
Enjoy!

Sunday, February 19, 2017

Docker: only commit to image if container running


Safe backup image overwriting #1



With docker, there are several ways to back up your data or entire containers; one of them is to commit to an image, which can later be used to fire up a container with the exact same contents the source container had when the image was created.

But if for some reason the container has gone bad, you don't want to overwrite that image with its contents. One way is to check whether the container is running: if it is, odds are that everything is fine, and we can go ahead and replace the last image with one built from the current contents:

#!/bin/bash
# $1 = container name or id
# commit only when the container reports State.Running == true
if [ "$(docker inspect -f '{{.State.Running}}' "$1")" = "true" ]; then
    docker commit "$1" "${1}_fb"
fi

In this case, I'm creating an image named after the running container plus a "_fb" suffix, to identify it as the one with my contents.

Call it via
./<script name>.sh <container name or id>
If you want to do some basic clean-up, delete the untagged images by adding this to the script:
docker rmi $(docker images -a | grep "^<none>" | awk '{print $3}')
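You can also fold in the HTTP check from post #2 above, so the commit only happens when the container is running and (if a URL is given) its endpoint answers 200 -- a sketch:
#!/bin/bash
# $1 = container name or id, $2 = optional base URL to check
running=$(docker inspect -f '{{.State.Running}}' "$1")
if [ "$running" = "true" ]; then
    if [ -n "$2" ]; then
        code=$(curl -I -s "$2" -o /dev/null -w "%{http_code}")
        [ "$code" = "200" ] || exit 1
    fi
    docker commit "$1" "${1}_fb"
fi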
Enjoy! 

Saturday, February 18, 2017

Simple yet smart backup script for ElasticSearch Data

Backup script for ElasticSearch Data

Simple yet smart


If you have an ES endpoint that for some reason got a severe DELETE, and you only noticed it a couple of days later (ok, we aren't talking about a production deployment, of course), here's a little script that checks whether the current backup file is bigger than 50MB, and only replaces the last backup if it is. First it checks whether it's bigger than the biggest one so far:
#!/bin/bash
cd /DATA/backups || exit 1
# archive today's data directory
tar cvfz ESdata-today.tar.gz ../ESdata/
today=$(stat -c %s ESdata-today.tar.gz)
prev=$(stat -c %s ESdataBigger.tar.gz)
if [ "$today" -gt "$prev" ]; then
    # new record size: keep it as the biggest backup so far
    mv ESdata-today.tar.gz ESdataBigger.tar.gz
elif [ "$today" -gt 50000000 ]; then
    # plausible size: replace the regular backup
    mv ESdata-today.tar.gz ESdata.tar.gz
fi
 
The 50MB size is the minimum you'd expect for the backup; if it's smaller, chances are you've lost a lot of data. Adjust it to your needs (over time).
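
For completeness, restoring is just unpacking the archive back under /DATA -- assuming the same layout as above (GNU tar strips the leading ../ when creating, so the members are stored as ESdata/...):
cd /DATA
tar xvfz backups/ESdata.tar.gz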