Monday, November 20, 2017

Apache Log Analysis in 5 Minutes with ELK & Docker


(ELK: Elasticsearch, Logstash & Kibana)
A simple demo to show how docker & docker-compose make it easy to run useful services.
In this demo we are going to spin up the following applications:
    • logstash
    • elasticsearch
    • kibana
    • drupal
with only a 12-line yaml file and one command!
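
For reference, here's a minimal sketch of what such a compose file might look like -- the service names and the drupal port mapping are illustrative assumptions on my part; the real file ships in the repo cloned below:

# hypothetical docker-compose.yml (the actual one is in the repo)
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  elk:
    image: pblittle/docker-logstash
    env_file: .env
    ports:
      - "5601:9292"
      - "3333:3333"
  drupal:
    image: drupal:latest
    ports:
      - "80:8080"
EOF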

Dependencies

    • docker
    • netcat, or one of its ilk (nc, ncat, socat)
Not going to go into details here on docker installation, though.

1. Clone the repo

    1. git clone https://github.com/fmbento/apache-elk-in-five-minutes.git
    2. cd apache-elk-in-five-minutes

2. Make & activate a virtualenv (optional but recommended)

    1. virtualenv venv
    2. source venv/bin/activate

3. Install docker-compose

Either pip install -r requirements.txt or pip install docker-compose
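
To confirm the install worked, a quick version check:

# print the installed docker-compose version
docker-compose --version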

4. Create a .env file

Create a file called .env with the following contents:
LOGSTASH_CONFIG_URL=https://raw.githubusercontent.com/fmbento/apache-elk-in-five-minutes/master/logstash.conf
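
Or create it straight from the shell:

# write the single required variable into .env in one go
echo 'LOGSTASH_CONFIG_URL=https://raw.githubusercontent.com/fmbento/apache-elk-in-five-minutes/master/logstash.conf' > .env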

5. Run the container

%> docker run -d --name elkapache --env-file=.env -p "5200:9200" -p "5601:9292" -p "3333:3333" pblittle/docker-logstash
If you just want to test it with an ad-hoc service, you could try drupal with:
%> docker run -d -p "80:8080" drupal:latest
If you're going to analyse big logs, it's better to disable logstash logging when creating and running the container (flag: --log-driver=none):
docker run -d --name elkapache --log-driver=none --env-file=.env -p "8200:9200" -p "5601:9292" -p "3333:3333" pblittle/docker-logstash

5a. Adjust the ES port in the Kibana config if you are already using another ES on port 9200:

docker exec -it elkapache /bin/bash
root@21afd475079d:/opt/logstash# sed -i 's/9200/8200/g' ./vendor/kibana/config.js
root@21afd475079d:/opt/logstash# exit
docker restart elkapache
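
To confirm Elasticsearch now answers on the remapped port (assuming the 8200 mapping used above):

# ES returns a small JSON banner when it's up
curl -s http://localhost:8200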

6. Check that the services are running

docker ps will give you a list of running containers. You should see 2.
Browse to http://localhost:5601 to reach Kibana (per the port mapping above).
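
For a compact view of names and port mappings (docker ps accepts Go-template formatting):

# show just container names and their published ports
docker ps --format '{{.Names}}\t{{.Ports}}'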

7a. Just testing? Let's pipe our drupal apache logs into ELK!

    1. get the drupal container id from docker ps.
    2. run docker logs -f [container_id] 2>&1 | nc localhost 3333 (or use the one-liner below)
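
Both steps can be collapsed into one line, assuming the drupal container was started from drupal:latest as above:

# look up the container id by image and stream its logs into logstash
docker logs -f $(docker ps -q -f ancestor=drupal:latest) 2>&1 | nc localhost 3333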

7b. Or pipe an apache log file from somewhere else into logstash

cat /var/log/apache2/access.log | nc localhost 3333
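
If the log is still being written to, stream it instead of cat'ing it once (tail -F keeps reading across log rotations):

# continuously feed new apache entries into logstash
tail -F /var/log/apache2/access.log | nc localhost 3333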

8. Kibana

You should now be able to go back and forth between drupal and kibana and see the drupal apache log events populating the default dashboard.

9. Maintenance and used space

If you haven't disabled logging when launching the container, check how big its log file has grown:
docker inspect --format='{{.LogPath}}' elkapache | xargs sudo ls -lash
and if it's too big, prune (truncate) it with
docker inspect --format='{{.LogPath}}' elkapache | xargs sudo tee
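
The same prune, with the intent spelled out (truncate is part of GNU coreutils):

# empty the container's log file in place
sudo truncate -s 0 $(docker inspect --format='{{.LogPath}}' elkapache)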

:: adapted from: https://github.com/lbjay/apache-elk-in-five-minutes 

Saturday, October 21, 2017

Docker: PhpMyAdmin communicating with MySQL container with IPtables / UFW




Important:
localhost on PhpMyAdmin is really the localhost of the container, not of the server hosting Docker.

Two methods:

A) Via docker NAT (internal to the NAT):


a1) Find the IP of mysql container:

docker inspect mysql
 e.g.  "IPAddress": "172.17.0.3"
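
To print just the address, without grepping through the full JSON (a one-liner sketch):

# extract the container IP from whatever network it's attached to
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql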

a2) Copy config.inc.php to the host filesystem to edit it

docker cp phpmyadmin:/etc/phpmyadmin/config.inc.php .

a3) Add the IP found in step a1 as an extra server

Refer to How to Add Multiple Hosts in phpMyAdmin
<https://tecadmin.net/add-multiple-hosts-in-phpmyadmin/#>

e.g.:

$i++;
$cfg['Servers'][$i]['host'] = '172.17.0.3'; //provide hostname and port if other than default
$cfg['Servers'][$i]['user'] = 'root';   //user name for your remote server
$cfg['Servers'][$i]['password'] = 'myrootpass';  //password
$cfg['Servers'][$i]['auth_type'] = 'config';       // keep it as config

a4) Copy it back to the container

docker cp config.inc.php phpmyadmin:/etc/phpmyadmin/

a5) Restart the container

docker restart phpmyadmin

B) Routing through the host (default, easier)


[I haven't tested this, but it should work -- only needed if you have tightened security]

If 172.17.0.1 is the IP of docker's NAT gateway (get it also via docker inspect mysql):

sudo iptables -I DOCKER 7 -p tcp -s 172.17.0.1 -d 172.17.0.3 --dport 3306 -j ACCEPT 

(use position 7 if there are already 7 rules in the DOCKER chain of iptables -- check it via sudo iptables -S)
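
If you manage the firewall through UFW instead (as the title suggests), an equivalent rule would look like this -- an untested sketch, and note that Docker's own iptables rules may bypass UFW:

# allow the docker gateway to reach the mysql container on 3306
sudo ufw allow from 172.17.0.1 to 172.17.0.3 port 3306 proto tcp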

Monday, February 20, 2017

Docker: only commit to image if service URL returns HTTP code 200 (OK)


Safe backup image overwriting #2


With docker, there are several ways to backup your data or entire containers; one of those ways is to commit to an image, which can later be used to fire a container with the exact same contents the source container had when that image was created.

But if for some reason the container has gone bad, you don't want to overwrite that image with its contents. If this is a service with an HTTP endpoint, one way is to check if its base URL returns HTTP code 200 (OK); if it does, everything is fine and we can go ahead and replace the last image with one built from the current contents:

#!/bin/bash
# $1 = container name or id, $2 = base URL to check
# HEAD-request the URL, discarding the body, keeping only the HTTP status code
ret=$(curl -I -s "$2" -o /dev/null -w "%{http_code}")
# commit to an image named <container>_fb only on a 200 OK
((ret==200)) && docker commit "$1" "${1}_fb"

In this case, I'm creating an image with the name of the running container plus a "_fb" suffix to identify that it's the one with my contents.

Call it via
./<script name>.sh <container name or id> <base URL>
If you want to make some basic clean-up, delete the untagged images by adding to the script
docker rmi $(docker images -a | grep "^<none>" | awk '{print $3}')
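
And to run the whole thing nightly, a crontab entry along these lines would do (the script path, container name, and schedule here are just examples):

# check the endpoint and commit every day at 3am
0 3 * * * /path/to/commit_if_200.sh mycontainer http://localhost/ >/dev/null 2>&1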
Enjoy!

Sunday, February 19, 2017

Docker: only commit to image if container running



Safe backup image overwriting #1



With docker, there are several ways to backup your data or entire containers; one of those ways is to commit to an image, which can later be used to fire a container with the exact same contents the source container had when that image was created.

But if for some reason the container has gone bad, you don't want to overwrite that image with its contents. One way is to check if the container is running: if it is, odds are that everything is ok, and we can go ahead and replace the last image with one built from the current contents:

#!/bin/bash
# $1 = container name or id
# only commit if the container reports State.Running = true
if [ "$(docker inspect -f '{{.State.Running}}' "$1")" = "true" ];
then
     docker commit "$1" "${1}_fb"
fi

In this case, I'm creating an image with the name of the running container plus a "_fb" suffix to identify that it's the one with my contents.

Call it via
./<script name>.sh <container name or id>
If you want to make some basic clean-up, delete the untagged images by adding to the script
docker rmi $(docker images -a | grep "^<none>" | awk '{print $3}')
Enjoy! 

Saturday, February 18, 2017

Simple yet smart backup script for ElasticSearch Data



If you have an ES endpoint that for some reason suffered a severe DELETE, and you only noticed it a couple of days later (ok, we aren't talking about any production deployment, of course), here's a little script that checks if the current backup file is bigger than 50MB, and only replaces the last backup if so. First it checks whether it's bigger than the current biggest one:
#!/bin/bash
cd /DATA/backups
# snapshot the ES data directory
tar cvfz ESdata-today.tar.gz ../ESdata/
today=$(stat -c %s ESdata-today.tar.gz)
prev=$(stat -c %s ESdataBigger.tar.gz)
# keep the biggest snapshot seen so far in ESdataBigger.tar.gz
if [ "$today" -gt "$prev" ];
then
    mv ESdata-today.tar.gz ESdataBigger.tar.gz
# otherwise, only overwrite the regular backup if it looks sane (>50MB)
elif [ "$today" -gt 50000000 ];
then
    mv ESdata-today.tar.gz ESdata.tar.gz
fi
 
The 50MB size is the minimum expected for it; if the backup is smaller, chances are you've lost lots of data. Adjust it to your needs (over time).
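
Before trusting any of these tarballs, a quick integrity check doesn't hurt (tar exits non-zero on a corrupt archive):

# list the archive without extracting; prints OK only if it's readable
tar tzf /DATA/backups/ESdata.tar.gz > /dev/null && echo OK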

Monday, August 29, 2016

Testing your specific stacks solutions (NodeJS / Ruby / Python, Go, etc.) with Heroku

:: Some findings from this weekend ::
In case you need to test something on the fly, some app that needs a specific server / stack, you can use Heroku, for instance, right from GitHub.

You just need to place this button on the readme (or a simple link, https://heroku.com/deploy):
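
For a GitHub readme, that's the standard Heroku deploy-button snippet (markdown; the button image is Heroku's official asset):

[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy)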


Like Docker Hub looks for a Dockerfile, Heroku also looks for a specific config file, app.json.

E.g., from the aforementioned tutorial (or rather, the fork I made to my own GitHub account):

{
  "name": "React Tutorial Server",
  "description": "Code from the React tutorial",
  "keywords": [ "react", "reactjs", "tutorial" ],
  "repository": "https://github.com/fmbento/react-tutorial",
  "logo": "https://facebook.github.io/react/img/logo.svg",
  "website": "http://facebook.github.io/react/docs/tutorial.html",
  "success_url": "/",
  "env": {
    "BUILDPACK_URL": "https://github.com/heroku/heroku-buildpack-nodejs.git"
  }
}

Here the key is really the repository link and the BuildPack (in this case, we want to run it on Node.JS, and Heroku knows there should be a server.js there, at the repository).

At Heroku, you will be asked for an optional app name (it must be unique in the Heroku universe -- else, leave it blank and it will randomly assign one), choose if you want it to run from the US or Europe (AWS), click “Deploy for Free”, and you’re done.

The example of this react tutorial, running at Heroku: https://react-gss.herokuapp.com/

You have all sorts of control and data over it at the Dashboard.

And yes, you can even fire an update in the App via commits to GitHub.

Or even DropBox file changes from a sync'ed folder!


And that’s it, for now; hope you’ve enjoyed this series of 4 posts.

Enjoy!


Build and update Docker Containers automatically from GitHub

:: Some findings from this weekend ::
To have Docker Hub build and update containers automatically for you, triggered by any commit to GitHub:
a) You'll need to authorize Docker Hub to access your GitHub:
https://hub.docker.com/account/authorized-services/

b) Then you'll be able to create an Automated Build from one of your GitHub repos;

c) Fill or confirm the “Repository Namespace & Name”, and it will create it for you;

d) Next, go to the build settings and select where the Dockerfile is located (inside that repo) -- save changes; you can manually trigger a build from here too.
(Note: you have to have a Dockerfile present there in the repo -- see further notes below)

e) Check the build details to see when it has finished (it may take minutes, or hours if your Dockerfile has something wrong).
The tricky part is really the Dockerfile -- it took me until past 4am last Saturday to get it right, namely the RUN and CMD instructions (the first runs a command at build time; the second starts the daemon/process when the container launches, etc.).
Here are the files for the react-tutorial mentioned in my previous post:

a) Node.JS (the simpler one -- npm install is necessary, because we need npm modules that aren’t there at the repo):

FROM node

MAINTAINER Filipe Bento <fbento@ebsco.com>

RUN mkdir -p /code

WORKDIR /code

RUN git clone --depth 1 --single-branch https://github.com/fmbento/react-tutorial.git /code

RUN npm install

EXPOSE 3000

CMD ["node", "server.js"]


b) And Python:

FROM python

MAINTAINER Filipe Bento <fbento@ebsco.com>

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update --quiet > /dev/null && \
    apt-get install --assume-yes --force-yes -qq git && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN mkdir -p /code

WORKDIR /code

RUN git clone --depth 1 --single-branch https://github.com/fmbento/react-tutorial.git /code && \
    pip install -r requirements.txt

RUN sed -i "s/app.run(port=int(os.environ.get(\"PORT\",3000)))/app.run(debug=True, host='0.0.0.0', port=int(os.environ.get(\"PORT\",3000)))/g" server.py

EXPOSE 3000

CMD ["python", "server.py"]


Quite simple, once you master what it’s doing.
Note that you can also build containers right from your workspace using Docker Compose -- here’s a good page about it (Dockerizing a PHP Application).

Next:  Testing your specific stacks solutions (NodeJS / Ruby / Python / Go, etc.) with Heroku
Enjoy!