Saturday, October 21, 2017

Docker: PhpMyAdmin communicating with MySQL container with IPtables / UFW




Important:
localhost in phpMyAdmin is really the localhost of the container, not of the server hosting Docker

Two methods:

A) Via docker NAT (internal to the NAT):


a1) Find the IP of the mysql container:

docker inspect mysql
 e.g.  "IPAddress": "172.17.0.3"
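As a side note, docker inspect can apply a Go template so you get just the IP instead of digging through the full JSON; a small sketch (the container name mysql is the one from this post):

```shell
# Print only a container's IP address, instead of reading the full
# "docker inspect" JSON output. Works for the default bridge network.
container_ip() {
  docker inspect -f '{{.NetworkSettings.IPAddress}}' "$1"
}

# usage (assuming the container from this post):
#   container_ip mysql     # e.g. 172.17.0.3
```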

a2) Copy config.inc.php to the host filesystem to edit it

docker cp phpmyadmin:/etc/phpmyadmin/config.inc.php .

a3) Add a new server entry using the IP found in step a1

Refer to How to Add Multiple Hosts in phpMyAdmin
<https://tecadmin.net/add-multiple-hosts-in-phpmyadmin/#>

e.g.:

$i++;
$cfg['Servers'][$i]['host'] = '172.17.0.3'; //provide hostname and port if other than default
$cfg['Servers'][$i]['user'] = 'root';   //user name for your remote server
$cfg['Servers'][$i]['password'] = 'myrootpass';  //password
$cfg['Servers'][$i]['auth_type'] = 'config';       // keep it as config

a4) Copy it back to the container

docker cp config.inc.php phpmyadmin:/etc/phpmyadmin/

a5) Restart the container

docker restart phpmyadmin

B) Routing through the host (default, easier)


[I haven't tested this, but it should work -- it's only needed if you have tightened security]

If 172.17.0.1 is the IP of Docker's NAT gateway (you can also get it via docker inspect mysql):

sudo iptables -I DOCKER 7 -p tcp -s 172.17.0.1 -d 172.17.0.3 --dport 3306 -j ACCEPT 

(-I DOCKER 7 inserts the rule at position 7 of the DOCKER chain; adjust the number to the rules already there -- check them via sudo iptables -S)
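To pick the right position number for -I, you can count the rules already in the DOCKER chain; a small helper sketch (needs root for iptables):

```shell
# Count the rules currently in the DOCKER chain, so you know which
# position number to pass to "iptables -I DOCKER <n>".
# "iptables -S DOCKER" prints one "-A DOCKER ..." line per rule.
docker_rule_count() {
  sudo iptables -S DOCKER | grep -c '^-A'
}

# usage:  sudo iptables -I DOCKER "$(( $(docker_rule_count) + 1 ))" ...
```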

Monday, February 20, 2017

Docker: only commit to image if service URL returns HTTP code 200 (OK)


Safe backup image overwriting #2


With docker, there are several ways to backup your data or entire containers; one of those ways is to commit to an image, which can later be used to fire a container with the exact same contents the source container had when that image was created.

But if for some reason the container has gone bad, you don't want to overwrite that image with its contents. If this is a service with an HTTP endpoint, one way to check is whether its base URL returns HTTP code 200 (OK); if it does, everything is fine and we can go ahead and replace the last image with one built from the current contents:

#!/bin/bash
# $1 = container name or id, $2 = base URL to check
ret=$(curl -I -s "$2" -o /dev/null -w "%{http_code}")
((ret==200)) && docker commit "$1" "${1}_fb"

In this case, I'm creating an image named after the running container plus a "_fb" suffix, to identify it as the one with my contents.

Call it via
./<script name>.sh <container name or id> <base URL>
If you want to make some basic clean-up, delete the untagged images by adding to the script
docker rmi $(docker images -a | grep "^<none>" | awk '{print $3}')
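On newer Docker versions (1.13+), that same clean-up of dangling images is a single built-in command; a sketch:

```shell
# Remove all dangling (untagged) images in one go; -f skips the
# confirmation prompt. Equivalent to the grep/awk pipeline above.
prune_dangling() {
  docker image prune -f
}
```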
Enjoy!

Sunday, February 19, 2017

Docker: only commit to image if container running



Safe backup image overwriting #1



With docker, there are several ways to backup your data or entire containers; one of those ways is to commit to an image, which can later be used to fire a container with the exact same contents the source container had when that image was created.

But if for some reason the container has gone bad, you don't want to overwrite that image with its contents. One way to check is whether the container is running: if it is, odds are that everything is fine, and we can go ahead and replace the last image with one built from the current contents:

#!/bin/bash
# $1 = container name or id
if [ "$(docker inspect -f '{{.State.Running}}' "$1")" = "true" ];
then
     docker commit "$1" "${1}_fb"
fi

In this case, I'm creating an image named after the running container plus a "_fb" suffix, to identify it as the one with my contents.

Call it via
./<script name>.sh <container name or id>
If you want to make some basic clean-up, delete the untagged images by adding to the script
docker rmi $(docker images -a | grep "^<none>" | awk '{print $3}')
Enjoy! 

Saturday, February 18, 2017

Simple yet smart backup script for ElasticSearch Data



If you have an ES endpoint that for some reason got hit by a severe DELETE, and you only noticed it a couple of days later (OK, we aren't talking about a production deployment, of course), here's a little script that only replaces the last backup if the current backup file is bigger than 50MB. First, it checks whether it's bigger than the biggest one so far:
#!/bin/bash
cd /DATA/backups
tar cvfz ESdata-today.tar.gz ../ESdata/
today=$(stat -c %s ESdata-today.tar.gz)   # size of today's archive, in bytes
prev=$(stat -c %s ESdataBigger.tar.gz)    # size of the biggest archive so far
if [ "$today" -gt "$prev" ]; then
    mv ESdata-today.tar.gz ESdataBigger.tar.gz
elif [ "$today" -gt 50000000 ]; then
    mv ESdata-today.tar.gz ESdata.tar.gz
fi
 
The 50MB threshold is the minimum size expected for the archive; if it's smaller, chances are you've lost lots of data. Adjust it to your needs (over time).
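If you also want one dated copy per day (so a single bad day can't clobber your only good archive), here's a minimal pure-shell sketch on top of the script above -- the size threshold and .tar.gz naming are assumptions carried over from it:

```shell
# Copy the archive to a dated file (e.g. ESdata-today-2017-02-18.tar.gz),
# but only if it is above a minimum sane size, given in bytes.
backup_rotate() {
  src=$1
  min=$2
  size=$(stat -c %s "$src")          # archive size in bytes
  if [ "$size" -gt "$min" ]; then
    cp "$src" "${src%.tar.gz}-$(date +%F).tar.gz"
  fi
}

# usage:  backup_rotate ESdata-today.tar.gz 50000000
```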

Monday, August 29, 2016

Testing your specific stacks solutions (NodeJS / Ruby / Python, Go, etc.) with Heroku

:: Some findings from this weekend ::
              
In case you need to test something on the fly -- some app that needs a specific server / stack -- you can use Heroku, for instance, right from GitHub.

You just need to place this on the readme (or a simple link, https://heroku.com/deploy):


Just like Docker Hub looks for a Dockerfile, Heroku looks for a specific config file, app.json.

E.g., from the aforementioned tutorial (or rather, the fork I made to my own GitHub account):

{
  "name": "React Tutorial Server",
  "description": "Code from the React tutorial",
  "keywords": [ "react", "reactjs", "tutorial" ],
  "repository": "https://github.com/fmbento/react-tutorial",
  "logo": "https://facebook.github.io/react/img/logo.svg",
  "website": "http://facebook.github.io/react/docs/tutorial.html",
  "success_url": "/",
  "env": {
    "BUILDPACK_URL": "https://github.com/heroku/heroku-buildpack-nodejs.git"
  }
}

Here the key is really the repository link and the buildpack (in this case, we want to run it on Node.js, and Heroku knows there should be a server.js in the repository).

At Heroku, you will be asked for an optional app name (it must be unique in the Heroku universe -- or leave it blank and one will be randomly assigned), choose whether you want it to run from the US or Europe (AWS), click “Deploy for Free”, and you're done.

                The example of this react tutorial, run at Heroku: https://react-gss.herokuapp.com/

                You have all sort of control and data over it, at the Dashboard:

And yes, you can even fire an update in the app via commits to GitHub:

Or even via Dropbox file changes from a synced folder!


And that’s it for now; hope you’ve enjoyed this series of 4 posts.

Enjoy!


Build and update Docker Containers automatically from GitHub

:: Some findings from this weekend ::
To have Docker Hub build and update containers automatically for you, triggered by any commit to GitHub:
a)      You’ll need to authorize Docker Hub to access your GitHub account:
       https://hub.docker.com/account/authorized-services/
b)      Then you’ll be able to select the GitHub repository to build from;
c)      Fill or confirm the “Repository Namespace & Name”, and it will create it for you;
d)      Next, go to the build settings and select where the Dockerfile is located (inside that repo) -- save changes; you can manually trigger a build from here too.
(Note: you have to have a Dockerfile present there in the repo -- see further notes below)
e)      Check the build details to see when it has finished (it may take minutes, or hours if your Dockerfile has something wrong).
The tricky part is really the Dockerfile -- 4am+ last Saturday to get it right, namely the RUN and CMD instructions (the first runs a command at build time; the second starts the daemon, etc.).
Here are the files for the react-tutorial mentioned in my previous post:

a)      Node.JS (the simpler one -- npm install is necessary because we need npm modules that aren’t in the repo):

FROM node

MAINTAINER Filipe Bento <fbento@ebsco.com>

RUN mkdir -p /code

WORKDIR /code

RUN git clone --depth 1 --single-branch https://github.com/fmbento/react-tutorial.git /code

RUN npm install

EXPOSE 3000

CMD ["node", "server.js"]


b)      And Python:

FROM python

MAINTAINER Filipe Bento <fbento@ebsco.com>

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update --quiet > /dev/null && \
    apt-get install --assume-yes --force-yes -qq git && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN mkdir -p /code

WORKDIR /code

RUN git clone --depth 1 --single-branch https://github.com/fmbento/react-tutorial.git /code && \
    pip install -r requirements.txt

RUN sed -i "s/app.run(port=int(os.environ.get(\"PORT\",3000)))/app.run(debug=True, host='0.0.0.0', port=int(os.environ.get(\"PORT\",3000)))/g" server.py

EXPOSE 3000

CMD ["python", "server.py"]


Quite simple, once you master what it’s doing.
Note that you can also build containers right from your workspace using Docker Compose -- here’s a good page about it (Dockerizing a PHP Application).

Next:  Testing your specific stacks solutions (NodeJS / Ruby / Python / Go, etc.) with Heroku
Enjoy!

Docker's fully native Windows and Mac apps

:: Some findings from this weekend ::
This one completely flew under the radar: while trying to find a way to have Docker running on the new Windows Bash, I found that at the end of last month stable, fully native apps were released for both Mac and Windows.
Regarding Windows, you need to have Hyper-V active (the setup will do that for you) and it will create a MobyLinuxVM virtual machine (you can later define the number of CPUs, RAM, NAT / IPs, etc., available to it). It has a very small CPU and memory footprint when idle, and it adds very little to your containers in terms of overall resources (Docker management).
Once the install has finished, you can go ahead and fire up a Windows PowerShell and run all the excellent docker commands. If you prefer a GUI, you can get Kitematic (extract it to some temp folder, then rename the folder to Kitematic and move it to C:\Program Files\Docker\ (or equivalent)).
You need a Docker Hub account (simple to create).
From here, not even the sky is the limit: WordPress, Drupal, DSpace, OJS, Koha, VuFind, BlackLight, …, or stacks like ElasticSearch, MySQL / MariaDB / PostGres, TomCat / Nginx, Catmandu, or even OS (CentOS, etc.), all in here. 
Even simple apps, like this one from Facebook’s official React tutorial. I made a couple of them: one container running on Python and the other running on Node.JS; search for “react-tutorial”:
(don’t use the one from “georgeyord”, it won’t work)

Use NodeJS, for instance -- both are auto-deployed from the fork I’ve made of the official repo, at https://github.com/fmbento/react-tutorial

Note that you can have containers linked to each other -- namely the ones that use MySQL or other DBs -- instead of running those services inside the solution’s container (just deploy a mysql container, and then link to it from the others with “--link”).
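A sketch of that “--link” pattern (legacy links; the image name my-app-image and the alias db are hypothetical placeholders):

```shell
# Start a mysql container, then start the app container linked to it;
# inside the app container the database is then reachable under the
# host name "db". "my-app-image" stands in for your own image.
start_linked() {
  docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=myrootpass mysql &&
  docker run -d --name app --link mysql:db my-app-image
}
```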

NEXT: Build and update Docker Containers automatically from GitHub (“AUTOMATED BUILD”) -- another 4am+ (this time, Sat>Sun) until getting it running with “Success”:

Enjoy, good development / exploit this to the max! 
Update: regarding the above demo containers, I’ve just merged them into a single one with two different tags: “latest”, running on Node.JS, and “python”, running on Python. By default, “latest” (Node.JS) will be selected; to select the Python one you need to click the “…”

then, from the list of all the possible tags, pick the one you want, and then close and create.

Enjoy!
