
DOCKER Notes


1) Docker overview

- Docker was released in 2013 as an open source project by a company called dotCloud, a hosting company that is no longer around. In fact, within a year of releasing the open source project it became so big that they restructured, basically closed the old company, and started a new company called Docker, Inc.

- the mascot of Docker is Gordon the Turtle (@gordonTheTurtle), who lives in San Francisco




- Docker CE (Community Edition) (free) vs Docker EE (Enterprise Edition) (paid)
- main install options
 *Linux (different per distro) (do not use the default distro package)
 *Docker for Windows (or legacy Docker Toolbox)
 *Docker for Mac (or legacy Docker Toolbox) (do not use brew)
 *Docker for AWS/Azure/Google

Stable vs Edge versions
- Edge (beta) is released monthly, Stable quarterly
- Edge gets new features first, but is only supported for a month
- Stable rolls in three months of Edge features; EE is supported longer




2) Install Docker

Documentation: https://docs.docker.com/install/

There is option to play with docker online: https://labs.play-with-docker.com/



Docker on Windows Server 2016
- Windows Server 2016 supports native Windows Containers
-"Docker for Windows" runs on Win 2016 but is not required, and is not a production solution
-No options for previous Windows Server versions
-Hyper-V can still run Linux VMs (that can run Docker) just fine


Docker Toolbox on Windows
- Use the Docker Quickstart Terminal to start with
  * In background it auto-creates and auto-starts VM
  * Defaults to bash shell
- Code paths enabled for Bind Mounts work in C:\Users only
-Bind Mounts work for code (but often not database)
-Re-create Linux VM or create more with docker-machine

Docker on macOS
-Docker for Mac
  *Requires Yosemite 10.10.3 (2014 release)
  *Yosemite works with 2007-2008 Macs and newer
-Docker Toolbox
  *for Snow Leopard, Lion, Mountain Lion (10.6-10.8)
-Docker in a Linux VM
-Docker in a Windows VM
  * Not usually possible, only works with VMware Fusion
-Do not use Homebrew (brew install docker), it is the Docker CLI only
- bash-completion is a useful tool for Docker on Mac: https://docs.docker.com/compose/completion/
- code paths enabled for Bind Mounts (/Users by default)
- Bind Mounts work for code and (usually) databases
- Run more nodes: docker-machine create --driver
  *Fusion, VirtualBox, Parallels, etc. https://docs.docker.com/machine/drivers/
- Great info and troubleshooting FAQ
  * https://docs.docker.com/docker-for-mac/


Docker Linux
- easiest install/setup, best native experience
- three main ways to install: script, store, or docker-machine
- get.docker.com script (latest Edge release)
  *curl -sSL https://get.docker.com/ | sh
- store.docker.com has instructions for each distro
- RHEL officially only supports Docker EE (paid), but CentOS will work
- Installing in a VM, Cloud Instance, all are the same process
- May not work for unlisted distros (Amazon Linux, Linode Linux, etc.)
- do not use pre-installed setups (Digital Ocean, Linode)


sudo usermod -aG docker username # by default only root can run docker commands; this adds the user to the docker group. On some enterprise distros (Red Hat, Fedora, etc.) this is not supported and you have to use sudo instead

3) Docker commands

Type docker in a Linux terminal and press Tab twice to display the available sub-commands.

- command: docker version - verifies the CLI can talk to the engine
- command: docker info - shows most config values of the engine
- docker command line structure
  *old (still works): docker <command> (options)
  *new: docker <command> <sub-command> (options)
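
For example, these two invocations do the same thing (a quick illustration, not from the original notes):

docker run -d -p 80:80 nginx              # old style
docker container run -d -p 80:80 nginx    # new management-command style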



docker container run --publish 80:80 nginx

1. Downloads the image "nginx" from Docker Hub
2. Starts a new container from that image
3. Opens port 80 on the host IP
4. Routes that traffic to the container IP, port 80

Note you'll get a "bind" error if the left number (host port) is being used by anything else, even another container. You can use any port you want on the left, like 8080:80 or 8888:80, then use localhost:8888 when testing.
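
A quick sketch of re-mapping the host port and testing it (assumes curl is available on the host):

docker container run --publish 8888:80 --detach nginx
curl localhost:8888    # traffic on host port 8888 is forwarded to port 80 inside the container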


docker container run --publish 80:80 --detach nginx

- the --detach flag runs the container in the background

docker container ls

docker container stop container_id # sends SIGTERM (the terminate signal) to the primary process inside the container, telling it to shut down on its own time. SIGTERM is used any time you want to stop the process inside your container and shut the container down while giving that process a bit of time to clean up. Many programming languages let you listen for these signals in your codebase, so when the signal arrives you can attempt some cleanup, save a file, emit a message, and so on.
If the container does not stop on its own within 10 seconds, Docker automatically falls back to issuing the docker kill command.
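
A minimal sketch of listening for SIGTERM in a shell entrypoint (a hypothetical script, not from the original notes):

#!/bin/sh
trap 'echo "caught SIGTERM, cleaning up"; exit 0' TERM   # runs when docker stop sends SIGTERM
while true; do sleep 1; done                              # pretend work until asked to stop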

docker container kill container_id # on the other hand, docker kill issues SIGKILL (the kill signal) to the primary process inside the container. SIGKILL essentially means "shut down right now"; the process does not get to do any additional work.

docker container logs container_name

docker container top container_name

docker container rm -f container_id



What happens in "docker container run"

1. Looks for that image locally in the image cache, does not find anything
2. Then looks in remote image repository (defaults to Docker Hub)
3. Downloads the latest version (nginx:latest by default)
4. Creates new container based on that image and prepares to start
5. Gives it a virtual IP on a private network inside docker engine
6. Opens up port 80 on host and forwards to port 80 in container
7. Starts container by using the CMD in the image Dockerfile


docker container run --publish 8080:80 --name webhost -d nginx:1.11 nginx -T

docker container run --env MYSQL_RANDOM_ROOT_PASSWORD=yes --publish 3306:3306 --detach mysql

- the --env option adds an environment variable

- you have to use docker container logs on the mysql container to find the random password it created on startup
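
For example, something like this should surface the generated password (the exact log text comes from the official mysql image and may change between versions):

docker container logs <mysql_container_name> | grep 'GENERATED ROOT PASSWORD'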

docker container stop and docker container rm (both accept multiple names and ID's)


docker container top # process list in one container

docker container inspect # details of one container config

docker container stats # performance stats for all containers

docker container run -it # start new container interactively

docker container run --rm # start a new container and remove it automatically when it exits

docker container run -it --name proxy nginx bash

docker container exec -it # run additional command in existing container

docker run busybox echo hi there


docker run = docker create + docker start

docker create hello-world

docker start -a 9271d26374ff # -a makes Docker watch for output from the container and print it to your terminal

the difference between docker run and docker start is that docker run attaches to the container's output (stdout) by default, while docker start does not


docker system prune # this will remove:
-all stopped containers
-all networks not used by at least one container
-all dangling images
-all build cache

docker run redis # starts a redis server in a container



4) Image vs Container

- An Image is the application we want to run
- A Container is an instance of that image running as a process
- You can have many containers running off the same image
- Docker's default image "registry" is called Docker Hub (hub.docker.com)


Docker Images
-app binaries and dependencies
-metadata about the image data and how to run the image
-official definition: "An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime."
-not a complete OS. no kernel, kernel modules (e.g. drivers)
-can be as small as one file (your app binary), like a Go static binary
-or as big as an Ubuntu distro with apt, Apache, PHP, and more installed

docker image ls

image layers
-docker history nginx:latest # shows the history of the image layers. Every image starts from the very beginning with a blank layer known as scratch. Every set of changes that happens after that on the file system, in the image, is another layer. You might have one layer or dozens, and some layers may involve no change in file size at all - for example a metadata-only change about which command the image will run, coming from the Dockerfile.

- when we create a new image, we start with one layer. Every layer gets its own unique SHA that helps the system identify whether that layer is the same as another layer. Say that at the very bottom of one of your images you have an ubuntu layer; that is the first layer in the image. Then your Dockerfile adds some more files (maybe using apt), and that is another layer on top of it. You might also have a different image that starts from debian:jessie and likewise uses apt to install some stuff. Each of these changes (usually made in the Dockerfile, but also possible with the docker commit command) is another layer, and they are all bundled together. What happens if another image also uses the same debian:jessie? That image can have its own changes on top of the same layer that is already in my cache. This is where the fundamental concept of the image layer cache saves a lot of time and space: we never need to download layers we already have, and because each layer has a unique SHA it is guaranteed to be exactly the layer we need. Docker knows how to match layers between Docker Hub and our local cache. As we make changes to our images, they create more layers, and if the same image is the base for more images, only one copy of each layer is ever stored. One of the biggest benefits of this system is that we never store the same image data more than once on our filesystem, and when uploading or downloading we do not need to transfer layers that already exist on the other side.
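
A small sketch that makes the layer sharing visible (hypothetical tags demo-a and demo-b; both Dockerfiles start FROM the same base, so those base layers are stored and transferred only once):

# Dockerfile.a
FROM debian:jessie
RUN apt-get update && apt-get install -y curl

# Dockerfile.b
FROM debian:jessie
RUN apt-get update && apt-get install -y vim

docker image build -f Dockerfile.a -t demo-a .
docker image build -f Dockerfile.b -t demo-b .                 # the debian:jessie layers come from cache
docker image inspect --format '{{.RootFS.Layers}}' demo-a      # the first layer digests match demo-b's
docker image inspect --format '{{.RootFS.Layers}}' demo-b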

- docker image inspect # docker inspect (old way) returns JSON metadata about the image


-union file system


-copy on write - when you are running containers and you change a file that came from the image (say I start container 3 and edit a file that the image provides), the file system takes that file out of the image, copies it into the differencing layer, and stores the changed copy in the container layer. The container is then really only the running process plus the files that differ from the image (e.g. the Apache image).


Docker image tags:
-docker image tag # docker tag (old way) assigns one or more tags to an image
-by default tags/repository names have the format: <user>/<repo>:<tag>
-the default tag is latest if not specified
-official repositories live at the "root namespace" of the registry, so they do not need an account name in front of the repo name
- "latest" tag - it is just the default tag, but image owners should assign it to the newest stable version
-docker image tag old_image_name new_image_tag
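
For example (assuming a Docker Hub account named myuser, which is a placeholder):

docker image tag nginx myuser/nginx:testing   # new tag pointing at the same image ID
docker image push myuser/nginx:testing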

docker login <server> # defaults to logging in to Docker Hub, but you can override by adding the server URL

docker login writes a profile under .docker/config.json. Docker for Mac now stores this auth in the Keychain for better security. Remember to remove it if you are using a public machine.

docker logout # always logout from shared machines or servers when done, to protect your account

docker image push # uploads changed layers to an image registry (default is Hub)


Dockerfile -> Docker Client -> Docker Server -> Usable Image

FROM alpine
RUN apk add --update redis
CMD ["redis-server"]

1. downloading the alpine image
2. creating an intermediate container from the alpine image, running 'apk add --update redis' in it, and snapshotting the result as an intermediate image
3. removing the intermediate container
4. creating an intermediate container from the image in step 2 and setting 'redis-server' as its startup command (another intermediate image)
5. removing the intermediate container


FROM alpine
- Download alpine image

RUN apk add --update redis
- Get image from previous step
- Create a container out of it -> Container
- Run 'apk add --update redis' in it -> Container with modified FS
- Take snapshot of that container's FS -> FS snapshot
- Shut down that temporary container
- Get image ready for next instruction

CMD ["redis-server"]
-Get image from last step
-Create a container out of it -> container
-Tell container it should run  'redis-server' when started -> container with modified primary command
-shut down that temporary container
-get image ready for next instruction

No more steps
Output is the image generated from previous step


How to do that from CLI


docker run -it alpine sh
/ # apk add --update redis    # the "/ #" is the shell prompt inside the container

docker ps 

docker commit -c 'CMD ["redis-server"]' 39075447a383


Caching of intermediate images (layers) - when you change the Dockerfile, only the layers after the change will be rebuilt; the rest come from the cache.




5) Container vs VM

- Containers are just a process
- Containers are limited to what resources they can access
- Containers exit when the process stops

use docker top container_name to list the processes running in that container
from the host you can run ps aux and see the same process (you can match it by the container's command), but with a different PID than inside the container



6) Dockerfile

Creating a Dockerfile
-specify a base image
-run some commands to install additional programs
-specify a command to run on container startup


docker build -f some-dockerfile # builds an image from a particular dockerfile

package managers # PMs like apt and yum are one of the reasons to build containers from Debian, Ubuntu, Fedora or CentOS

docker image build -t custom_image .

things you change the least should be at the top of the Dockerfile and things that change the most at the bottom, because when you rebuild the image only the layers from the changed instruction onwards are rebuilt; the prior ones are taken from the cache (see the sketch below)
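
A minimal sketch of that ordering (hypothetical Node.js app; the point is that the COPY of frequently changing source sits below the rarely changing dependency steps):

FROM node:alpine
WORKDIR /app
COPY package.json .        # changes rarely
RUN npm install            # cached unless package.json changed
COPY . .                   # source changes often; only layers from here down are rebuilt
CMD ["node", "index.js"]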

6.1) Entrypoint


-ENTRYPOINT sets the command and parameters that will be executed first when a container is run.

-any command line arguments passed to "docker run <image>" will be appended to the entrypoint command, and will override all elements specified using CMD. For example, "docker run <image> bash" will add the argument bash to the end of the entrypoint.
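
A small sketch of that interaction (hypothetical image):

FROM alpine
ENTRYPOINT ["ping"]
CMD ["8.8.8.8"]

# docker run <image>          -> runs: ping 8.8.8.8
# docker run <image> 1.1.1.1  -> runs: ping 1.1.1.1 (the argument replaces CMD and is appended to ENTRYPOINT)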

6.1.1) Syntax

-the exec form is where you specify commands and arguments as a JSON array. This means you need to use double quotes rather than single quotes.

ENTRYPOINT ["executable", "param1", "param2"]

-using this syntax, Docker will not use a command shell, which means that normal shell processing does not happen. If you need shell processing features, then you can start the JSON array with the shell command.

ENTRYPOINT ["sh", "-c", "echo $HOME"]


6.1.2) using an entrypoint script

Another option is to use a script to run entrypoint commands for the container. By convention, it often includes entrypoint in the name. In this script you can set up the app as well as load any configuration and environment variables. Here is an example of how you can run it in a Dockerfile with the ENTRYPOINT exec syntax.

"COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres"]"

Example docker-entrypoint.sh

#!/bin/bash
set -e

if [ "$1" = 'postgres' ]; then
  chown -R postgres "$PGDATA"

  if [ -z "$(ls -A "$PGDATA")" ]; then
    gosu postgres initdb
  fi

  exec gosu postgres "$@"

fi

exec "$@"


6.1.3) Docker Compose entrypoint

The instruction that you use in your Docker Compose files is the same, except you use lowercase letters.

entrypoint: /code/entrypoint.sh


You can also define the entrypoint with a list in your docker-compose.yml


entrypoint:
   - php
   - -d
   - zend_extension=/usr/local/lib/php/xdebug.so
   - -d
   - memory_limit=-1
   - vendor/bin/phpunit


6.1.4) overriding entrypoint

You can override entrypoint instructions using the "docker run --entrypoint" or "docker-compose run --entrypoint" flags.
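
For example (my_image and servicename are placeholders):

docker run -it --entrypoint bash my_image        # drop into a shell instead of the normal entrypoint
docker-compose run --entrypoint bash servicename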


6.2) CMD / command


The main purpose of CMD (Dockerfiles) / command (Docker Compose files) is to provide defaults when executing a container. These will be executed after the entrypoint.

For example, if you ran "docker run <image>", then the commands and parameters specified by CMD/command in your Dockerfile would be executed.

CMD ["nginx", "-g", "daemon off;"]
- run this command when the container is launched (one of CMD or ENTRYPOINT is required)
- Only one CMD allowed, so if there are multiple, last one wins

6.2.1) Dockerfiles

In Dockerfiles, you can define CMD defaults that include an executable. For example:

CMD ["executable", "parm1", "param2"]

If you omit the executable, you must specify an ENTRYPOINT instruction as well.

CMD ["param1", "param2"] (as dfault parameters to ENTRYPOINT)


There can only be one CMD instruction in a Dockerfile. If you list more than one CMD, then only the last CMD will take effect.

6.2.2) Docker Compose command

When using Docker Compose, you can define the same instruction in your docker-compose.yml, but it is written in lowercase as the full word command.

command: ["bundle", "exec", "thin", "-p", "3000"]

6.2.3) Overriding CMD 

You can override the commands specified by CMD when you run a container.

"docker run rails_app rails console"

If the user specifies arguments to "docker run", then they will override the default specified in CMD.

6.2.4) Best practices

Although there are different ways to use these instructions, Docker gives some guidance on best practices for their use and syntax.

Docker recommends using ENTRYPOINT to set the image's main command, and then using CMD as the default flags. Here is an example Dockerfile that uses both instructions.

"
FROM ubuntu
ENTRYPOINT ["top", "-b"]
CMD ["-c"]


As well as the exec syntax, Docker allows shell syntax as another valid option for both ENTRYPOINT and CMD. This executes the command as a string and performs variable substitution.

-ENTRYPOINT command param1 param2
-CMD command param1 param2

The exec syntax is seen as best practice.


CMD should almost always be used in the form CMD ["executable", "param1", "param2"...]. Thus, if the image is for a service, such as Apache or Rails, you would run something like CMD ["apache2", "-DFOREGROUND"]. Indeed, this form of the instruction is recommended for any service-based image.


The Dockerfile reference explains more about some of the issues.

The ENTRYPOINT shell form prevents any CMD or run command line arguments from being used, and has the disadvantage that your ENTRYPOINT will be started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable will not be the container's PID 1 and will not receive Unix signals, so your executable will not receive a SIGTERM from "docker stop <container>".

If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format.
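
A quick comparison of the two forms (the shell form here is only to illustrate the signal problem):

ENTRYPOINT nginx -g 'daemon off;'            # shell form: wrapped in /bin/sh -c, nginx is not PID 1, misses SIGTERM
ENTRYPOINT ["nginx", "-g", "daemon off;"]    # exec form: nginx is PID 1 and receives docker stop's SIGTERM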


6.3) CMD vs ENTRYPOINT


Both CMD and ENTRYPOINT instructions define what command gets executed when running a container. There are a few rules that describe how they interact.

1. Dockerfiles should specify at least one of CMD or ENTRYPOINT commands.
2. ENTRYPOINT should be defined when using the container as an executable.
3. CMD should be used as a way of defining default arguments for an ENTRYPOINT command or for executing an ad-hoc command in a container.
4. CMD will be overridden when running the container with alternative arguments.



https://medium.freecodecamp.org/docker-entrypoint-cmd-dockerfile-best-practices-abc591c30e21

https://stackoverflow.com/questions/47536536/whats-the-difference-between-docker-compose-and-kubernetes

6.4) environment variables

- one reason they were chosen as the preferred way to inject key/value pairs is that they work everywhere, on every OS and config

- ENV NGINX_VERSION 1.11.10-1~jessie
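
A sketch of how such a variable is typically reused later in the same Dockerfile (the package-pinning line is an approximation of what the official nginx image does, not copied from it):

ENV NGINX_VERSION 1.11.10-1~jessie
RUN apt-get update && apt-get install -y nginx=$NGINX_VERSION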


6.5 ) RUN command

-command to run in a shell inside the container at build time

6.6) EXPOSE

EXPOSE 80 443

- exposes these ports on the docker virtual network. You still need to use -p or -P to open/forward these ports on the host (see the examples below)
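
For example (assuming the image EXPOSEs 80 and 443 as above):

docker container run -d -p 8080:80 custom_image    # forward a chosen host port
docker container run -d -P custom_image            # -P publishes all EXPOSEd ports on random high host ports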

6.7) WORKDIR

WORKDIR /usr/share/nginx/html
-change working directory to root of nginx webhost
-using WORKDIR is preferred to using 'RUN cd /some/path'

6.8) COPY

COPY index.html index.html





7) Logging

RUN ln -sf /dev/stdout /var/log/nginx/access.log \
         && ln -sf /dev/stderr /var/log/nginx/error.log
# forward request and error logs to docker log collector

The proper way to do logging inside a container is to not log to a log file, and there is no syslogd or any other syslog service inside a container. Docker handles all of our logging for us. All we have to do inside the container is make sure that everything we want captured in the logs is sent to stdout and stderr, and Docker will handle the rest. There are also logging drivers that we can use in the Docker Engine itself to control the logs for all the containers on our host.
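
For example, to see which logging driver the engine is using (json-file is the default):

docker info --format '{{.LoggingDriver}}'
docker container logs <container_name>    # reads whatever the driver captured from stdout/stderr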



8) Just a Process

9) Shell Into Containers

10) Docker Networking

- Each container connected to a private virtual network "bridge"
- Each virtual network through NAT firewall on host IP
- All containers on a virtual network can talk to each other without -p
- Best practice is to create a new virtual network for each app:
  * network "my_web_app" for mysql and php/apache containers
  * network "my_api" for mongo and nodejs containers
- "batteries included, but removable" - Defaults work well in many cases, but easy to swap out parts to customize it.
- Make new virtual networks
- Attach containers to more than one virtual network (or none)
- Skip virtual networks and use host IP (--net=host)
- Use different docker network drivers to gain new abilities


docker container port container_name

docker container inspect --format '{{ .NetworkSettings.IPAddress }}' container_name # shows the IP address of the container - if using Windows, you may need to use double quotes rather than single quotes for --format



Any traffic coming out of my containers is going to be NATed by default; the virtual network acts like a pretty common edge firewall on a network.
When you start a new container (call it C1), that container is attached to a virtual network, and that virtual network is automatically attached to the Ethernet interface on your host so that it can get out. In our case, when we launched that Nginx, we gave it -p 80:80. That told it to open up port 80 on the host's Ethernet interface and forward anything coming into port 80 through the virtual network to port 80 in the container.
By default, when a second container is created, it is put on that same bridge network. Those two containers can talk freely back and forth on their exposed ports. Unless I specify -p, no traffic coming in from outside is going to reach the containers on the internal networks.
Many virtual networks can be created on one machine.

Commands:

docker network ls # show networks
docker network inspect # inspect a network
docker network create --driver # create a network
docker network connect # attach a network to container - Dynamically creates a NIC in a container on an existing virtual network
docker network disconnect # detach a network from container

--network host # gains performance by skipping the virtual network but sacrifices the security of the container model. There are pros and cons: it prevents the security boundaries of containerization from protecting the interface of that container, but in certain situations it can improve the performance of high-throughput networking and get around a few other issues with specific special software out there.

--network none # removes eth0 and leaves only the localhost interface in the container


network drivers - built-in or 3rd-party extensions that give you virtual network features

By default network driver is bridge.

To add a network to a container:
- you can add --network to docker container run command
- run docker network connect command


-create your apps so frontend/backend sit on same Docker network
-their inter-communication never leaves host
-all externally exposed via -p, which is better default security!
-this gets even better later with swarm and overlay networks


DNS and How Containers Find Each Other
- the Docker daemon has a built-in DNS server that containers use by default
- containers can find all the other containers on the same virtual network using their container names: if a second container is created on that virtual network, they can find each other by container name, regardless of what the IP address is
- DNS Default Names - Docker defaults the hostname to the container's name, but you can also set aliases

docker container exec -it container1_name ping container2_name

--link # the default bridge network has one disadvantage: it does not have the DNS server built in by default. You can use --link to specify manual links between containers in the default bridge network.


docker network create sth
docker container run --name ES1 --network sth --network-alias search -d elasticsearch:2 
docker container run --name ES2 --network sth --network-alias search -d elasticsearch:2
docker run --rm --net sth alpine nslookup search
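
To see the round-robin DNS in action, repeat a request against the alias a few times (centos is used here only because its image ships with curl, unlike alpine):

docker container run --rm --net sth centos curl -s search:9200   # the elasticsearch "name" field changes as DNS round-robins between the two containers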


11) Use Docker Hub


12) Make Dockerfiles

13) Push Custom Images

14) Build Image

15) Container Lifetime & Persistent Data

-Containers are usually immutable and ephemeral
-"immutable infrastructure": only re-deploy containers, never change them
-This is the ideal scenario, but what about databases, or unique data?
-Docker gives us features to ensure this "separation of concerns"
-this is known as "persistent data"

 two ways:
- volumes - make a special location outside of the container UFS
- bind mounts - link container path to host path

16) Docker Volumes

docker volume ls

docker volume inspect

if we remove containers that have volumes, the volumes will be persisted

-named volumes - a friendly way to assign volumes to containers, using

name:/volume/dir/

docker container run -v name:/volume/dir/ # creating a container with a named volume

docker container run -v /volume/dir # anonymous volume: puts a "bookmark" on that directory so its data lives outside the container's UFS


docker volume create # required to do this before "docker run" to use custom drivers and labels
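
A short sketch of a named volume in use (mysql-db is a placeholder volume name):

docker volume create --label env=dev mysql-db    # optional: pre-create with labels/driver
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker volume inspect mysql-db                   # shows the Mountpoint on the host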




17) Bind Mounts

- Maps a host file or directory to a container file or directory
- Basically just two locations pointing to the same file(s)
- Again, skip UFS, and host files overwrite any in container
- cannot be used in a Dockerfile, must be specified at container run time
- ... run -v /Users/user_name/stuff:/path/container (mac/linux)
- ... run -v //c/Users/user_name/stuff:/path/container (windows)
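
For example, editing files on the host and seeing them live in the container (run from a directory containing an index.html; the path is an assumption):

docker container run -d --name webhost -p 8080:80 -v $(pwd):/usr/share/nginx/html nginx
curl localhost:8080    # serves the index.html from the host directory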

18) Do's and don'ts of docker compose

- why: configure relationships between containers
- why: save our docker container run settings in easy-to-read file
- why: create one-liner developer environment startups
- comprised of 2 separate but related things
  * 1. YAML - formatted file that describes our solution options for:
    ** containers
    ** networks
    ** volumes
  * 2. CLI tool docker-compose used for local dev/test automation with those YAML files
- the compose YAML format has its own versions: 1, 2, 2.1, 3, 3.1
- YAML file can be used with docker-compose command for local docker automation or ...
- with docker directly in production with Swarm (as of v1.13)
- docker-compose --help
- docker-compose.yml is the default filename, but any can be used with docker-compose -f

19) docker-compose.yml

version: '3.1' # if no version is specified then v1 is assumed. Recommend v2 minimum

services: # containers. same as docker run
  servicename: # a friendly name. this is also DNS name inside network
    image: # Optional if you use build:
    command: # Optional, replace the default CMD specified by the image
    environment: # Optional, same as -e in docker run
    volumes: # Optional, same as -v in docker run
  servicename2:

volumes: # Optional, same as docker volume create

networks: # Optional, same as docker network create

'depends_on' -> helps compose understand the relationship between services
'image' -> the name of the image to look for in the cache (or pull), and the name given to a built image
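
A minimal sketch of a complete compose file in this format (image names are just examples):

version: '3.1'

services:
  proxy:
    image: nginx:1.13
    ports:
      - '80:80'
    depends_on:
      - web
  web:
    image: httpd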




20) docker-compose cli

- CLI tool comes with docker for Windows/Mac, but separate download for linux
- not a production-grade tool but ideal for local development and test
- two most common commands are :
  * docker-compose up # setup volumes/networks and start all containers
  * docker-compose down # stop all containers and remove cont/vol/net
-if all your projects had a Dockerfile and docker-compose.yml then "new developer onboarding" would be :
  * git clone github.com/some/software
  * docker-compose up
-https://github.com/BretFisher/udemy-docker-mastery/tree/master/compose-sample-2; https://github.com/BretFisher/udemy-docker-mastery/tree/master/compose-sample-3
- compose can also build your custom images
- will build them with docker-compose up if not found in cache
- also rebuild with docker-compose build
- great for complex builds that have lots of vars or build args
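
A sketch of a service that compose builds itself (the build context and dockerfile names are assumptions):

version: '3.1'

services:
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    image: custom-nginx    # tag given to the built image
    ports:
      - '80:80'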

21) Docker Swarm

https://github.com/docker/swarmkit

What are tools like Swarm and Kubernetes trying to do ?

They are trying to get a collection of nodes to behave like a single node.
-How does the system maintain state?
-How does work get scheduled?

Container issues solved by Docker Swarm:

- How do we automate container lifecycle?
- How can we easily scale out/in/up/down?
- How can we ensure our containers are re-created if they fail?
- How can we replace containers without downtime (blue/green deploy)?
- How can we control/track where containers get started?
- How can we create cross-node virtual networks?
- How can we ensure only trusted servers run our containers?
- How can we store secrets, keys, passwords and get them to the right container (and only that container)?

Swarm Mode:

- Swarm is a clustering solution built inside Docker
- Not related to Swarm "classic" for pre-1.12 versions (that ran as just a container)
-Added in 1.12 (summer 2016) via SwarmKit toolkit
-Enhanced in 1.13 (January 2017) via Stacks and Secrets
-Not enabled by default, new commands once enabled
  * docker swarm
  * docker node
  * docker service
  * docker stack
  * docker secret


blue boxes - Manager Nodes, which have a local database known as the Raft database (it stores their configuration and gives them all the information they need to be the authority inside the swarm). All Manager Nodes keep a copy of that database and encrypt their traffic in order to ensure integrity and guarantee the trust that they are able to manage this swarm securely.

green boxes - Worker Nodes

Control Plane - how orders get sent around the Swarm; the part taking actions




Managers can be workers as well. Managers are workers with permission to control the swarm.

the docker service command allows us to add extra features to our container when we run it, such as replicas, which tell Swarm how many of them to run. Those are known as tasks. A single service can have multiple tasks, and each one of those tasks will launch a container.


Checking if swarm mode is enabled
docker info | grep -i swarm

docker swarm init # initializes swarm mode
-lots of PKI and security automation
  * Root Signing Certificate created for our Swarm
  * Certificate is issued for first Manager node
  * join tokens are created
-Raft database created to store root CA, config and secrets
  * encrypted by default on disk (1.13+)
  * no need for another key/value system to hold orchestration/secrets
  * replicates logs amongst Managers via mutual TLS in the "control plane"

docker node ls

docker service create alpine ping 8.8.8.8

docker service ls

docker service ps <name>

docker service update <ID> --replicas 3

docker update

docker container rm -f <name>.1.<ID>




22) Docker Swarm - build a Cluster

a) Host options

-play-with-docker.com - only needs a browser, but resets after 4 hours
-docker-machine + VirtualBox - free and runs locally, but requires a machine with 8GB memory
-Digital Ocean + Docker Install - most like a production setup, but costs $5-10/node/month while learning
-Roll your own - docker-machine can provision machines for Amazon, Azure, DO, Google etc.


docker-machine create node1

docker-machine ssh node1

docker-machine env node1

docker swarm init --advertise-addr <IP address>

docker node update --role manager node2

docker swarm join-token manager

docker service create --replicas 3 alpine ping 8.8.8.8

docker service ps <service name>




23) Docker Swarm - Overlay Networks

-just choose --driver overlay when creating network
-for container-to-container traffic inside a single Swarm
-Optional IPSec (AES) encryption on network creation
-each service can be connected to multiple networks (e.g. frontend, backend)


docker network create --driver overlay mydrupal

docker service create --name psql --network mydrupal -e POSTGRES_PASSWORD=mypass postgres

docker service create --name drupal --network mydrupal -p 80:80 drupal


24) Docker Swarm - Routing Mesh

-Routes ingress (incoming) packets for a Service to proper Task
-Spans all nodes in Swarm
-Uses IPVS from Linux Kernel
-Load balances Swarm Services across their Tasks
-Two ways this works:
  *Container-to-container in a Overlay network (uses VIP)
  *External traffic incoming to published ports (all nodes listen)
-This is stateless load balancing - if your application relies on session cookies, or expects a consistent client to keep talking to a consistent container, you may need to add something else to solve that problem
- This LB operates at OSI Layer 3/4 (IP/TCP), not Layer 7 (it is not application-aware)
  * Both limitations can be overcome with:
    ** Nginx or HAProxy LB proxy
    ** Docker Enterprise Edition, which comes with a built-in web proxy

docker service create --name search --replicas 3 -p 9200:9200 elasticsearch:2

docker service ps search

curl localhost:9200



docker network create -d overlay backend

docker network create -d overlay frontend

docker service create --name vote -p 80:80 --network frontend --replicas 2 <image>

docker service create --name redis --network frontend --replicas 1 redis:3.2

docker service create --name worker --network frontend --network backend <image>

docker service create --name db --network backend --mount type=volume,source=db-data,target=/var/lib/postgresql/data postgres:9.4

# in docker service volumes are created/add by --mount

docker service create --name result --network backend -p 5001:80 dockersamples/examplevotingapp_result:before

docker service ls

docker service logs





25) Docker Swarm - Swarm Services

26) Docker Swarm - Stacks

-In 1.13 Docker adds a new layer of abstraction to Swarm called Stacks
-Stacks accept Compose files as their declarative definition for services, networks , and volumes
-We use docker stack deploy rather than docker service create
-Stacks manage all those objects for us, including an overlay network per stack, and add the stack name to the start of their names
-New deploy: key in Compose file. Can't do build:
-Compose now ignores deploy:, Swarm ignores build:
-docker-compose cli not needed on Swarm server

A service would create, say, three tasks in the orchestrator, and those tasks would find certain servers, or nodes, to put them on and create containers there. With the new stacks there are multiple services (one or more) in a single YAML file; we can also have networks and volumes in the compose file, and the result is what we call a stack.

Stacks can control secrets as well.

stack-file.yml

version: "3"
services:
redis:
image: redis:alpine
ports:
- "6379"
networks:
- frontend
deploy:
replicas: 1
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
db:
image: postgres:9.4
volumes:
- db-data:/var/lib/postgresql/data
networks:
- backend
deploy:
placement:
constraints: [node.role == manager]
vote:
image: bretfisher/examplevotingapp_vote
ports:
- 5000:80
networks:
- frontend
depends_on:
- redis
deploy:
replicas: 2
update_config:
parallelism: 2
restart_policy:
condition: on-failure
result:
image: bretfisher/examplevotingapp_result
ports:
- 5001:80
networks:
- backend
depends_on:
- db
deploy:
replicas: 1
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
worker:
image: bretfisher/examplevotingapp_worker:java
networks:
- frontend
- backend
depends_on:
- db
- redis
deploy:
mode: replicated
replicas: 1
labels: [APP=VOTING]
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 120s
placement:
constraints: [node.role == manager]
visualizer:
image: dockersamples/visualizer
ports:
- "8080:8080"
stop_grace_period: 1m30s
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
frontend:
backend:
volumes:
db-data:

docker stack deploy -c stack-file.yml appname #for creating and applying changes

docker stack ls

docker stack ps appname

docker stack services appname



27) Docker Swarm - Secrets

- Easiest "secure" solution for storing secrets in Swarm
-What is a Secret?
  * Username and passwords
  * TLS certificates and keys
  * SSH keys
  * any data you would prefer not be "on front page of news"
-Supports generic strings or binary content up to 500Kb in size
-Does not require apps to be rewritten

Secrets Storage Cont.
-As of Docker 1.13.0 the Swarm Raft DB is encrypted on disk
-Only stored on disk on Manager nodes
-By default the Managers and Workers "control plane" is TLS + Mutual Auth
-Secrets are first stored in Swarm, then assigned to a Service(s)
-Only containers in assigned Service(s) can see them
-They look like files in the container but are actually in an in-memory filesystem
-/run/secrets/<secret_name> or /run/secrets/<secret_alias>
-local docker-compose can use file-based secrets, but that is not secure

docker secret create psql_user psql_user.txt

echo "myDBpassword" | docker secret create psql_pass - 

docker secret ls

docker secret inspect secretname

docker service create --name psql --secret psql_user --secret psql_pass -e POSTGRES_PASSWORD_FILE=/run/secrets/psql_pass -e POSTGRES_USER_FILE=/run/secrets/psql_user postgres

docker service update --secret-rm # removes a secret from the service; it redeploys the containers because secrets are part of the immutable design of services

Secrets with Stacks

version: "3.1"

services:
  psql:
    image: postgres
    secrets:
      - psql_user
      - psql_password
    environment:
       POSTGRES_PASSWORD_FILE: /run/secrets/psql_password
       POSTGRES_USER_FILE: /run/secrets/psql_user

secrets:
   psql_user: 
      file: ./psql_user.txt
   psql_password:
       file: ./psql_password.txt


docker stack deploy -c docker-compose.yml mydb

docker secret ls

docker secret rm # removes a secret; it can also be removed along with the whole stack

Using this same compose file with secrets, but without Swarm, still works (docker-compose supports file-based secrets as of docker-compose 1.11 or higher, but it is not secure).


28) Docker Swarm - App Deploy Lifecycle

-Single set of compose files for:
  *local docker-compose up development environment
  *remote docker-compose up CI environment
  *remote docker stack deploy production environment

https://github.com/BretFisher/udemy-docker-mastery/tree/master/swarm-stack-3

docker-compose.yml

version: '3.1'

services:
  drupal:
    image: custom-drupal:latest
  postgres:
    image: postgres:9.6

docker-compose.test.yml



version: '3.1'

services:
  drupal:
    image: custom-drupal
    build: .
    ports:
      - "80:80"
  postgres:
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/psql-pw
    secrets:
      - psql-pw
    volumes:
      # NOTE: this might be sample data you host in your CI server
      # so you can do integration testing with sample data
      # this may not work on Docker for Windows/Mac due to bind-mounting
      # database data across OS's, which doesn't always work
      # in those cases you should use named volumes
      - ./sample-data:/var/lib/postgresql/data

secrets:
  psql-pw:
    file: psql-fake-password.txt


docker-compose.prod.yml


version: '3.1'

services:
  drupal:
    ports:
      - "80:80"
    volumes:
      - drupal-modules:/var/www/html/modules
      - drupal-profiles:/var/www/html/profiles
      - drupal-sites:/var/www/html/sites
      - drupal-themes:/var/www/html/themes
  postgres:
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/psql-pw
    secrets:
      - psql-pw
    volumes:
      - drupal-data:/var/lib/postgresql/data

volumes:
  drupal-data:
  drupal-modules:
  drupal-profiles:
  drupal-sites:
  drupal-themes:

secrets:
  psql-pw:
    external: true


docker-compose.override.yml



version: '3.1'

services:
  drupal:
    build: .
    ports:
      - "8080:80"
    volumes:
      - drupal-modules:/var/www/html/modules
      - drupal-profiles:/var/www/html/profiles
      - drupal-sites:/var/www/html/sites
      - ./themes:/var/www/html/themes
  postgres:
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/psql-pw
    secrets:
      - psql-pw
    volumes:
      - drupal-data:/var/lib/postgresql/data

volumes:
  drupal-data:
  drupal-modules:
  drupal-profiles:
  drupal-sites:
  drupal-themes:

secrets:
  psql-pw:
    file: psql-fake-password.txt

Dockerfile


FROM drupal:8.6
RUN apt-get update && apt-get install -y git \
&& rm -rf /var/lib/apt/lists/*
# this next part was corrected in 2018 to be more clear on how you'd typically
# customize your own theme. first you need to clone the theme into this repo
# with something like downloading the latest theme for bootstrap
# https://www.drupal.org/project/bootstrap and extract into themes dir on host.
# then you'll COPY it into image here:
WORKDIR /var/www/html/core
COPY ./themes ./themes
WORKDIR /var/www/html

psql-fake-password.txt

mypasswd


29) Swarm - Service Updates

-Provides rolling replacement of tasks/containers
-Limits downtime (be careful with saying it "prevents" downtime)
-Will replace containers for most changes
-Has many, many cli options to control the update
-Create options usually have an update equivalent, with -add or -rm appended to them
-Includes rollback and healthcheck options
-also has scale & rollback subcommands for quicker access: docker service scale web=4 and docker service rollback web
-A stack deploy, when pre-existing, will issue service updates

-Just update the image used to a newer version
docker service update --image myapp:1.2.1 <servicename>
-Add an environment variable and remove a published port
docker service update --env-add NODE_ENV=production --publish-rm 8080 <servicename>
-change number of replicas of two services
docker service scale web=8 api=6

- Swarm Updates in Stack Files. Same command, just edit the YAML file, then
docker stack deploy -c file.yml <stackname>

docker service update --force web # rolls through and completely replaces the tasks. It uses the scheduler's default of looking for nodes with the fewest containers and the least resources used, so it is a trick to even out an uneven amount of work on your nodes.

30) Docker Healthchecks

-healthcheck was added in 1.12
-Supported in Dockerfiles, Compose YAML, docker run, and Swarm Services
-Docker engine will exec the command in the container (e.g. curl localhost)
-It expects exit 0 (OK) or exit 1 (error)
-Three container states: starting, healthy, unhealthy
-much better than "is the binary still running?"
-not an external monitoring replacement
-healthcheck status shows up in docker container ls
-check last 5 healthchecks with docker container inspect
-docker run does nothing with healthchecks
-services will replace tasks if they fail healthcheck
-service updates wait for them before continuing

-example
docker run \
  --health-cmd="curl -f localhost:9200/_cluster/health || false"\
  --health-interval=5s \
  --health-retries=3 \
  --health-timeout=2s \
  --health-start-period=15s \
elasticsearch:2

-options for healthcheck command
  *--interval=DURATION (default: 30s) # how often the health check runs
  *--timeout=DURATION (default: 30s) # how long to wait before the check errors out and returns a bad code (e.g. if the app is slow)
  *--start-period=DURATION (default: 0s) (17.09+) # allows a longer initial grace period than the first 30-second interval before failures count
  *--retries=N (default: 3) # the health check is tried N times before the container is considered unhealthy

-Basic command using default options
HEALTHCHECK CMD curl -f http://localhost/ || false

-Custom options with the command
HEALTHCHECK --timeout=2s --interval=3s --retries=3 \
CMD curl -f http://localhost/ || exit 1

-Healthcheck in Nginx Dockerfile  - Static website running in Nginx, just test default URL

FROM nginx:1.13

HEALTHCHECK --interval=30s --timeout=3s \
CMD curl -f http://localhost/ || exit 1

-PHP-FPM running behind Nginx, test the Nginx and FPM status URLs

FROM your-nginx-php-fpm-combo-image

# dont do this if php-fpm is another container
# must enable php-fpm ping/status in pool.ini
# must forward /ping and /status urls from nginx to php-fpm

HEALTHCHECK --interval=5s --timeout=3s \
CMD curl -f http://localhost/ping || exit 1

- Use a PostgreSQL utility to test for ready state

FROM postgres

# specify real user with -U to prevent errors in log

HEALTHCHECK --interval=5s --timeout=3s \
  CMD pg_isready -U postgres || exit 1

- healthcheck in Compose/Stack Files

version: "2.1" (minimum for healthchecks)
services:
  web:
   image: nginx
   healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost"]
       interval: 1m30s
       timeout : 10s
       retries: 3
       start_period: 1m # version 3.4 minimum

docker container run --name p2 -d --health-cmd="pg_isready -U postgres || exit 1" postgres

docker service create --name p1 postgres

docker service create --name p2 --health-cmd="pg_isready -U postgres || exit 1" postgres




31) Container Registries: Image storage and Distribution

-An image registry needs to be part of your container plan
-more Docker Hub details including auto-build
-how Docker Store (store.docker.com) differs from Hub
-how Docker Cloud (cloud.docker.com) differs from Hub
-use the new Swarms feature in Cloud to connect Mac/Win to a Swarm
-install and use  Docker Registry as private image store
-3rd Party registry options

Docker Hub: Digging Deeper

-The most popular public image registry
-it is really Docker Registry plus lightweight image building
-let us explore more of the features of Docker Hub
-Link Github/BitBucket to Hub and auto-build images on commit
-Chain image building together
- private repository - you can get one for free.
- when you create repositories, on each one of your images, whether public or private, you can choose to give permissions to people
-webhooks
-organization
-create automated builds


Running Docker registry

-A private image registry for your network
-Part of the docker/distribution GitHub repo
-The de facto standard in private container registries
-Not as full-featured as Hub or others: no web UI, basic auth only
-At its core: a web API and storage system, written in Go
-Storage  supports local, S3/Azure/Alibaba/Google Cloud, and OpenStack Swift

Run a Private Docker Registry

-Run the registry image on default port  5000
-Re-tag an existing image and push it to your new registry
-Remove that image from local cache and pull it from new registry
-Re-create the registry using a bind mount and see how it stores data


Registry and Proper TLS

-"Secure by Default": Docker won't talk to registry without HTTPS
-Except, localhost (127.0.0.0/8)
-For remote self-signed TLS, enabled "insecure-registry" in engine
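
A sketch of the daemon config for that (assumes a systemd-based Linux host; the registry address is a placeholder):

# /etc/docker/daemon.json
{
  "insecure-registries": ["myregistry.example.com:5000"]
}
# then: sudo systemctl restart docker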

docker container kill registry

docker container rm registry

docker container run -d -p 5000:5000 --name registry -v $(pwd)/registry-data:/var/lib/registry registry

ll registry-data

-run the registry image

docker container run -d -p 5000:5000 --name registry registry

-re-tag an existing image and push it to your new registry

docker tag hello-world 127.0.0.1:5000/hello-world

docker push 127.0.0.1:5000/hello-world

-remove that image from local cache and pull it from new registry

docker image remove hello-world

docker image remove 127.0.0.1:5000/hello-world

docker pull 127.0.0.1:5000/hello-world

-re-create registry using a bind mount and see how it stores data

docker container run -d -p 5000:5000 --name registry -v $(pwd)/registry-data:/var/lib/registry registry


Create local registry

You can run your own registry using the open-source Docker Registry, which is a Go application in an Alpine Linux container.
- run a local registry in a container and configure your Docker engine to use the registry
- generate SSL certificates (using Docker!) and run a secure local registry with a friendly domain name
-generate encrypted passwords (using Docker!) and run an authenticated, secure local registry over HTTPS with basic auth.
-The open-source registry does not have a web UI, so there's no interface like Docker Hub or Docker Store. Instead there is a REST API you can use to query the registry. For a local registry that has a web UI and role-based access control, Docker, Inc. has the Trusted Registry product.
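
Because there is no UI, you query the registry over its v2 REST API; for example (host and credentials are placeholders matching the exercises below, add -k to curl for a self-signed certificate):

curl -u moby:gordon https://127.0.0.1:5000/v2/_catalog                # list repositories
curl -u moby:gordon https://127.0.0.1:5000/v2/hello-world/tags/list   # list tags for one repository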



- There are several ways to run a registry container. The simplest is to run an insecure registry over HTTP, but for that we need to configure Docker to explicitly allow insecure access to the registry, because Docker expects all registries to run on HTTPS. The next section of this lab will introduce a secure version of our registry container, but for this part of the tutorial we will run a version on HTTP. Without that configuration, pushing an image returns an error message like this:
http: server gave HTTP response to HTTPS client
The Docker Engine needs to be explicitly set up to use HTTP for the insecure registry. For this sample it has already been done: 127.0.0.1:5000 has already been added to the daemon.
** Running on your own Linux machine instead of in this browser window ** Edit or create /etc/docker/docker file:
vi /etc/docker/docker

# add this line
DOCKER_OPTS="--insecure-registry 127.0.0.1:5000"
Close and save the file, then restart the docker daemon.
service docker restart

-Testing the Registry Image
  *First we’ll test that the registry image is working correctly, by running it without any special configuration:
docker run -d -p 5000:5000 --name registry registry:2

-Pushing and Pulling from the Local Registry

  *Docker uses the hostname from the full image name to determine which registry to use. We can build images and include the local registry hostname in the image tag, or use the docker tag command to add a new tag to an existing image.
  *These commands pull a public image from Docker Store, tag it for use in the private registry with the full name 127.0.0.1:5000/hello-world, and then push it to the registry:
docker tag hello-world 127.0.0.1:5000/hello-world
docker push 127.0.0.1:5000/hello-world
  *When you push the image to your local registry, you’ll see similar output to when you push a public image to the Hub:
The push refers to a repository [127.0.0.1:5000/hello-world]
a55ad2cda2bf: Pushed
cfbe7916c207: Pushed
fe4c16cbf7a4: Pushed
latest: digest: sha256:79e028398829da5ce98799e733bf04ac2ee39979b238e4b358e321ec549da5d6 size: 948
On your machine, you can remove the new image tag and the original image, and pull it again from the local registry to verify it was correctly stored:
docker rmi 127.0.0.1:5000/hello-world
docker rmi hello-world
docker pull 127.0.0.1:5000/hello-world
That exercise shows the registry works correctly, but at the moment it’s not very useful because all the image data is stored in the container’s writable storage area, which will be lost when the container is removed. To store the data outside of the container, we need to mount a host directory when we start the container.

- Running a Registry Container with External Storage

  *Remove the existing registry container by removing the container which holds the storage layer. Any images pushed will be deleted:
docker kill registry
docker rm registry
  *in this example, the new container will use a host-mounted Docker volume. When the registry server in the container writes image layer data, it appears to be writing to a local directory in the container but it will be writing to a directory on the host.
   *Create the registry:
mkdir registry-data
docker run -d -p 5000:5000 --name registry -v $(pwd)/registry-data:/var/lib/registry registry
    *Tag and push the container with the new IP address of the registry.
docker pull hello-world
docker tag hello-world 127.0.0.1:5000/hello-world
docker push 127.0.0.1:5000/hello-world
   *Repeating the previous docker push command uploads an image to the registry container, and the layers will be stored in the container’s /var/lib/registry directory, which is actually mapped to the $(pwd)/registry-data directory on your machine. Storing data outside of the container means we can build a new version of the registry image and replace the old container with a new one using the same host mapping - so the new registry container has all the images stored by the previous container
   *Using an insecure registry isn’t practical in multi-user scenarios. Effectively there’s no security so anyone can push and pull images if they know the registry hostname. The registry server supports authentication, but only over a secure SSL connection. We’ll run a secure version of the registry server in a container next.

-Generating the SSL Certificate in Linux

  *The Docker docs explain how to generate a self-signed certificate on Linux using OpenSSL:
mkdir -p certs 
openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
Generating a 4096 bit RSA private key
........++
............................................................++
writing new private key to 'certs/domain.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Docker
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:127.0.0.1
Email Address []:
   *If you are running the registry locally, be sure to use your host name as the CN.
   *To get the docker daemon to trust the certificate, copy the domain.crt file.
mkdir /etc/docker/certs.d
mkdir /etc/docker/certs.d/127.0.0.1:5000 
cp $(pwd)/certs/domain.crt /etc/docker/certs.d/127.0.0.1:5000/ca.crt
   *Make sure to restart the docker daemon.
pkill dockerd
dockerd > /dev/null 2>&1 &
   *The /dev/null part is to avoid the output logs from docker daemon.
  *Now we have an SSL certificate and can run a secure registry.

-Running the Registry Securely

  *The registry server supports several configuration switches as environment variables, including the details for running securely. We can use the same image we’ve already used, but configured for HTTPS.
   *For the secure registry, we need to run a container which has the SSL certificate and key files available, which we’ll do with an additional volume mount (so we have one volume for registry data, and one for certs). We also need to specify the location of the certificate files, which we’ll do with environment variables:
mkdir registry-data
docker run -d -p 5000:5000 --name registry \
  --restart unless-stopped \
  -v $(pwd)/registry-data:/var/lib/registry -v $(pwd)/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry
The new parts to this command are:
  • --restart unless-stopped - restart the container when it exits, unless it has been explicitly stopped. When the host restarts, Docker will start the registry container, so it’s always available.
  • -v $(pwd)/certs:/certs - mount the local certs folder into the container, so the registry server can access the certificate and key files;
  • -e REGISTRY_HTTP_TLS_CERTIFICATE - specify the location of the SSL certificate file;
  • -e REGISTRY_HTTP_TLS_KEY - specify the location of the SSL key file.
  *We’ll let Docker assign a random IP address to this container, because we’ll be accessing it by host name. The registry is running securely now, but we’ve used a self-signed certificate for an internal domain name.

-Accessing the Secure Registry

  *We’re ready to push an image into our secure registry.
docker pull hello-world
docker tag hello-world 127.0.0.1:5000/hello-world
docker push 127.0.0.1:5000/hello-world
docker pull 127.0.0.1:5000/hello-world
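   *To check what's in the registry over HTTPS you can call the v2 catalog API - a minimal sketch, assuming the self-signed certs/domain.crt generated earlier (depending on your curl/OpenSSL build you may need a SAN in the certificate, or -k to skip verification):

curl --cacert certs/domain.crt https://127.0.0.1:5000/v2/_catalog
# returns a JSON list of the repositories you have pushed, e.g. {"repositories":["hello-world"]}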
  *We can go one step further with the open-source registry server, and add basic authentication - so we can require users to securely log in to push and pull images.
- Usernames and Passwords
  *The registry server and the Docker client support basic authentication over HTTPS. The server uses a file with a collection of usernames and hashed passwords, in Apache's htpasswd format.
  *Create the password file with an entry for user "moby" with password "gordon":
mkdir auth
docker run --entrypoint htpasswd registry:latest -Bbn moby gordon > auth/htpasswd
The options are:
  • --entrypoint Overwrite the default ENTRYPOINT of the image
  • -B Use bcrypt encryption (required)
  • -b run in batch mode
  • -n display results
  *We can verify the entries have been written by checking the file contents - which shows the user name in plain text and a bcrypt-hashed password:
cat auth/htpasswd
moby:$2y$05$Geu2Z4LN0QDpUJBHvP5JVOsKOLH/XPoJBqISv1D8Aeh6LVGvjWWVC

-Running an Authenticated Secure Registry

  *Adding authentication to the registry is a similar process to adding SSL - we need to run the registry with access to the htpasswd file on the host, and configure authentication using environment variables.
  *As before, we’ll remove the existing container and run a new one with authentication configured:
docker kill registry
docker rm registry
docker run -d -p 5000:5000 --name registry \
  --restart unless-stopped \
  -v $(pwd)/registry-data:/var/lib/registry \
  -v $(pwd)/certs:/certs \
  -v $(pwd)/auth:/auth \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
  registry
The options for this container are:
  • -v $(pwd)/auth:/auth - mount the local auth folder into the container, so the registry server can access htpasswd file;
  • -e REGISTRY_AUTH=htpasswd - use the registry’s htpasswd authentication method;
  • -e REGISTRY_AUTH_HTPASSWD_REALM='Registry Realm' - specify the authentication realm;
  • -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd - specify the location of the htpasswd file.
  *Now the registry is using secure transport and user authentication.

-Authenticating with the Registry

  *With basic authentication, users cannot push or pull from the registry unless they are authenticated. If you try and pull an image without authenticating, you will get an error:
docker pull 127.0.0.1:5000/hello-world
Using default tag: latest
Error response from daemon: Get https://127.0.0.1:5000/v2/hello-world/manifests/latest: no basic auth credentials
  *The result is the same for valid and invalid image names, so you can’t even check a repository exists without authenticating. Logging in to the registry is the same docker login command you use for Docker Store, specifying the registry hostname:
docker login 127.0.0.1:5000
Username: moby
Password:
Login Succeeded
  *If you use the wrong password or a username that doesn't exist, you get a 401 error message:
Error response from daemon: login attempt to https://registry.local:5000/v2/ failed with status: 401 Unauthorized
  *Now you’re authenticated, you can push and pull as before:
docker pull 127.0.0.1:5000/hello-world
Using default tag: latest
latest: Pulling from hello-world
Digest: sha256:961497c5ca49dc217a6275d4d64b5e4681dd3b2712d94974b8ce4762675720b4
Status: Image is up to date for registry.local:5000/hello-world:latest
 *The open-source registry does not support the same authorization model as Docker Store or Docker Trusted Registry. Once you are logged in to the registry, you can push and pull from any repository; there is no way to limit specific users to specific repositories.
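 *For registry API calls like the catalog, the same credentials can be passed with HTTP basic auth - a small sketch reusing the "moby"/"gordon" user created above:

curl -u moby:gordon --cacert certs/domain.crt https://127.0.0.1:5000/v2/_catalog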

-Docker Registry is a free, open-source application for storing and accessing Docker images. You can run the registry in a container on your own network, or in a virtual network in the cloud, to host private images with secure access. For Linux hosts, there is an official registry image on Docker Hub.
-We’ve covered all the options, from running an insecure registry, through adding SSL to encrypt traffic, and finally adding basic authentication to restrict access. By now you know how to set up a usable registry in your own environment, and you’ve also used some key Docker patterns - using containers as build agents and to run basic commands, without having to install software on your host machines.
-There is still more you can do with Docker Registry - using a different storage driver so the image data is saved to reliable shared storage, and setting up your registry as a caching proxy for Docker Store are good next steps.


Private Docker Registry with Swarm

-works the same way as localhost
-because of Routing Mesh, all nodes can see 127.0.0.1:5000
-Remember to decide how to store images (volume driver)

docker service create --name registry --publish 5000:5000 registry

curl http://localhost:5000/v2/_catalog

-all nodes must be able to access images
-use a hosted SaaS registry if possible
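A quick end-to-end check that the swarm registry works - a minimal sketch using the public nginx image (the service name is arbitrary):

docker pull nginx
docker tag nginx 127.0.0.1:5000/nginx
docker push 127.0.0.1:5000/nginx
docker service create --name web -p 80:80 127.0.0.1:5000/nginx

Because of the routing mesh, every node resolves 127.0.0.1:5000 to the registry service, so workers can pull the image when the service is scheduled on them.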


Third Party Registries
a)Docker Hub
b) Docker Enterprise Edition DTR (Docker Trusted Registry)
c) Docker Registry
d) Quay.io - a popular choice, very comparable to Docker Hub as a cloud-based image registry. Sysdig's Docker Usage Report from April 2017, based on their users, shows Quay as the most popular cloud-based choice
e) hosted on AWS, Azure, GCP
f) self-hosted options - Docker EE, Quay Enterprise, and also GitLab, which comes with the GitLab Container Registry, among others.
g) Awesome Docker list -

32) Docker in Production

Dockerfiles
-more important than fancy orchestration
-it is your new build and environment documentation
-study Dockerfiles/ENTRYPOINT of hub Officials
-FROM Official distros that are most familiar

Dockerfile Maturity Model

-Make it start
-Make it log all things to stdout/stderr
-Make it documented in file
-Make it work for others
-Make it lean
-Make it scale

Dockerfile anti-pattern: trapping data
-Problem: Storing unique data in a container
-Solution: Define VOLUME for each location

VOLUME /var/lib/mysql

ENTRYPOINT ["docker-entrypoint.sh"]

CMD ["mysqld"]

-Problem: Image builds pull FROM latest
-Solution: Use specific FROM tags
-Problem: Image builds install latest packages
-Solution: Specify versions for critical apt/yum/apk packages
-Problem: Not changing app defaults, or blindly copying VM configs, e.g. php.ini, mysql.conf.d, Java memory settings
-Solution: Update default configs via ENV, RUN, and ENTRYPOINT (see the sketch below)
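A minimal Dockerfile sketch pulling those fixes together (the ENV value is a hypothetical app setting, not something the nginx image reads):

FROM nginx:1.21-alpine        # specific FROM tag instead of latest
# also pin critical apk/apt package versions in RUN steps (omitted here)
ENV APP_LOG_LEVEL=info        # hypothetical: override defaults via ENV instead of copying in a VM config file
VOLUME /var/cache/nginx       # declare where unique data lives so it is not trapped in the container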

Containers on VM or containers on bare metal

- do either, or both - lots of pros/cons to either
-stick with what you know at first
-do some basic performance testing. You will learn lots
-2017 Docker Inc. and HPE whitepaper on MySQL benchmark bretfisher.com/dockercon17eu


OS Linux Distribution/Kernel Matters

-Docker is very kernel and storage driver dependent
-Innovations/fixes are still happening here
-"Minimum" version != "best" version
-no pre-existing opinion? Ubuntu 16.04 LTS
  *popular, well-tested with Docker
  *4.x Kernel and wide storage driver support
-Or InfraKit and LinuxKit
-Get correct Docker for your distro from store.docker.com

Container Base Distribution: Which One?

-Which FROM image should you use?
-Don't make a decision based on image size (remember it is Single Instance Storage)
-At first: match your existing deployment process
-Consider changing to Alpine later, maybe much later


Good Defaults: Swarm Architectures

-Simple sizing guidelines based off:
  *Docker internal testing
  *Docker reference architectures
  *Real world deployments
  *Swarm3k lessons learned

Baby swarm: 1-Node:
-"docker swarm init" and done
-Solo VMs do it, so can Swarm
-Gives you more features than docker run
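A sketch of how little a one-node Swarm takes, assuming you already have a compose file (myapp is an arbitrary stack name):

docker swarm init
docker stack deploy -c docker-compose.yml myapp
docker service ls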


HA Swarm: 3-Node:
-Minimum for HA
-All managers
-One node can fail
-Use when very small budget
-Pet projects or Test/CI


Biz Swarm: 5-Node:
-Better high-availability
-All managers
-Two nodes can fail
-my minimum for uptime that affects $$$

Flexy Swarm: 10+ Nodes
-5 dedicated Managers
-Workers in DMZ
-Anything beyond 5 nodes, stick with 5 Managers and rest Workers
-Control container placement with labels + constraints


Swole Swarm: 100+ Nodes
- 5 dedicated managers
- Resize Managers as you grow
- Multiple worker subnets on Private/DMZ
- Control container placement with labels + constraints
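Controlling placement with labels and constraints looks like this - a sketch with a hypothetical node name (worker1) and label (zone=dmz):

docker node update --label-add zone=dmz worker1
docker service create --name proxy --constraint 'node.labels.zone==dmz' -p 80:80 nginx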

Do Not Turn Cattle into Pets
-Assume nodes will be replaced
-Assume containers will be recreated
-Docker for (AWS/Azure) does this
-LinuxKit and InfraKit expect it

Reasons for Multiple Swarms - Bad Reasons
-Different hardware configurations (for OS)
-Different subnets or security groups
-Different availability zones
-Security boundaries for compliance

Reasons for Multiple Swarms - Good Reasons
-Learning: run stuff on a test Swarm
-Geographical boundaries
-Management boundaries using the Docker API (or Docker EE RBAC, or another auth plugin)


Windows Server 2016 Swarm 
-Hard to be a "Windows-only Swarm", so mix with Linux nodes
-Many of the tools are Linux-only
-Windows = Less choice, but easier path
-My recommendation:
   *Managers on Linux
   *Reserve Windows for Windows-exclusive workloads

Outsource Well-Defined Plumbing
-Beware the "not invented here" syndrome
-If it is challenging to implement and maintain
-+ the SaaS/commercial market is mature
-= opportunity for outsourcing

Outsourcing: For Your Consideration
-Image registry
-Logs
-Monitoring and alerting


Pure Open Source Self-Hosted Tech Stack

HW/OS                     InfraKit | Terraform
Runtime                    Docker
Orchestration            Docker Swarm
Networking               Docker Swarm
Storage                      REX-Ray
CI/CD                        Jenkins
Registry                     Docker Distribution + Portus
Layer 7 Proxy            Flow-Proxy | Traefik
Central Logging         ELK
Central Monitoring    Prometheus + Grafana
Swarm GUI                Portainer

Functions As A Service: OpenFaaS


Docker for X: Cheap and Easy Tech Stack


HW/OS                     Docker for AWS/Azure
Runtime                    Docker
Orchestration            Docker Swarm
Networking               Docker Swarm
Storage                      Docker for AWS/Azure
CI/CD                        Codeship | TravisCI
Registry                     Docker Hub | Quay
Layer 7 Proxy            Flow-Proxy | Traefik
Central Logging         Docker for AWS/Azure
Central Monitoring    Librato | Sysdig
Swarm GUI                Portainer


Docker Enterprise Edition + Docker for X

HW/OS                     Docker for AWS/Azure
Runtime                    Docker EE
Orchestration            Docker Swarm
Networking               Docker Swarm
Storage                      Docker for AWS/Azure
CI/CD                        Codeship | TravisCI
Registry                     Docker EE (DTR)
Layer 7 Proxy            Docker EE (UCP)
Central Logging         Docker for AWS/Azure
Central Monitoring    Librato | Sysdig
Swarm GUI                Docker EE (UCP)

Image Security Scanning, Role-Based Access Control, Image Promotion, Content Trust

One Container Per VM: Not New
-Windows is doing it with Hyper-V Containers
-Linux is doing it with Intel Clear Containers
-LinuxKit will make this easier: Immutable OS
-Watch out for Windows "LCOW" using LinuxKit




33) Docker Swarm - Managing Distributed State with Raft

raft consensus group

a) Quorum 

-the minimum number of votes that a consensus group needs in order to be allowed to perform an operation.
Without quorum, your system can not do work.

(n/2) + 1


Managers             Quorum          Fault Tolerance
1                            1                    0
2                            2                    0
3                            2                    1
4                            3                    1
5                            3                    2
6                            4                    2
7                            4                    3

An even number of managers is highly inefficient!

Having two managers instead of one actually doubles your chances of losing quorum.


-Quorum With Multiple Regions - Pay attention to datacenter topology when placing managers.

Managers Nodes                  Distribution across 3 Regions
3                                            1-1-1
5                                            1-2-2
7                                            3-2-2
9                                            3-3-3

b) Raft

Raft is responsible for
-Log replication
-Leader election
-Safety
-being easier to understand

-Raft is used everywhere
  *etcd uses it
  *orchestration systems typically use a key/value store backed by a consensus algorithm - in a lot of cases, that algorithm is Raft
-SwarmKit implements Raft algorithm directly


-In most cases, you do not want to run work on your manager nodes. Participating in a Raft consensus group is work, too. Make your manager nodes unavailable for tasks:

docker node update --availability drain <NODE>

Raft itself consumes a fair amount of resources.

b) Leader Election - Raft algorithm

Manager leader

Manager candidate

Manager follower

Manager offline

c) Log Replication  - Raft algorithm

-In the context of distributed computing, a log is an append-only, time-based record of data.

[ first entry | 2 | 10 | 30 | 2 ]  <- new entries are appended at the end

https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying  // bit.ly/logging-post


-watch the Raft logs!  - Monitor via inotifywait or just read them directly


/var/lib/docker/swarm
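A sketch of watching that directory for raft activity, assuming inotify-tools is installed:

sudo ls /var/lib/docker/swarm
sudo inotifywait -m -r /var/lib/docker/swarm    # -m keeps monitoring, -r recurses into subdirectories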


swarm-rafttool -d <copy-of-/var/lib/docker/swarm> dump-wal   # dumps the raft log, parsed



34) Docker Swarm - Service Scheduling

- Scheduling problems ∩ HA application problems = orchestrator problems

- Scheduling constraints
  * Restrict services to specific nodes, such as specific architectures, security levels, or types

docker service create \
--constraint 'node.labels.type==web' my-app 


- Topology-aware scheduling (new in 17.04.0-ce)

--placement-pref 'spread=node.labels.dc'

Implements a spread strategy over nodes that belong to a certain category.

Unlike --constraint, this is a "soft" preference.
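A sketch with hypothetical datacenter labels (node1/node2, dc=east/west):

docker node update --label-add dc=east node1
docker node update --label-add dc=west node2
docker service create --name web --replicas 4 \
  --placement-pref 'spread=node.labels.dc' nginx

Swarm tries to spread the 4 replicas evenly across the dc values, but unlike a hard constraint it will still schedule tasks if one datacenter is unavailable.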


Swarm will not rebalance healthy tasks when a new node comes online!


35) Docker Swarm - Failure Recovery

36) Docker Ecosystem

a) Docker Client

b) Docker Machine

c) Docker Hub

d) Docker Server

e) Docker Images

f) Docker Compose

- separate CLI that gets installed along with Docker
- used to start up multiple Docker containers at the same time
- automates some of the long-winded arguments we were passing to 'docker run'

docker-compose up  = docker run myimage


docker-compose up --build = docker build .  + docker run myimage

docker-compose down # stop and remove the containers
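A minimal docker-compose.yml sketch with hypothetical service names, just to show what the single docker-compose up command replaces:

version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
  redis:
    image: redis:alpine

docker-compose up --build then builds the web image and starts both containers with one command.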



37) Q&A

a) Alpine Base Image. Are they Really More Secure?

- Container Scanning Comparison: https://kubedex.com/follow-up-container-scanning-comparison/

http://cve.mitre.org/

https://alpinelinux.org/

- Alpine is a small (~5 MB) minimal Linux image

- Theory: better security through less software to patch, so fewer potential vulnerabilities

-it is really hard, if not impossible, to scan Alpine images against the CVE database (the known database of common vulnerabilities) - so if you are someone who is going to use a security scanner, Alpine can actually work against you

- another thing noticed recently is that Alpine sometimes has sneaky problems that creep up on you in ways you wouldn't expect - for example, getting Alpine working with nodemon has known problems.


b) Dealing with non-root users in containers and file permissions


https://blog.mornati.net/docker-images-and-chown/

Changing file permissions:
- using the --chown flag (e.g. on COPY/ADD in a Dockerfile)
- in an entrypoint script - if you need to change permissions of directories at runtime, like when a volume has to be set to certain permissions
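A Dockerfile sketch of the build-time approach, assuming the official node image (which ships a non-root "node" user); the entry point file is hypothetical:

FROM node:12-alpine
RUN mkdir /app && chown node:node /app    # create the app dir owned by the non-root user
WORKDIR /app
USER node                                 # run the rest of the build and the container as "node"
COPY --chown=node:node . .                # --chown avoids root-owned files in the image
CMD ["node", "app.js"]                    # hypothetical entry point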


-Phil Estes implemented the user namespace feature inside Docker. It is not enabled by default, but if you want all your containers to run as non-root you can enable it.

c) Apache Web Server Design. Many sites in one container, or many containers?

-Apache web servers have the ability to run many websites in one daemon

Isolating many websites into different containers:
-in theory that means we are running more Apache daemons, which is a little less efficient
-more executables are running, taking more RAM and potentially wasting some resources - but that only becomes really significant with a larger number of containers
-you can scale every website independently
-if you need to change something, you can touch only the containers that need it


-the question is not as easy for databases, because efficiency matters more there





d) Docker Network IP Subnet conflicts with outside networks

-docker network default is bridge
-swarm network default is overlay

-The default Docker network is a bridge, so containers are NATed from a private subnet out to your company network. Even though it won't technically conflict with your physical networks, you cannot reuse subnets that already exist outside your host - that is just standard TCP/IP routing. If my container is on a 10.0.0.0 subnet inside Docker, and the same 10.0.0.0 range is also used somewhere in my corporate network, then my container will not know how to route to that network, because it thinks those addresses are already part of its local subnet.

-In Docker settings -> Advanced / Daemon you can change the default subnets

-Packets may still get in, because inbound traffic knows how to reach your machine, but when you try to send packets back out, your machine may refuse to route them, because it believes that IP address is in its local subnet - or at least it thinks it is.

https://serverfault.com/questions/916941/configuring-docker-to-not-use-the-172-17-0-0-range/942176#942176
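One way to move Docker's defaults off a conflicting range (on newer Docker versions) is the default-address-pools setting in daemon.json - a sketch using an arbitrary 10.200.0.0/16 pool; merge it with any settings you already have, then restart the daemon:

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
EOF
sudo systemctl restart docker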


e) Rasberry PI Development in Docker


- The CPU architecture on a Raspberry Pi is not x86 like Intel; it is ARM (64-bit on newer models)

- It is easier to develop your Docker images locally on your macOS or Windows machine. The way that works is QEMU, an emulation feature built into Docker Desktop, so it lets you build and run not just ARM but other architectures and processors too. They run inside your Docker Linux VM in the background, transparently, and behave like regular x86 containers. Once you get things working correctly, you upload your image to a registry that your Pi has access to, then pull and run those images on the Pi.
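On recent Docker Desktop versions, buildx wraps that QEMU emulation, so a multi-arch build and push can be a couple of commands - a sketch with a hypothetical image name:

docker buildx create --use
docker buildx build --platform linux/arm64,linux/arm/v7 -t your_login/myapp --push .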

https://blog.alexellis.io/tag/raspberry-pi/

https://docs.docker.com/docker-for-mac/multi-arch/


f) Windows 10 Containers Get Process Isolation

- Docker Desktop can run on Windows 10 Pro and Enterprise
-Windows containers are Windows binaries launched inside a container. Starting a container spins up a small Server Core or Nano Server (slimmed-down variants of Windows Server 2016/2019) and launches the target container inside it. You get the full server experience - IIS features and other things you get from a server that differ from Windows 10 - because this is known as Hyper-V isolation, and the downside is that it takes up more resources.
-You have to spin up a whole kernel for a new operating system. If you just want to run a simple executable - maybe a .NET app or a command-line tool, not an IIS website - or some background process in a Windows container, you do not really need that full Hyper-V VM isolation.
-Process isolation has now finally come to Windows 10. Once you turn on a couple of things and enable it, Docker Desktop will let you run a Windows binary on Windows 10 natively, without having to spin up a Hyper-V container.
-Docker Desktop still needs Hyper-V running in order to install and work, but process isolation is now possible (see the sketch below).
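A sketch of running a Windows container with process isolation, assuming Windows 10 1809 or later and a nanoserver tag that matches your host build:

docker run --isolation=process mcr.microsoft.com/windows/nanoserver:1809 cmd /c echo hello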

https://stefanscherer.github.io/how-to-run-lightweight-windows-containers-on-windows-10/

https://github.com/moby/moby/pull/38000

g) Should You move Postgres to containers

https://github.com/BretFisher/sysbench-docker-hpe

https://d.pr/f/Zjv65z/1BuMbZ8p

h) Using supervisor to run multiple apps in a container

- assumption: putting supervisor in the container and running it as PID 1 (the root process) in that container
-Docker best practice is: never more than one app per container

https://docs.docker.com/config/containers/multi-service_container/

https://github.com/BretFisher/php-docker-good-defaults/blob/master/supervisord.conf

i) Should you use Docker Compose or Swarm  for a single server?



  • It only takes a single command to create a Swarm from that docker host docker swarm init.
  • It saves you from needing to manually install/update docker-compose on that server. Docker engine is installable and updatable via common Linux package managers (apt, yum) via https://store.docker.com but docker-compose is not.
  • When you're ready to become highly-available, you won't need to start from scratch. Just add two more nodes to a well-connected network with the 1st node. Ensure firewall ports are open between them. Then use docker swarm join-token manager on 1st node and run that output on 2nd/3rd. Now you have a fully redundant raft log and managers. Then you can change your compose file for multiple replicas of each of your services and re-apply with docker stack deploy again and you're playin' with the big dogs!
  • You get a lot of extra features out-of-the-box with Swarm, including secrets, configs, auto-recovery of services, rollbacks, and healthchecks.
  • Healthchecks, healthchecks, healthchecks. docker run and docker-compose won't re-create containers that failed a built-in healthcheck. You only get that with Swarm, and it should always be used for production on all containers (see the HEALTHCHECK sketch after this list).
  • Rolling updates. Swarm's docker service update command (which is also used by docker stack deploy when updating yaml changes) has TONS of options for controlling how you replace containers during an update. If you're running your own code on a Swarm, updates will be often, so you want to make sure the process is smooth, depends on healthchecks for being "ready", maybe starts a new one first before turning off old container, and rolls back if there's a problem. None of that happens without Swarm's orchestration and scheduling.
  • Local docker-compose for development works great in the workflow of getting those yaml files into production Swarm servers.
  • Docker and Swarm are the same daemon, so no need to worry about version compatibility of production tools. Swarm isn't going to suddenly make your single production server more complex to manage and maintain.
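A minimal HEALTHCHECK sketch for a Dockerfile, assuming the image contains curl and serves HTTP on port 80:

HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1

With this in place, Swarm marks the task unhealthy after repeated failed probes and replaces it.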

https://github.com/BretFisher/ama/issues/8

j) Docker environment configs, variables, and entrypoints



https://12factor.net/config

https://github.com/BretFisher/php-docker-good-defaults/blob/master/Dockerfile

https://github.com/BretFisher/php-docker-good-defaults/blob/master/docker-compose.yml

https://github.com/BretFisher/php-docker-good-defaults/blob/master/docker-php-entrypoint

https://github.com/docker-library/mysql/blob/a7a737f1eb44db467c85c8229df9d886dd63460e/8.0/docker-entrypoint.sh#L21-L41

https://www.oreilly.com/ideas/3-docker-compose-features-for-improving-team-development-workflow

https://github.com/BretFisher/ama/issues/7

k) Java and JBoss in Container. One .war file per container?

Split into multiple containers -> long-term solution: you can make changes in one of the packages without redeploying the rest

l) TLS in dev and prod with Docker

https://letsencrypt.org/docs/certificates-for-localhost/

https://github.com/BretFisher/dogvscat

https://traefik.io/

m) Multiple Docker images from one git repo

https://docs.docker.com/engine/reference/commandline/build/

n) Docker + ARM, using Raspberry PI or AWS A1 instances with Docker

https://www.theregister.co.uk/2019/04/24/docker_arm_collaberation/

https://www.bretfisher.com/docker-mastery-for-nodejs/

https://aws.amazon.com/blogs/aws/new-ec2-instances-a1-powered-by-arm-based-aws-graviton-processors/

https://www.qemu.org/

o) Docker and Swarm RBAC Options

p) ENTRYPOINT vs CMD - what's the difference in Dockerfiles?

https://docs.docker.com/engine/reference/builder/#entrypoint

https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#entrypoint

http://www.johnzaccone.io/entrypoint-vs-cmd-back-to-basics/
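A tiny sketch of how they interact - ENTRYPOINT fixes the executable, CMD supplies default arguments that docker run can override:

FROM alpine:3.12
ENTRYPOINT ["ping"]     # always runs ping
CMD ["localhost"]       # default argument, replaced by anything after "docker run <image>"

So docker run <image> pings localhost, while docker run <image> 8.8.8.8 pings 8.8.8.8 instead.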

r) How to use external storage in Docker?

s) Can I turn a VM into a container?

https://github.com/docker/communitytools-image2docker-win

https://github.com/docker/communitytools-image2docker-linux

https://www.youtube.com/watch?v=YVfiK72Il5A

t) Startup order with multi-container apps

https://12factor.net/


38) Restart policies

a) "no" - never attempt to restart this container if it stops or crashes

b) always - If this container stops "for any reason" always attempt to restart it
c) on-failure - only restart if the container stops with an error code

d) unless-stopped - always restart unless we (the developers) forcibly stop it
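Restart policies are set per container at run time - a sketch with a hypothetical image name:

docker run -d --restart unless-stopped myimage
docker run -d --restart on-failure:3 myimage    # give up after 3 failed restarts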


39) Sample pipeline/setup

- DEV
  * create/change features
  * make changes on a non-master branch
- push to github
- create  Pull Request to merge with master
-TEST
  * code pushed to Travis CI
  * tests run
-Merge PR with master
-PROD
  * code pushed to Travis CI
  * tests run
  * deploy to AWS Elastic Beanstalk



docker build -f Dockerfile.dev .
docker run <image_id>    # default CMD is ["npm", "run", "start"]
docker run b8cb8d8217 npm run test



1. use node:alpine
2. copy the package.json file
3. install dependencies - dependencies only needed to execute 'npm run build'
4. run 'npm run build'
5. start nginx - where is nginx coming from?


Build phase
1. use node:alpine
2.copy the package.json file
3. install dependencies
4. run 'npm run build'

Run Phase
1. use nginx
2. copy over the result of 'npm run build'
3. start nginx

40) Multi-stage builds / Dockerfiles

https://docs.docker.com/develop/develop-images/multistage-build/


41) Docker compose - templates, environment variables, compose command scope

https://www.oreilly.com/ideas/3-docker-compose-features-for-improving-team-development-workflow

42) Docker - Node.js - good practices

https://github.com/BretFisher/node-docker-good-defaults

a) Node.js Dockerfile best practices

Node Sample Dockerfile

FROM node:12
EXPOSE 3000
WORKDIR /app
COPY package.json package-lock*.json ./
RUN npm install && npm cache clean --force
COPY . .
CMD ["node", "app.js"]    # hypothetical entry point - use your app's actual start file

b) Real-world multistage Dockerfile


#Dockerfile

FROM node:alpine as builder
WORKDIR '/app'
COPY package.json .
RUN npm install && npm cache clean --force
COPY . .
RUN npm run build    # produces /app/build, which the nginx stage copies below

FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html


c) Build with auditing and sec scans

d) Proper Node shutdown

e) Node HTTP connection management

43) Docker - PHP - good practices

https://github.com/BretFisher/php-docker-good-defaults

44) Docker Certification

https://www.bretfisher.com/docker-certified-associate/

45) Benefits (pros) of Docker

- isolation: you ship a binary with all its dependencies - no more "it works on my machine, but not in production"
-closer parity between dev, QA, and production environments
-Docker makes development teams able to ship faster
-you can run the same Docker image, unchanged, on laptops, data center VMs, and cloud providers
-Docker uses Linux containers (a kernel feature) for operating-system-level isolation

46) Pushing an image to DockerHub

docker login
docker tag imageid your_login/docker_image
docker push your_login/docker_image


47) ELK (ElasticSearch, Logstash, Kibana) on Docker

https://github.com/deviantony/docker-elk


48)
49)
50)

