Docker Compose
For running multiple containers, we use Docker Compose with a YAML file. This file specifies the services that we need to run as containers.
docker-compose is the command that we use.
It can be used for any environment - staging, production, development
You can define volumes & services inside the YAML file
docker-compose up brings a set of containers up and running; docker-compose down stops those containers.
Example:

version: '2.0'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: redis
volumes:
  someVolume: {}
Use an env file (.env) to pass environment variables.
docker-compose up -d for detached mode.
docker-compose ps for listing the containers managed by Compose.
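docker-compose also reads a .env file from the project directory and substitutes those variables inside the YAML. A minimal sketch of that (the variable name here is hypothetical):

# .env
REDIS_TAG=6-alpine

# docker-compose.yml (snippet)
services:
  redis:
    image: "redis:${REDIS_TAG}"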
Kubernetes (K8s)
In the ideal scenario, AWS (Elastic Beanstalk) scales using Dockerrun.aws.json, i.e. if we have 3 services defined inside it, scaling creates a copy of each, which is not the optimal result. We might also want to control things like scaling, rollbacks, canary deployments, etc., and store critical info like passwords and keys. So we use an orchestration system like K8s, Docker Swarm or Mesos.
We have a master (control plane) which controls the nodes. Kubernetes is helpful if we have different types of containers running together. Minikube is used to run K8s locally.
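A minimal sketch of a Deployment, the K8s object that handles the scaling/rollback part (the names and image here are hypothetical, not from any real setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # the control plane keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myrepo/web:1.0
          ports:
            - containerPort: 5000

Apply it with kubectl apply -f deployment.yaml (after minikube start when running locally).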
Multi Docker - EB
If you want to deploy a multi-container Docker app to AWS, you need the following:
Add a Dockerrun.aws.json & set AWSEBDockerrunVersion to 2
While creating an application under Elastic Beanstalk, when you select platform as Docker, select platform branch to multi-container.
Similar to docker-compose, add your services to Dockerrun.aws.json (a sketch follows this list).
If you are using Travis (which I am using because it works with GitHub), add a deploy section with the app, env, access keys, etc.
If you have services like Redis, PostgreSQL, etc., you need to enable connections between them using security rules. I basically added them all to one security group to make my app work.
You can obviously add environment variables to your EB application.
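A minimal sketch of a version-2 Dockerrun.aws.json (the container names, images and memory values are hypothetical):

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "myuser/web",
      "essential": true,
      "memory": 128,
      "portMappings": [
        { "hostPort": 80, "containerPort": 5000 }
      ],
      "links": ["redis"]
    },
    {
      "name": "redis",
      "image": "redis",
      "essential": false,
      "memory": 128
    }
  ]
}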
RDS and EC
Relational Database Service (RDS) - managed MySQL, PostgreSQL, SQL Server, etc. - & ElastiCache - managed Redis & Memcached. AWS provides managed versions of these out of the box. So in local environments you might use Docker containers to spin up a Redis container or a MySQL container or whatever, but in production environments, use these managed versions.
Note:
You can always choose not to go with these but using a managed version reduces your work.
Redis supports persistent storage; Memcached doesn't.
VPC & Subnets
Subnets: sub (small) networks created to achieve security of data within a sub-network and better performance, since smaller networks are faster. If the VPC is a house, a subnet is a room inside the house.
VPC - Virtual Private Cloud (AWS). By default in a cloud setup, instances are isolated from each other. With a VPC you have inbound and outbound rules (defined by security groups) which state what traffic can leave the VPC and what traffic is allowed to enter it.
Security Group: A security group acts as a firewall for your instance to control inbound and outbound traffic. A security group is stateful, which means the incoming response to a particular outgoing request is always allowed.
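For example, letting app instances in one security group reach Redis in another (a sketch; both group IDs are hypothetical):

# Allow inbound Redis traffic (port 6379) from instances in the app's group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111bbb22222c \
  --protocol tcp \
  --port 6379 \
  --source-group sg-0ddd3333eee44444f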
Deployment to AWS (single container)
I tried CI/CD using Travis & AWS today.
Add a .travis.yml file to your codebase. (These are the steps used by Travis to build your project; a full sketch appears at the end of this post.)
Go to the Travis dashboard and connect your GitHub account to Travis. Once done, you will be able to use your GitHub repo as a project in Travis.
Go to AWS and create an Elastic Beanstalk (EB) application. Use `Docker` as the platform while creating this. Also create a user and give it the required roles/permissions (deploy to the S3 bucket and EB environment).
FYI: EB provides auto-scaling and load balancer out of the box.
Once you create the user at step 3, you get an access_key_id and secret_access_key, which can be stored in Travis environment variables. Note: these will be used by Travis while initiating the deploy to AWS.
`deploy:` section can be used in the Travis file to instruct Travis to try and deploy to AWS. This is the code I used:

deploy:
  provider: elasticbeanstalk
  access_key_id: ${key_id}
  secret_access_key: ${access_key}
  region: "ap-south-1"
  app: "aws-integration"
  env: "AwsIntegration-env"
  bucket_name: "${s3_bucket}"
Provider will be hard-coded. `app` and `env` are from step 3 when you created a new EB application.
Note: In the Dockerfile, stage aliases (FROM ... as ...) didn't work for me, so you might want to change that.
You can also check environment logs.
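Putting the steps together, a minimal .travis.yml for this flow might look like the sketch below (the image name and test command are hypothetical; the deploy values are from the snippet above):

sudo: required
services:
  - docker

before_install:
  - docker build -t myuser/aws-integration .

script:
  - docker run myuser/aws-integration npm test

deploy:
  provider: elasticbeanstalk
  access_key_id: ${key_id}
  secret_access_key: ${access_key}
  region: "ap-south-1"
  app: "aws-integration"
  env: "AwsIntegration-env"
  bucket_name: "${s3_bucket}"
  on:
    branch: master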
ONBUILD
The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build.
The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.
Say there is a base image - base:app - and a child image - app. Now, you might want to automate some things in base:app itself. Example: paths that are defined in the base image can be reused, and some further steps can be automated (before the actual content is copied into the child image).
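A minimal sketch of that setup (image and file names are hypothetical):

# Dockerfile for base:app
FROM node:alpine
WORKDIR /app
# These two lines do nothing while building base:app itself;
# they fire when a child image builds FROM base:app
ONBUILD COPY package.json /app/
ONBUILD RUN npm install

# Dockerfile for the child image "app"
FROM base:app
# The ONBUILD triggers above have already copied package.json and installed deps
COPY . /app
CMD ["node", "index.js"]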
Best practices for Docker
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#add-or-copy
If this link doesn't open, please DM me at [email protected]. I might want to update/delete the post.
Multi-image Dockerfile (Sample)
FROM node:alpine as node-container
WORKDIR /tmp/app
COPY ./test.js ./

# Copying from node-container
FROM nginx
EXPOSE 3000
COPY --from=node-container /tmp/app /app
COPY --from will copy from one stage (image) to the other. Note: this can also be achieved using volumes.
Detached container
You can use -d to run a Docker container in detached mode so that you can execute other commands.
docker run -d whatever
Now this will print the containerId, and the container will run in the background.
If in case you want to re-attach, use
docker attach <containerId>
docker run
docker run = docker create + docker start. In most scenarios, we will use docker run to create a container from an image, but there could be scenarios where you would want to create a Docker container and then start it manually. For example: if a container was stopped, there is no need to create another one - just start it again.
Though if we are using a restart policy, it will restart automatically.
docker start -a -i containerId
-a and -i are optional flags. -i connects STDIN and -a attaches STDOUT and STDERR.
In production, you won't need them. Flag -i is applicable to the docker run command as well.
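A quick sketch of the create-then-start flow (hello-world is just a stand-in image):

docker create hello-world       # prints a containerId; the container is not running yet
docker start -a <containerId>   # start it with STDOUT/STDERR attached
docker start -a <containerId>   # a stopped container can simply be started again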
Get docker info
docker inspect <containerId>
This will print the result as JSON.
docker inspect <volumeId/volumeName>
Displays info about the docker volume.
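Since the JSON output is large, docker inspect also accepts a --format flag with a Go template to extract a single field (the container name here is hypothetical):

docker inspect --format '{{.NetworkSettings.IPAddress}}' my-container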
volume - refer host altogether
Using volumes, you can have Docker manage storage on the host for a specific directory altogether (instead of keeping it inside the container). This is an anonymous volume.
docker run -v /location -it someImage
Without a colon (:), Docker creates an anonymous volume on the host for that container path (the path must be absolute).
Note: Using a volume driver, you can store data on external storage like S3 or GCS (Google Cloud).
CI/CD
Your Jenkins job can build a Docker image and then push it to Docker Hub (a public or private Docker registry).
Then you can tell AWS (I prefer AWS for development) about this image; AWS pulls the image from Docker Hub/the registry, builds an environment, and deploys that environment to production.
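The build-and-push half of that pipeline, as a sketch (the image name is hypothetical):

docker build -t myuser/myapp:1.0 .    # build the image from your Dockerfile
docker login                          # authenticate against Docker Hub
docker push myuser/myapp:1.0          # push so AWS can pull it later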
Hypervisor & VMs
A hypervisor is used for running VMs. It lets you run multiple VMs on the same machine and focuses on better utilisation of resources: system resources are shared between VMs and the hypervisor controls that, but the VMs always run in isolation and don't share anything - not even the kernel. A hypervisor provides machine-level virtualisation.
Note: Docker containers are lightweight alternatives to running a separate VM for each micro-service. Docker uses resources on demand. You can either run multiple services inside one container or use Docker Compose to start multiple containers, each running one service. Containers provide OS-level virtualisation.
volumes
Volumes are mappings of storage space to the host system. They can be used when multiple services want to share data, and also when you want data to persist. (When a container is removed, the storage space inside it also gets deleted.)
docker run -v myVolume:/path/inside/container -it someImage
- myVolume can be created before-hand using:
docker volume create myVolume
- You can list down all volumes using:
docker volume ls
- Deleting a volume:
docker volume rm volumeName
- Removing all volumes
docker volume prune
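A sketch of two containers sharing one named volume (the container and image names are hypothetical):

docker volume create myVolume
docker run -d --name writer -v myVolume:/data someImage
docker run -d --name reader -v myVolume:/data someOtherImage
# Both containers see the same /data, and it survives container removal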
Listing/removing images
Listing all the images: docker images
Removing a specific image: docker rmi imageId
Pulling an image (doesn't run it): docker pull imageName