#postgres docker
codeonedigest · 1 year ago
Video
youtube
Run Nestjs Microservices in Docker Container with Postgresql Database. Full video link: https://youtu.be/HPvpxzagsNg. Check out this new video about running Nestjs microservices in a Docker container with a Postgresql DB on the CodeOneDigest YouTube channel! Learn to set up a Nestjs project with dependencies, create a Docker image of the project, and connect the application to a Postgresql database. #postgresql #nestjs #docker #dockerfile #microservices #codeonedigest @nestframework @nodejs @typescript @Docker @PostgreSQL @typeormjs @JavaScript @dotenvx @npmjs @vscodetips @getpostman
1 note · View note
mutter-official · 6 months ago
Text
I highly recommend doing it via Docker. It makes the whole system so much more modular and extensible and (for the most part) easier to maintain. Also, if you use the AIO image, it's really easy to set up and maintain (though you do lose some of the modularity)
I also highly recommend setting it up with Postgres and Redis if you want speed. Postgres in my experience is just a little bit faster than MariaDB, and it'll use Redis as a cache that'll make it waaaaaaayyyy faster.
It did use to be pretty slow, but it's improved a LOT over the last couple of years. It's how I manage every single file I use or ever have used, with the exception of my git repos (which live on GitHub).
The nextcloud apps are really cool. You can just super easily add in extra services without needing to configure anything. I found "deck" yesterday and now we have a neat little kanban board for all the things we need to do for the move
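For anyone wondering what that stack looks like in practice, here is a minimal docker-compose sketch of Nextcloud with Postgres and Redis. The image tags and credentials are placeholders, not a recommendation, so adjust them to your setup:

```yaml
# Hypothetical minimal stack; credentials and tags are placeholders.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me
    volumes:
      - db_data:/var/lib/postgresql/data
  redis:
    image: redis:7
  app:
    image: nextcloud
    ports:
      - "8080:80"
    environment:
      POSTGRES_HOST: db
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me
      REDIS_HOST: redis
    depends_on:
      - db
      - redis
volumes:
  db_data:
```

The AIO image mentioned above replaces most of this with a single container, which is the modularity trade-off.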
33 notes · View notes
debian-official · 5 months ago
Text
chat I'm testing a few systems & need a simple application with postgres database & ideally docker compose ready to go. any recommendations?
8 notes · View notes
molsno · 1 year ago
Text
if my goal with this project was just "make a website" I would just slap together some html, css, and maybe a little bit of javascript for flair and call it a day. I'd probably be done in 2-3 days tops. but instead I have to practice and make myself "employable" and that means smashing together as many languages and frameworks and technologies as possible to show employers that I'm capable of everything they want and more. so I'm developing apis in java that fetch data from a postgres database using spring boot with authentication from spring security, while coding the front end in typescript via an angular project served by nginx with https support and cloudflare protection, with all of these microservices running in their own docker containers.
basically what that means is I get to spend very little time actually programming and a whole lot of time figuring out how the hell to make all these things play nice together - and let me tell you, they do NOT fucking want to.
but on the bright side, I do actually feel like I'm learning a lot by doing this, and hopefully by the time I'm done, I'll have something really cool that I can show off
8 notes · View notes
sun-praiser · 7 months ago
Text
When you attempt to validate that a data pipeline is loading data into a postgres database, but you are unable to find the configuration tables that you stuffed into the same database out of expediency, let alone the data that was supposed to be loaded, don't be surprised if you find out after hours of troubleshooting that your local postgres server was also running.
Further, don't be surprised if, despite the pgadmin connection string correctly pointing to localhost:5432 (docker can use the same binding), pgadmin decides to connect you to the local server that has the same database name, database user name, and database user password.
Lessons learned:
try to use unique database names with distinct users and passwords across all databases involved, in order to avoid this tomfoolery in the future, EVEN IN TEST, ESPECIALLY IN TEST (I don't really have a 'prod' environment, homelab and all that, but holy fuck)
do not slam dunk everything into a database named 'toilet' while playing around with database schemas in order to solidify your transformation logic, and then leave your local instance running.
do not, in your docker-compose.yml file, also name the database you are storing data into, 'toilet', on the same port, and then get confused why the docker container database is showing new entries from the DAG load functionality, but you cannot validate through pgadmin.
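One way to make this class of mix-up impossible (a sketch, assuming the compose file is yours to edit) is to publish the container's Postgres on a host port other than 5432, so pgadmin can never silently land on the local server:

```yaml
# Hypothetical fragment: the container listens on 5432 internally,
# but is published on host port 5433 to avoid colliding with a
# local postgres already bound to 5432.
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: pipeline_db   # anything but the local server's name
    ports:
      - "5433:5432"
```

A pgadmin connection string pointed at localhost:5433 then has exactly one possible destination.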
3 notes · View notes
reasonableapproximation · 2 years ago
Text
A thing I've been looking into at work lately is collation, and specifically sorting. We want to compare in-memory implementations of things to postgres implementations, which means we need to reproduce postgres sorting in Haskell. Man it's a mess.
By default postgres uses glibc to sort. So we can use the FFI to reproduce it.
This mostly works fine, except if the locale says two things compare equal, postgres falls back to byte-comparing them. Which is also fine I guess, we can implement that too, but ugh.
Except also, this doesn't work for the mac user, so they can't reproduce test failures in the test suite we implemented this in.
How does postgres do sorting on mac? Not sure.
So we figured we'd use libicu for sorting. Postgres supports that, haskell supports it (through text-icu), should be fine. I'm starting off with a case-insensitive collation.
In postgres, you specify a collation through a string like en-u-ks-level2-numeric-true. (Here, en is a language, u is a separator, ks and numeric are keys and level2 and true are values. Some keys take multiple values, so you just have to know which strings are keys I guess?) In Haskell you can do it through "attributes" or "rules". Attributes are type safe but don't support everything you might want to do with locales. Rules are completely undocumented in text-icu, you pass in a string and it parses it. I'm pretty sure the parsing is implemented in libicu itself but it would be nice if text-icu gave you even a single example of what they look like.
But okay, I've got a locale in Haskell that I think should match the postgres one. Does it? Lolno
So there's a function collate for "compare these two strings in this locale", and a function sortKey for "get the sort key of this string in this locale". It should be that "collate l a b" is the same as "compare (sortKey l a) (sortKey l b)", but there are subtle edge cases where this isn't the case, like for example when a is the empty string and b is "\0". Or any string whose characters are all drawn from a set that includes NUL, lots of other control codes, and a handful of characters somewhere in the Arabic block. In these cases, collate says they're equal but sortKey says the empty string is smaller. But pg gets the same results as collate so fine, go with that.
Also seems like text-icu and pg disagree on which blocks get sorted before which other blocks, or something? At any rate I found a lot of pairs of (latin, non-latin) where text-icu sorts the non-latin first and pg sorts it second. So far I've solved this by just saying "only generate characters in the basic multilingual plane, and ignore anything in (long list of blocks)".
(Collations have an option for choosing which order blocks get sorted in, but it's not available with attributes. I haven't bothered to try it with rules, or with the format pg uses to specify them.)
I wonder how much of this is to do with using different versions of libicu. For Haskell we use a nix shell, which is providing version 72.1. Our postgres comes from a docker image and is using 63.1. When I install libicu on our CI images, they get 67.1 (and they can't reproduce the collate/sortKey bug with the arabic characters, so fine, remove them from the test set).
(I find out version numbers locally by doing lsof and seeing that the files are named like .so.63.1. Maybe ldd would work too? But because pg is in docker I don't know where the binary is. On CI I just look at the install logs.)
I wonder if I can get 63.1 in our nix shell. No, node doesn't support below 69. Fine, let's try 69. Did you know chromium depends on libicu? My laptop's been compiling chromium for many hours now.
7 notes · View notes
souhaillaghchimdev · 2 months ago
Text
Using Docker in Software Development
Docker has become a vital tool in modern software development. It allows developers to package applications with all their dependencies into lightweight, portable containers. Whether you're building web applications, APIs, or microservices, Docker can simplify development, testing, and deployment.
What is Docker?
Docker is an open-source platform that enables you to build, ship, and run applications inside containers. Containers are isolated environments that contain everything your app needs—code, libraries, configuration files, and more—ensuring consistent behavior across development and production.
Why Use Docker?
Consistency: Run your app the same way in every environment.
Isolation: Avoid dependency conflicts between projects.
Portability: Docker containers work on any system that supports Docker.
Scalability: Easily scale containerized apps using orchestration tools like Kubernetes.
Faster Development: Spin up and tear down environments quickly.
Basic Docker Concepts
Image: A snapshot of a container. Think of it like a blueprint.
Container: A running instance of an image.
Dockerfile: A text file with instructions to build an image.
Volume: A persistent data storage system for containers.
Docker Hub: A cloud-based registry for storing and sharing Docker images.
Example: Dockerizing a Simple Python App
Let’s say you have a Python app called app.py:

```python
# app.py
print("Hello from Docker!")
```
Create a Dockerfile:

```dockerfile
# Dockerfile
FROM python:3.10-slim
COPY app.py .
CMD ["python", "app.py"]
```
Then build and run your Docker container:

```bash
docker build -t hello-docker .
docker run hello-docker
```
This will print Hello from Docker! in your terminal.
Popular Use Cases
Running databases (MySQL, PostgreSQL, MongoDB)
Hosting development environments
CI/CD pipelines
Deploying microservices
Local testing for APIs and apps
Essential Docker Commands
docker build -t <name> . — Build an image from a Dockerfile
docker run <image> — Run a container from an image
docker ps — List running containers
docker stop <container_id> — Stop a running container
docker exec -it <container_id> bash — Access the container shell
Docker Compose
Docker Compose allows you to run multi-container apps easily. Define all your services in a single docker-compose.yml file and launch them with one command:

```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  db:
    image: postgres
```
Start everything with:

```bash
docker-compose up
```
Best Practices
Use lightweight base images (e.g., Alpine)
Keep your Dockerfiles clean and minimal
Ignore unnecessary files with .dockerignore
Use multi-stage builds for smaller images
Regularly clean up unused images and containers
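To illustrate the multi-stage point with the Python app from earlier, here is one possible sketch (the --prefix trick is one common pattern, not the only one): dependencies are installed in a throwaway build stage, and only the installed packages are copied onto the slim base.

```dockerfile
# Stage 1: install dependencies with the full toolchain available
FROM python:3.10 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: ship only the app and its installed packages on a slim base
FROM python:3.10-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY app.py .
CMD ["python", "app.py"]
```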
Conclusion
Docker empowers developers to work smarter, not harder. It eliminates "it works on my machine" problems and simplifies the development lifecycle. Once you start using Docker, you'll wonder how you ever lived without it!
0 notes
learning-code-ficusoft · 4 months ago
Text
Using Docker for Full Stack Development and Deployment
1. Introduction to Docker
What is Docker? Docker is an open-source platform that automates the deployment, scaling, and management of applications inside containers. A container packages your application and its dependencies, ensuring it runs consistently across different computing environments.
Containers vs Virtual Machines (VMs)
Containers are lightweight and use fewer resources than VMs because they share the host operating system’s kernel, while VMs simulate an entire operating system. Containers are more efficient and easier to deploy.
Docker containers provide faster startup times, less overhead, and portability across development, staging, and production environments.
Benefits of Docker in Full Stack Development
Portability: Docker ensures that your application runs the same way regardless of the environment (dev, test, or production).
Consistency: Developers can share Dockerfiles to create identical environments for different developers.
Scalability: Docker containers can be quickly replicated, allowing your application to scale horizontally without a lot of overhead.
Isolation: Docker containers provide isolated environments for each part of your application, ensuring that dependencies don’t conflict.
2. Setting Up Docker for Full Stack Applications
Installing Docker and Docker Compose
Docker can be installed on any system (Windows, macOS, or Linux), along with Docker Compose, which simplifies multi-container management.
Commands:
docker --version to check the installed Docker version.
docker-compose --version to check the Docker Compose version.
Setting Up Project Structure
Organize your project into different directories (e.g., /frontend, /backend, /db).
Each service will have its own Dockerfile and configuration file for Docker Compose.
3. Creating Dockerfiles for Frontend and Backend
Dockerfile for the Frontend:
For a React/Angular app:
```dockerfile
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```
This Dockerfile installs Node.js dependencies, copies the application, exposes the appropriate port, and starts the server.
Dockerfile for the Backend:
For a Python Flask app:
```dockerfile
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```
For a Java Spring Boot app:
```dockerfile
FROM openjdk:11
WORKDIR /app
COPY target/my-app.jar my-app.jar
EXPOSE 8080
CMD ["java", "-jar", "my-app.jar"]
```
This Dockerfile installs the necessary dependencies, copies the code, exposes the necessary port, and runs the app.
4. Docker Compose for Multi-Container Applications
What is Docker Compose? Docker Compose is a tool for defining and running multi-container Docker applications. With a docker-compose.yml file, you can configure services, networks, and volumes.
docker-compose.yml Example:
```yaml
version: "3"
services:
  frontend:
    build:
      context: ./frontend
    ports:
      - "3000:3000"
  backend:
    build:
      context: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
```
This YAML file defines three services: frontend, backend, and a PostgreSQL database. It also sets up networking and environment variables.
5. Building and Running Docker Containers
Building Docker Images:
Use docker build -t <image_name> <path> to build images.
For example:
```bash
docker build -t frontend ./frontend
docker build -t backend ./backend
```
Running Containers:
You can run individual containers using docker run or use Docker Compose to start all services:
```bash
docker-compose up
```
Use docker ps to list running containers, and docker logs <container_id> to check logs.
Stopping and Removing Containers:
Use docker stop <container_id> and docker rm <container_id> to stop and remove containers.
With Docker Compose: docker-compose down to stop and remove all services.
6. Dockerizing Databases
Running Databases in Docker:
You can easily run databases like PostgreSQL, MySQL, or MongoDB as Docker containers.
Example for PostgreSQL in docker-compose.yml:
```yaml
db:
  image: postgres
  environment:
    POSTGRES_USER: user
    POSTGRES_PASSWORD: password
    POSTGRES_DB: mydb
```
Persistent Storage with Docker Volumes:
Use Docker volumes to persist database data even when containers are stopped or removed:
```yaml
volumes:
  - db_data:/var/lib/postgresql/data
```
Define the volume at the bottom of the file:
```yaml
volumes:
  db_data:
```
Connecting Backend to Databases:
Your backend services can access databases via Docker networking. In the backend service, refer to the database by its service name (e.g., db).
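As a sketch of what that looks like in practice (the variable name and credentials are placeholders matching the earlier example), the backend can receive a connection string whose hostname is simply the db service name, which Docker's internal DNS resolves to the database container:

```yaml
services:
  backend:
    build:
      context: ./backend
    environment:
      # "db" is the compose service name, resolved by Docker's DNS
      DATABASE_URL: postgresql://user:password@db:5432/mydb
    depends_on:
      - db
```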
7. Continuous Integration and Deployment (CI/CD) with Docker
Setting Up a CI/CD Pipeline:
Use Docker in CI/CD pipelines to ensure consistency across environments.
Example: GitHub Actions or Jenkins pipeline using Docker to build and push images.
Example .github/workflows/docker.yml:
```yaml
name: CI/CD Pipeline
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2
      - name: Build Docker Image
        run: docker build -t myapp .
      - name: Push Docker Image
        run: docker push myapp
```
Automating Deployment:
Once images are built and pushed to a Docker registry (e.g., Docker Hub, Amazon ECR), they can be pulled into your production or staging environment.
8. Scaling Applications with Docker
Docker Swarm for Orchestration:
Docker Swarm is a native clustering and orchestration tool for Docker. You can scale your services by specifying the number of replicas.
Example:
```bash
docker service scale myapp=5
```
Kubernetes for Advanced Orchestration:
Kubernetes (K8s) is more complex but offers greater scalability and fault tolerance. It can manage Docker containers at scale.
Load Balancing and Service Discovery:
Use Docker Swarm or Kubernetes to automatically load balance traffic to different container replicas.
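In Swarm mode the replica count can also be declared in the compose file itself. A minimal sketch (note that the deploy key is honored by docker stack deploy, not by a plain docker-compose up):

```yaml
services:
  web:
    image: myapp
    deploy:
      replicas: 5
    ports:
      - "80:5000"   # Swarm's routing mesh balances traffic across replicas
```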
9. Best Practices
Optimizing Docker Images:
Use smaller base images (e.g., alpine images) to reduce image size.
Use multi-stage builds to avoid unnecessary dependencies in the final image.
Environment Variables and Secrets Management:
Store sensitive data like API keys or database credentials in Docker secrets or environment variables rather than hardcoding them.
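With Docker Compose, file-based secrets look roughly like this (the secret name and file path are placeholders):

```yaml
services:
  backend:
    build:
      context: ./backend
    secrets:
      - db_password   # mounted at /run/secrets/db_password in the container
secrets:
  db_password:
    file: ./db_password.txt   # keep this file out of version control
```

The application then reads the credential from /run/secrets/db_password instead of a hardcoded value.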
Logging and Monitoring:
Use tools like Docker’s built-in logging drivers, or integrate with ELK stack (Elasticsearch, Logstash, Kibana) for advanced logging.
For monitoring, tools like Prometheus and Grafana can be used to track Docker container metrics.
10. Conclusion
Why Use Docker in Full Stack Development? Docker simplifies the management of complex full-stack applications by ensuring consistent environments across all stages of development. It also offers significant performance benefits and scalability options.
Recommendations:
Integrate Docker with your CI/CD pipelines for automated builds and deployments.
Consider Docker for microservices architectures, since it makes scaling and managing individual services straightforward.
WEBSITE: https://www.ficusoft.in/full-stack-developer-course-in-chennai/
0 notes
xmanoel · 5 months ago
Text
Podman: Exporting/Importing Volumes
To be clear, what I explain here is for Podman (I assume Docker has something similar, but since it's paid now I no longer spend time on it or use it).
Here I'm referring to moving the container data that is not part of the container filesystem itself: exporting the container's filesystem is done with podman export, but the container's data (above all for a database) lives in an external volume. First, find out which volume that container uses:
```bash
podman container inspect nonprod
"Type": "volume",
"Name": "96ba830cfdc14f4758df5c7a06de5b716f4a415fecf1abdde3a27ebd989bd640",
"Source": "/home/user/.local/share/containers/storage/volumes/2d062d3174a4a694427da5c102edf1731c5ca9f20e8ee7b229e04d4cb4a5bc69/_data",
"Destination": "/var/lib/postgresql/data",
```
OK, so the volume is called "96ba830cfdc14f4758df5c7a06de5b716f4a415fecf1abdde3a27ebd989bd640".
We SSH into the machine:
```bash
podman machine ssh --username user
Connecting to vm podman-machine-default.
To close connection, use ~. or exit
Last login: Fri Jan 31 13:45:48 2025 from ::1
[user@MYCOMPUTER ~]$
```
We export the volume's contents to a tar file:
podman volume export 96ba830cfdc14f4758df5c7a06de5b716f4a415fecf1abdde3a27ebd989bd640 -o volume.tar
Since I'm doing this locally and there are no speed or space constraints, I leave it uncompressed and save myself a step. But if you want to share it over the network with another computer or a colleague, run it through gzip or bzip2.
Then I log out.
exit
Now I get the Podman machine's connection details so I can copy the file out with SCP.
```bash
podman system connection list
Name                         URI                                                          Identity                                                        Default  ReadWrite
podman-machine-default       ssh://[email protected]:56086/run/user/1000/podman/podman.sock  C:\Users\xmanoel.local\share\containers\podman\machine\machine  true     true
podman-machine-default-root  ssh://[email protected]:56086/run/podman/podman.sock            C:\Users\xmanoel.local\share\containers\podman\machine\machine  false    true
```
The important parts here are the identity file and the port to use for SCP.
scp -i C:\Users\xmanoel.local\share\containers\podman\machine\machine -P 56086 user@localhost:~/volume.tar .
Done, we now have it on the local machine. From here you can share it with another computer, send it over the network, whatever you like. Now, to create an empty container and use it to receive this volume's data...
```bash
podman run -d --name nonprod -p 5432:5432 postgres
podman stop nonprod
```
As you can see, I create it but stop it immediately. I don't think it's a good idea to have the container running while we are about to overwrite its data.
We check which volume was created for this container:

```bash
podman container inspect nonprod
```
```bash
"Type": "volume",
"Name": "2d062d3174a4a694427da5c102edf1731c5ca9f20e8ee7b229e04d4cb4a5bc69",
"Source": "/home/user/.local/share/containers/storage/volumes/2d062d3174a4a694427da5c102edf1731c5ca9f20e8ee7b229e04d4cb4a5bc69/_data",
"Destination": "/var/lib/postgresql/data",
```
In this case the volume is called "2d062d3174a4a694427da5c102edf1731c5ca9f20e8ee7b229e04d4cb4a5bc69". Now we do it in reverse. Once again we check the Podman port and identity file:
```bash
podman system connection list
Name                         URI                                                          Identity                                                        Default  ReadWrite
podman-machine-default       ssh://[email protected]:56086/run/user/1000/podman/podman.sock  C:\Users\xmanoel.local\share\containers\podman\machine\machine  true     true
podman-machine-default-root  ssh://[email protected]:56086/run/podman/podman.sock            C:\Users\xmanoel.local\share\containers\podman\machine\machine  false    true
```
In this case this step wasn't necessary because, as you can see, I'm copying back onto my own machine; I'm just doing it as a demonstration. In your case you would do this on the other machine and get different values. We copy the tar inside, which is the inverse of the earlier step:
```bash
scp -i C:\Users\xmanoel.local\share\containers\podman\machine\machine -P 56086 volume.tar user@localhost:~/
```

And we go in again over SSH.
```bash
podman machine ssh --username user
Connecting to vm podman-machine-default.
To close connection, use ~. or exit
Last login: Fri Jan 31 16:21:41 2025 from ::1
[user@HEREWEARE ~]$
```
And now we simply import the tar's contents into the volume. Be careful here: this will overwrite whatever the volume held before. That's why, if you recall, we created a new container a moment ago, so as not to break anything existing. If you deliberately want to reuse an existing volume, that's entirely up to you.
podman volume import 2d062d3174a4a694427da5c102edf1731c5ca9f20e8ee7b229e04d4cb4a5bc69 volume.tar
Now we can log out:
exit
And start the container we created earlier. It will now read the imported volume, so the data that was in the original container will be inside.
podman start nonprod
And that's it. I hope you find it useful.
0 notes
jay3thearduinohobbyist · 7 months ago
Video
youtube
Postgres and pgAdmin in Docker
0 notes
er-10-media · 9 months ago
Text
How to become a mid-level Java developer faster: a developer's advice
New post has been published on https://er10.kz/read/kak-bystree-stat-midlom-v-java-sovety-razrabotchika/
At an open day at DAR University, senior software engineer Madi Kinzheev shared advice for beginning IT specialists on how to raise their grade, that is, their professional level, faster. Madi himself works with the following technologies:
Java 8, 11, 17
Spring Framework
PostgreSQL
MongoDB
Redis
Git
Docker
Kubernetes
Kafka
RabbitMQ
He has built backends for the mobile apps Sber Kz, Jusan Business, and HalykMarket, and is currently helping develop Darlean, an ERP system for business automation. A disclaimer: these are the personal recommendations of the article's subject, based on his own experience.
How do juniors, mids, and seniors differ from one another?
A junior works with the support of other developers and knows the basics of the language, Java syntax, and the Spring Boot framework;
A mid works independently, occasionally turning to seniors for advice or help. They write readable, reasonably optimized code, know and use the finer points of the language, and can work with a database and write complex queries;
A senior can work in a team, designs workflows, sets coding standards, and watches over code quality. They run code reviews, design application architecture, and have mastered their technologies.
So, how do you become a mid faster?
Which technologies to learn first
Java 8: one of the most popular versions of the language in Kazakhstan;
Spring Boot: a framework for building applications;
Postgres: a relational database;
MongoDB: a non-relational database.
Logging systems for debugging applications:
Grafana: a data-visualization tool. It helps build charts and graphs from data, including logs, which makes problems easier to analyze;
ElasticSearch: a search engine that helps you quickly find the data you need among your logs. It is especially useful when there are many logs and you need to locate specific information fast;
Git: a version-control system;
Docker: a system for deploying applications to a remote server.
How do you get your first job?
Five and a half years ago I was looking for my first job, and internships helped me get it. If it comes to it, I even recommend working for free: it is the simplest way to learn what is current in the technology market. At this stage your main goal should be gaining experience.
To get an offer, set aside time to write a cover letter, written specifically for the company you have chosen. Study the company. Don't be afraid of spending time on cover letters: the market is currently flooded with juniors who send AI-generated resumes to every company. Show how you stand out from the other candidates.
Take on as many tasks as you can
Do a variety of tasks to gain the widest experience. At this stage, don't dig deep into each one; instead, get to know as many tools as possible. Going deep into technologies starts to make sense once you are already a mid.
There are a great many technologies in development, and nobody expects you to master each one. Learn the most in-demand tools one after another.
Be an engineer, not just a developer
It's true that Docker is often used by DevOps engineers, the specialists responsible for keeping software running and automating every stage of development. But it is a misconception that DevOps will do all that work for you.
You should move beyond the "Java developer" label and strive to become an engineer. At a minimum, learn to connect to servers remotely and to check logs (action journals) in microservices. That will let you work out on your own why a program won't start or keeps restarting; otherwise all of those processes will feel like science fiction.
Learn more than one programming language
You need to be able to adapt, because companies can switch programming languages. Even if your company never changes its development language, it may decide to write a new service in another language, such as Go.
Write empathetic code
Try to write code that other people will understand. A developer spends about seventy percent of their working time reading code, not writing it. You will have to read plenty of good and bad code alike: stock up on patience and learn from the best.
Don't be afraid of "galleys"
In development there is a scary myth about "galleys": outsourcing companies that build software for other organizations, so that what is sold on the market is not a finished product but the developers' labor. Such companies are called galleys because of their punishing working conditions.
I think it's a myth. Outsourcing gives you more opportunities to grow and more freedom than, say, a large company. You are allowed to make mistakes, and that teaches you what not to do.
Develop soft skills
For a mid, unlike a junior, soft skills matter far more. That means knowing how to work with other specialists, not only developers: a team includes product managers, business and systems analysts, designers, testers, and more.
Right now I am building functionality for the Darlean.kz project, a business-management platform made up of more than 30 tools, including a digital office, project and process management, and electronic document flow. Working on such a large project demands tight coordination among everyone on the team, to guarantee that decisions made in one tool don't conflict with the others. That is why soft skills such as effective communication and the ability to listen and to offer constructive ideas play an important role in reaching the shared goal: bringing big-business technology to small and medium-sized businesses.
0 notes
codeonedigest · 2 years ago
Text
youtube
0 notes
suncloudvn · 9 months ago
Text
Using Docker Compose to set up NetBox + Nginx as a reverse proxy
Docker Compose is a powerful tool that makes deploying multi-container applications easy and efficient. In this article we will look at how to configure a NetBox application with Nginx as a reverse proxy using Docker Compose. The deployment model includes several containers (NetBox, PostgreSQL, Redis, and Nginx) so the application runs smoothly with optimal performance. Let's walk through the detailed steps and the customizations needed to install it successfully on a Docker host.
1. The docker compose model I built
This model shows how a NetBox application is structured when deployed on Docker, with several containers (NetBox, PostgreSQL, Redis, Nginx) working together over an internal network to provide a fully functional network-management application.
Docker Host:
This is the physical or virtual machine where Docker is installed and running. All the containers run inside this host.
Netbox_net (network):
This is the internal Docker network that connects the containers to each other. The containers communicate over this network.
NetBox container:
This is the main container holding the NetBox application (a network-management tool). NetBox uses services from the other containers, such as PostgreSQL and Redis, to operate.
PostgreSQL container:
This container holds the PostgreSQL database. NetBox stores its data in this database.
Redis container:
Redis is an in-memory cache. The Redis container is used by NetBox to improve performance by holding temporary data.
Nginx container:
Nginx is a web server used to handle requests from users to NetBox. This container listens for HTTP/HTTPS requests and forwards them to the corresponding NetBox components.
Exposed ports:
Ports 80 and 443 of the Nginx container are opened to the outside network, letting users reach the NetBox application from a web browser over HTTP or HTTPS.
ens160 is the Docker host's network interface, which lets the containers connect to the external network.
2. Using Docker Compose
First, download this repo. Note: you must move it to the /opt directory, otherwise the activation script may fail.
cd /opt/
git clone https://github.com/thanhquang99/Docker
Next, run the docker compose file:
cd /opt/Docker/netbox/
docker compose up
You can adjust the variables in the docker compose file to change the NetBox or Postgres user and password:
vi /opt/Docker/netbox/docker-compose.yml
Wait about 5 minutes for docker compose to finish, then open a new terminal (Ctrl+Shift+U) to run the activation step, which creates a superuser and configures Nginx as a reverse proxy.
cd /opt/Docker/netbox/
chmod +x active.sh
. active.sh
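The active.sh script generates the Nginx reverse-proxy configuration for you. As a rough illustration only (not the script's actual output), such a server block typically looks like this; the certificate paths and upstream port are assumptions.

```nginx
# Hypothetical reverse-proxy sketch; paths and ports are assumed.
server {
    listen 443 ssl;
    server_name quang.netbox.com;

    ssl_certificate     /etc/nginx/ssl/netbox.crt;   # assumed path
    ssl_certificate_key /etc/nginx/ssl/netbox.key;   # assumed path

    location / {
        proxy_pass http://127.0.0.1:8000;            # assumed NetBox port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```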
Now enter the requested information at the prompts (following the suggested syntax): the NetBox domain name, Gmail address, and the NetBox user and password.
Now just wait for the process to complete. If you forget the information afterwards, you can still look it up:
root@Quang-docker:~# cat thongtin.txt
Edit your hosts file to: 172.16.66.41 quang.netbox.com
NetBox access link: https://quang.netbox.com
NetBox user: admin
NetBox password: fdjhuixtyy5dpasfn
NetBox mail: [email protected]
Summary
Above is my guide to building NetBox with Docker and Nginx quickly, with just a few commands that anyone can run. The previous post showed how to build it the way the developers recommend, without Nginx; you can review it there. In this post I added Nginx and SSL on top. SSL encrypts the traffic, and that encryption is the first security step in defending against network attacks.
Source: https://suncloud.vn/su-dung-docker-compose-cau-hinh-netbox-nginx-lam-reverse-proxy
0 notes
4nild0 · 1 year ago
Text
Learn a new language!
I've been working for a while with the classic LAMP/WAMP web stack, and I feel very comfortable with it. PHP has evolved enormously, Composer is a dependency manager that never lets you down, and the frameworks Laravel, Symfony, and Swoole (the ones I use most, no disrespect to the others) are a masterclass in keeping up with the language and writing decent documentation that thinks about the dev.
Still, just for fun, I'm exploring new tools.
My last jobs were in logistics, so I decided to build an orange-trading ecosystem, which boils down to writing several bots that generate data covering the harvest, trade, and waste disposal of oranges.
For this I'm using Java, Clojure, Dart, PostgreSQL, and NGINX. Think of it as a bold adventure.
I started with Java, creating bots that generate orange-harvest data every hour. In Java you can choose between Maven and Gradle (I chose Maven). When I went to bring the application up in Docker, it took me hours to discover that the JDBC/Postgres dependencies were not being installed into my project folder, and I had to configure a Maven plugin for that, something you don't need in PHP. I also had trouble with the Postgres host: Postgres answered at one host inside Docker and another outside, so I couldn't just use "localhost" (inside a Docker network, containers reach each other by service name rather than localhost). In the end it worked.
As for the language itself, the migration isn't hard: just drop the dollar signs from the variables and add types to everything. My code became poorer, but with more class; that's fine, I accept the change.
Learning a new stack is making me realize how hard simple things can be, bringing back the feeling of making progress as a learner and showing how good it is to grow.
I'm really excited!
#php #java
1 note · View note
qcs01 · 1 year ago
Text
Managing Containerized Applications Using Ansible: A Guide for College Students and Working Professionals
As containerization becomes a cornerstone of modern application deployment, managing containerized applications effectively is crucial. Ansible, a powerful automation tool, provides robust capabilities for managing these containerized environments. This blog post will guide you through the process of managing containerized applications using Ansible, tailored for both college students and working professionals.
What is Ansible?
Ansible is an open-source automation tool that simplifies configuration management, application deployment, and task automation. It's known for its agentless architecture, ease of use, and powerful features, making it ideal for managing containerized applications.
Why Use Ansible for Container Management?
Consistency: Ensure that container configurations are consistent across different environments.
Automation: Automate repetitive tasks such as container deployment, scaling, and monitoring.
Scalability: Manage containers at scale, across multiple hosts and environments.
Integration: Seamlessly integrate with CI/CD pipelines, monitoring tools, and other infrastructure components.
Prerequisites
Before you start, ensure you have the following:
Ansible installed on your local machine.
Docker installed on the target hosts.
Basic knowledge of YAML and Docker.
Setting Up Ansible
Install Ansible on your local machine:
pip install ansible
Basic Concepts
Inventory
An inventory file lists the hosts and groups of hosts that Ansible manages. Here's a simple example:
[containers]
host1.example.com
host2.example.com
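An inventory can also group hosts and attach variables to a group. A small illustrative sketch (the hostnames and the ansible_user value are placeholders, not taken from this setup):

```ini
[web]
web1.example.com
web2.example.com

[db]
db1.example.com

[containers:children]
web
db

[containers:vars]
ansible_user=deploy
```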
Playbooks
Playbooks define the tasks to be executed on the managed hosts. Below is an example of a playbook to manage Docker containers.
Example Playbook: Deploying a Docker Container
Let's start with a simple example of deploying an NGINX container using Ansible.
Step 1: Create the Inventory File
Create a file named inventory:
[containers]
localhost ansible_connection=local
Step 2: Create the Playbook
Create a file named deploy_nginx.yml:
- name: Deploy NGINX container
  hosts: containers
  become: yes
  tasks:
    - name: Install Docker
      apt:
        name: docker.io
        state: present
      when: ansible_os_family == "Debian"

    - name: Ensure Docker is running
      service:
        name: docker
        state: started
        enabled: yes

    - name: Pull NGINX image
      docker_image:
        name: nginx
        source: pull

    - name: Run NGINX container
      docker_container:
        name: nginx
        image: nginx
        state: started
        ports:
          - "80:80"
Step 3: Run the Playbook
Execute the playbook using the following command:
ansible-playbook -i inventory deploy_nginx.yml
Advanced Topics
Managing Multi-Container Applications
For more complex applications, such as those defined by Docker Compose, you can manage multi-container setups with Ansible.
Example: Deploying a Docker Compose Application
Create a Docker Compose file docker-compose.yml:
version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
Create an Ansible playbook deploy_compose.yml:
- name: Deploy Docker Compose application
  hosts: containers
  become: yes
  tasks:
    - name: Install Docker
      apt:
        name: docker.io
        state: present
      when: ansible_os_family == "Debian"

    - name: Install Docker Compose
      get_url:
        url: "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-{{ ansible_system }}-{{ ansible_architecture }}"
        dest: /usr/local/bin/docker-compose
        mode: '0755'

    - name: Create Docker Compose file
      copy:
        dest: /opt/docker-compose.yml
        content: |
          version: '3'
          services:
            web:
              image: nginx
              ports:
                - "80:80"
            db:
              image: postgres
              environment:
                POSTGRES_PASSWORD: example

    - name: Run Docker Compose
      command: docker-compose -f /opt/docker-compose.yml up -d
Run the playbook:
ansible-playbook -i inventory deploy_compose.yml
Integrating Ansible with CI/CD
Ansible can be integrated into CI/CD pipelines for continuous deployment of containerized applications. Tools like Jenkins, GitLab CI, and GitHub Actions can trigger Ansible playbooks to deploy containers whenever new code is pushed.
Example: Using GitHub Actions
Create a GitHub Actions workflow file .github/workflows/deploy.yml:
name: Deploy with Ansible
on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Ansible
        run: sudo apt update && sudo apt install -y ansible

      - name: Run Ansible playbook
        run: ansible-playbook -i inventory deploy_compose.yml
Conclusion
Managing containerized applications with Ansible streamlines the deployment and maintenance processes, ensuring consistency and reliability. Whether you're a college student diving into DevOps or a working professional seeking to enhance your automation skills, Ansible provides the tools you need to efficiently manage your containerized environments.
For more details, visit www.qcsdclabs.com
0 notes
archaeopath · 1 year ago
Text
Updating a Tiny Tiny RSS install behind a reverse proxy
Screenshot of my Tiny Tiny RSS install on May 7th 2024 after a long struggle with 502 errors.
I had a hard time updating my Tiny Tiny RSS instance, which runs as a Docker container behind Nginx as a reverse proxy. I hit a lot of nasty 502 errors because the container did not return proper data to Nginx. I fixed it in the following manner.
First I deleted all the containers and images:
docker rm -vf $(docker ps -aq)
docker rmi -f $(docker images -aq)
docker system prune -af
Attention! This deletes all Docker images, even those not related to Tiny Tiny RSS (no problem in my case); only the persistent volumes are kept. If you want to keep other images, you have to remove the Tiny Tiny RSS ones separately.
The second issue is simple, and not really one for me: the Tiny Tiny RSS docs still call Docker Compose with a hyphen ($ docker-compose version). This is not valid for up-to-date installs, where the hyphen has to be omitted: $ docker compose version.
The third and biggest issue is that the Git Tiny Tiny RSS repository for Docker Compose does not exist anymore. The files have to be pulled from the master branch of the main repository https://git.tt-rss.org/fox/tt-rss.git/. The docker-compose.yml has to be changed afterwards, since the one in the repository is for development purposes only.
The PostgreSQL database is located in a persistent volume, and it is not possible to install a newer PostgreSQL major version on top of it. You therefore have to edit docker-compose.yml and change the database image from image: postgres:15-alpine to image: postgres:12-alpine.
Finally, the data in the PostgreSQL volume were owned by a user named 70. Change the owner to root.
Now my Tiny Tiny RSS runs again as expected.
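As a sketch, the relevant part of the Compose file looks roughly like this; the service and volume names are assumptions based on the post, not the upstream file, so check your own docker-compose.yml.

```yaml
# Hypothetical excerpt from the tt-rss docker-compose.yml; service and
# volume names are assumed.
services:
  db:
    # Keep the major version that matches the existing data volume:
    # a postgres:15 image will not start on a volume initialized by v12.
    image: postgres:12-alpine
    volumes:
      - db:/var/lib/postgresql/data
```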
0 notes