#postgres docker image
reasonableapproximation · 2 years ago
Text
A thing I've been looking into at work lately is collation, and specifically sorting. We want to compare in-memory implementations of things to postgres implementations, which means we need to reproduce postgres sorting in Haskell. Man it's a mess.
By default postgres uses glibc to sort. So we can use the FFI to reproduce it.
This mostly works fine, except if the locale says two things compare equal, postgres falls back to byte-comparing them. Which is also fine I guess, we can implement that too, but ugh.
Except also, this doesn't work for the mac user, so they can't reproduce test failures in the test suite we implemented this in.
How does postgres do sorting on mac? Not sure.
So we figured we'd use libicu for sorting. Postgres supports that, haskell supports it (through text-icu), should be fine. I'm starting off with a case-insensitive collation.
In postgres, you specify a collation through a string like en-u-ks-level2-numeric-true. (Here, en is a language, u is a separator, ks and numeric are keys and level2 and true are values. Some keys take multiple values, so you just have to know which strings are keys I guess?) In Haskell you can do it through "attributes" or "rules". Attributes are type safe but don't support everything you might want to do with locales. Rules are completely undocumented in text-icu, you pass in a string and it parses it. I'm pretty sure the parsing is implemented in libicu itself but it would be nice if text-icu gave you even a single example of what they look like.
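(For reference, the postgres side of that spec is a one-liner; a sketch of creating the case-insensitive collation, following the pattern in the postgres docs — the collation name is mine:

psql -c "CREATE COLLATION case_insensitive (provider = icu, locale = 'en-u-ks-level2', deterministic = false);"
psql -c "SELECT 'Abc' = 'abc' COLLATE case_insensitive;"  # true: level2 ignores case

Note that deterministic = false is what stops postgres from doing the byte-compare tiebreak on ties.)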
But okay, I've got a locale in Haskell that I think should match the postgres one. Does it? Lolno
So there's a function collate for "compare these two strings in this locale", and a function sortKey for "get the sort key of this string in this locale". It should be that "collate l a b" is the same as "compare (sortKey l a) (sortKey l b)", but there are subtle edge cases where this isn't the case, like for example when a is the empty string and b is "\0". Or any string whose characters are all drawn from a set that includes NUL, lots of other control codes, and a handful of characters somewhere in the Arabic block. In these cases, collate says they're equal but sortKey says the empty string is smaller. But pg gets the same results as collate so fine, go with that.
Also seems like text-icu and pg disagree on which blocks get sorted before which other blocks, or something? At any rate I found a lot of pairs of (latin, non-latin) where text-icu sorts the non-latin first and pg sorts it second. So far I've solved this by just saying "only generate characters in the basic multilingual plane, and ignore anything in (long list of blocks)".
(Collations have an option for choosing which order blocks get sorted in, but it's not available with attributes. I haven't bothered to try it with rules, or with the format pg uses to specify them.)
I wonder how much of this is to do with using different versions of libicu. For Haskell we use a nix shell, which is providing version 72.1. Our postgres comes from a docker image and is using 63.1. When I install libicu on our CI images, they get 67.1 (and they can't reproduce the collate/sortKey bug with the arabic characters, so fine, remove them from the test set).
(I find out version numbers locally by doing lsof and seeing that the files are named like .so.63.1. Maybe ldd would work too? But because pg is in docker I don't know where the binary is. On CI I just look at the install logs.)
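(For the docker case, one option is running ldd inside the container, where the binary actually lives; a sketch, container name hypothetical:

docker exec my-postgres sh -c 'ldd "$(command -v postgres)" | grep -i icu'

command -v finds the postgres binary on the container's own PATH, so you never need to know the host-side location.)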
I wonder if I can get 63.1 in our nix shell. No, node doesn't support below 69. Fine, let's try 69. Did you know chromium depends on libicu? My laptop's been compiling chromium for many hours now.
7 notes
souhaillaghchimdev · 2 months ago
Text
Using Docker in Software Development
Docker has become a vital tool in modern software development. It allows developers to package applications with all their dependencies into lightweight, portable containers. Whether you're building web applications, APIs, or microservices, Docker can simplify development, testing, and deployment.
What is Docker?
Docker is an open-source platform that enables you to build, ship, and run applications inside containers. Containers are isolated environments that contain everything your app needs—code, libraries, configuration files, and more—ensuring consistent behavior across development and production.
Why Use Docker?
Consistency: Run your app the same way in every environment.
Isolation: Avoid dependency conflicts between projects.
Portability: Docker containers work on any system that supports Docker.
Scalability: Easily scale containerized apps using orchestration tools like Kubernetes.
Faster Development: Spin up and tear down environments quickly.
Basic Docker Concepts
Image: A read-only template used to create containers. Think of it like a blueprint.
Container: A running instance of an image.
Dockerfile: A text file with instructions to build an image.
Volume: A persistent data storage system for containers.
Docker Hub: A cloud-based registry for storing and sharing Docker images.
Example: Dockerizing a Simple Python App
Let’s say you have a Python app called app.py:

# app.py
print("Hello from Docker!")
Create a Dockerfile:

# Dockerfile
FROM python:3.10-slim
COPY app.py .
CMD ["python", "app.py"]
Then build and run your Docker container:

docker build -t hello-docker .
docker run hello-docker
This will print Hello from Docker! in your terminal.
Popular Use Cases
Running databases (MySQL, PostgreSQL, MongoDB)
Hosting development environments
CI/CD pipelines
Deploying microservices
Local testing for APIs and apps
Essential Docker Commands
docker build -t <name> . — Build an image from a Dockerfile
docker run <image> — Run a container from an image
docker ps — List running containers
docker stop <container_id> — Stop a running container
docker exec -it <container_id> bash — Access the container shell
Docker Compose
Docker Compose allows you to run multi-container apps easily. Define all your services in a single docker-compose.yml file and launch them with one command:

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  db:
    image: postgres
Start everything with:

docker-compose up
Best Practices
Use lightweight base images (e.g., Alpine)
Keep your Dockerfiles clean and minimal
Ignore unnecessary files with .dockerignore
Use multi-stage builds for smaller images (see the sketch after this list)
Regularly clean up unused images and containers
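To make the multi-stage point concrete, here is a minimal sketch written as a shell heredoc (image tags, the app name, and the dist output path are placeholders): a heavy image does the build, and only the build output is copied into a slim runtime image.

cat > Dockerfile <<'EOF'
# Stage 1: build with the full toolchain
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only the build output on a lightweight base
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EOF
docker build -t myapp-small .

The final image never contains node_modules or the compiler toolchain, which is where most of the size savings come from.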
Conclusion
Docker empowers developers to work smarter, not harder. It eliminates "it works on my machine" problems and simplifies the development lifecycle. Once you start using Docker, you'll wonder how you ever lived without it!
0 notes
learning-code-ficusoft · 4 months ago
Text
Using Docker for Full Stack Development and Deployment
1. Introduction to Docker
What is Docker? Docker is an open-source platform that automates the deployment, scaling, and management of applications inside containers. A container packages your application and its dependencies, ensuring it runs consistently across different computing environments.
Containers vs Virtual Machines (VMs)
Containers are lightweight and use fewer resources than VMs because they share the host operating system’s kernel, while VMs simulate an entire operating system. Containers are more efficient and easier to deploy.
Docker containers provide faster startup times, less overhead, and portability across development, staging, and production environments.
Benefits of Docker in Full Stack Development
Portability: Docker ensures that your application runs the same way regardless of the environment (dev, test, or production).
Consistency: Developers can share Dockerfiles to create identical environments for different developers.
Scalability: Docker containers can be quickly replicated, allowing your application to scale horizontally without a lot of overhead.
Isolation: Docker containers provide isolated environments for each part of your application, ensuring that dependencies don’t conflict.
2. Setting Up Docker for Full Stack Applications
Installing Docker and Docker Compose
Docker can be installed on any system (Windows, macOS, Linux), with Docker Compose (which simplifies multi-container management) installed alongside it.
Commands:
docker --version to check the installed Docker version.
docker-compose --version to check the Docker Compose version.
Setting Up Project Structure
Organize your project into different directories (e.g., /frontend, /backend, /db).
Each service will have its own Dockerfile and configuration file for Docker Compose.
3. Creating Dockerfiles for Frontend and Backend
Dockerfile for the Frontend:
For a React/Angular app:

FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
This Dockerfile installs Node.js dependencies, copies the application, exposes the appropriate port, and starts the server.
Dockerfile for the Backend:
For a Python Flask app:

FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
For a Java Spring Boot app:

FROM openjdk:11
WORKDIR /app
COPY target/my-app.jar my-app.jar
EXPOSE 8080
CMD ["java", "-jar", "my-app.jar"]
This Dockerfile installs the necessary dependencies, copies the code, exposes the necessary port, and runs the app.
4. Docker Compose for Multi-Container Applications
What is Docker Compose? Docker Compose is a tool for defining and running multi-container Docker applications. With a docker-compose.yml file, you can configure services, networks, and volumes.
docker-compose.yml Example:
version: "3"
services:
  frontend:
    build:
      context: ./frontend
    ports:
      - "3000:3000"
  backend:
    build:
      context: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
This YAML file defines three services: frontend, backend, and a PostgreSQL database. It also sets up networking and environment variables.
5. Building and Running Docker Containers
Building Docker Images:
Use docker build -t <image_name> <path> to build images.
For example:
docker build -t frontend ./frontend
docker build -t backend ./backend
Running Containers:
You can run individual containers using docker run or use Docker Compose to start all services:
docker-compose up
Use docker ps to list running containers, and docker logs <container_id> to check logs.
Stopping and Removing Containers:
Use docker stop <container_id> and docker rm <container_id> to stop and remove containers.
With Docker Compose: docker-compose down to stop and remove all services.
6. Dockerizing Databases
Running Databases in Docker:
You can easily run databases like PostgreSQL, MySQL, or MongoDB as Docker containers.
Example for PostgreSQL in docker-compose.yml:
db:
  image: postgres
  environment:
    POSTGRES_USER: user
    POSTGRES_PASSWORD: password
    POSTGRES_DB: mydb
Persistent Storage with Docker Volumes:
Use Docker volumes to persist database data even when containers are stopped or removed:
volumes:
  - db_data:/var/lib/postgresql/data
Define the volume at the bottom of the file:
volumes:
  db_data:
Connecting Backend to Databases:
Your backend services can access databases via Docker networking. In the backend service, refer to the database by its service name (e.g., db).
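For example, with the compose file above, a quick connectivity check from the backend container could look like this (a sketch; it assumes the psql client exists in the backend image):

docker compose exec backend sh -c 'psql postgresql://user:password@db:5432/mydb -c "SELECT 1"'

The hostname db resolves inside the compose network because it is the service name, not a real DNS entry.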
7. Continuous Integration and Deployment (CI/CD) with Docker
Setting Up a CI/CD Pipeline:
Use Docker in CI/CD pipelines to ensure consistency across environments.
Example: GitHub Actions or Jenkins pipeline using Docker to build and push images.
Example .github/workflows/docker.yml:
name: CI/CD Pipeline
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2
      - name: Build Docker Image
        run: docker build -t myapp .
      - name: Push Docker Image
        run: docker push myapp
Automating Deployment:
Once images are built and pushed to a Docker registry (e.g., Docker Hub, Amazon ECR), they can be pulled into your production or staging environment.
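A sketch of that pull-and-restart step on the target host (image name, container name, and port mapping are placeholders):

docker pull myapp:latest
docker stop myapp || true   # ignore errors if no old container exists
docker rm myapp || true
docker run -d --name myapp -p 80:8080 myapp:latest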
8. Scaling Applications with Docker
Docker Swarm for Orchestration:
Docker Swarm is a native clustering and orchestration tool for Docker. You can scale your services by specifying the number of replicas.
Example:
docker service scale myapp=5
Kubernetes for Advanced Orchestration:
Kubernetes (K8s) is more complex but offers greater scalability and fault tolerance. It can manage Docker containers at scale.
Load Balancing and Service Discovery:
Use Docker Swarm or Kubernetes to automatically load balance traffic to different container replicas.
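A sketch of the Swarm flavor: publishing a port on a replicated service makes Swarm's routing mesh spread incoming requests across the replicas automatically (names and ports are placeholders):

docker swarm init
docker service create --name web --replicas 3 -p 80:80 nginx
docker service scale web=5   # traffic on port 80 is balanced across all 5 replicas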
9. Best Practices
Optimizing Docker Images:
Use smaller base images (e.g., alpine images) to reduce image size.
Use multi-stage builds to avoid unnecessary dependencies in the final image.
Environment Variables and Secrets Management:
Store sensitive data like API keys or database credentials in Docker secrets or environment variables rather than hardcoding them.
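One common pattern, sketched with placeholder names: keep credentials in a git-ignored env file and inject it at run time.

cat > .env <<'EOF'
DB_PASSWORD=s3cret
EOF
docker run -d --env-file .env myapp   # each line of .env becomes an env var in the container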
Logging and Monitoring:
Use tools like Docker’s built-in logging drivers, or integrate with ELK stack (Elasticsearch, Logstash, Kibana) for advanced logging.
For monitoring, tools like Prometheus and Grafana can be used to track Docker container metrics.
10. Conclusion
Why Use Docker in Full Stack Development? Docker simplifies the management of complex full-stack applications by ensuring consistent environments across all stages of development. It also offers significant performance benefits and scalability options.
Recommendations:
Integrate Docker with your CI/CD pipelines for automated builds and deployments.
Consider Docker for microservices architectures, where it makes scaling and managing individual services straightforward.
WEBSITE: https://www.ficusoft.in/full-stack-developer-course-in-chennai/
0 notes
qcs01 · 1 year ago
Text
Managing Containerized Applications Using Ansible: A Guide for College Students and Working Professionals
As containerization becomes a cornerstone of modern application deployment, managing containerized applications effectively is crucial. Ansible, a powerful automation tool, provides robust capabilities for managing these containerized environments. This blog post will guide you through the process of managing containerized applications using Ansible, tailored for both college students and working professionals.
What is Ansible?
Ansible is an open-source automation tool that simplifies configuration management, application deployment, and task automation. It's known for its agentless architecture, ease of use, and powerful features, making it ideal for managing containerized applications.
Why Use Ansible for Container Management?
Consistency: Ensure that container configurations are consistent across different environments.
Automation: Automate repetitive tasks such as container deployment, scaling, and monitoring.
Scalability: Manage containers at scale, across multiple hosts and environments.
Integration: Seamlessly integrate with CI/CD pipelines, monitoring tools, and other infrastructure components.
Prerequisites
Before you start, ensure you have the following:
Ansible installed on your local machine.
Docker installed on the target hosts.
Basic knowledge of YAML and Docker.
Setting Up Ansible
Install Ansible on your local machine:
pip install ansible
Basic Concepts
Inventory
An inventory file lists the hosts and groups of hosts that Ansible manages. Here's a simple example:
[containers]
host1.example.com
host2.example.com
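Before writing playbooks, you can sanity-check that Ansible reaches these hosts with an ad-hoc ping (this assumes SSH access to the hosts is already configured):

ansible containers -i inventory -m ping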
Playbooks
Playbooks define the tasks to be executed on the managed hosts. Below is an example of a playbook to manage Docker containers.
Example Playbook: Deploying a Docker Container
Let's start with a simple example of deploying an NGINX container using Ansible.
Step 1: Create the Inventory File
Create a file named inventory:
[containers]
localhost ansible_connection=local
Step 2: Create the Playbook
Create a file named deploy_nginx.yml:
---
- name: Deploy NGINX container
  hosts: containers
  become: yes
  tasks:
    - name: Install Docker
      apt:
        name: docker.io
        state: present
      when: ansible_os_family == "Debian"

    - name: Ensure Docker is running
      service:
        name: docker
        state: started
        enabled: yes

    - name: Pull NGINX image
      docker_image:
        name: nginx
        source: pull

    - name: Run NGINX container
      docker_container:
        name: nginx
        image: nginx
        state: started
        ports:
          - "80:80"
Step 3: Run the Playbook
Execute the playbook using the following command:
ansible-playbook -i inventory deploy_nginx.yml
Advanced Topics
Managing Multi-Container Applications
For more complex applications, such as those defined by Docker Compose, you can manage multi-container setups with Ansible.
Example: Deploying a Docker Compose Application
Create a Docker Compose file docker-compose.yml:
version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
Create an Ansible playbook deploy_compose.yml:
---
- name: Deploy Docker Compose application
  hosts: containers
  become: yes
  tasks:
    - name: Install Docker
      apt:
        name: docker.io
        state: present
      when: ansible_os_family == "Debian"

    - name: Install Docker Compose
      get_url:
        url: https://github.com/docker/compose/releases/download/1.29.2/docker-compose-{{ ansible_system }}-{{ ansible_architecture }}
        dest: /usr/local/bin/docker-compose
        mode: '0755'

    - name: Create Docker Compose file
      copy:
        dest: /opt/docker-compose.yml
        content: |
          version: '3'
          services:
            web:
              image: nginx
              ports:
                - "80:80"
            db:
              image: postgres
              environment:
                POSTGRES_PASSWORD: example

    - name: Run Docker Compose
      command: docker-compose -f /opt/docker-compose.yml up -d
Run the playbook:
ansible-playbook -i inventory deploy_compose.yml
Integrating Ansible with CI/CD
Ansible can be integrated into CI/CD pipelines for continuous deployment of containerized applications. Tools like Jenkins, GitLab CI, and GitHub Actions can trigger Ansible playbooks to deploy containers whenever new code is pushed.
Example: Using GitHub Actions
Create a GitHub Actions workflow file .github/workflows/deploy.yml:
name: Deploy with Ansible

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Ansible
        run: sudo apt update && sudo apt install -y ansible
      - name: Run Ansible playbook
        run: ansible-playbook -i inventory deploy_compose.yml
Conclusion
Managing containerized applications with Ansible streamlines the deployment and maintenance processes, ensuring consistency and reliability. Whether you're a college student diving into DevOps or a working professional seeking to enhance your automation skills, Ansible provides the tools you need to efficiently manage your containerized environments.
For more details click www.qcsdclabs.com
0 notes
archaeopath · 1 year ago
Text
Updating a Tiny Tiny RSS install behind a reverse proxy
Screenshot of my Tiny Tiny RSS install on May 7th, 2024, after a long struggle with 502 errors.

I had a hard time updating my Tiny Tiny RSS instance running as a Docker container behind Nginx as reverse proxy. I experienced a lot of nasty 502 errors because the container did not return proper data to Nginx. I fixed it in the following manner.

First I deleted all the containers and images:

docker rm -vf $(docker ps -aq)
docker rmi -f $(docker images -aq)
docker system prune -af

Attention! This deletes all Docker images, even those not related to Tiny Tiny RSS — no problem in my case, and it keeps the persistent volumes. If you want to keep other images, you have to remove the Tiny Tiny RSS ones separately.

The second issue is simple and not really one for me: the Tiny Tiny RSS docs still call Docker Compose with a hyphen ($ docker-compose version). This is not valid for up-to-date installs, where the hyphen has to be omitted ($ docker compose version).

The third and biggest issue is that the Tiny Tiny RSS Git repository for Docker Compose does not exist anymore. The files have to be pulled from the master branch of the main repository, https://git.tt-rss.org/fox/tt-rss.git/. The docker-compose.yml has to be changed afterwards, since the one in the repository is for development purposes only.

The PostgreSQL database is located in a persistent volume. It is not possible to install a newer PostgreSQL version over it, so you have to edit the docker-compose.yml and change the database image from image: postgres:15-alpine to image: postgres:12-alpine.

And then the data in the PostgreSQL volume were owned by a user named 70; change the ownership to root. Now my Tiny Tiny RSS runs again as expected.
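For reference, that ownership fix can be done from a throwaway container; a sketch that assumes the volume is named ttrss-db (check docker volume ls for the real name):

docker run --rm -v ttrss-db:/var/lib/postgresql/data alpine chown -R root:root /var/lib/postgresql/data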
0 notes
a2misocialstatus · 2 years ago
Text
October 20 Software Upgrades
On Friday, October 20, beginning at 9am Eastern Time, a2mi.social will undergo software upgrades which will require taking the site offline.
The site will be back online no later than 4pm Friday, and it may be intermittently available during the maintenance window.
Highlights for this maintenance window:
A2mi.social will be upgraded to Mastodon 4.2.1, which brings a number of major improvements, including a better search experience.
And details for the nerds:
We're migrating the site to run in Docker to ease OS upgrades
The server's disk capacity will be doubled
Postgres will be upgraded to v14.x (see the sketch at the end of this post)
We'll upgrade the server to Ubuntu 20.04 LTS, and then 22.04 LTS if we have time during this window
A backup image of the server will be taken before each of these steps, to ensure that we can restore the site to a working state if any upgrade fails or we're nearing the 4pm end of the maintenance window.
Updates will be posted at status.a2mi.social occasionally during the maintenance window.
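For the curious: a major-version Postgres jump in Docker is typically a dump-and-restore rather than an in-place upgrade. A rough sketch with hypothetical container names:

docker exec old-pg pg_dumpall -U postgres > backup.sql
docker exec -i new-pg psql -U postgres < backup.sql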
0 notes
yanashin-blog · 2 years ago
Text
Let's do Fly and Bun🚀
0. Sample Bun App
1. Install flyctl
$ brew install flyctl
$ fly version
fly v0.1.56 darwin/amd64
Commit: 7981f99ff550f66def5bbd9374db3d413310954f-dirty
BuildDate: 2023-07-12T20:27:19Z
$ fly help
Deploying apps and machines:
  apps        Manage apps
  machine     Commands that manage machines
  launch      Create and configure a new app from source code or a Docker image.
  deploy      Deploy Fly applications
  destroy     Permanently destroys an app
  open        Open browser to current deployed application
Scaling and configuring:
  scale       Scale app resources
  regions     V1 APPS ONLY: Manage regions
  secrets     Manage application secrets with the set and unset commands.
Provisioning storage:
  volumes     Volume management commands
  mysql       Provision and manage PlanetScale MySQL databases
  postgres    Manage Postgres clusters.
  redis       Launch and manage Redis databases managed by Upstash.com
  consul      Enable and manage Consul clusters
Networking configuration:
  ips         Manage IP addresses for apps
  wireguard   Commands that manage WireGuard peer connections
  proxy       Proxies connections to a fly VM
  certs       Manage certificates
Monitoring and managing things:
  logs        View app logs
  status      Show app status
  dashboard   Open web browser on Fly Web UI for this app
  dig         Make DNS requests against Fly.io's internal DNS server
  ping        Test connectivity with ICMP ping messages
  ssh         Use SSH to login to or run commands on VMs
  sftp        Get or put files from a remote VM.
Platform overview:
  platform    Fly platform information
Access control:
  orgs        Commands for managing Fly organizations
  auth        Manage authentication
  move        Move an app to another organization
More help:
  docs           View Fly documentation
  doctor         The DOCTOR command allows you to debug your Fly environment
  help commands  A complete list of commands (there are a bunch more)
2. Sign up
$ fly auth signup
or
$ fly auth login
3. Launch App
$ fly launch
Creating app in /Users/yanagiharas/works/bun/bun-getting-started/quickstart
Scanning source code
Detected a Bun app
? Choose an app name (leave blank to generate one): hello-bun
4. Dashboard
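Per the help output above, the Fly web UI for the app opens straight from the CLI:

$ fly dashboard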
0 notes
mutter-official · 6 months ago
Text
I highly recommend doing it via Docker. It makes the whole system so much more modular and extensible and (for the most part) easier to maintain. Also, if you use the AIO image, it's really easy to set up and maintain (though you do lose some of the modularity)
I also highly recommend setting it up with Postgres and Redis if you want speed. Postgres in my experience is just a little bit faster than MariaDB, and it'll use Redis as a cache that'll make it waaaaaaayyyy faster.
It used to be pretty slow, but it's improved a LOT over the last couple of years. It's how I manage every single file I use or ever have used with the exception of my git repos (which live on GitHub)
The nextcloud apps are really cool. You can just super easily add in extra services without needing to configure anything. I found "deck" yesterday and now we have a neat little kanban board for all the things we need to do for the move
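For anyone trying the AIO route, the launch command from the upstream README looks roughly like this (flags from memory, so double-check the current README before relying on it):

docker run -d \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  -p 8080:8080 \
  -v nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest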
33 notes
hersonls · 4 years ago
Text
Apple M1 for developers from scratch - Part 1

Hey, this is going to be (I hope) a series of articles on everything I did as soon as I received my MacBook Air with the M1 processor. I hope it helps you migrate your whole Intel environment to M1 (arm64).

Unfortunately we still have a long way to go. A good part of the software out there can already be compiled natively, but some of it still needs to be emulated through Rosetta, because of system libraries with FFI that are not yet 100% compatible.

I've split this into topics, with the steps in exactly the order I followed them. I hope it helps.

Xcode + Rosetta

Installation is done through the App Store. Before starting any machine configuration, install the latest version.

Then install the development components:

xcode-select --install

Brew on M1 (package manager)

For the brew installation I followed the instructions given in this article:
https://gist.github.com/nrubin29/bea5aa83e8dfa91370fe83b62dad6dfa
Installing system dependencies

These libraries serve various pieces of software such as Python, Pillow, Node, etc. They are needed to compile and use them.

brew install libtiff libjpeg webp little-cms2 freetype harfbuzz fribidi
brew install openblas
brew install blis
brew install zlib
brew install gdal
brew install pyenv
brew install geoip
brew install libgeoip
brew install libffi
brew install xz

Optional:
brew install postgres
Installing pyenv on M1

Install pyenv to make it easier to isolate and pin Python versions on the system.

I had problems installing Python directly on the system using brew: some libraries were not found, and the work of configuring PATHs made me migrate to pyenv.

brew install pyenv

Environment variables

So that future compilations and installations can find all the libraries installed through brew, you need to configure environment variables for them.

Add the lines below to your shell configuration file.

If you use bash: .bashrc. If you use zsh: .zshrc.

I made a script that caches the Homebrew calls. Keep an eye on the library versions; if by chance some of them are not found, check the paths in the environment variables.

Note: the best way to get the library paths would be through the brew --prefix command, but making countless calls can slow down shell startup.

Here is what should be inserted into one of the files above:
if [ -z "$HOMEBREW_PREFIX" ]; then
  export PATH="/opt/homebrew/bin:$PATH"
  if command -v brew 1>/dev/null 2>&1; then
    SHELL_ENV="$(brew shellenv)"
    echo -n "$SHELL_ENV $(cat ~/.zshrc)" > .zshrc
    eval $SHELL_ENV
  fi
fi

export PATH="$HOMEBREW_PREFIX/opt/[email protected]/bin:$PATH"

export LDFLAGS="-L/opt/homebrew/lib -L/opt/homebrew/opt/[email protected]/lib -L/opt/homebrew/opt/sqlite/lib -L/opt/homebrew/opt/[email protected]/lib -L/opt/homebrew/opt/libffi/lib -L/opt/homebrew/opt/openblas/lib -L/opt/homebrew/opt/lapack/lib -L/opt/homebrew/opt/zlib/lib -L/opt/homebrew/Cellar/bzip2/1.0.8/lib -L/opt/homebrew/opt/readline/lib"

export CPPFLAGS="-I/opt/homebrew/include -I/opt/homebrew/opt/[email protected]/include -I/opt/homebrew/opt/sqlite/include -I/opt/homebrew/opt/libffi/include -I/opt/homebrew/opt/openblas/include -I/opt/homebrew/opt/lapack/include -I/opt/homebrew/opt/zlib/include -I/opt/homebrew/Cellar/bzip2/1.0.8/include -I/opt/homebrew/opt/readline/include"

export PKG_CONFIG_PATH="$HOMEBREW_PREFIX/opt/[email protected]/lib/pkgconfig"
export PKG_CONFIG_PATH="$PKG_CONFIG_PATH:$HOMEBREW_PREFIX/opt/[email protected]/lib/pkgconfig:$HOMEBREW_PREFIX/opt/libffi/lib/pkgconfig:$HOMEBREW_PREFIX/opt/openblas/lib/pkgconfig:$HOMEBREW_PREFIX/opt/lapack/lib/pkgconfig:$HOMEBREW_PREFIX/opt/zlib/lib/pkgconfig"

export GDAL_LIBRARY_PATH="/opt/homebrew/opt/gdal/lib/libgdal.dylib"
export GEOS_LIBRARY_PATH=$GDAL_LIBRARY_PATH

export LD_LIBRARY_PATH=$HOMEBREW_PREFIX/lib:$LD_LIBRARY_PATH
export DYLD_LIBRARY_PATH=$HOMEBREW_PREFIX/lib:$DYLD_LIBRARY_PATH

export OPENBLAS="$(brew --prefix openblas)"

if command -v pyenv 1>/dev/null 2>&1; then
  eval "$(pyenv init -)"
fi
Installing Python 3 on M1

To install Python through pyenv on the M1, you need to apply a patch that fixes some architecture information.

The Homebrew team has made these patches available; they can be found at: https://github.com/Homebrew/formula-patches/tree/master/python

Since I'm installing version 3.8, here is the command:

PYTHON_CONFIGURE_OPTS="--with-openssl=$(brew --prefix openssl)" \
pyenv install --patch 3.8.7 < <(curl -sSL "https://raw.githubusercontent.com/Homebrew/formula-patches/master/python/3.8.7.patch")
NVM and Node on M1

mkdir ~/.nvm

Add this to your .zshrc or .bashrc:

export NVM_DIR="$HOME/.nvm"
[ -s "/opt/homebrew/opt/nvm/nvm.sh" ] && . "/opt/homebrew/opt/nvm/nvm.sh" # This loads nvm
[ -s "/opt/homebrew/opt/nvm/etc/bash_completion.d/nvm" ] && . "/opt/homebrew/opt/nvm/etc/bash_completion.d/nvm" # This loads nvm bash_completion

End of part one:

In this first part I showed how to install the package manager, Python and Node. I will write posts about specific parts such as Android Studio, Xcode, Docker, etc.

I hope you'll follow along, and if you have any questions, feel free to leave a comment.

See you soon!
1 note
computingpostcom · 3 years ago
Text
If this is not one of the most robust, free, rich and informative eras ever, then I cannot think of any other time in history adorned with such a wealth of technology. If you wish to accomplish anything, this era wields the most fertile grounds to nourish, nurture and aid the sprouting, growth and maturity of your dreams. You can literally learn to be whatever you wish to be in this age. That being said, this disquisition takes on a quest to get you set up with something similar to Heroku in your own environment. We shall get to know what Heroku is, then get off the dock and sail off towards our goal of having such an environment.

The proliferation of cloud technologies brought with it many opportunities in terms of service offerings. First and foremost, users had the ability to get as much infrastructure as they could afford: users can spawn servers, storage and network resources ad libitum, which is popularly known as Infrastructure as a Service. Then comes the second layer that sits on the infrastructure. It could be anything — a cloud identity service, a cloud monitoring server, et cetera. This layer provides ready-made solutions to people who might need them and is known as Software as a Service. I hope we are flowing together this far.

In addition to that, there is another incredible layer that is the focus of this guide. It is a layer that targets developers, majorly by making their lives easier on the cloud. In this layer, developers only concentrate on writing code, and when they are ready to deploy, they only need to commit their project to a source control platform like GitHub/GitLab and the rest is done for them automatically. This layer is serverless from the developers' point of view, since they do not have to touch the messy server-side stuff. This is known as Platform as a Service (PaaS). Heroku is one of the solutions that sits on this layer.

In this guide, we are going to set up a platform similar to Heroku on your own infrastructure. As you know, you cannot download and install Heroku on your server; it is an online cloud service that you subscribe to. We will use CapRover to set up our own private Platform as a Service (PaaS). CapRover is an extremely easy to use app/database deployment and web server manager for your NodeJS, Python, PHP, ASP.NET, Ruby, MySQL, MongoDB, Postgres, WordPress and even more applications.

Features of CapRover

CLI for automation and scripting
Web GUI for ease of access and convenience
No lock-in! Remove CapRover and your apps keep working!
Docker Swarm under the hood for containerization and clustering
Nginx (fully customizable template) under the hood for load-balancing
Let's Encrypt under the hood for free SSL (HTTPS)
One-Click Apps: Deploying one-click apps is a matter of seconds! MongoDB, Parse, MySQL, WordPress, Postgres and many more.
Fully Customizable: Optionally fully customizable nginx config allowing you to enable HTTP2, specific caching logic, custom SSL certs, etc.
Cluster Ready: Attach more nodes and create a cluster in seconds! CapRover automatically configures nginx to load balance.
Increase Productivity: Focus on your apps! Not the bells and whistles just to run your apps!
Easy Deploy: Many ways to deploy. You can upload your source from the dashboard, use the command line (caprover deploy), use webhooks and build upon git push.

CapRover Pre-requisites

CapRover runs as a container on your server, which can be any server that supports containerization.
Depending on your preferences, you can use Podman or Docker to pull and run the CapRover image. For this example, we are going to use Docker. In case you do not have Docker installed, the following guides will help you set it up as fast as possible:

Install Docker and Docker Compose on Debian
Setup Docker CE & Docker Compose on CentOS 8 | RHEL 8
How To Install Docker on RHEL 7 / CentOS 7
How To Install Docker CE on Ubuntu

Once Docker Engine has been installed, add your user account to the docker group:

sudo usermod -aG docker $USER
newgrp docker

Another pre-requisite is a wildcard domain name pointed to the IP of the server where the CapRover server will be running.

Setup your Heroku PaaS using CapRover

Once the pre-requisites are out of the way, the only task remaining is to set up our CapRover and poke around its rooms just to see what it has to offer. The following steps will be invaluable as you try to get it up and running.

Step 1: Prepare your server

Once Docker is installed, you can install all of the applications you need during your stay on the server, such as an editor:

##On CentOS
sudo yum update
sudo yum install vim git curl

##On Ubuntu
sudo apt update
sudo apt install vim git curl

That was straightforward. Next, let us pull the CapRover image to set the stone rolling.

Step 2: Pull and execute CapRover Image

We are going to cover the installation of CapRover depending on where your server sits.

Scenario 1: Installation on a local server without Public IP

Install dnsmasq

As mentioned in the pre-requisites section, we shall need a small DNS server to resolve domain names, since CapRover is so particular with them. In case you have a local DNS server that supports wildcard domains, then you are good to go and can skip the DNS setup part. In case you do not, install lightweight dnsmasq as follows:

sudo yum -y install dnsmasq

After dnsmasq is successfully installed, start and enable the service:

sudo systemctl start dnsmasq
sudo systemctl enable dnsmasq

Add Wildcard DNS Record

Once dnsmasq is running as expected, we can go ahead and add the configs and wildcard domain name as shown below:

$ sudo vim /etc/dnsmasq.conf
listen-address=::1,127.0.0.1,172.20.192.38
domain=example.com
server=8.8.8.8
address=/caprover.example.com/172.20.192.38

Replace the IPs therein with yours accordingly. Then restart dnsmasq:

sudo systemctl restart dnsmasq

Test if it works

We shall use the dig utility to test if our configuration works:

$ dig @127.0.0.1 test.caprover.example.com

; DiG 9.11.20-RedHat-9.11.20-5.el8 @127.0.0.1 test.caprover.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
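With dnsmasq resolving the wildcard domain, the CapRover container itself can be started. A sketch based on the upstream getting-started docs (flags from memory — verify against the current CapRover docs), reusing the local IP from above for the no-public-IP scenario:

docker run -d \
  -p 80:80 -p 443:443 -p 3000:3000 \
  -e MAIN_NODE_IP_ADDRESS='172.20.192.38' \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /captain:/captain \
  caprover/caprover

The dashboard should then come up at captain.caprover.example.com once the container finishes its own system compatibility checks.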
0 notes
safarijust · 3 years ago
Text
Kitematic remote host
Piece by piece:

docker run --rm --link brewformulasdb:postgres -it postgres creates a new container, linked to the container where the database to be dumped is running.

pg_dump --host=$POSTGRES_PORT_5432_TCP_ADDR --dbname=brewformulas_org_prod --username=postgres dumps the brewformulas_org_prod database content to stdout.

| ssh -C connects to the remote machine and pushes the dumped data through SSH, using compression.

docker run --rm --link brewformulas_postgres_1:postgres -i postgres creates a new container from the postgres docker image, linked to the running (empty) postgres database. (Here it's important not to pass the -t flag, in order to avoid the error "cannot enable tty mode on non tty input".)
0 notes
hackernewsrobot · 4 years ago
Text
Show HN: Postgres Docker image with common extensions
https://github.com/supabase/postgres
0 notes