#docker port forwarding
Text
Anyone have a quick guide for setting up port forwarding from the host machine to a docker image?
I'd rather not have to rely on virtual box to work through this install script like I used to.
Edit: nvm it's a default flag, super easy
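For anyone landing here with the same question: the flag in question is presumably Docker's `-p`/`--publish` option, which maps a host port to a container port at run time. A quick sketch (image and ports are illustrative):

```shell
# Map host port 8080 to container port 80 (host:container)
docker run -d -p 8080:80 nginx

# Or let Docker assign random free host ports to all EXPOSEd ports
docker run -d -P nginx
```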
Text
Introduction
Nginx is a high-performance web server that also functions as a reverse proxy, load balancer, and caching server. It is widely used in cloud and edge computing environments due to its lightweight architecture and efficient handling of concurrent connections. By deploying Nginx on ARMxy Edge IoT Gateway, users can optimize data flow, enhance security, and efficiently manage industrial network traffic.
Why Use Nginx on ARMxy?
1. Reverse Proxying – Nginx acts as an intermediary, forwarding client requests to backend services running on ARMxy.
2. Load Balancing – Distributes traffic across multiple devices to prevent overload.
3. Security Hardening – Hides backend services and implements SSL encryption for secure communication.
4. Performance Optimization – Caching frequently accessed data reduces latency.
Setting Up Nginx as a Reverse Proxy on ARMxy
1. Install Nginx
On ARMxy’s Linux-based OS, update the package list and install Nginx:
sudo apt update
sudo apt install nginx -y
Start and enable Nginx on boot:
sudo systemctl start nginx
sudo systemctl enable nginx
2. Configure Nginx as a Reverse Proxy
Modify the default Nginx configuration to route incoming traffic to an internal service, such as a Node-RED dashboard running on port 1880:
sudo nano /etc/nginx/sites-available/default
Replace the default configuration with the following:
server {
    listen 80;
    server_name your_armxy_ip;

    location / {
        proxy_pass http://localhost:1880/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Save the file and restart Nginx:
sudo systemctl restart nginx
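If the restart fails or the dashboard is unreachable, two quick checks help; a sketch assuming the configuration above is in place and `your_armxy_ip` is replaced with the gateway's real address:

```shell
# Validate the Nginx configuration syntax
sudo nginx -t

# Confirm the proxy answers on port 80
curl -I http://your_armxy_ip/
```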
3. Enable SSL for Secure Communication
To secure the reverse proxy with HTTPS, install Certbot and configure SSL:
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d your_domain
Follow the prompts to automatically configure SSL for your ARMxy gateway.
Use Case: Secure Edge Data Flow
In an industrial IoT setup, ARMxy collects data from field devices via Modbus, MQTT, or OPC UA, processes it locally using Node-RED or Dockerized applications, and sends it to cloud platforms. With Nginx, you can:
· Secure data transmission with HTTPS encryption.
· Optimize API requests by caching responses.
· Balance traffic when multiple ARMxy devices are used in parallel.
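The response-caching bullet above can be sketched with Nginx's `proxy_cache` directives. This is a minimal illustration, not from the original article; the zone name `api_cache`, the cache path, and the timings are assumptions:

```nginx
# Define a cache zone on disk (path and sizes are illustrative)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=100m inactive=60m;

server {
    listen 80;
    location /api/ {
        proxy_cache api_cache;
        proxy_cache_valid 200 10m;   # cache successful responses for 10 minutes
        proxy_pass http://localhost:1880/;
    }
}
```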
Conclusion
Deploying Nginx as a reverse proxy on ARMxy enhances security, optimizes data handling, and ensures efficient communication between edge devices and cloud platforms. This setup is ideal for industrial automation, smart city applications, and IIoT networks requiring low latency, high availability, and secure remote access.
Text
Setting Up A Docker Registry Proxy With Nginx: A Comprehensive Guide
In today’s software development landscape, Docker has revolutionized the way applications are packaged, deployed, and managed. One of the essential components of deploying Dockerized applications is a registry, which acts as a central repository to store and distribute Docker images. Docker Registry is the default registry provided by Docker, but in certain scenarios, a Docker Registry Proxy can be advantageous. In this article, we will explore what a Docker Registry Proxy is, why you might need one, and how to set it up.
What is a Docker Registry Proxy? It is an intermediary between the client and the Docker registry. It is placed in front of the Docker Registry to provide additional functionalities such as load balancing, caching, authentication, and authorization. This proxy acts as a middleman, intercepting the requests from clients and forwarding them to the appropriate Docker Registry.
Why do you need a Docker Registry Proxy? Caching and Performance Improvement: When multiple clients pull the same Docker image from the remote Docker Registry, the proxy can cache the image locally. This caching mechanism reduces the load on the remote Docker Registry, speeds up image retrieval, and improves overall performance.
Bandwidth Optimization: When multiple clients pull the same Docker image from the remote Docker Registry, this can consume a significant amount of bandwidth. The proxy can reduce bandwidth usage by serving the image from its cache.
Load Balancing: If you have multiple Docker registries hosting the same set of images, a Docker Registry Proxy can distribute the incoming requests among these registries. This load-balancing capability enables scaling of your Docker infrastructure and ensures high availability.
Authentication and Authorization: A Docker Registry Proxy can provide an additional layer of security by adding authentication and authorization mechanisms. It can authenticate clients and enforce access control policies before forwarding requests to the remote Docker Registry.
Setting up a Docker Registry Proxy To set one up, we will use `nginx` as an example. `nginx` is a popular web server and reverse proxy server that can act as a Docker Registry Proxy.
Step 1: Install nginx First, ensure that `nginx` is installed on your machine. The installation process varies depending on your operating system. For example, on Ubuntu, you can install `nginx` using the following command:
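The installation command itself did not survive formatting. Based on the identical step elsewhere on this page, on Ubuntu it is presumably:

```shell
sudo apt update
sudo apt install nginx -y
```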
Step 2: Configure nginx as a Docker Registry Proxy Next, we need to configure `nginx` to act as a Docker Registry Proxy. Open your nginx configuration file, typically located at `/etc/nginx/nginx.conf`, and add the following configuration:
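The configuration block referenced here is missing from the text. The following is an illustrative sketch only, consistent with the surrounding description (listening on port 443 and forwarding all requests to the remote registry); the certificate paths are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name my-docker-registry-proxy;

    # Certificate paths are placeholders
    ssl_certificate     /etc/nginx/certs/proxy.crt;
    ssl_certificate_key /etc/nginx/certs/proxy.key;

    # Docker image layers can be very large; disable the body size limit
    client_max_body_size 0;

    location / {
        proxy_pass https://remote-docker-registry;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```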
In the above configuration, replace `my-docker-registry-proxy` with the hostname of your Docker Registry Proxy, and `remote-docker-registry` with the URL of the remote Docker Registry. This configuration sets up `nginx` to listen on port 443 and forwards all incoming requests to the remote Docker Registry.
Step 3: Start nginx After configuring `nginx`, start the `nginx` service to enable the Docker Registry Proxy using the following command:
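The start command was omitted from the text; on a systemd-based distribution it is presumably:

```shell
sudo systemctl start nginx
sudo systemctl enable nginx
```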
Step 4: Test the Docker Registry Proxy To test the proxy, update your Docker client configuration to use it as the registry endpoint. Open your Docker client configuration file, typically located at `~/.docker/config.json`, and add the following configuration:
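The snippet referenced here is also missing. Note that `~/.docker/config.json` has no simple "registry endpoint" key; a pull-through cache is more commonly wired up in the daemon's `/etc/docker/daemon.json` via `registry-mirrors`, so a hedged sketch of that alternative is:

```json
{
  "registry-mirrors": ["https://my-docker-registry-proxy"]
}
```

After editing `daemon.json`, the Docker daemon must be restarted for the mirror to take effect.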
Replace `my-docker-registry-proxy` with the hostname or IP address of your Docker Registry Proxy. This configuration tells the Docker client to use the proxy for all Docker registry-related operations.
Now, try pulling a Docker image using the Docker client. The client will send the request to the Docker Registry Proxy, which will forward it to the remote Docker Registry. If the image is not present in the proxy’s cache, the proxy will fetch it from the remote registry and cache it for future use.
Conclusion A Docker Registry Proxy plays a vital role in managing a Docker infrastructure effectively. It provides caching, load balancing, and security functionality, optimizing the overall performance and reliability of your Docker application deployments. Using `nginx` as a Docker Registry Proxy is a straightforward and effective solution. By following the steps outlined in this article, you can easily set up and configure one.
Text
Deploying Text Generation Web UI on a Kubernetes Cluster
In this blog post, we'll walk through the process of deploying a text generation web UI using the Docker image atinoda/text-generation-webui on a Kubernetes cluster. We'll cover everything from creating a Persistent Volume Claim (PVC) to setting up a Kubernetes Service for port forwarding.
Prerequisites
A running Kubernetes cluster
kubectl installed and configured to interact with your cluster
Step 1: Create a Namespace
First, let's create a namespace called text-gen-demo to isolate our resources.
kubectl create namespace text-gen-demo
Step 2: Create a Persistent Volume Claim (PVC)
We'll need a PVC to store our data. Create a YAML file named text-gen-demo-pvc.yaml with the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: text-gen-demo-pvc
  namespace: text-gen-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: standard
Apply the PVC:
kubectl apply -f text-gen-demo-pvc.yaml
Step 3: Deploy the Pod
Create a YAML file named text-gen-demo-pod.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: text-gen-demo-pod
  namespace: text-gen-demo
  labels:
    app: text-gen-demo
spec:
  containers:
    - name: text-gen-demo-container
      image: atinoda/text-generation-webui
      ports:
        - containerPort: 7860
        - containerPort: 5000
        - containerPort: 5005
      env:
        - name: TORCH_CUDA_ARCH_LIST
          value: "7.5"
      volumeMounts:
        - name: text-gen-demo-pvc
          mountPath: /app/loras
          subPath: loras
        - name: text-gen-demo-pvc
          mountPath: /app/models
          subPath: models
  volumes:
    - name: text-gen-demo-pvc
      persistentVolumeClaim:
        claimName: text-gen-demo-pvc
Apply the Pod:
kubectl apply -f text-gen-demo-pod.yaml
Step 4: Create a Service
Create a YAML file named text-gen-demo-service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: text-gen-demo-service
  namespace: text-gen-demo
spec:
  selector:
    app: text-gen-demo
  ports:
    - name: "webui"
      protocol: TCP
      port: 7860
      targetPort: 7860
    - name: "api"
      protocol: TCP
      port: 5000
      targetPort: 5000
    - name: "api-stream"
      protocol: TCP
      port: 5005
      targetPort: 5005
Apply the Service:
kubectl apply -f text-gen-demo-service.yaml
Step 5: Port Forwarding
Finally, let's set up port forwarding to access the web UI locally.
kubectl port-forward svc/text-gen-demo-service 7860:7860 -n text-gen-demo
You should now be able to access the web UI at http://localhost:7860.
Troubleshooting
If you encounter issues with port forwarding, make sure:
The pod is running and healthy (kubectl get pods -n text-gen-demo)
The service is correctly configured (kubectl describe svc text-gen-demo-service -n text-gen-demo)
The service has endpoints (kubectl get endpoints text-gen-demo-service -n text-gen-demo)
Conclusion
You've successfully deployed a text generation web UI on a Kubernetes cluster! You can now interact with the web UI locally and generate text as needed.
Text
Docker Container Port Mapping Tutorial for beginners | Docker Port Expose and Port Forwarding
Full Video Link: https://youtu.be/2gie3gpDJUg Hi, a new #video on #dockerportmapping is published on @codeonedigest #youtube channel. Learn docker container port forwarding and docker expose. What is docker port mapping and -p option. Running docker cont
Docker container port mapping and port forwarding. The EXPOSE attribute in a Dockerfile documents the port a container listens on; the -p option is what actually maps a container port to the host machine. Running a docker container on a custom port. Using docker expose, run the docker application on a specific port. How do you run a docker image on a specific port? What is port mapping in a docker container? Why is docker port mapping not working? Why containerized…
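A minimal illustration of the mapping-versus-expose distinction discussed above (image and ports are arbitrary):

```shell
# EXPOSE in a Dockerfile only documents a port; it does not publish it.
# Publishing happens at run time with -p:
docker run -d --name web -p 8080:80 nginx

# Show the active host-to-container mappings
docker port web
```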
View On WordPress
Photo
I want to go on a little segue to make something for #dcjam2023 so I’ve been putting together a template with raylib, cimgui, and physfs so I don’t have to waste time writing all of the boilerplate when the jam actually starts.
This here is a snippet of my Makefile, which I’ll be porting back to Ctesiphon and Turbostellar as well, to do a build with docker using the Steam Runtime environment instead of the local machine. Anyone who’s released binaries for Linux may be aware that glibc has completely non-existent forward-compatibility (one of the few things Linux does objectively worse than Windows). So if you build on Manjaro and someone tries to run on an older version of Ubuntu, odds are it just won’t work no matter what they try.
Text
AEM Dispatcher
What is AEM Dispatcher?
AEM Dispatcher is a web server module (provided by Adobe) that sits in front of an AEM instance and caches and delivers content to users. It acts as a proxy server that intercepts requests to AEM and serves cached content whenever possible. AEM Dispatcher can be used to cache both static and dynamic content, and it uses a set of rules and configurations to determine when to serve cached content and when to forward requests to AEM.
Key benefits of using AEM Dispatcher:
Improved performance: AEM Dispatcher reduces the load on the AEM instance by caching and serving content from the cache.
Reduced server load: By serving content from the cache, AEM Dispatcher reduces the load on the AEM server, allowing it to handle more requests and users.
Better security: AEM Dispatcher can be configured to provide an additional layer of security for AEM by blocking harmful requests and limiting access to certain resources.
Scalability: AEM Dispatcher can be used to distribute content across multiple servers, making it easier to scale AEM for large-scale deployments.
How AEM Dispatcher works:
AEM Dispatcher works by intercepting requests to AEM and checking whether the requested content is in the cache. If the content is in the cache, AEM Dispatcher serves it directly to the user.
If the content is not in the cache, AEM Dispatcher forwards the request to AEM, which generates the content and sends it back to AEM Dispatcher.
AEM Dispatcher then caches the content and serves it to the user. AEM Dispatcher uses a set of rules and configurations to determine when to serve cached content and when to forward requests to AEM.
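Those rules and configurations typically live in the dispatcher.any configuration file. As a rough, hedged sketch of the rule syntax (the farm name, docroot, and glob patterns are illustrative, not from this article):

```
/farms {
  /mywebsite {
    /cache {
      /docroot "/var/www/html"
      /rules {
        # Cache everything by default
        /0000 { /glob "*" /type "allow" }
        # Never serve private content from cache
        /0001 { /glob "/private/*" /type "deny" }
      }
    }
  }
}
```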
Dispatcher setup for AEM as a Cloud Service on Linux/Mac using Docker
Prerequisites for Dispatcher Setup
Apache 2.2 web server
Dispatcher Module
Docker setup in local
Installation Instructions
Install Apache web server
Run this command to install the apache package on ubuntu
sudo apt install apache2 -y
Install Docker
Run this command to install the latest docker package on ubuntu
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
To verify that Docker was installed correctly, run this command:
docker --version
Execute the Dispatcher Tools script:
aem-sdk-dispatcher-tools-<version>-unix.sh
Run these commands to make the script executable and run it:
chmod +x <fileName>
sudo ./<fileName>
Validate the Dispatcher configuration contained in this SDK
$ sudo ./bin/validator full -d out src
This validates the configuration and generates deployment information in the out directory.
Validate the deployment information by the Dispatcher in a docker image
$ sudo ./bin/docker_run.sh out localhost:4503 test
This will start the container, run Apache in configuration test mode (httpd -t), dump the processed dispatcher.any configuration (-D DUMP_ANY), and exit.
Confirm that no immutable configuration files were changed relative to those in the Docker image:
$ sudo ./bin/docker_immutability_check.sh src
With your AEM publish server running on your computer, listening on port 4503, you can start the dispatcher in front of that server as follows:
$ sudo ./bin/docker_run.sh out host.docker.internal:4503 8888
Sometimes you may encounter the error “Waiting until host.docker.internal is available”; to resolve this, use your host IP instead:
bin/docker_run.sh src <HOST IP>:4503 8888
Text
YaCy's Docker image behind Nginx as reverse proxy
YaCy is a peer-to-peer search engine. Every peer runs their own client and can crawl and index websites. Searches are carried out by contacting all known peers and aggregating their results. No web server is required for that; you may well install YaCy on your office computer, though of course it only works while it is connected to the internet.

A long time ago I maintained a YaCy peer on my web server. Later I lost interest because there were (and still are) too few peers online for it to be a reasonable alternative to Google, usually only a few hundred concurrently. But to flatter my vanity I have now decided to set up my own peer again, mainly to introduce several websites whose admin team I am part of.

The main issue was that my web server employs Nginx as a reverse proxy and I do not want to expose additional ports to the internet (YaCy's default ports are 8090 and 8443). Luckily, thanks to the Docker image the install procedure proved fairly easy! Both Nginx and YaCy need only their default settings.

In order to use Nginx as a reverse proxy, its configuration needs to contain some special directives. My default proxy_params file is longer than its counterpart in the Nginx GitHub repository:

proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 100M;
client_body_buffer_size 1m;
proxy_intercept_errors on;
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 256 16k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
proxy_max_temp_file_size 0;
proxy_read_timeout 300;

This proved good enough.
Installing YaCy from Docker requires only two commands (head over to this particular site to learn how to back up and update your instance):

docker pull yacy/yacy_search_server:latest
docker run -d --name yacy_search_server -p 8090:8090 -p 8443:8443 -v yacy_search_server_data:/opt/yacy_search_server/DATA --restart unless-stopped --log-opt max-size=200m --log-opt max-file=2 -e YACY_NETWORK_UNIT_AGENT=mypeername yacy/yacy_search_server:latest

We do not need TLS settings in YaCy since TLS is handled by Nginx (employing Let's Encrypt in this case). Since YaCy's internal links are all relative, we can proxy localhost without caring about host name or protocol scheme. The following Nginx server block is fully operational:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name my.host.name;
    root /var/www/my.host.name;
    index index.html index.htm default.html default.htm;

    location / {
        proxy_pass http://127.0.0.1:8090;
        include /etc/nginx/proxy_params;
    }

    access_log /var/log/nginx/my.host.name_access.log;
    error_log /var/log/nginx/my.host.name_error.log;

    ssl_certificate /etc/letsencrypt/live/my.host.name/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.host.name/privkey.pem; # managed by Certbot
}

Head over to my search interface. But attention: there is an extended blacklist excluding pseudo-science, extremist politics, conspiracy theories and so on (mainly German sites). Use another YaCy instance to get the same search without my exclusions.
Read the full article
Text
Supertanker Disaster Averted! Sailors Rescued!
Ok, the title may be a bit over the top but at the time it didn’t seem so.
Yesterday started as another idyllic day in paradise. I glued the table that broke my fall back together, Linda and Laurie rode the bikes to get Bimini bread, boat chores got done and after Linda and I did a bike tour of the length of the island relaxing the afternoon away at the pool was the order of the day. We met other boaters on our dock, discussed itineraries and the weather. We also took note of a sailboat that had anchored about 100 yards off the marina dock and was partially blocking the channel that runs north and south parallel to the island and has considerable traffic. We discussed the IQ of the Captain of the boat regarding his anchoring decision, questioned his wisdom and experience, and expressed our hope they would move.
It was a typical day for a cruiser.
After dinner we were invited aboard Driftwood by the Bettings to torture us with that convoluted pinochle game, though it seemed a bit less intimidating last night. The weather had started to crap out as forecast with rain and lightning in the distance. Inside Driftwood we were comfy.
The excitement started when I noticed a bright light illuminating the dock. A very bright light, one that did not appear to come from a boat. I stuck my head out of Driftwood’s cabin for a look. A massive ship (from my perspective) was heading north in the channel. It sounded its horn, a big ship horn. The sailboat that was anchored in the channel was directly in its path!
The ship, named Ocean Breeze, slowed but continued to move forward. It kept its spotlight on the sailboat and continued to sound its horn. The crew of the sailboat seemed oblivious. And clueless. Not a person stirred, no effort was made to move.
A collision seemed imminent.
To our surprise Ocean Breeze slowly moved to the port side and started to moor to the empty t-head at the end of our dock. Brad, Lee (a fellow docker) and I caught the lines from the tanker and looped them over the pilings as the crew of the tanker pulled them taut. The bow of the ship ended up about 20 yards from the sailboat, the occupants still below and blissfully unaware of how close they came to disaster. Soon, though, they would be made aware of their situation.
Since it appeared there would be no collision, no shredded fiberglass or instant death, Brad and I went below on Driftwood to finish the card game. About a half hour later the rain started and I climbed out of Driftwood to verify we had closed our door. Then I looked out at the bow of the tanker and saw the sailboat was trying to dock. Their second attempt at disaster was about to unfold.
Ocean Breeze had launched a tender. Apparently the tanks that fuel the diesel generator that powers the island are refilled from this ship by a long floating hose that stretches 300 yards from the end of the dock to the fuel tank. The tender is used to pull the hose from the ship to shore, and the sailboat was in the way. So the operator of the tender roused the clueless crew and told them they had to move.
The timing couldn’t be worse. The weather had deteriorated considerably from the last time I was on deck. The wind was blowing 20 right onto the dock they were attempting to get into. The current was going the same direction as the wind making docking a challenge for even an experienced skipper. For this crew a game of sailboat pinball was about to begin.
The helmsman of the boat tried to line up with an empty slip on the end of our dock but on the other side. A side just full of sailboats. As he approached he got a bit crooked and tried reverse. That maneuver got the boat sideways, giving the wind and current the entire side and keel to push on. As the boat picked up speed the skipper tried a right-turn maneuver, but went into reverse instead of forward.
Lee, the docker from above, could see this boat smashing into his and rushed above to try to fend the sailboat off. Brad, wanting to see where I was came up, saw the situation and joined the fray. Somehow the sailboat skipper found forward and decided to t-bone the ship. As he approached ramming speed the piling at the end of the dock caught his sprit.
“Give me a line” I shouted. “Huh?” was the informed reply from the crewman handling the lines. He made his way to the bow, handed me a line and I tied him to the piling. Now he couldn’t go anywhere. Despite his best effort, mass sailboat carnage was averted.
Brad came up with the best plan to ease him into the dock, though it involved allowing the skipper to use his motor. Port and starboard, fore and aft seemed to be a foreign language to the crew. But with shouting, occasional cursing and Herculean effort from the line handlers we got him into the slip.
A discussion ensued with the crew as to why they had anchored in a channel. “It was the only place they could get their anchor to hold” was the explanation. My comment regarding that was not my finest moment.
Just like a hand of cards (see what I did there?) the adventure needed to be rehashed and discussed. Laurie invited everyone onto Driftwood to do so over rum and diet cola. We gave our critiques, laughed about what happened and enjoyed the experience of helping another cruiser avoid disaster.
We did what cruisers do, and is part of the reason we cruise.
Video
Run Postgres Database in Docker Container | Postgres Docker Container Tu...
Hi, a new #video on #springboot #microservices with #postgres #database is published on #codeonedigest #youtube channel. Complete guide for #spring boot microservices with #postgressql. Learn #programming #coding with #codeonedigest
Text
The Elastic stack (ELK) is made up of 3 open-source components that work together to realize log collection, analysis, and visualization. The 3 main components are:

Elasticsearch – the core of the Elastic software. This is a search and analytics engine. Its task in the Elastic stack is to store incoming logs from Logstash and offer the ability to search the logs in real time.
Logstash – used to collect data, transform logs incoming from multiple sources simultaneously, and send them to storage.
Kibana – a graphical tool that offers data visualization. In the Elastic stack, it is used to generate charts and graphs to make sense of the raw data in your database.

The Elastic stack can as well be used with Beats. These are lightweight data shippers that read from multiple data sources/indices and send them to Elasticsearch or Logstash. There are several Beats, each with a distinct role:

Filebeat – forwards files and centralizes logs, usually in either .log or .json format.
Metricbeat – collects metrics from systems and services, including CPU, memory usage, and load, as well as statistics from network and process data, before shipping them to either Logstash or Elasticsearch directly.
Packetbeat – supports a collection of network protocols from the application and lower-level protocols, databases, and key-value stores, including HTTP, DNS, Flows, DHCPv4, MySQL, and TLS. It helps identify suspicious network activity.
Auditbeat – collects Linux audit framework data and monitors file integrity, before shipping to either Logstash or Elasticsearch directly.
Heartbeat – used for active probing to determine whether services are available.

This guide offers a deep illustration of how to run the Elastic stack (ELK) in Docker containers using Docker Compose.

Setup Requirements
For this guide, you need the following.
Memory – 1.5 GB and above
Docker Engine – version 18.06.0 or newer
Docker Compose – version 1.26.0 or newer

Install the required packages below:

## On Debian/Ubuntu
sudo apt update && sudo apt upgrade
sudo apt install curl vim git

## On RHEL/CentOS/RockyLinux 8
sudo yum -y update
sudo yum -y install curl vim git

## On Fedora
sudo dnf update
sudo dnf -y install curl vim git

Step 1 – Install Docker and Docker Compose
Use the dedicated guide below to install the Docker Engine on your system.
How To Install Docker CE on Linux Systems

Add your system user to the docker group.
sudo usermod -aG docker $USER
newgrp docker

Start and enable the Docker service.
sudo systemctl start docker && sudo systemctl enable docker

Now proceed and install Docker Compose with the aid of the below guide:
How To Install Docker Compose on Linux

Step 2 – Provision the Elastic stack (ELK) Containers
We will begin by cloning the repository from GitHub as below:
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk

Open the deployment file for editing:
vim docker-compose.yml

The Elastic stack deployment file consists of 3 main parts.
Elasticsearch – with ports:
9200: Elasticsearch HTTP
9300: Elasticsearch TCP transport
Logstash – with ports:
5044: Logstash Beats input
5000: Logstash TCP input
9600: Logstash monitoring API
Kibana – with port 5601

In the opened file, you can make the below adjustments:

Configure Elasticsearch
The configuration file for Elasticsearch is stored in the elasticsearch/config/elasticsearch.yml file. You can configure the environment by setting the cluster name, network host, and licensing as below:

elasticsearch:
  environment:
    cluster.name: my-cluster
    xpack.license.self_generated.type: basic

To disable paid features, you need to change the xpack.license.self_generated.type setting from trial (the self-generated license gives access to all x-pack features for 30 days) to basic.
Configure Kibana
The configuration file is stored in the kibana/config/kibana.yml file. Here you can specify the environment variables as below.

kibana:
  environment:
    SERVER_NAME: kibana.example.com

JVM tuning
Normally, both Elasticsearch and Logstash start with 1/4 of the total host memory allocated to the JVM heap size. You can adjust the memory by setting the below options.

For Logstash (an example with memory increased to 1 GB):

logstash:
  environment:
    LS_JAVA_OPTS: -Xmx1g -Xms1g

For Elasticsearch (an example with memory increased to 1 GB):

elasticsearch:
  environment:
    ES_JAVA_OPTS: -Xmx1g -Xms1g

Configure the usernames and passwords
To configure the usernames, passwords, and version, edit the .env file.

vim .env

Make desired changes for the version, usernames, and passwords.

ELASTIC_VERSION=

## Passwords for stack users
#
# User 'elastic' (built-in)
#
# Superuser role, full access to cluster management and data indices.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
ELASTIC_PASSWORD='StrongPassw0rd1'

# User 'logstash_internal' (custom)
#
# The user Logstash uses to connect and send data to Elasticsearch.
# https://www.elastic.co/guide/en/logstash/current/ls-security.html
LOGSTASH_INTERNAL_PASSWORD='StrongPassw0rd1'

# User 'kibana_system' (built-in)
#
# The user Kibana uses to connect and communicate with Elasticsearch.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
KIBANA_SYSTEM_PASSWORD='StrongPassw0rd1'

Source the environment:
source .env
$ lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda           8:0    0  40G  0 disk
├─sda1        8:1    0   1G  0 part /boot
└─sda2        8:2    0  39G  0 part
  ├─rl-root 253:0    0  35G  0 lvm  /
  └─rl-swap 253:1    0   4G  0 lvm  [SWAP]
sdb           8:16   0  10G  0 disk
└─sdb1        8:17   0  10G  0 part

Format the disk and create an XFS file system on it:

sudo parted --script /dev/sdb "mklabel gpt"
sudo parted --script /dev/sdb "mkpart primary 0% 100%"
sudo mkfs.xfs /dev/sdb1

Mount the disk to your desired path:

sudo mkdir /mnt/datastore
sudo mount /dev/sdb1 /mnt/datastore

Verify that the disk has been mounted:

$ sudo mount | grep /dev/sdb1
/dev/sdb1 on /mnt/datastore type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

Create the persistent volume directories on the disk:

sudo mkdir /mnt/datastore/setup
sudo mkdir /mnt/datastore/elasticsearch

Set the right permissions:

sudo chmod 775 -R /mnt/datastore
sudo chown -R $USER:docker /mnt/datastore

On RHEL-based systems, configure SELinux as below:

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

Create the external volumes.

For Elasticsearch:

docker volume create --driver local \
  --opt type=none \
  --opt device=/mnt/datastore/elasticsearch \
  --opt o=bind elasticsearch

For setup:

docker volume create --driver local \
  --opt type=none \
  --opt device=/mnt/datastore/setup \
  --opt o=bind setup

Verify that the volumes have been created:

$ docker volume list
DRIVER    VOLUME NAME
local     elasticsearch
local     setup

View more details about a volume:

$ docker volume inspect setup
[
    {
        "CreatedAt": "2022-05-06T13:19:33Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/setup/_data",
        "Name": "setup",
        "Options": {
            "device": "/mnt/datastore/setup",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]

Go back to the YAML file and add these lines at the end of the file.
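Because the Docker volumes are bind-mounted from /mnt/datastore, it is worth guarding against the disk not actually being mounted (otherwise the data would silently land on the root filesystem). A small sketch of such a check against /proc/mounts (the `is_mounted` helper is a hypothetical name):

```shell
# is_mounted DIR -> exit 0 if DIR appears as a mount point in /proc/mounts
is_mounted() {
  awk -v target="$1" '$2 == target { found = 1 } END { exit !found }' /proc/mounts
}

# Example: warn before creating the Docker volumes if the datastore is missing.
if ! is_mounted /mnt/datastore; then
  echo "warning: /mnt/datastore is not a mount point" >&2
fi
```

Adding the disk to /etc/fstab so it survives reboots is also advisable, but is outside the scope of this check.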
$ vim docker-compose.yml
.......
volumes:
  setup:
    external: true
  elasticsearch:
    external: true

Now you should have the YAML file updated in the areas described above.

Step 4 – Bring Up the Elastic Stack

After the desired changes have been made, bring up the Elastic Stack with the command:

docker-compose up -d

Execution output:

[+] Building 6.4s (12/17)
 => [docker-elk_setup internal] load build definition from Dockerfile           0.3s
 => => transferring dockerfile: 389B                                            0.0s
 => [docker-elk_setup internal] load .dockerignore                              0.5s
 => => transferring context: 250B                                               0.0s
 => [docker-elk_logstash internal] load build definition from Dockerfile        0.6s
 => => transferring dockerfile: 312B                                            0.0s
 => [docker-elk_elasticsearch internal] load build definition from Dockerfile   0.6s
 => => transferring dockerfile: 324B                                            0.0s
 => [docker-elk_logstash internal] load .dockerignore                           0.7s
 => => transferring context: 188B
........

Once complete, check whether the containers are running:

$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS         PORTS                                                                                    NAMES
096ddc76c6b9   docker-elk_logstash        "/usr/local/bin/dock…"   9 seconds ago    Up 5 seconds   0.0.0.0:5000->5000/tcp, :::5000->5000/tcp, 0.0.0.0:5044->5044/tcp, :::5044->5044/tcp, 0.0.0.0:9600->9600/tcp, 0.0.0.0:5000->5000/udp, :::9600->9600/tcp, :::5000->5000/udp   docker-elk-logstash-1
ec3aab33a213   docker-elk_kibana          "/bin/tini -- /usr/l…"   9 seconds ago    Up 5 seconds   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                                docker-elk-kibana-1
b365f809d9f8   docker-elk_setup           "/entrypoint.sh"         10 seconds ago   Up 7 seconds   9200/tcp, 9300/tcp                                                                       docker-elk-setup-1
45f6ba48a89f   docker-elk_elasticsearch   "/bin/tini -- /usr/l…"   10 seconds ago   Up 7 seconds   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp     docker-elk-elasticsearch-1

Verify that Elasticsearch is running:

$ curl http://localhost:9200 -u elastic:StrongPassw0rd1
{
  "name" : "45f6ba48a89f",
  "cluster_name" : "my-cluster",
  "cluster_uuid" : "hGyChEAVQD682yVAx--iEQ",
  "version" : {
    "number" : "8.1.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "39afaa3c0fe7db4869a161985e240bd7182d7a07",
    "build_date" : "2022-04-19T08:13:25.444693396Z",
    "build_snapshot" : false,
    "lucene_version" : "9.0.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Step 5 – Access the Kibana Dashboard

At this point, you can proceed and access the Kibana dashboard running on port 5601. But first, allow the required ports through the firewall.

## For Firewalld
sudo firewall-cmd --add-port=5601/tcp --permanent
sudo firewall-cmd --add-port=5044/tcp --permanent
sudo firewall-cmd --reload

## For UFW
sudo ufw allow 5601/tcp
sudo ufw allow 5044/tcp

Now proceed and access the Kibana dashboard with the URL http://IP_Address:5601 or http://Domain_name:5601. Log in using the credentials set for the Elasticsearch user:

Username: elastic
Password: StrongPassw0rd1

On successful authentication, you should see the dashboard.

Now, to prove that the ELK stack is running as desired, we will inject some data/log entries. Logstash allows us to send content via TCP as below:

# Using BSD netcat (Debian, Ubuntu, macOS systems, ...)
cat /path/to/logfile.log | nc -q0 localhost 5000

For example:

cat /var/log/syslog | nc -q0 localhost 5000

Once the logs have been loaded, proceed and view them under the Observability tab.

That is it! You have your Elastic Stack (ELK) running perfectly.

Step 6 – Cleanup

In case you want to completely remove the Elastic Stack (ELK) and all the persistent data, use the command:

$ docker-compose down -v
[+] Running 5/4
 ⠿ Container docker-elk-kibana-1          Removed  10.5s
 ⠿ Container docker-elk-setup-1          Removed   0.1s
 ⠿ Container docker-elk-logstash-1       Removed   9.9s
 ⠿ Container docker-elk-elasticsearch-1  Removed   3.0s
 ⠿ Network docker-elk_elk                Removed   0.1s

Closing Thoughts

We have successfully walked through how to run the Elastic Stack (ELK) on Docker containers using Docker Compose. Furthermore, we have learned how to create external persistent volumes for Docker containers. I hope this was helpful.
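If you don't want to pipe a whole syslog file, you can hand-craft a single syslog-style line and send just that to Logstash's TCP input on port 5000. A small sketch (the `make_test_log` helper is a hypothetical name):

```shell
# make_test_log MSG: print one syslog-style line tagged test-app
make_test_log() {
  printf '%s %s test-app: %s\n' \
    "$(date -u '+%b %d %H:%M:%S')" "$(uname -n)" "$1"
}

# Send it to Logstash (assumes BSD netcat, as in the tutorial):
# make_test_log "hello from the ELK tutorial" | nc -q0 localhost 5000
```

Searching for "test-app" in Kibana then confirms the whole pipeline end to end.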
0 notes
Text
Davmail gateway thunderbird

Love it or hate it, sometimes we have to use an Exchange server to communicate. This may pose a problem for you, if you prefer a non-Microsoft mail client: if the compatibility features are enabled, you'll be able to access your mail via IMAP and send over SMTP, as the Internet intended. davmail is a Java application that knows how to translate between standards-compliant clients (like, say, Thunderbird) and an Exchange server that speaks only Exchange. It's a great tool - and one of the only solutions. It's also standalone, can be used statelessly, and - with apologies - is Java, making it a fantastic candidate for running inside a Docker container.

The plan: install Docker on a CenturyLink Cloud instance, provision a stateless container image running a davmail instance, and have the system manage the container via upstart.

Prerequisites

For the purposes of this article, we're going to assume that you're an enlightened sort, and are running a Ubuntu 14.04-based cloud instance (or workstation, or...). You can fire up a basic Ubuntu machine fairly easily; our previous tutorial on containing Chef with Vagrant may provide some guidelines, but, as always, provisioning a CenturyLink Cloud instance is left as an exercise for the reader.

For simplicity, we're using the docker.io package from Ubuntu and configuring by hand. One may find a more satisfying approach in using tools like puppet-git-receiver and Gareth Rushgrove's most excellent docker Puppet Forge module to manage the use of the upstream Docker packages as well as our container's upstart configuration - both of which will be covered in a future tutorial.

Security is of utmost concern, particularly in a corporate environment, so please take care to secure the connections between your local workstation and the cloud instance. Common - and well-established - solutions include the use of ssh port forwarding or (for a less ad-hoc approach) stunnel. Even if your mail is boring beyond belief, please, do not access it unencrypted.

On line 33 we see a docker group being created. It's worth mentioning that adding your userid to this group will allow you to interface with the docker service without needing to "sudo" it all the time; this is left as an exercise for the reader.

There are a couple of davmail images available on the Docker Hub. We're going to use the rsrchboy/davmail-savvis-docker image - for obvious reasons - as it gives us a configured davmail container that:

does not require any bind-mounts or other external volumes to be attached.
does not run davmail as root inside the container.

You may need to edit the davmail configuration to reflect the specific needs of your Exchange environment. If so, you can use this image as a starting point, ADD the changed configuration to the image, and rebuild. Tada! :)

Configure a container to run as a system service via upstart

Upstart is an approach to unix daemon management. It is available by default on recent Ubuntu systems, and is very convenient for our purposes. Note how our upstart config does a couple of different things here:

declares dependencies on the docker.io service (lines 6 and 7).
declares that the service should be relaunched if it terminates unexpectedly (line 11) and establishes safe limits (line 12).
declares how to start (lines 15-27) and stop (lines 30-35) the service.

Not surprisingly, the heart is centered around the docker run command that we can see at the core of the upstart config. Docker port-forwards are also established; from the "trial run" log above we can see the ports davmail will listen on (remember, *nix systems disallow non-privileged processes from binding to privileged ports). We additionally tell docker to bind ours such that our cloud instance binds only to ports on the loopback interface. That is, when you go looking for IMAP, you'll find it accessible at 127.0.0.1:11143 from the cloud instance only; this prevents attackers from being able to connect to your davmail instance remotely.

Now that we have an upstart config file, all that remains is to install the file appropriately (that is, copy it into /etc/init/) and start the service.

And with that, you have davmail running inside a container, with the appropriate port-mappings configured. This container is stateless - that is, it only handles translating between standards-compliant and Exchange-compliant tools. You can start, stop, kill, abuse, disabuse, etc., the container davmail is running inside without fear of anything more than disrupting communications between your clients and the server. Complexity avoided is hours of time we get to keep.

You can now use this to interface the mail client(s) of your choice with an Exchange server. Enjoy!
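For reference, the upstart job described above might look roughly like the sketch below. This is an illustration, not the author's exact file: the image name and loopback IMAP port come from the article, but the container-internal port, stanza layout, and line numbering are assumptions.

```conf
# /etc/init/davmail.conf -- hypothetical sketch of the upstart job
description "davmail Exchange gateway (Docker)"

# depend on the docker.io service
start on started docker.io
stop on stopping docker.io

# relaunch on unexpected exit, with safe limits
respawn
respawn limit 10 5

# start the stateless container, binding IMAP to the loopback only
script
    exec docker run --rm --name davmail \
        -p 127.0.0.1:11143:1143 \
        rsrchboy/davmail-savvis-docker
end script

# stop the container when the job stops
pre-stop script
    docker stop davmail || true
end script
```

With a file like this in place, `sudo start davmail` launches the service, and upstart restarts the container if it dies.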

0 notes
Text
Josh Dunkley vs Western Bulldogs hardball
Geelong remains silent on whether they are ready to trade big man Esava Ratugolea, who has made it clear he wants to get to Port Adelaide during this trade period. Geelong has been reluctant to let the 24-year-old leave despite him playing only four games this season, as they believe he can give them depth in many parts of the ground, and coach Chris Scott is a firm believer in his talent. However, with the Cats needing to trade a future second-round pick to complete the Jack Bowes deal, and possibly needing first-round picks to secure both Tanner Bruhn of the Giants and Oliver Henry of Collingwood, they may need to obtain a future second-round pick from Port Adelaide in return. AFL rules keep the trading of future picks to a minimum. Ratugolea played the first three games of the season before being demoted to the VFL, and then needed surgery after injuring his ankle in the middle of the year. He returned to play in the final round but did not appear in the finals. Port sees him playing a role in defence, and as a potential ruckman he probably has more chances at Alberton Oval than as a top defender. The Cats have a long tradition of giving players chances at other clubs after several years in their system if their place in the first team is not established. Andrew Mackie, Geelong's list manager, told AFL Trade Radio on Wednesday that the big Cat remained a sought-after player, although Ratugolea's position was reasonable. "We put the games in 'Sav' and it was a big part of our plan to chain forward in football," Mackie said. "It's clear that guys like Sav after six years in the system just want to start playing football, and we've had that discussion with him. We'll keep trying to paint him a picture of what that looks like. "He's a contracted player. No doubt the coach loves him, we love him, but we totally understand he's trying to figure out where he's going to play football.
We're very clear about how that looks, so that's where it sits." Versatile Geelong big man Esava Ratugolea. Credit: Getty Images. Collingwood and the Giants are adamant they want fair returns for Henry and Bruhn respectively, as both were selected in the first round two years ago. The Cats are keen to stick with the No. 7 pick they will earn in the Jack Bowes deal, so they may have to give up a future second-round pick in return. Port Adelaide is awaiting deals to acquire Jason Horne-Francis of North Melbourne and Junior Rioli of the West Coast, but a potential three-way deal involving Port, the Kangaroos and West Coast was held up when the Eagles decided they wanted to get a player from Port Adelaide for the deal to work. However, Port says rising stars such as Zak Butters, Mitch Georgiades, Xavier Duursma, Josh Sinn and Miles Bergman have no interest in leaving the club, and it has no interest in trading them. Gold Coast forward Josh Corbett joined Fremantle on Thursday; in turn, the Suns received the Dockers' future fourth-round pick. Josh Corbett. Credit: Getty Images. Corbett has scored 33 goals from 36 games since his debut for the Suns in 2019 and is a forward option for the Dockers, who are also interested in speedy winger Jeremy Sharp during this trade period. Melbourne's Adam Tomlinson is open to an exit after his opportunities in defence have been limited over the past two seasons. Originally published at Melbourne News Vine
0 notes