# Flask App Reverse Proxy
Nginx Reverse Proxy For Flask Application.
To set up Nginx as a reverse proxy for a Flask application, you'll need to configure Nginx to forward incoming requests to your Flask application's server. Here's a step-by-step guide to help you set up an Nginx reverse proxy for your Flask application.

Install Nginx
For Ubuntu/Debian: sudo apt-get install nginx
For CentOS/RHEL: sudo yum install nginx

Configure the Nginx server block
Open the…
How to Deploy Your Full Stack Application: A Beginner’s Guide

Deploying a full stack application involves setting up your frontend, backend, and database on a live server so users can access it over the internet. This guide covers deployment strategies, hosting services, and best practices.
1. Choosing a Deployment Platform
Popular options include:
Cloud Platforms: AWS, Google Cloud, Azure
PaaS Providers: Heroku, Vercel, Netlify
Containerized Deployment: Docker, Kubernetes
Traditional Hosting: VPS (DigitalOcean, Linode)
2. Deploying the Backend
Option 1: Deploy with a Cloud Server (e.g., AWS EC2, DigitalOcean)
Set Up a Virtual Machine (VM)
bash
ssh user@your-server-ip
Install Dependencies
Node.js (sudo apt install nodejs npm)
Python (sudo apt install python3-pip)
Database (MySQL, PostgreSQL, MongoDB)
Run the Server
bash
nohup node server.js &           # For Node.js apps
gunicorn app:app --daemon        # For Python Flask/Django apps
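If the backend is a Flask app, a minimal sketch of the app.py module that the gunicorn app:app command above expects might look like the following; the route and module layout are assumptions for illustration, not part of the original guide.

```python
# app.py -- minimal sketch of a Flask app served by "gunicorn app:app"
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/health")
def health():
    # Simple health-check endpoint, handy for load balancer probes
    return jsonify(status="ok")

if __name__ == "__main__":
    # Local development only; in production gunicorn imports "app" directly
    app.run(host="0.0.0.0", port=5000)
```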
Option 2: Serverless Deployment (AWS Lambda, Firebase Functions)
Pros: No server maintenance, auto-scaling
Cons: Limited control over infrastructure
3. Deploying the Frontend
Option 1: Static Site Hosting (Vercel, Netlify, GitHub Pages)
Push Code to GitHub
Connect GitHub Repo to Netlify/Vercel
Set Build Command (e.g., npm run build)
Deploy and Get Live URL
Option 2: Deploy with Nginx on a Cloud Server
Install Nginx
bash
sudo apt install nginx
Configure Nginx for React/Vue/Angular
nginx
server {
    listen 80;
    root /var/www/html;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}
Restart Nginx
bash
sudo systemctl restart nginx
4. Connecting Frontend and Backend
Use CORS middleware to allow cross-origin requests (a short sketch follows this list)
Set up reverse proxy with Nginx
Secure API with authentication tokens (JWT, OAuth)
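To illustrate the CORS and token points above, here is a hedged sketch of a Flask backend that enables CORS for a specific origin and checks a JWT on a protected route. It assumes flask-cors and PyJWT are installed; the origin, secret, and route names are placeholders.

```python
# Minimal sketch: CORS plus a JWT-protected endpoint (flask-cors and PyJWT assumed installed)
from functools import wraps

import jwt
from flask import Flask, jsonify, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app, origins=["https://your-frontend.example.com"])  # placeholder origin

SECRET_KEY = "change-me"  # placeholder; load from the environment in production

def token_required(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return jsonify(error="missing token"), 401
        try:
            payload = jwt.decode(auth.split(" ", 1)[1], SECRET_KEY, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return jsonify(error="invalid token"), 401
        return view(payload, *args, **kwargs)
    return wrapper

@app.route("/api/profile")
@token_required
def profile(payload):
    # The frontend sends "Authorization: Bearer <token>" with each request
    return jsonify(user=payload.get("sub"))
```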
5. Database Setup
Cloud Databases: AWS RDS, Firebase, MongoDB Atlas
Self-Hosted Databases: PostgreSQL, MySQL on a VPS
bash
# Example: Run PostgreSQL on DigitalOcean
sudo apt install postgresql
sudo systemctl start postgresql
6. Security & Optimization
✅ SSL Certificate: Secure site with HTTPS (Let's Encrypt)
✅ Load Balancing: Use AWS ALB, Nginx reverse proxy
✅ Scaling: Auto-scale with Kubernetes or cloud functions
✅ Logging & Monitoring: Use Datadog, New Relic, AWS CloudWatch
7. CI/CD for Automated Deployment
GitHub Actions: Automate builds and deployment
Jenkins/GitLab CI/CD: Custom pipelines for complex deployments
Docker & Kubernetes: Containerized deployment for scalability
Final Thoughts
Deploying a full stack app requires setting up hosting, configuring the backend, deploying the frontend, and securing the application.
Cloud platforms like AWS, Heroku, and Vercel simplify the process, while advanced setups use Kubernetes and Docker for scalability.
WEBSITE: https://www.ficusoft.in/full-stack-developer-course-in-chennai/
Python Tutorial: A simple Flask App using Redis with Docker Compose.
Docker Compose enables multiple Docker containers to run in a single environment. In this tutorial, we'll write a basic Flask app with Redis using Docker Compose.

What is Flask?
Flask is a very lightweight framework for developing APIs in Python. For production purposes, it's recommended to run Flask behind the uWSGI application server and Nginx, which acts as a web server and reverse proxy. But in this tutorial, we will just create a simple Flask application.

What is Redis?
Redis is a free, open-source in-memory data store, used as a database, cache, and message broker.

Prerequisites: You need to install Docker Engine and Docker Compose.
STEP-1:
Create a project folder on your host machine and enter that directory:
$ mkdir project_folder
$ cd project_folder
STEP-2:
Create a requirements.txt file in the project folder and add the following lines in that file.
# project_folder/requirements.txt
flask
redis
STEP-3:
Create an app.py file in the project folder and add the following lines.
# project_folder/app.py
# import the Flask and Redis libraries
from flask import Flask
from redis import Redis
import random

app = Flask(__name__)
redis = Redis(host='redis', port=6379)

# declare the main route
@app.route('/')
def main():
    return 'Hi. In order to earn bonus points enter your name in the url. eg: /John'

# declare a route that gets the visitor's name. Every name has its own bonus points.
@app.route('/<name>')
def greet(name):
    bonus = random.randrange(1, 100)
    total = redis.incrby(name, bonus)
    return 'Hello %s. You have earned %d bonus points. Your total point is %d.' % (name, bonus, total)

# run the flask application on the local development server.
if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
STEP-4:
Create a Dockerfile in the project_folder and add the following:
FROM python:latest
ADD . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
CMD python3 app.py
STEP-5:
Create docker-compose.yml in the project_folder and define the services. There will be two services, named app and redis. The app service builds the custom image from the Dockerfile in the project folder, which is the current directory. The container's exposed port (5000) is mapped to port 5000 on the host. The current directory is mapped to the working directory "/app" inside the container. This service depends on the redis service. The redis service uses the latest Redis image from Docker Hub.
version: '3'
services:
  app:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    depends_on:
      - redis
  redis:
    image: redis
STEP-6:
Build and run with Docker Compose. Start the application from your directory:
$ docker-compose up
STEP-7:
You can test the app in your browser at http://localhost:5000. Then visit http://localhost:5000/username. You can try different usernames; each username will accumulate its own bonus points thanks to Redis.
STEP-8:
You can stop the application with CTRL+C, and then stop the containers:
$ docker-compose stop
STEP-9
Troubleshooting: If you make some changes and the changes don't apply, remove the built images and rebuild them:
docker-compose rm -f
docker-compose pull
docker-compose up --build

Further Reading:
- Unix/Linux Bash Shell Scripting
- How to use "amazon-linux-extras" and install a package on AWS EC2 running Amazon Linux 2?
- How to remember Linux commands? What do Linux commands stand for?
Dockerize a Flask app with NGINX reverse proxy using Docker-Compose by @MrL33h https://t.co/mE2XtOSp36 #Python #Flask #NGINX #Docker (via Twitter http://twitter.com/PythonWeekly/status/1080841231058518016) #Python
docker compose... so i only need to type one command
So I guess this is the entire purpose for the existence of docker compose? Here's how to do exactly the same thing as in my last post.
Let's assume the docker daemon is already installed.
Install docker-compose.
$ curl -L https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ docker-compose --version
docker-compose version 1.15.0, build e12f3b9
$ docker-compose version
docker-compose version 1.15.0, build e12f3b9
docker-py version: 2.4.2
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
Put a docker-compose.yml into your working directory:
$ tree
.
├── Vagrantfile
├── app
│   ├── Dockerfile
│   ├── index.py
│   └── pewpew.wsgi
├── db
│   └── Dockerfile
├── docker-compose.yml
└── rp
    ├── Dockerfile
    └── site.conf

version: '2'
services:
  reverseproxy:
    build: /srv/rp
    ports:
      - "80:80"
    links:
      - flaskapp:flaskapp
    depends_on:
      - flaskapp
  flaskapp:
    build: /srv/app
    ports:
      - "5000:5000"
    links:
      - mongodb:mongodb
    depends_on:
      - mongodb
  mongodb:
    build: /srv/db
    ports:
      - "27017:27017"
Build those images & run those containers
$ docker-compose up -d
. . .
Creating mongodb ...
Creating mongodb ... done
Creating flaskapp ...
Creating flaskapp ... done
Creating reverseproxy ...
Creating reverseproxy ... done
Check what's going on
$ docker-compose ps
    Name                  Command               State            Ports
-----------------------------------------------------------------------------------
flaskapp       gunicorn -k eventlet -b 0. ...   Up      0.0.0.0:5000->5000/tcp
mongodb        docker-entrypoint.sh mongod      Up      0.0.0.0:27017->27017/tcp
reverseproxy   nginx -g daemon off;             Up      443/tcp, 0.0.0.0:80->80/tcp
Send a request to the reverse proxy
$ curl http://127.0.0.1 Pew Pew! {u'storageEngines': [u'devnull', u'ephemeralForTest', u'mmapv1', u'wiredTiger'], u'maxBsonObjectSize': 16777216, u'ok': 1.0, u'bits': 64, u'modules': [], u'openssl': {u'compiled': u'OpenSSL 1.0.1t 3 May 2016', u'running': u'OpenSSL 1.0.1t 3 May 2016'}, u'javascriptEngine': u'mozjs', u'version': u'3.4.6', u'gitVersion': u'c55eb86ef46ee7aede3b1e2a5d184a7df4bfb5b5', u'versionArray': [3, 4, 6, 0], u'debug': False, u'buildEnvironment': {u'cxxflags': u'-Woverloaded-virtual -Wno-maybe-uninitialized -std=c++11', u'cc': u'/opt/mongodbtoolchain/v2/bin/gcc: gcc (GCC) 5.4.0', u'linkflags': u'-pthread -Wl,-z,now -rdynamic -Wl,--fatal-warnings -fstack-protector-strong -fuse-ld=gold -Wl,--build-id -Wl,-z,noexecstack -Wl,--warn-execstack -Wl,-z,relro', u'distarch': u'x86_64', u'cxx': u'/opt/mongodbtoolchain/v2/bin/g++: g++ (GCC) 5.4.0', u'ccflags': u'-fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fno-builtin-memcmp', u'target_arch': u'x86_64', u'distmod': u'debian81', u'target_os': u'linux'}, u'sysInfo': u'deprecated', u'allocator': u'tcmalloc'} [u'admin', u'local']
What do the containers look like now?
$ docker ps -a
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                         NAMES
774ad38c93d7   srv_reverseproxy   "nginx -g 'daemon ..."   3 minutes ago   Up 3 minutes   0.0.0.0:80->80/tcp, 443/tcp   reverseproxy
1094581a5a17   srv_flaskapp       "gunicorn -k event..."   3 minutes ago   Up 3 minutes   0.0.0.0:5000->5000/tcp        flaskapp
0f4c7f0d7175   srv_mongodb        "docker-entrypoint..."   3 minutes ago   Up 3 minutes   0.0.0.0:27017->27017/tcp      mongodb

$ docker-compose ps
    Name                  Command               State            Ports
-----------------------------------------------------------------------------------
flaskapp       gunicorn -k eventlet -b 0. ...   Up      0.0.0.0:5000->5000/tcp
mongodb        docker-entrypoint.sh mongod      Up      0.0.0.0:27017->27017/tcp
reverseproxy   nginx -g daemon off;             Up      443/tcp, 0.0.0.0:80->80/tcp
And the images?
$ docker images
REPOSITORY         TAG      IMAGE ID       CREATED         SIZE
srv_reverseproxy   latest   9283f67c41fb   3 minutes ago   107MB
srv_flaskapp       latest   e3b9a8003c5d   3 minutes ago   683MB
srv_mongodb        latest   397be1d78005   4 minutes ago   359MB
nginx              latest   b8efb18f159b   13 days ago     107MB
mongo              latest   6833171fe0ad   13 days ago     359MB
python             2.7      fa8e55b2235d   2 weeks ago     673MB

$ docker-compose images
 Container      Repository        Tag       Image Id      Size
----------------------------------------------------------------
flaskapp       srv_flaskapp       latest   e3b9a8003c5d   651 MB
mongodb        srv_mongodb        latest   397be1d78005   342 MB
reverseproxy   srv_reverseproxy   latest   9283f67c41fb   102 MB
Now previously... say you want to change something in one of the containers, you'd have to like, docker stop.. docker rm.. docker rmi.. docker build.. docker run.. blah blah blah.
Say I've changed the app output from Pew Pew! to Peow Peow Lazor Beams!!
With docker-compose:
$ docker-compose build flaskapp
. .
Removing intermediate container 05f624a8c37b
Successfully built 0e44b2dee5fa
Successfully tagged srv_flaskapp:latest

$ docker-compose up --no-deps -d flaskapp
Recreating flaskapp ...
Recreating flaskapp ... done

$ docker-compose ps
    Name                  Command               State            Ports
-----------------------------------------------------------------------------------
flaskapp       gunicorn -k eventlet -b 0. ...   Up      0.0.0.0:5000->5000/tcp
mongodb        docker-entrypoint.sh mongod      Up      0.0.0.0:27017->27017/tcp
reverseproxy   nginx -g daemon off;             Up      443/tcp, 0.0.0.0:80->80/tcp

$ docker ps -a
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS             PORTS                         NAMES
2b4e890d5dcf   srv_flaskapp       "gunicorn -k event..."   3 minutes ago   Up 3 minutes       0.0.0.0:5000->5000/tcp        flaskapp
774ad38c93d7   srv_reverseproxy   "nginx -g 'daemon ..."   2 hours ago     Up About an hour   0.0.0.0:80->80/tcp, 443/tcp   reverseproxy
0f4c7f0d7175   srv_mongodb        "docker-entrypoint..."   2 hours ago     Up About an hour   0.0.0.0:27017->27017/tcp      mongodb
And the changes?
$ curl http://127.0.0.1 Peow Peow Lazor Beams!! {u'storageEngines': [u'devnull', u'ephemeralForTest', u'mmapv1', u'wiredTiger'], u'maxBsonObjectSize': 16777216, u'ok': 1.0, u'bits': 64, u'modules': [], u'openssl': {u'compiled': u'OpenSSL 1.0.1t 3 May 2016', u'running': u'OpenSSL 1.0.1t 3 May 2016'}, u'javascriptEngine': u'mozjs', u'version': u'3.4.6', u'gitVersion': u'c55eb86ef46ee7aede3b1e2a5d184a7df4bfb5b5', u'versionArray': [3, 4, 6, 0], u'debug': False, u'buildEnvironment': {u'cxxflags': u'-Woverloaded-virtual -Wno-maybe-uninitialized -std=c++11', u'cc': u'/opt/mongodbtoolchain/v2/bin/gcc: gcc (GCC) 5.4.0', u'linkflags': u'-pthread -Wl,-z,now -rdynamic -Wl,--fatal-warnings -fstack-protector-strong -fuse-ld=gold -Wl,--build-id -Wl,-z,noexecstack -Wl,--warn-execstack -Wl,-z,relro', u'distarch': u'x86_64', u'cxx': u'/opt/mongodbtoolchain/v2/bin/g++: g++ (GCC) 5.4.0', u'ccflags': u'-fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fno-builtin-memcmp', u'target_arch': u'x86_64', u'distmod': u'debian81', u'target_os': u'linux'}, u'sysInfo': u'deprecated', u'allocator': u'tcmalloc'} [u'admin', u'local']
Noice.
The sample code linked below will show you how to get started using the Cisco Meraki Scanning API. This code is for sample purposes only. Before running in production, you should probably add SSL/TLS support by running this server behind a TLS-capable reverse proxy like nginx.
You should also test that your server is capable of handling the rate of events that will be generated by your networks. A good rule of thumb is that your server should be able to process all your network's nodes once per minute. So if you have 100 nodes, your server should respond to each request within 600 ms. For more than 100 nodes, you will probably need a multithreaded web app.
Ruby server - Runs a web server and also Resque, a background message queuing and processing system in Heroku.
Python with Flask - The basic app will place the data received on the console, and the MongoDB script will extend the app by placing data into a local MongoDB database (a minimal receiver sketch follows this list).
Node.js server - A simple NodeJS application to accept WiFi location data from a Cisco Meraki network.
Node-RED node - A node that will collect Scanning API data into a Node-RED flow and store into a Mongo database.
AWS Lambda - This guide will walk through the process of building a Meraki Scanning API receiver using the AWS Lambda service by Amazon. The data will then be placed into DynamoDB, where it can optionally be indexed and searched using ElasticSearch and visualized with Kibana.
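For the Python with Flask option, a minimal receiver along the lines described above might look like the sketch below. It assumes the validator/secret handshake the Scanning API documentation describes; the validator, secret, and route are placeholders, and, like the basic sample, it only prints the received data to the console.

```python
# Minimal sketch of a Scanning API receiver in Flask (placeholder values below).
# Assumes the validator/secret handshake: Meraki GETs the URL expecting the
# validator string back, then POSTs JSON that includes a shared secret.
from flask import Flask, jsonify, request

app = Flask(__name__)

VALIDATOR = "your-validator-string"   # placeholder from the dashboard
SECRET = "your-shared-secret"         # placeholder you configure yourself

@app.route("/events", methods=["GET"])
def validate():
    # The API checks the receiver by requesting the validator string
    return VALIDATOR

@app.route("/events", methods=["POST"])
def receive():
    data = request.get_json(force=True)
    if data.get("secret") != SECRET:
        return jsonify(error="invalid secret"), 403
    # Place the received observations on the console, as the basic app does
    print(data)
    return "", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```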
Lessons from scale: Django
This February, I visited Pune to speak at PyCon Pune 2017. My talk was essentially a brief summary of what I've learned from running Django in production over the past couple of years, as part of building the DoSelect stack.
Why choose Django?
The last thing I want to spark off is a flame war over frameworks here. There are a lot of good ones; Flask is one of my favourites. But if you are building something of greater scope than a one-off web service, you're going to need the batteries anyway, and Django ships with them. The pace of prototyping, and of development in general, is pretty fast as a result.
Django is an opinionated framework and makes a lot of decisions for you. While most of the design patterns are common sense, it’s very easy to break out of the box when you want to. This gives you a lot of flexibility as and when your application grows in scale and use case.
Lastly, Django has great community support, and it has been around long enough to be considered a mature project.
Tuning your WSGI server
We’ve been using Gunicorn, and it has been pretty awesome for us. As easy as it is to set up, it’s important to tune your config to get the maximum juice out of it. Introspect your application’s needs and take the following decisions:
Determine worker_class — you need to know when to use sync and async workers. Async workers work great when your load is I/O bound and there is no CPU-heavy processing involved in a request-response cycle. Use sync workers when you are doing a lot of processing. This way, Gunicorn utilizes all available CPU cores optimally, and your response times stay sane.
Adjust parameters like workers, worker_connections, and keepalive wisely. A trip to the Gunicorn config documentation is worth your time (a sample config sketch follows this list).
There is such a thing as too many workers. Even if you scale your CPU and memory, you cannot infinitely scale the number of workers, since the efficiency curve plateaus eventually. Horizontally scaling your hosts is needed in the end.
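To make the points above concrete, here is a hedged sketch of a gunicorn.conf.py; the values are placeholders to tune against your own load profile, not recommendations.

```python
# gunicorn.conf.py -- sample values only; tune per your own load profile
import multiprocessing

bind = "0.0.0.0:8000"

# Sync workers for CPU-heavy request handling; switch worker_class to
# "gevent" (or another async class) when the load is mostly I/O bound.
worker_class = "sync"
workers = multiprocessing.cpu_count() * 2 + 1

# Only meaningful for async worker classes
worker_connections = 1000

# Should line up with the timeouts on the reverse proxy in front of it
keepalive = 5
timeout = 30
```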
Tuning your proxy server
NGINX is a great reverse proxy server — it’s lightweight and scales very well with increasing load. A few tweaks and you can optimize the perf.
Set worker_processes auto, which will automatically scale the number of NGINX worker processes with the available cores. This is not the default setting.
Adjust keepalive_timeout. This should match the timeout on your WSGI server, so you don’t run into timeouts for requests that take more time.
Turn on tcp_nopush and tcp_nodelay.
gzip all the things — if you’re not doing it already!
This is a great blog post about NGINX performance optimization.
What’s taking so long?
If you have a client-facing web app, Chrome Developer Tools should be your best friend. Measure all the requests processed by Django and take a detailed look at the response times: find the rate-limiting step, and optimize it. Rinse. Repeat.
If you make a lot of complex queries, use EXPLAIN ANALYZE on those queries from your SQL command line. This is a nice and easy way to identify which queries are taking a lot of time, so you can optimize them. When using the Django ORM, developers generally don’t think too much about what the actual queries are. Tailing the database logs is a fun exercise that every Django developer should try at times.
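As a quick illustration, here is a sketch of how to see the SQL the ORM generates from Python; the app and model names are hypothetical.

```python
# Inspecting ORM-generated SQL; Post is a hypothetical model
from django.db import connection, reset_queries
from myapp.models import Post  # hypothetical app and model

# Print the SQL a queryset will run, without executing it
qs = Post.objects.filter(published=True).select_related("author")
print(qs.query)

# Or execute it and look at what actually hit the database
reset_queries()  # query logging requires DEBUG=True
list(qs)
for q in connection.queries:
    print(q["time"], q["sql"])
```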
Caching
If you’ve scaled your app servers horizontally, it’d be good to use Redis as your primary cache. Redis works wonderfully in such a use case.
When you are using Django’s contrib User model, and all your auth works on top of it, start by caching all user sessions. Again, this is not the default Django configuration, so a lot of people tend to miss it. While you are at it, you’d want to cache your User model lookups as well, since these resources are read far more often than they are written to. After that, depending on your use case, you’d want to think about resource-level caching.
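For illustration, a sketch of the relevant settings, assuming the django-redis cache backend is installed; the Redis URL is a placeholder.

```python
# settings.py (sketch) -- assumes the django-redis package is installed
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",  # placeholder URL
    }
}

# Store sessions in the cache instead of the database
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "default"
```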
As a rule of thumb, use a CDN for all things that the user cannot change. This includes static files, images, and HTML templates (if you have a single-page app).
Django ORM
The ORM is awesome, and one of the primary reasons why Django has been adopted so widely. But after you’ve run your Django app in production for a considerable time, you’ll realize that it’s not the silver bullet after all. Take a peek under the hood, and look at the queries the ORM is making. Since it’s so easy to use the ORM, it’s equally easy to use it the wrong way and axe your foot. Do not fear breaking away from the ORM when you need to.
Automatic relationship access can bite you when you are using something like Django Rest Framework or Tastypie for creating API resources. It’s better to expand relationships carefully. Add extra indices where needed.
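For example, here is a sketch of expanding relationships explicitly so that serializing a list doesn’t fire one extra query per row; the models and fields are hypothetical.

```python
# Avoiding per-row relationship queries; Submission and its fields are hypothetical
from myapp.models import Submission  # hypothetical app and model

# One JOINed query for the foreign keys, plus a single extra query
# for the many-to-many, instead of a query per serialized object
submissions = (
    Submission.objects
    .select_related("author", "test")
    .prefetch_related("tags")
    .all()
)
```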
When you are building a complex application, chances are you’re going to need a lot of on-the-fly processing of data that you get from the DB before you can send it as a response. While a lot of things can be denormalized for better access, this is not feasible in most cases.
For example, the test reports on DoSelect consist of a lot of derived metrics about the test-taker. Most of these metrics are hard to denormalize since they depend on attributes that can change arbitrarily — like the qualification status (which depends on the test cut-off), percentile (which depends on the number of test takers), etc. These derived attributes are also used to sort the leader boards.
Instead of doing these derived calculations in Python, it’s better to do them in the database itself and query with the result. One example is the time taken metric. Normally, you’d store the start and end times separately, since you might want to change them. So instead of denormalizing time taken as a separate field, just calculate it in the database.
As a rule of thumb, always denormalize data which has no bounds, like the number of comments on a post. If a post has, say, 20k comments, you’d better read an integer than perform a COUNT query every time. Dehydration, which here means on-the-fly calculation, is better when you know the bounds, like the time taken metric in the previous paragraph.
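As an illustration, a sketch of computing time taken in the database via the ORM rather than in Python; the model and field names are hypothetical.

```python
# Computing "time taken" in the database instead of in Python; fields are hypothetical
from django.db.models import DurationField, ExpressionWrapper, F
from myapp.models import Attempt  # hypothetical app and model

attempts = Attempt.objects.annotate(
    time_taken=ExpressionWrapper(
        F("ended_at") - F("started_at"), output_field=DurationField()
    )
).order_by("time_taken")  # the derived value can also drive leaderboard-style sorting
```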
Scaling your database
Always use database-level connection pooling — which works very well when you are scaling your services horizontally. Django already does application level connection pooling, but things can get complicated when you are using a Celery worker — and you will end up using Celery workers. If you’re using PostgreSQL, pgBouncer is a drop-in solution for this.
Monitor all the queries to see what’s holding you back.
If you’re using streaming replication for your database, you might want to look at segregating your reads and writes. You can serve all your reads from the replicas and dedicate all writes to the master.
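A sketch of a Django database router for that read/write split; it assumes DATABASES defines a "default" (primary) alias and a "replica" alias.

```python
# routers.py (sketch) -- assumes DATABASES defines "default" (primary) and "replica"
class ReadReplicaRouter:
    """Send reads to the replica and writes to the primary."""

    def db_for_read(self, model, **hints):
        return "replica"

    def db_for_write(self, model, **hints):
        return "default"

    def allow_relation(self, obj1, obj2, **hints):
        # Both aliases point at the same logical data, so relations are fine
        return True

# settings.py: DATABASE_ROUTERS = ["myproject.routers.ReadReplicaRouter"]
```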
Aside: If you have a use case that involves a lot of filtering, in addition to search, you might want to offload all your list reads to the search engine as well. Elasticsearch is a great search engine, and is optimized for reads. You’d be surprised by the performance boost.
[Embedded YouTube video]
Full talk slides can be found here: https://sanketsaurav.github.io/django-on-steroids. If you have any questions related to this, I’d be happy to answer them. Add a response here, or hit me up on Twitter.