#heroku
Explore tagged Tumblr posts
robomad · 1 year ago
Text
Deploying Django Applications to Heroku
Learn how to deploy your Django applications to Heroku with this comprehensive guide. Follow step-by-step instructions to set up, configure, and deploy your app seamlessly.
Introduction

Deploying Django applications to Heroku is a streamlined process thanks to Heroku’s powerful platform-as-a-service (PaaS) offering. Heroku abstracts away much of the infrastructure management, allowing developers to focus on building and deploying their applications. This guide will walk you through the steps to deploy a Django application to Heroku, including setting up the…
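The excerpt cuts off before the configuration details. As a hedged sketch of the usual setup (the project name, app name, and package choices below are common conventions, not taken from the post), a minimal Django-on-Heroku deploy typically looks like:

```shell
# Procfile (repo root) tells Heroku how to run the app;
# "myproject" is a placeholder for your Django project name:
#   web: gunicorn myproject.wsgi --log-file -

# Packages commonly added for Heroku deploys:
pip install gunicorn dj-database-url whitenoise
pip freeze > requirements.txt

# Create the Heroku app and deploy (app name is a placeholder):
heroku create my-django-app
git push heroku main
heroku run python manage.py migrate
```

Settings changes (reading DATABASE_URL via dj-database-url, serving static files with WhiteNoise) vary by project, so treat the above as a starting point rather than the guide's exact steps.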
5 notes · View notes
salesforce-blog · 2 years ago
Text
Benefits Of Archiving Salesforce Data Using Heroku Cloud
Heroku is an excellent platform for running apps that integrate with Salesforce for a variety of use cases, one of which is data archiving. Read More
13 notes · View notes
torchbox · 2 months ago
Text
Heroku Outage - 10 June 2025
Update 11 June 10:45 - These issues are now resolved. Heroku and Salesforce have reported that all services were restored by 06:16 UK time.
If your Torchbox-supported services are not working as expected, please report to us through the usual channels.
Update 10 June 17:35 - These issues are still ongoing. Our on-call staff are continuing to monitor the situation as it develops.
Update 10 June 12:59 - Heroku and Salesforce are continuing to experience difficulties, and have reported that their issue is more widespread than initially thought. In certain cases this may affect the ability of websites to synchronise data with 3rd party systems, or perform other tasks which take place on a repeating schedule.
Original status
Our 3rd-party hosting platform, Heroku, has been experiencing issues, particularly around access to their dashboard. Their parent company, Salesforce, are also reporting issues.
At this time, we don't believe this is impacting the availability of applications; however, deployments and other configuration changes cannot be made. Similarly, staging sites that were turned off overnight have not been resumed this morning.
Further details can be found on Heroku's status page.
Outages of this kind fall outside of Torchbox’s control; however, we will continue to monitor the situation and update this page as required.
0 notes
technologyblogofmohit · 4 months ago
Text
Heroku and AWS each have their strengths and weaknesses. The choice between them depends on your business requirements, budget, and resources. Read more!
0 notes
asadmukhtarr · 4 months ago
Text
Deploying a MERN (MongoDB, Express, React, Node.js) stack application is a crucial step to make your app accessible to users. Choosing the right deployment platform depends on your needs:
AWS (Amazon Web Services) is ideal for scalable, high-performance applications.
Vercel is great for quick and easy deployment, especially for frontend applications.
In this guide, we will walk through deploying a MERN stack application on AWS (using EC2 and S3) and Vercel (for serverless hosting).
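The guide's own steps aren't included in this excerpt; as a rough, hedged sketch of the two paths it names (repository URL, file names, and app names are placeholders), the CLI side usually looks something like:

```shell
# Frontend on Vercel (run from the React app directory):
npm install -g vercel
vercel --prod                # follow the interactive prompts

# Backend on an EC2 instance (after SSH-ing in; paths are placeholders):
git clone https://github.com/your-user/your-api.git && cd your-api
npm ci
npm install -g pm2
pm2 start server.js --name api   # keep the Node process running after logout
```

S3 enters the picture for static assets or uploads; that configuration is console/IAM work rather than CLI one-liners, so it isn't sketched here.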
0 notes
forcecrow · 7 months ago
Text
Heroku and Salesforce Integration!
Seamlessly connect your Salesforce data with Heroku apps, unlocking endless possibilities for real-time insights, custom solutions, and enhanced scalability. 📊✨
👉 Want to learn more? Click on the comments below! 👇
1 note · View note
mak1210 · 1 year ago
Text
0 notes
newcodesociety · 1 year ago
Text
0 notes
techdirectarchive · 1 year ago
Text
Deploying a Next.js App Using the Heroku Cloud Application Platform
Heroku is one of the best platform-as-a-service (PaaS) offerings, used by many developers to build, run, and operate their applications fully in the cloud. It has free and paid plans. On this platform, you can easily deploy your application for public access in a few minutes. In this article, I will be deploying a Next.js app using the Heroku cloud application platform. You can read…
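The article is truncated here, but the one Next.js-specific wrinkle on Heroku is the port: Heroku assigns a dynamic PORT environment variable, so the start script must pass it through. A hedged sketch of the relevant package.json scripts (not taken from the article itself):

```json
{
  "scripts": {
    "build": "next build",
    "start": "next start -p $PORT"
  }
}
```

Heroku's Node.js buildpack runs the build script during deploy and the start script to boot the web dyno, so with these two lines a standard Next.js app deploys with a plain `git push heroku main`.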
0 notes
dromologue · 1 year ago
Link
This is a walkthrough of the development process and system design engineering for Llama as a Service. LaaS is a website and public API that can serve random llama images. It will respond with a single image URL, or even a list. Visit the LaaS website for a demo. View the source code on GitHub. View the walkthrough YouTube video.

What I Learned

For this project, there is a frontend built with React hosted on Netlify, connected to the backend. I built each API with Node.js, Express, and Docker. Services connected to a NoSQL MongoDB database. Each service is in an independent repository to maintain separation of concerns. It would have been possible to build this in a monorepo, but it was good practice. Each repository uses GitHub Actions to build and test the code on every push. The Express API was deployed to Heroku whenever the main branch was pushed.

With each app containerized with Docker, it can be run on any other developer's machine also running Docker. Although I had automated deployments to Heroku without this, I decided to upload each service to a container registry. Each repository also used a GitHub Actions workflow to automatically tag and version updates and releases. It would then build and publish the most up-to-date Docker image and release it to the GitHub Container Registry. For future use, this makes it crazy easy to deploy a Kubernetes cluster to the cloud, with a simple docker pull ghcr.io/OWNER/IMAGE_NAME command. However, that was beyond the scope of this project because of zero budget.

To manage the environment variables, I was able to share Secrets with the GitHub Actions workflows, which are encrypted and can be shared across an entire organization (meaning multiple repos could access the variables). This allowed me to deploy my code securely to Heroku without ever hard-coding the API keys. Another tool I used was Artillery for load testing on my local machine.
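The CI setup described above (build and test on every push) can be sketched as a minimal GitHub Actions workflow. The file name, action versions, and step details below are illustrative, not taken from the project's repositories:

```yaml
# .github/workflows/ci.yml
name: CI
on: [push]
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: yarn install --frozen-lockfile   # the author found yarn faster than npm here
      - run: yarn test
      - run: docker build -t llama-service .  # image later published to GHCR on release
```

Organization-wide GitHub Secrets are exposed to steps like these via `${{ secrets.NAME }}`, which is how deploy keys stay out of the codebase.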
Instead of npm, I tried using yarn as the package manager, and it was WAY faster in GitHub Actions, even without caching enabled. Although they did not make it into production, I experimented with the RabbitMQ message broker, Python (Django, Flask), Kubernetes + minikube, JWT, and NGINX. This was a hobby project, but I intended to learn about microservices along the way.

Demonstration

Here is a screenshot of the LaaS website. If you would like to try out the API, simply make a GET request to the following endpoint: https://llama-as-a-service-images.herokuapp.com/random

Creating an API

First, I started with a simple API built with Node.js and Express, containerized with Docker. I set up GitHub Actions for CI to build and test it on each push. This would later connect to the database to respond with image URLs. Although this API doesn't NEED to scale to millions of users, it was a valuable exercise in building and scaling a system. I aimed to keep latency under 300ms at 200 RPS.

Image Database

With an API ready to connect to the database, it was time to choose between a NoSQL or SQL database. The answer is obvious for this use case. Let's walk through the data we have and the use cases. We are going to store one single table with image URLs. This could easily be done in either database, but there is one key factor: we need a way to randomly pull a list of images from the database. A SQL database makes it simple to query a random row; however, this is not horizontally scalable, and with a large data set we would be replicating the ENTIRE database to each new node. On the other hand, NoSQL databases are horizontally scalable, which pointed me toward Cassandra, but unfortunately it is very difficult to pull random selections from that type of NoSQL database. Finally, I settled on MongoDB, which has a built-in $sample method to pull from the records. Once I got the MongoDB database running locally with Docker, I created a quick script to seed the database.
Now it's time to connect the API to the database.

Connecting the API to the Database

Next, I used the mongoose Node.js library to connect to the local MongoDB. I created two endpoints: one to upload an image URL, and another to retrieve a random list of images.

Endpoint Load Testing

To experiment with scaling the API, I wanted to do load testing. Keep in mind that this API does not have much logic, meaning caching or optimizing the code's performance will have a huge impact. I found a tool for load testing called Artillery. Following this guide, I installed Artillery and began researching the test configuration. The API currently has the /random endpoint to return an image URL (a string), with very little computation. Let's stress test this to see the current traffic limit. The random list endpoint is what we need to optimize. For the starting algorithm, though, I seeded 100 image records into the database, then pulled the ENTIRE list from the database on each request. The API would then choose 25 random elements to return. Let's benchmark how this performs under load testing. On the first run, the limit on the /random?count=25 endpoint was 225 RPS over 15 seconds, with 99% of response times under 300ms. We can improve this.

Optimizing the Endpoint

We have many records of image URLs in the database. Somehow, we need to efficiently transform these into a list, pulling random selections from the database. Let's optimize the query for pulling documents from the database. Using MongoDB's $sample aggregation, we can drastically reduce the computational load for a single request. Running locally in Postman, the random?count=25 endpoint went from ~150ms for a single request to <50ms. This is the only code we need for this endpoint, compared to the previous 20 lines and O(n^2) space. With the new query, the endpoint maintains 99% sub-300ms response times with a max of 440 RPS over 15 seconds.
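The before/after can be sketched in plain JavaScript: the helper below mimics the original approach (fetch everything, then pick in the app), while the comment shows the $sample query shape that replaced it. The collection name is an assumption, not taken from the repos:

```javascript
// Original approach: pull ALL docs, then pick `count` random ones in the app.
// Transferring the whole collection per request is why this capped out early.
function pickRandom(urls, count) {
  const pool = [...urls];               // copy so we don't mutate the input
  const picked = [];
  while (picked.length < count && pool.length > 0) {
    const i = Math.floor(Math.random() * pool.length);
    picked.push(pool.splice(i, 1)[0]);  // remove each pick so results are unique
  }
  return picked;
}

// Optimized approach: let MongoDB sample server-side, e.g.
//   db.images.aggregate([{ $sample: { size: 25 } }])
// so only 25 documents ever leave the database.
```

With $sample, the app-side loop disappears entirely, which matches the "20 lines down to one query" claim above.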
Horizontally Scaling the API

With the containerized Node.js/Express API, I could run multiple containers, scaling to handle more traffic. Using a tool called minikube, we can easily spin up a local Kubernetes cluster to horizontally scale Docker containers. It was possible to keep one shared instance of the database, with the API instances routed through an internal Kubernetes load balancer. Horizontally scaling the API to two instances, the random endpoint maintains 99% sub-300ms response times with a max of 650 RPS over 15 seconds. Three API instances: 99% sub-300ms with a max of 1000 RPS over 15 seconds. Five API instances: 99% sub-300ms with a max of 1200 RPS over 15 seconds.

In practice, five instances were the limit of scaling the API horizontally. Even with more instances, the traffic was never sub-300ms. Note that this depends on the hardware of my local machine and does not account for cross-network latency in the real world. With scaling, we achieve higher throughput, allowing more traffic to flow, and resiliency, where a failed node can simply be replaced. Since the image responses are intended to be random, we cannot cache them. It would be possible to scale the database with a master/replica setup, but without a large data set it is not worth the time to test. The bottleneck is most likely the API and its connections to the database, rather than MongoDB failing to handle read requests. It may be possible to improve read times with a Redis in-memory cache, but that is overkill for this project.

Setting up Authentication

After playing around with load testing, I wanted to explore JSON Web Tokens and build an API to handle authentication. This auth API generates tokens, which are sent back to the client as headers. The tokens are stored client-side (e.g. cookies, local storage) and sent to the backend with each request.
If we expanded the backend, we could include the authentication logic in each microservice. Not practical. Instead, we can decouple the logic into its own service.

Creating a Gateway API

Instead of exposing users directly to each microservice, we should route ALL traffic from the clients through the Gateway API. For this, I chose the same tech stack of Node.js/Express. Using a library, I was able to set up a proxy to the other services. In the future, this could be very useful to standardize requests to the backend, track usage, forward data to a logging microservice, talk to a message broker, and more.

Environment Variables and Configuration

With most of the system built, I needed to simplify the process for configuring the Docker containers locally and for sharing environment variables with each one. Keep in mind, each service needed to access these in GitHub Actions as well, during deployment. I used docker-compose files to easily spin up the containers locally, with default values for the environment variables for local development, and kept the config files separated so everything was easy to follow. This step was just a process of carefully writing the Docker and docker-compose files and setting up GitHub Actions Secrets. The code could not run without all the environment variables set, which could make it hard to debug locally or lead to ambiguity for other developers.

A Simple Frontend

I would talk about building the frontend, but it is just a single-page React app I built quickly. It does use a CSS library called Bulma, which is similar to Tailwind and worth checking out. I did spend a day implementing a login/signup page, but this was just for the learning experience and not what I wanted in the final product.

GitHub Actions Testing and Deployment

With most of the code written, it was time to deploy the app. This was actually a bumpy road because I was not sure how to approach it.
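Stripped of the Express and proxy-middleware plumbing, the Gateway API described above is at its core a routing table mapping path prefixes to internal services. A minimal sketch (service names and ports are assumptions, not taken from the repos):

```javascript
// Path-prefix routing table: first match wins.
const ROUTES = [
  ['/auth',   'http://auth-service:4000'],
  ['/images', 'http://images-service:5000'],
];

// Resolve an incoming path to the internal URL it should be proxied to,
// stripping the prefix (as a proxy pathRewrite typically would).
function route(path) {
  for (const [prefix, target] of ROUTES) {
    if (path === prefix || path.startsWith(prefix + '/')) {
      return target + path.slice(prefix.length);
    }
  }
  return null; // unknown route: the gateway would return 404
}
```

In the real gateway this lookup is what a library like http-proxy-middleware performs before forwarding the request; centralizing it is also where usage tracking and logging hooks would live.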
I had been keeping each component in its own repository on my personal GitHub account, which was getting hard to keep track of. My solution was to create the Llama as a Service GitHub Organization, which also allowed me to store organization-wide secrets that any repository could access. Using GitHub Actions, I created workflows to build and test code on every push, and to deploy the main branch to Heroku (and Netlify for the frontend). I also created a workflow to tag and version every update and release the Docker image to the GitHub Container Registry. These packages could be private to the organization or public. I did not end up using these published containers, but it was really dope to see everything automated.

Deploying to Production

After deploying the gateway API, frontend, and backend, I hoped all the services would be connected in production. For some reason the http-proxy-middleware was causing problems, and it was not worth redesigning the whole system. I was not ready to deploy a Kubernetes cluster, so I did not use the GHCR Docker packages for deployment. Instead, I just stripped away the extra services I had been working on and stuck with a simple system to deploy. For the final product, there is the frontend deployed on Netlify, which connects to the API on Heroku, which talks to the MongoDB Atlas database (in the cloud).
View the Source Code

If you wish to view all of the source code for this project, you can look through each repository here:

GitHub Organization: github.com/llama-as-a-service
All the GHCR Packages: github.com/orgs/llama-as-a-service/packages
Frontend: github.com/llama-as-a-service/frontend
Images API: github.com/llama-as-a-service/images-service
Authentication API: github.com/llama-as-a-service/auth-service
Gateway API: github.com/llama-as-a-service/gateway-service

If you want a repository with Node.js, Express, and Docker set up with GitHub Actions, check out the boilerplate repository here. If you are interested in more projects by me, you can check out the ManyShiba Twitter bot, or more on my website.

Follow my journey or connect with me here:
LinkedIn: /in/spencerlepine
Email: [email protected]
Portfolio: spencerlepine.com
GitHub: @spencerlepine
Twitter: @spencerlepine
0 notes
jvalentino2 · 2 years ago
Text
The following is the entire codebase and documentation for a web-based system I made for a non-profit. This system is used to schedule and manage appointments involving clothing distribution for school children.
0 notes
tsubakicraft · 2 years ago
Text
Playing with Generative AI: Play Is Research, Too!
Yesterday I was playing with Elyza, a Japanese language model, via a chatbot built with Gradio. Gradio is a handy web application development framework for this kind of experiment. So many people are enjoying AI research and entertainment, both image generation and text generation, that you can get something working even without fully understanding it. I'm still studying, so I don't know it well either; right now I'm just trying out what I've learned while playing. There's a bug in my program: when I ask a question, it starts a conversation by itself. I laugh watching it hold absurd conversations. I used to use Heroku or Netlify when developing (researching) web apps and showing them to people, but Heroku is no longer a free place to play, so I stopped using it. Deploying a production application would be one thing, but for things I'm just hacking together…
0 notes
the-harvest-field · 2 years ago
Text
Deploying a Node.js App on Heroku
A Step-by-Step Guide

Heroku has long been one of the favorite platforms for developers when it comes to deploying, managing, and scaling apps. Known for its simplicity, Heroku is especially friendly towards beginners, making it an excellent choice for deploying your first Node.js application. In this article, we’ll walk you through the process step by step.

What is Heroku?

Heroku is a cloud…
0 notes
why-the-heck-not · 2 years ago
Text
death to services that ask for ur payment info even tho they are free >:( then why in the good goddamn hell would u need that info then hhUH ???
51 notes · View notes
technologyblogofmohit · 10 days ago
Text
0 notes
all-hail-trash-prince · 9 months ago
Text
I am currently fistfighting selenium on the kitchen floor. The chrome driver said it was going to get milk when I pushed my code to heroku and I haven't seen it since. Send help
0 notes