#Error Handling
Explore tagged Tumblr posts
Text
How to Create Multi-Step Forms With Vanilla JavaScript and CSS
New Post has been published on https://thedigitalinsider.com/how-to-create-multi-step-forms-with-vanilla-javascript-and-css/
How to Create Multi-Step Forms With Vanilla JavaScript and CSS
Multi-step forms are a good choice when your form is large and has many controls. No one wants to scroll through a super-long form on a mobile device. By grouping controls on a screen-by-screen basis, we can improve the experience of filling out long, complex forms.
But when was the last time you developed a multi-step form? Does that even sound fun to you? There's so much to think about and so many moving pieces that need to be managed that I wouldn't blame you for resorting to a form library or even some type of form widget that handles it all for you.
But doing it by hand can be a good exercise and a great way to polish the basics. I'll show you how I built my first multi-step form, and I hope you'll not only see how approachable it can be but maybe even spot areas to make my work even better.
We'll walk through the structure together. We'll build a job application, which I think many of us can relate to these days. I'll scaffold the baseline HTML, CSS, and JavaScript first, and then we'll look at considerations for accessibility and validation.
I've created a GitHub repo for the final code if you want to refer to it along the way.
The structure of a multi-step form
Our job application form has four sections, the last of which is a summary view, where we show the user all their answers before they submit them. To achieve this, we divide the HTML into four sections, each identified with an ID, and add navigation at the bottom of the page. I'll give you that baseline HTML in the next section.
Navigating the user through the sections means we'll also include a visual indicator for what step they are at and how many steps are left. This indicator can be a simple dynamic text that updates according to the active step or a fancier progress-bar type of indicator. We'll do the former to keep things simple and focused on the multi-step nature of the form.
The structure and basic styles
We'll focus more on the logic, but I will provide the code snippets and a link to the complete code at the end.
Let's start by creating a folder to hold our pages. Then, create an index.html file and paste the following into it:
Open HTML
<form id="myForm">
  <section class="group-one" id="one">
    <div class="form-group">
      <div class="form-control">
        <label for="name">Name <span style="color: red;">*</span></label>
        <input type="text" id="name" name="name" placeholder="Enter your name">
      </div>
      <div class="form-control">
        <label for="idNum">ID number <span style="color: red;">*</span></label>
        <input type="number" id="idNum" name="idNum" placeholder="Enter your ID number">
      </div>
    </div>
    <div class="form-group">
      <div class="form-control">
        <label for="email">Email <span style="color: red;">*</span></label>
        <input type="email" id="email" name="email" placeholder="Enter your email">
      </div>
      <div class="form-control">
        <label for="birthdate">Date of Birth <span style="color: red;">*</span></label>
        <input type="date" id="birthdate" name="birthdate" max="2006-10-01" min="1924-01-01">
      </div>
    </div>
  </section>
  <section class="group-two" id="two">
    <div class="form-control">
      <label for="document">Upload CV <span style="color: red;">*</span></label>
      <input type="file" name="document" id="document">
    </div>
    <div class="form-control">
      <label for="department">Department <span style="color: red;">*</span></label>
      <select id="department" name="department">
        <option value="">Select a department</option>
        <option value="hr">Human Resources</option>
        <option value="it">Information Technology</option>
        <option value="finance">Finance</option>
      </select>
    </div>
  </section>
  <section class="group-three" id="three">
    <div class="form-control">
      <label for="skills">Skills (Optional)</label>
      <textarea id="skills" name="skills" rows="4" placeholder="Enter your skills"></textarea>
    </div>
    <div class="form-control">
      <input type="checkbox" name="terms" id="terms">
      <label for="terms">I agree to the terms and conditions <span style="color: red;">*</span></label>
    </div>
    <button id="btn" type="submit">Confirm and Submit</button>
  </section>
  <div class="arrows">
    <button type="button" id="navLeft">Previous</button>
    <span id="stepInfo"></span>
    <button type="button" id="navRight">Next</button>
  </div>
</form>
<script src="script.js"></script>
Looking at the code, you can see three sections and the navigation group. The sections contain form inputs and no native form validation. This is to give us better control of displaying the error messages because native form validation is only triggered when you click the submit button.
Next, create a styles.css file and paste this into it:
Open base styles
:root {
  --primary-color: #8c852a;
  --secondary-color: #858034;
}

body {
  font-family: sans-serif;
  line-height: 1.4;
  margin: 0 auto;
  padding: 20px;
  background-color: #f4f4f4;
  max-width: 600px;
}

h1 {
  text-align: center;
}

form {
  background: #fff;
  padding: 40px;
  border-radius: 5px;
  box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
  display: flex;
  flex-direction: column;
}

.form-group {
  display: flex;
  gap: 7%;
}

.form-group > div {
  width: 100%;
}

input:not([type="checkbox"]),
select,
textarea {
  width: 100%;
  padding: 8px;
  border: 1px solid #ddd;
  border-radius: 4px;
}

.form-control {
  margin-bottom: 15px;
}

button {
  display: block;
  width: 100%;
  padding: 10px;
  color: white;
  background-color: var(--primary-color);
  border: none;
  border-radius: 4px;
  cursor: pointer;
  font-size: 16px;
}

button:hover {
  background-color: var(--secondary-color);
}

.group-two,
.group-three {
  display: none;
}

.arrows {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-top: 10px;
}

#navLeft,
#navRight {
  width: fit-content;
}

@media screen and (max-width: 600px) {
  .form-group {
    flex-direction: column;
  }
}
Open up the HTML file in the browser, and you should get a two-column layout, complete with the current page indicator and navigation.
Adding functionality with vanilla JavaScript
Now, create a script.js file in the same directory as the HTML and CSS files and paste the following JavaScript into it:
Open base scripts
const stepInfo = document.getElementById("stepInfo");
const navLeft = document.getElementById("navLeft");
const navRight = document.getElementById("navRight");
const form = document.getElementById("myForm");

const formSteps = ["one", "two", "three"];
let currentStep = 0;

function updateStepVisibility() {
  formSteps.forEach((step) => {
    document.getElementById(step).style.display = "none";
  });
  document.getElementById(formSteps[currentStep]).style.display = "block";
  stepInfo.textContent = `Step ${currentStep + 1} of ${formSteps.length}`;
  navLeft.style.display = currentStep === 0 ? "none" : "block";
  navRight.style.display = currentStep === formSteps.length - 1 ? "none" : "block";
}

document.addEventListener("DOMContentLoaded", () => {
  navLeft.style.display = "none";
  updateStepVisibility();

  navRight.addEventListener("click", () => {
    if (currentStep < formSteps.length - 1) {
      currentStep++;
      updateStepVisibility();
    }
  });

  navLeft.addEventListener("click", () => {
    if (currentStep > 0) {
      currentStep--;
      updateStepVisibility();
    }
  });
});
This script defines a function that shows and hides sections based on the formSteps values, which correspond to the IDs of the form sections. It updates stepInfo with the current active section of the form. This dynamic text acts as a progress indicator for the user.
It then adds logic that waits for the page to load and attaches click events to the navigation buttons to enable cycling through the different form sections. If you refresh your page, you will see that the multi-step form works as expected.
Multi-step form navigation
Let's dive deeper into what the JavaScript code above is doing. In the updateStepVisibility() function, we first hide all the sections to have a clean slate:
formSteps.forEach((step) => {
  document.getElementById(step).style.display = "none";
});
Then, we show the currently active section:
document.getElementById(formSteps[currentStep]).style.display = "block";
Next, we update the text that indicates progress through the form:
stepInfo.textContent = `Step ${currentStep + 1} of ${formSteps.length}`;
Finally, we hide the Previous button if we are at the first step and hide the Next button if we are at the last section:
navLeft.style.display = currentStep === 0 ? "none" : "block";
navRight.style.display = currentStep === formSteps.length - 1 ? "none" : "block";
Let's look at what happens when the page loads. We first hide the Previous button as the form loads on the first section:
document.addEventListener("DOMContentLoaded", () => {
  navLeft.style.display = "none";
  updateStepVisibility();
  // ...the navigation click handlers shown below are attached here as well
});
Then we grab the Next button and add a click event that conditionally increments the current step count and then calls the updateStepVisibility() function, which then updates the new section to be displayed:
navRight.addEventListener("click", () => {
  if (currentStep < formSteps.length - 1) {
    currentStep++;
    updateStepVisibility();
  }
});
Finally, we grab the Previous button and do the same thing but in reverse. Here, we are conditionally decrementing the step count and calling the updateStepVisibility():
navLeft.addEventListener("click", () => {
  if (currentStep > 0) {
    currentStep--;
    updateStepVisibility();
  }
});
Handling errors
Have you ever spent a good 10+ minutes filling out a form only to submit it and get vague errors telling you to correct this and that? I prefer it when a form tells me right away that something's amiss so that I can correct it before I ever get to the Submit button. That's what we'll do in our form.
Our principle is to clearly indicate which controls have errors, give meaningful error messages, and clear errors as the user takes the necessary actions. Let's add some validation to our form. First, let's grab the necessary input elements and add this to the existing ones:
const nameInput = document.getElementById("name");
const idNumInput = document.getElementById("idNum");
const emailInput = document.getElementById("email");
const birthdateInput = document.getElementById("birthdate");
const documentInput = document.getElementById("document");
const departmentInput = document.getElementById("department");
const termsCheckbox = document.getElementById("terms");
const skillsInput = document.getElementById("skills");
Then, add a function to validate the steps:
Open validation script
function validateStep(step)
Here, we check if each required input has a value and if the email input holds a valid email. Then, we set the isValid boolean accordingly. We also call a showError() function, which we haven't defined yet.
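The body of validateStep() is collapsed above, so here is a minimal sketch of what it could look like based on that description; the exact checks and error messages in the linked repo may differ:
// Sketch: validate the controls that belong to the given step index
function validateStep(step) {
  let isValid = true;

  if (step === 0) {
    if (nameInput.value.trim() === "") {
      showError(nameInput, "Name is required");
      isValid = false;
    }
    if (idNumInput.value.trim() === "") {
      showError(idNumInput, "ID number is required");
      isValid = false;
    }
    if (emailInput.value.trim() === "" || !emailInput.validity.valid) {
      showError(emailInput, "Enter a valid email address");
      isValid = false;
    }
    if (birthdateInput.value === "") {
      showError(birthdateInput, "Date of birth is required");
      isValid = false;
    }
  } else if (step === 1) {
    if (!documentInput.files[0]) {
      showError(documentInput, "Please upload your CV");
      isValid = false;
    }
    if (departmentInput.value === "") {
      showError(departmentInput, "Please select a department");
      isValid = false;
    }
  } else if (step === 2) {
    if (!termsCheckbox.checked) {
      showError(termsCheckbox, "You must accept the terms and conditions");
      isValid = false;
    }
  }

  return isValid;
}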
Paste this code above the validateStep() function:
function showError(input, message) {
  const formControl = input.parentElement;
  const errorSpan = formControl.querySelector(".error-message");
  input.classList.add("error");
  errorSpan.textContent = message;
}
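Note that showError() looks up a .error-message element inside the input's parent .form-control, and the markup shown earlier does not include one. Assuming the final markup adds an empty span per control, each control would look something like this:
<div class="form-control">
  <label for="email">Email <span style="color: red;">*</span></label>
  <input type="email" id="email" name="email" placeholder="Enter your email">
  <!-- assumed: empty span that showError()/clearError() populate and clear -->
  <span class="error-message"></span>
</div>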
Now, add the following styles to the stylesheet:
Open validation styles
input:focus,
select:focus,
textarea:focus {
  outline: .5px solid var(--primary-color);
}

input.error,
select.error,
textarea.error {
  outline: .5px solid red;
}

.error-message {
  font-size: x-small;
  color: red;
  display: block;
  margin-top: 2px;
}

.arrows {
  color: var(--primary-color);
  font-size: 18px;
  font-weight: 900;
}

#navLeft,
#navRight {
  display: flex;
  align-items: center;
  gap: 10px;
}

#stepInfo {
  color: var(--primary-color);
}
If you refresh the form, you will see that the buttons do not take you to the next section until the inputs are considered valid.
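For that to happen, the Next button's click handler also has to run validateStep() before advancing; that wiring isn't shown in the snippets above. Assuming it lives in the existing click listener, it could look like this:
navRight.addEventListener("click", () => {
  // Only advance when the controls on the current step pass validation
  if (validateStep(currentStep) && currentStep < formSteps.length - 1) {
    currentStep++;
    updateStepVisibility();
  }
});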
Finally, we want to add real-time error handling so that the errors go away when the user starts inputting the correct information. Add this function below the validateStep() function:
Open real-time validation script
function setupRealtimeValidation() {
  nameInput.addEventListener("input", () => {
    if (nameInput.value.trim() !== "") clearError(nameInput);
  });
  idNumInput.addEventListener("input", () => {
    if (idNumInput.value.trim() !== "") clearError(idNumInput);
  });
  emailInput.addEventListener("input", () => {
    if (emailInput.validity.valid) clearError(emailInput);
  });
  birthdateInput.addEventListener("change", () => {
    if (birthdateInput.value !== "") clearError(birthdateInput);
  });
  documentInput.addEventListener("change", () => {
    if (documentInput.files[0]) clearError(documentInput);
  });
  departmentInput.addEventListener("change", () => {
    if (departmentInput.value !== "") clearError(departmentInput);
  });
  termsCheckbox.addEventListener("change", () => {
    if (termsCheckbox.checked) clearError(termsCheckbox);
  });
}
This function listens for input and change events and clears an input's error once its value is no longer invalid. Paste the clearError() function below the showError() one:
function clearError(input) {
  const formControl = input.parentElement;
  const errorSpan = formControl.querySelector(".error-message");
  input.classList.remove("error");
  errorSpan.textContent = "";
}
And now the errors clear when the user types in the correct value.
The multi-step form now handles errors gracefully. If you do decide to keep the errors till the end of the form, then at the very least, jump the user back to the erroring form control and show some indication of how many errors they need to fix.
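If you go that route, a rough sketch (not part of the article's code) could focus the first control flagged with the .error class and report how many remain:
// Hypothetical helper: jump to the first invalid control and summarize the error count
function focusFirstError() {
  const invalidInputs = form.querySelectorAll(".error");
  if (invalidInputs.length > 0) {
    invalidInputs[0].focus();
    alert(`${invalidInputs.length} field(s) still need to be fixed.`);
  }
}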
Handling form submission
In a multi-step form, it is valuable to show the user a summary of all their answers at the end before they submit and to offer them an option to edit their answers if necessary. The person can't see the previous steps without navigating backward, so showing a summary at the last step gives assurance and a chance to correct any mistakes.
Let's add a fourth section to the markup to hold this summary view and move the submit button within it. Paste this just below the third section in index.html:
Open HTML
<section class="group-four" id="four">
  <div class="summary-section">
    <p>Name: </p>
    <p id="name-val"></p>
    <button type="button" class="edit-btn" id="name-edit">
      <span>✎</span>
      <span>Edit</span>
    </button>
  </div>
  <div class="summary-section">
    <p>ID Number: </p>
    <p id="id-val"></p>
    <button type="button" class="edit-btn" id="id-edit">
      <span>✎</span>
      <span>Edit</span>
    </button>
  </div>
  <div class="summary-section">
    <p>Email: </p>
    <p id="email-val"></p>
    <button type="button" class="edit-btn" id="email-edit">
      <span>✎</span>
      <span>Edit</span>
    </button>
  </div>
  <div class="summary-section">
    <p>Date of Birth: </p>
    <p id="bd-val"></p>
    <button type="button" class="edit-btn" id="bd-edit">
      <span>✎</span>
      <span>Edit</span>
    </button>
  </div>
  <div class="summary-section">
    <p>CV/Resume: </p>
    <p id="cv-val"></p>
    <button type="button" class="edit-btn" id="cv-edit">
      <span>✎</span>
      <span>Edit</span>
    </button>
  </div>
  <div class="summary-section">
    <p>Department: </p>
    <p id="dept-val"></p>
    <button type="button" class="edit-btn" id="dept-edit">
      <span>✎</span>
      <span>Edit</span>
    </button>
  </div>
  <div class="summary-section">
    <p>Skills: </p>
    <p id="skills-val"></p>
    <button type="button" class="edit-btn" id="skills-edit">
      <span>✎</span>
      <span>Edit</span>
    </button>
  </div>
  <button id="btn" type="submit">Confirm and Submit</button>
</section>
Then update the formSteps array in your JavaScript to read:
const formSteps = ["one", "two", "three", "four"];
Finally, add the following classes to styles.css:
.summary-section {
  display: flex;
  align-items: center;
  gap: 10px;
}

.summary-section p:first-child {
  width: 30%;
  flex-shrink: 0;
  border-right: 1px solid var(--secondary-color);
}

.summary-section p:nth-child(2) {
  width: 45%;
  flex-shrink: 0;
  padding-left: 10px;
}

.edit-btn {
  width: 25%;
  margin-left: auto;
  background-color: transparent;
  color: var(--primary-color);
  border: .7px solid var(--primary-color);
  border-radius: 5px;
  padding: 5px;
}

.edit-btn:hover {
  border: 2px solid var(--primary-color);
  font-weight: bolder;
  background-color: transparent;
}
Now, add the following to the top of the script.js file where the other consts are:
const nameVal = document.getElementById("name-val");
const idVal = document.getElementById("id-val");
const emailVal = document.getElementById("email-val");
const bdVal = document.getElementById("bd-val");
const cvVal = document.getElementById("cv-val");
const deptVal = document.getElementById("dept-val");
const skillsVal = document.getElementById("skills-val");

const editButtons = {
  "name-edit": 0,
  "id-edit": 0,
  "email-edit": 0,
  "bd-edit": 0,
  "cv-edit": 1,
  "dept-edit": 1,
  "skills-edit": 2
};
Then add this function in script.js:
function updateSummaryValues() {
  nameVal.textContent = nameInput.value;
  idVal.textContent = idNumInput.value;
  emailVal.textContent = emailInput.value;
  bdVal.textContent = birthdateInput.value;

  const fileName = documentInput.files[0]?.name;
  if (fileName) {
    const extension = fileName.split(".").pop();
    const baseName = fileName.split(".")[0];
    const truncatedName = baseName.length > 10 ? baseName.substring(0, 10) + "..." : baseName;
    cvVal.textContent = `${truncatedName}.${extension}`;
  } else {
    cvVal.textContent = "No file selected";
  }

  deptVal.textContent = departmentInput.value;
  skillsVal.textContent = skillsInput.value || "No skills submitted";
}
This dynamically inserts the input values into the summary section of the form, truncates the file names, and offers a fallback text for the input that was not required.
Then update the updateStepVisibility() function to call the new function:
function updateStepVisibility() {
  formSteps.forEach((step) => {
    document.getElementById(step).style.display = "none";
  });
  document.getElementById(formSteps[currentStep]).style.display = "block";
  stepInfo.textContent = `Step ${currentStep + 1} of ${formSteps.length}`;
  if (currentStep === 3) {
    updateSummaryValues();
  }
  navLeft.style.display = currentStep === 0 ? "none" : "block";
  navRight.style.display = currentStep === formSteps.length - 1 ? "none" : "block";
}
Finally, add this to the DOMContentLoaded event listener:
Object.keys(editButtons).forEach((buttonId) => {
  const button = document.getElementById(buttonId);
  button.addEventListener("click", (e) => {
    currentStep = editButtons[buttonId];
    updateStepVisibility();
  });
});
Running the form, you should see that the summary section shows all the inputted values and allows the user to edit any of them before submitting the information.
And now, we can submit our form:
form.addEventListener("submit", (e) => {
  e.preventDefault();
  if (validateStep(2)) {
    alert("Form submitted successfully!");
    form.reset();
    currentStep = 0;
    updateStepVisibility();
  }
});
Our multi-step form now allows the user to edit and see all the information they provide before submitting it.
Accessibility tips
Making multi-step forms accessible starts with the basics: using semantic HTML. This is half the battle. It is closely followed by using appropriate form labels.
Other ways to make forms more accessible include giving enough room to elements that must be clicked on small screens and giving meaningful descriptions to the form navigation and progress indicators.
Offering feedback to the user is an important part of it; it's better to let the user dismiss feedback themselves than to auto-dismiss it after a certain amount of time. Paying attention to contrast and font choice is important, too, as they both affect how readable your form is.
Let's make the following adjustments to the markup for more technical accessibility (a markup sketch follows the list):
Add aria-required="true" to all inputs except the skills one. This lets screen readers know the fields are required without relying on native validation.
Add role="alert" to the error spans. This helps screen readers know to give it importance when the input is in an error state.
Add role="status" aria-live="polite" to the #stepInfo element. This helps screen readers understand that the step info reflects a changing status, and setting aria-live to polite means changes are announced without interrupting the user.
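Applied to the markup, those adjustments could look something like the sketch below; the error span and its id are assumptions about the final markup rather than code from the article. Giving each error span an id is also what the aria-describedby call in the updated showError() below relies on.
<div class="form-control">
  <label for="email">Email <span style="color: red;">*</span></label>
  <input type="email" id="email" name="email" placeholder="Enter your email" aria-required="true">
  <span class="error-message" id="email-error" role="alert"></span>
</div>

<span id="stepInfo" role="status" aria-live="polite"></span>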
In the script file, replace the showError() and clearError() functions with the following:
function showError(input, message) {
  const formControl = input.parentElement;
  const errorSpan = formControl.querySelector(".error-message");
  input.classList.add("error");
  input.setAttribute("aria-invalid", "true");
  input.setAttribute("aria-describedby", errorSpan.id);
  errorSpan.textContent = message;
}

function clearError(input) {
  const formControl = input.parentElement;
  const errorSpan = formControl.querySelector(".error-message");
  input.classList.remove("error");
  input.removeAttribute("aria-invalid");
  input.removeAttribute("aria-describedby");
  errorSpan.textContent = "";
}
Here, we programmatically add and remove attributes that explicitly tie the input with its error span and show that it is in an invalid state.
Finally, let's add focus on the first input of every section; add the following code to the end of the updateStepVisibility() function:
const currentStepElement = document.getElementById(formSteps[currentStep]);
const firstInput = currentStepElement.querySelector("input, select, textarea");
if (firstInput) firstInput.focus();
And with that, the multi-step form is much more accessible.
Conclusion
There we go, a four-part multi-step form for a job application! As I said at the top of this article, there's a lot to juggle, so much so that I wouldn't fault you for looking for an out-of-the-box solution.
But if you have to hand-roll a multi-step form, hopefully now you see it's not a death sentence. There's a happy path that gets you there, complete with navigation and validation, without turning away from good, accessible practices.
And this is just how I approached it! Again, I took this on as a personal challenge to see how far I could get, and I'm pretty happy with it. But I'd love to know if you see additional opportunities to make this even more mindful of the user experience and considerate of accessibility.
References
Here are some relevant links I referred to when writing this article:
How to Structure a Web Form (MDN)
Multi-page Forms (W3C.org)
Create accessible forms (A11y Project)
#:not#Accessibility#ADD#aria#Article#Articles#attention#attributes#background#border-radius#box#box-shadow#browser#buttons#challenge#change#classes#code#Color#content#CSS#CV#dept#direction#display#email#error handling#event#Events#Exercise
2 notes
Ā·
View notes
Text
You guys have no clue how much I love Rufus and its absurd levels of error checking.
3 notes
Ā·
View notes
Text
Best Practices for Dual Write Integration: Mapping, Keys & Error Handling
Dual Write integration is a crucial feature for seamless data synchronization between Dynamics 365 Finance & Operations (F&O) and Dataverse. To ensure a smooth integration, businesses must follow best practices related to mapping, integration keys, legal entities, initial sync, solution updates, and error handling.
Key Best Practices for Dual Write Integration:
Mapping:
- Utilize existing/standard maps and customize only as needed.
- Use only required fields to optimize performance.
- Maintain proper versioning to track changes effectively.
Integration Keys:
- Ensure integration keys maintain record uniqueness.
- Always verify keys in both systems before integration.
- Create custom keys if necessary for specific requirements.
Legal Entities & Initial Sync-Up:
- Maintain only necessary legal entities to avoid excess data storage.
- Initial sync-up helps migrate existing data but should be carefully planned.
- Check and clean data discrepancies before running an initial sync-up.
- Disable required maps while cleaning data to prevent unintended updates.
Updating the Map:
- Stop the map before making updates.
- Save and refresh the map after changes.
- Maintain version control for different use cases.
Solution Updates & Error Handling:
- Always ensure Dual Write solutions are up to date.
- Analyze error logs and trace logs for troubleshooting integration errors.
- Enable error log tables in F&O for additional insights.
- If issues persist, try resetting the integration link.
0 notes
Text
Anyone around here happen to know how to fix Tumblr's Discord webhook integration saying "This is not a valid URL" when pasting the webhook URL, I've searched online for those keywords and gotten no hits. After squinting at the network tab it looks like the API is hitting some sort of internal server error and the frontend just doesn't recognize it, but I've tested the webhook url independently and it seems to be working fine.
Could be activity related or something? Docs don't mention any sort of limits.
#tech support#error handling#did you know if you look in the dev console they link their job board#remember that guy who got hired at a company just to fix one bug and then left#rewrite it in rust please
0 notes
Text
Mastering Advanced Error Handling Techniques in Node.js Applications
Introduction: Error handling is a critical aspect of building robust Node.js applications. While Node.js provides basic error handling mechanisms, complex applications require more sophisticated techniques to ensure errors are caught, logged, and handled gracefully. Effective error handling not only improves the reliability of your application but also enhances the user experience by providing…
#asynchronous programming#backend development#Error Handling#Exception Handling#JavaScript#logging#Node.js
0 notes
Text
Handling Vericlock Webhooks: Overcoming Challenges and Optimizing Costs
Discover how we solved Vericlock webhook integration at Prestige Lock and Door. Addressing data format issues, we transitioned from costly platforms like Zapier to an efficient, self-managed solution, optimizing integration and reducing costs.
Integrating third-party services can sometimes feel like navigating a minefield. Recently, I encountered a tricky situation while integrating Vericlock webhooks at Prestige Lock and Door. Vericlock's time-tracking service sent us JSON data containing nested and malformed fields, causing our webhook processing to fail. The primary issue was the unexpected data format and the lack of useful…
#cost optimization#data sanitization#error handling#JSON parsing#Node.js#Prestige Lock and Door#third-party service integration#Vericlock integration#webhook processing
0 notes
Text
Quick Bet is essential for quickly placing bets on mobile devices. Strong UX handles price changes, suspensions, and signal drops without delays. This includes gracefully handling errors as well as red cards, penalties, and goals that can suspend in-play markets and lead to bet cancellations. Learn how prototyping and a hands-on design approach can solve user problems.
#ux#ui#gambling#mobile#branding#UX design#user experience#betting#live betting#mobile apps#sports betting#horse racing#quick bet functionality#In-play betting#seamless experience#error handling#graceful UX#Low-fidelity prototyping#user interface (UI)#design thinking#prototyping#mobile optimisation#brand
0 notes
Text
https://centralfinancehelp.com/SAP-CFIN/about-central-finance/error-handling/
On CentralFinanceHelp, you can find out more about how errors are handled in Central Finance. Learn to recognize, address, and fix issues in the SAP Central Finance system.
#Error Handling#SAP Central Finance#Central Finance Error Handling#SAP Central Finance system#CentralFinanceHelp
0 notes
Text
If I ever make a spelling/grammar mistake on a sexy post (again), and nobody tells me (again), and I only spot it days or even weeks or heaven forbid months after it's already been reblogged a bunch of times (again), that calls for my deactivation as punishment for all yee who hath betrayed me (aka you) and I will wallow in embarrassment (again) for the years to come
To the dungeon with you all
#nsft writing#My ocd cannot handle this#What do you MEAN the original version with that disgusting error is still existent in somebody's reblog#queer#Don't even get me started on the social anxiety of it all#lesbian nsft#wlw#lesbian#this is horrendous#sapphic
122 notes
Ā·
View notes
Text
Asynchronous LLM API Calls in Python: A Comprehensive Guide
New Post has been published on https://thedigitalinsider.com/asynchronous-llm-api-calls-in-python-a-comprehensive-guide/
Asynchronous LLM API Calls in Python: A Comprehensive Guide
As developers and data scientists, we often find ourselves needing to interact with powerful large language models (LLMs) through APIs. However, as our applications grow in complexity and scale, the need for efficient and performant API interactions becomes crucial. This is where asynchronous programming shines, allowing us to maximize throughput and minimize latency when working with LLM APIs.
In this comprehensive guide, we'll explore the world of asynchronous LLM API calls in Python. We'll cover everything from the basics of asynchronous programming to advanced techniques for handling complex workflows. By the end of this article, you'll have a solid understanding of how to leverage asynchronous programming to supercharge your LLM-powered applications.
Before we dive into the specifics of async LLM API calls, let's establish a solid foundation in asynchronous programming concepts.
Asynchronous programming allows multiple operations to be executed concurrently without blocking the main thread of execution. In Python, this is primarily achieved through the asyncio module, which provides a framework for writing concurrent code using coroutines, event loops, and futures.
Key concepts:
Coroutines: Functions defined with async def that can be paused and resumed.
Event Loop: The central execution mechanism that manages and runs asynchronous tasks.
Awaitables: Objects that can be used with the await keyword (coroutines, tasks, futures).
Here's a simple example to illustrate these concepts:
import asyncio

async def greet(name):
    await asyncio.sleep(1)  # Simulate an I/O operation
    print(f"Hello, {name}!")

async def main():
    await asyncio.gather(
        greet("Alice"),
        greet("Bob"),
        greet("Charlie")
    )

asyncio.run(main())
In this example, we define an asynchronous function greet that simulates an I/O operation with asyncio.sleep(). The main function uses asyncio.gather() to run multiple greetings concurrently. Despite the sleep delay, all three greetings will be printed after approximately 1 second, demonstrating the power of asynchronous execution.
The Need for Async in LLM API Calls
When working with LLM APIs, we often encounter scenarios where we need to make multiple API calls, either in sequence or parallel. Traditional synchronous code can lead to significant performance bottlenecks, especially when dealing with high-latency operations like network requests to LLM services.
Consider a scenario where we need to generate summaries for 100 different articles using an LLM API. With a synchronous approach, each API call would block until it receives a response, potentially taking several minutes to complete all requests. An asynchronous approach, on the other hand, allows us to initiate multiple API calls concurrently, dramatically reducing the overall execution time.
Setting Up Your Environment
To get started with async LLM API calls, you'll need to set up your Python environment with the necessary libraries. Here's what you'll need:
Python 3.7 or higher (for native asyncio support)
aiohttp: An asynchronous HTTP client library
openai: The official OpenAI Python client (if you're using OpenAI's GPT models)
langchain: A framework for building applications with LLMs (optional, but recommended for complex workflows)
You can install these dependencies using pip:
pip install aiohttp openai langchain
Basic Async LLM API Calls with asyncio and aiohttp
Let's start by making a simple asynchronous call to an LLM API using aiohttp. We'll use OpenAI's GPT-3.5 API as an example, but the concepts apply to other LLM APIs as well.
import asyncio
import aiohttp
from openai import AsyncOpenAI

async def generate_text(prompt, client):
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

async def main():
    prompts = [
        "Explain quantum computing in simple terms.",
        "Write a haiku about artificial intelligence.",
        "Describe the process of photosynthesis."
    ]
    async with AsyncOpenAI() as client:
        tasks = [generate_text(prompt, client) for prompt in prompts]
        results = await asyncio.gather(*tasks)
    for prompt, result in zip(prompts, results):
        print(f"Prompt: {prompt}\nResponse: {result}\n")

asyncio.run(main())
In this example, we define an asynchronous function generate_text that makes a call to the OpenAI API using the AsyncOpenAI client. The main function creates multiple tasks for different prompts and uses asyncio.gather() to run them concurrently.
This approach allows us to send multiple requests to the LLM API simultaneously, significantly reducing the total time required to process all prompts.
Advanced Techniques: Batching and Concurrency Control
While the previous example demonstrates the basics of async LLM API calls, real-world applications often require more sophisticated approaches. Let's explore two important techniques: batching requests and controlling concurrency.
Batching Requests: When dealing with a large number of prompts, it's often more efficient to batch them into groups rather than sending individual requests for each prompt. This reduces the overhead of multiple API calls and can lead to better performance.
import asyncio
from openai import AsyncOpenAI

async def process_batch(batch, client):
    responses = await asyncio.gather(*[
        client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        for prompt in batch
    ])
    return [response.choices[0].message.content for response in responses]

async def main():
    prompts = [f"Tell me a fact about number {i}" for i in range(100)]
    batch_size = 10

    async with AsyncOpenAI() as client:
        results = []
        for i in range(0, len(prompts), batch_size):
            batch = prompts[i:i+batch_size]
            batch_results = await process_batch(batch, client)
            results.extend(batch_results)

    for prompt, result in zip(prompts, results):
        print(f"Prompt: {prompt}\nResponse: {result}\n")

asyncio.run(main())
Concurrency Control: While asynchronous programming allows for concurrent execution, it's important to control the level of concurrency to avoid overwhelming the API server or exceeding rate limits. We can use asyncio.Semaphore for this purpose.
import asyncio
from openai import AsyncOpenAI

async def generate_text(prompt, client, semaphore):
    async with semaphore:
        response = await client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content

async def main():
    prompts = [f"Tell me a fact about number {i}" for i in range(100)]
    max_concurrent_requests = 5
    semaphore = asyncio.Semaphore(max_concurrent_requests)

    async with AsyncOpenAI() as client:
        tasks = [generate_text(prompt, client, semaphore) for prompt in prompts]
        results = await asyncio.gather(*tasks)

    for prompt, result in zip(prompts, results):
        print(f"Prompt: {prompt}\nResponse: {result}\n")

asyncio.run(main())
In this example, we use a semaphore to limit the number of concurrent requests to 5, ensuring we don't overwhelm the API server.
Error Handling and Retries in Async LLM Calls
When working with external APIs, it's crucial to implement robust error handling and retry mechanisms. Let's enhance our code to handle common errors and implement exponential backoff for retries.
import asyncio
import random
from openai import AsyncOpenAI
from tenacity import retry, stop_after_attempt, wait_exponential

class APIError(Exception):
    pass

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
async def generate_text_with_retry(prompt, client):
    try:
        response = await client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"Error occurred: {e}")
        raise APIError("Failed to generate text")

async def process_prompt(prompt, client, semaphore):
    async with semaphore:
        try:
            result = await generate_text_with_retry(prompt, client)
            return prompt, result
        except APIError:
            return prompt, "Failed to generate response after multiple attempts."

async def main():
    prompts = [f"Tell me a fact about number {i}" for i in range(20)]
    max_concurrent_requests = 5
    semaphore = asyncio.Semaphore(max_concurrent_requests)

    async with AsyncOpenAI() as client:
        tasks = [process_prompt(prompt, client, semaphore) for prompt in prompts]
        results = await asyncio.gather(*tasks)

    for prompt, result in results:
        print(f"Prompt: {prompt}\nResponse: {result}\n")

asyncio.run(main())
This enhanced version includes:
A custom APIError exception for API-related errors.
A generate_text_with_retry function decorated with @retry from the tenacity library, implementing exponential backoff.
Error handling in the process_prompt function to catch and report failures.
Optimizing Performance: Streaming Responses
For long-form content generation, streaming responses can significantly improve the perceived performance of your application. Instead of waiting for the entire response, you can process and display chunks of text as they become available.
import asyncio
from openai import AsyncOpenAI

async def stream_text(prompt, client):
    stream = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        stream=True
    )
    full_response = ""
    async for chunk in stream:
        if chunk.choices[0].delta.content is not None:
            content = chunk.choices[0].delta.content
            full_response += content
            print(content, end='', flush=True)
    print("\n")
    return full_response

async def main():
    prompt = "Write a short story about a time-traveling scientist."
    async with AsyncOpenAI() as client:
        result = await stream_text(prompt, client)
    print(f"Full response:\n{result}")

asyncio.run(main())
This example demonstrates how to stream the response from the API, printing each chunk as it arrives. This approach is particularly useful for chat applications or any scenario where you want to provide real-time feedback to the user.
Building Async Workflows with LangChain
For more complex LLM-powered applications, the LangChain framework provides a high-level abstraction that simplifies the process of chaining multiple LLM calls and integrating other tools. Let's look at an example of using LangChain with async capabilities:
This example shows how LangChain can be used to create more complex workflows with streaming and asynchronous execution. The AsyncCallbackManager and StreamingStdOutCallbackHandler enable real-time streaming of the generated content.
import asyncio
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

async def generate_story(topic):
    llm = OpenAI(
        temperature=0.7,
        streaming=True,
        callback_manager=AsyncCallbackManager([StreamingStdOutCallbackHandler()])
    )
    prompt = PromptTemplate(
        input_variables=["topic"],
        template="Write a short story about {topic}."
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    return await chain.arun(topic=topic)

async def main():
    topics = ["a magical forest", "a futuristic city", "an underwater civilization"]
    tasks = [generate_story(topic) for topic in topics]
    stories = await asyncio.gather(*tasks)
    for topic, story in zip(topics, stories):
        print(f"\nTopic: {topic}\nStory: {story}\n{'='*50}\n")

asyncio.run(main())
Serving Async LLM Applications with FastAPI
To make your async LLM application available as a web service, FastAPI is a great choice due to its native support for asynchronous operations. Here's an example of how to create a simple API endpoint for text generation:
import asyncio

from fastapi import FastAPI, BackgroundTasks
from pydantic import BaseModel
from openai import AsyncOpenAI

app = FastAPI()
client = AsyncOpenAI()

class GenerationRequest(BaseModel):
    prompt: str

class GenerationResponse(BaseModel):
    generated_text: str

@app.post("/generate", response_model=GenerationResponse)
async def generate_text(request: GenerationRequest, background_tasks: BackgroundTasks):
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": request.prompt}]
    )
    generated_text = response.choices[0].message.content
    # Simulate some post-processing in the background
    background_tasks.add_task(log_generation, request.prompt, generated_text)
    return GenerationResponse(generated_text=generated_text)

async def log_generation(prompt: str, generated_text: str):
    # Simulate logging or additional processing
    await asyncio.sleep(2)
    print(f"Logged: Prompt '{prompt}' generated text of length {len(generated_text)}")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
This FastAPI application creates an endpoint /generate that accepts a prompt and returns generated text. It also demonstrates how to use background tasks for additional processing without blocking the response.
Best Practices and Common Pitfalls
As you work with async LLM APIs, keep these best practices in mind:
Use connection pooling: When making multiple requests, reuse connections to reduce overhead (see the sketch after this list).
Implement proper error handling: Always account for network issues, API errors, and unexpected responses.
Respect rate limits: Use semaphores or other concurrency control mechanisms to avoid overwhelming the API.
Monitor and log: Implement comprehensive logging to track performance and identify issues.
Use streaming for long-form content: It improves user experience and allows for early processing of partial results.
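To illustrate the first point, here is a minimal sketch of connection reuse with aiohttp: one ClientSession (backed by a single TCPConnector) is shared across all requests instead of opening a new session per call. The endpoint URL and payload shape are placeholders, not a real API:
import asyncio
import aiohttp

API_URL = "https://api.example.com/v1/generate"  # placeholder endpoint

async def fetch(session, payload):
    # Every call reuses the session's pooled connections
    async with session.post(API_URL, json=payload) as resp:
        return await resp.json()

async def main():
    # One shared session; cap the pool at 10 concurrent connections
    connector = aiohttp.TCPConnector(limit=10)
    async with aiohttp.ClientSession(connector=connector) as session:
        payloads = [{"prompt": f"Fact about number {i}"} for i in range(20)]
        results = await asyncio.gather(*(fetch(session, p) for p in payloads))
    print(f"Received {len(results)} responses")

asyncio.run(main())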
#API#APIs#app#applications#approach#Article#Articles#artificial#Artificial Intelligence#asynchronous programming#asyncio#background#Building#code#col#complexity#comprehensive#computing#Concurrency#concurrency control#content#Delay#developers#display#endpoint#Environment#error handling#event#FastAPI#forest
0 notes
Text
Been working on some first run Chou sprites for funsies, here's a handful of the ones I've done so far
#keese draws#isat#new game+#comic siffrin#comicfrin#isat siffrin#I mainly just wanted to figure out how chou may express differently than normal siffrin#they are alas far less kitty cat </3#but yeah even in their first run they were generally much lower energy than siffrin mostly due to stress#they are less used to traveling for long periods of time along with the whole savior thing going on#and while they start off generally handling their timeloop situation well enough even by the time they get the second orb they're Tired man#and after the incident where they lost their eye they're So ready for this to be over#and they haven't even hit the wall of their final boss fight yet it took them a long while to get through that for the first time#the main thing is that their party spended most of their journey far more underleveled#for most of the main fights they were able to scrape through since the crew did have some disproportionally powerful abilities earlier than#the isat party would have similar skills (like their first recruit is a full blown shield specialist)#but as they got further and further and the boss gimmicks got more specific and punishing it would get Much harder#in the fight where chou lost their eye they were stuck for Months the first run trying to find a way to squeak through the fight#in game terms basically trying to abuse the consistent rng seed to find a chain of actions that would let them win#long story short the boss basically acts as a harsh punish for their lack of a proper healer#it also for realsies kills people like. during the battle. so the margin for error is slim to none.#basically once it takes out someone it'll continue to target them and it can't have its intent redirected#and it's resistant to all craft types until it's about to and failed to perform the final blow attack#without going too much into the party comp and how they generally fight just know that even if they weren't underleveled this would low key#hard counter them since their whole game plan is about redirecting damage and simply killing the enemy before it kills them#so during this first run this fight was actual hell for chou to figure out and poor bastard hadn't even hit their worst wall yet#anyways that was a whole ramble that had nothing to do with the actual post I'm making. whoops.#point is they've earned the right to be low energy and stressed as hell they were Not built to carry a rpg party
62 notes
Ā·
View notes
Text
decided to make one for Watsuki too with extra info and art that aren't worth posting on their own. i also finally figured out what clothes they wear cuz that took so many tries.
#keyframes vn#keyframes fanart#keyframes oc#keyframes mc#deja lamarre#perseus tozaki#jamie porter#elio kealoha#the more i draw her the more attractive he becomes and idk if its a good thing or bad thing#i dont think i can handle it once i start drawing anything romantic with them and the boys lol#id like to yap about watsuki more since its easier but i dont want to make this post long#plus some stuff i want to make a comic of#it also took me a long time to pick out info i wanted to share because i dont want to retcon myself#edit: edit for the spelling errors in pics#vibi's kf mcs#watsuki hamamoto
143 notes
Ā·
View notes
Text
Node.js Security: Protecting Your Applications
Learn how to secure your Node.js applications with best practices and strategies. This guide covers common vulnerabilities, dependency management, authentication, and more.
Introduction: As Node.js continues to grow in popularity for building web applications and services, ensuring the security of these applications is paramount. With its asynchronous, event-driven architecture, Node.js introduces unique security challenges and opportunities. This guide provides an in-depth look at the best practices and strategies for securing Node.js applications, covering common…
View On WordPress
#authentication#authorization#dependency management#Error Handling#input validation#logging#Node.js security#secure configuration#secure web applications#web development
0 notes
Text
On CentralFinanceHelp, you can find out more about how error handling works in Central Finance. Learn to recognize, address, and fix issues in the SAP Central Finance system.
0 notes
Text
At first it was kind of a joke, but the more I think about this ship, the more I genuinely start to like it.
I apologize to all the Killer-Blogs that are following me for pushing your boy into a toxic relationship, but this duo is so fucking peak ( ā”Ģ_ā”Ģ)į¤
Credits:
Error belongs to LoverOfPiggies
Killer belongs to Rahafwabas
#undertale#utmv#error sans#killer sans#anti void#error x killer#sansshipping#clip studio paint#artists on tumblr#weirdghostcat#this drawing goes so hard my program crashed at least 5 times#it was worth it#Despite Error having god-like powers I think Killer is one of the rare people that could actually handle him#Don't know if there is or was a canon example of how strong Killer truly is#I imagine that Killer is stupidly strong when he truly wants to#Even if he didn't match the same strength as his opponent#His intelligence will balance it easily#Maybe I think to highly of him but that's how I view Killer
27 notes
Ā·
View notes
Note
An Error and Nightmare bitty? Please? ;w;
completely different energies
#hashiart#answered asks#undertale au#bittybones#bitty bones#nightmare sans#nightmare!sans#error sans#error!sans#bitty error#bitty nightmare#bitty error is full of silly string and the rage of a thousand suns#handle with care#enjoy the nightmare loaf#no one can convince me that he doesn't just loaf with his tentacles out like a blanket made of himself
900 notes
Ā·
View notes