#Python.Selenium
Cutting costs by hardening automated test cases?

As is well known, the maintenance of test cases is a very big problem, especially in test automation: it costs time, even though the actual change is usually small. In many cases the root cause is hard-coded values inside the test cases, which sooner or later makes such a test case fail. And in most cases these test cases then have to be reworked, even though perhaps only a single pixel has changed. That is completely the wrong approach: the goal must always be to harden a test case, not the endless reworking of test cases, which in the long run causes more costs than you, your managers, and your team would like. https://www.dev-crowd.com/2022/08/09/wie-mache-ich-automatisierte-testfaelle-effizienter/ Back in 2022 (see the link above) I already gave some thought to how automated test cases could be made more effective. If you additionally consider the possibility of using an AI (which is genuinely worth thinking about), a new approach emerges that I would like to describe here.

Why does this save costs?

- Fewer false positives: stabilized tests produce fewer false alarms. False positives can cause teams to invest unnecessary time investigating problems that do not actually exist.
- Time savings: when test cases are stable and fail less often, developers and QA teams spend less time investigating the causes of test failures.
- Higher reliability: stable tests increase confidence in the test results. This can lead to software being released faster, because less time is spent on manual verification.
- Less maintenance effort: stable automated test cases need to be reworked less often. That reduces the total cost of maintaining the test framework.
- Better use of resources: by reducing false alarms and the manual checks they trigger, resources (e.g. test environments, hardware) can be used more efficiently.
- Early defect detection: well-scoped automated tests can uncover defects early in the development cycle, which is often cheaper than fixing defects in later phases or after the product release.
- Consistent test execution: automated tests run the same way every time, which guarantees greater consistency than manual testing.
- Scalability: you can run more tests in less time, especially if you are able to run tests in parallel or in distributed environments.
- Documentation: automated test cases can serve as a form of documentation for the system's behavior. They show clearly what is expected of the software.
- Real-time feedback: with test automation integrated into a CI/CD pipeline, developers can get immediate feedback on the state of their code.
- Higher test coverage: automated tests can achieve higher code coverage, especially when they are used regularly and comprehensively.
- More frequent releases: with stable tests, organizations can ship software releases more confidently and more frequently.
- Better team morale: when QA teams spend less time on repetitive manual testing and on investigating false alarms, you can focus on more complex and more valuable tasks.
- Fewer human errors: during manual testing, human mistakes can occur, for example through oversight or inconsistent test execution. Automated tests reduce this risk.
- Faster time to market: the ability to test and release software faster and at higher quality can accelerate market entry.
- Regression checking: automated tests make it easier to run regression tests, ensuring that new changes do not break existing functionality.

How do I implement this?

- Good test design practices: this includes using Page Object models, proper test data management, and isolating tests from one another (a minimal Page Object sketch appears at the end of this post).
- Waits and synchronization: UI tests should use dynamic waits to make sure elements are loaded before actions are performed.
- Dealing with flaky tests: tests that fail intermittently should be identified and fixed, or temporarily disabled.
- Regular review and updating: tests should be reviewed regularly to make sure they are still relevant and work correctly.
- Test granularity: write small, focused tests that cover exactly one piece of functionality or one feature. This makes debugging easier, because a failing test quickly points to a specific cause.
- Idempotence: make sure tests are idempotent, i.e. they can be run repeatedly under the same conditions and deliver the same result every time.
- Test environments: use dedicated test environments that are as similar to production as possible, so that tests run in a consistent and controlled environment.
- Logging and reporting: good logging and a good reporting system help diagnose problems faster. Detailed information should be available, especially for failures.
- Code review for tests: just like production code, test code should be reviewed regularly. This ensures the tests follow best practices and are effective.
- Test prioritization: not all tests are equally important. Determine which tests are critical and which are less so. This helps with decision making, for example when you need to run a fast regression suite.
- Cross-browser and cross-platform testing: if you test web applications, make sure your tests work across different browsers and platforms. Tools like Selenium Grid or services like BrowserStack and Sauce Labs can help.
- Data-driven tests: instead of writing a separate test for every data combination, you can write one test that is driven by a series of data points. Frameworks like pytest support data-driven testing (see the pytest sketch at the end of this post).
- Error handling: consider how your test framework deals with unexpected errors such as timeouts, crashes, or external dependencies, and implement appropriate error handling for such situations.
- Integration with CI/CD: integrate your tests into your Continuous Integration/Continuous Deployment pipeline, so that tests run automatically on every code check-in or release.
- Monitoring and alerting: make sure you are notified when tests fail, especially in CI/CD pipelines or nightly test runs.
- Test coverage: use code coverage tools to make sure your code is adequately covered by tests.
- Isolation from external services: if your code uses external services or APIs, replace those services with mocks or stubs during testing, so that your tests are not influenced by external factors.
- Modular test code: avoid redundant code by extracting shared procedures and functions into reusable modules or helper functions.
- State management: make sure you reset state before and after tests, to avoid side effects and guarantee the independence of the tests.
- Testing edge and exception cases: in addition to the standard test cases, also consider edge and exception cases to verify the robustness of your code.
- Performance and load tests: besides functional tests, also test the performance and scalability of your application under different load conditions.
- Security testing: run security tests to identify potential weaknesses or vulnerabilities in your application.
- Visual testing: for UI-based applications, visual testing can ensure that the user interface looks as expected, especially after changes or updates.
- Managing test configurations: make sure you can easily switch between different test configurations (e.g. different environments, databases, or endpoints).
- Feedback loops: use the feedback from the tests to continuously improve the development process.
- Training and knowledge transfer: make sure the team is trained regularly and knows the best practices for test automation and the associated tools.
- Test drivers and test stubs: in a TDD (Test Driven Development) approach, test drivers (to simulate missing parts) and test stubs (to simplify complex parts) can be used to ease the testing process.
- Code and test metrics: use metrics to monitor the health of your tests and your code over time.

How do I harden Selenium and Python based test cases?

Wait for elements:

- Use explicit waits to wait for elements to appear instead of using fixed delays.
- Use wait conditions such as WebDriverWait combined with expected conditions (ExpectedConditions) to wait for the desired element before interacting with it.

```python
from selenium.webdriver.common.by import By  # needed for By.ID; missing in the original snippet
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Example: waiting for an element with WebDriverWait instead of a fixed sleep
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "element_id"))
)
```

Error handling:

- Implement robust error handling so you are prepared for unexpected exceptions.
- Use try-except blocks to catch exceptions and react accordingly.
```python
from selenium.common.exceptions import NoSuchElementException, TimeoutException

try:
    # Perform your actions here
    ...
except NoSuchElementException:
    # Handle the case where the element is not found
    ...
except TimeoutException:
    # Handle the case where a timeout occurs
    ...
```

Log your information:

- Add log output so you can trace the test run and error messages more easily.
- Use Python's logging module or another suitable logging mechanism.

```python
import logging

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Log information
logger.info("Action executed successfully")
logger.error("An error occurred")
```

Use headless mode:

- Run tests in headless mode to improve performance and increase stability, especially on servers without a graphical user interface.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # newer Selenium/Chrome; older versions used options.headless = True
driver = webdriver.Chrome(options=options)
```

Test data management:

- Separate test data from test cases to improve maintainability.
- Use external files or databases to store and access test data.

Parallelization:

- Use tools like Selenium Grid or cloud-based test platforms to run tests in parallel on different browsers and platforms, which can speed up execution and increase resilience.

Monitoring and reporting:

- Monitor your test runs and create detailed reports on the status and results of your tests.
- Use test frameworks and reporting tools such as pytest and Allure Report.
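The list above recommends Page Object models as the first hardening practice. Here is a minimal sketch of the idea; the URL, the locator IDs, and the LoginPage class are illustrative assumptions for a hypothetical login page, not taken from the article:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class LoginPage:
    """Page Object: all locators live in one place, so a UI change means one edit."""

    URL = "https://example.com/login"   # hypothetical URL
    USERNAME = (By.ID, "username")      # hypothetical locators
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(driver, 10)

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, user, password):
        # Wait dynamically for the form instead of sleeping
        self.wait.until(EC.presence_of_element_located(self.USERNAME)).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


# Usage: the test talks only to the page object, never to raw locators
driver = webdriver.Chrome()
LoginPage(driver).open().login("alice", "secret")
driver.quit()
```

If the UI changes, only the locator attributes at the top of the class need editing; every test that drives LoginPage stays untouched, which is exactly the hardening effect described above.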
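Likewise, a minimal sketch of the data-driven style using pytest's built-in parametrization; the is_valid_email function is just a stand-in for whatever behavior your test actually drives:

```python
import re

import pytest


def is_valid_email(address: str) -> bool:
    # Stand-in for the real system under test
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None


@pytest.mark.parametrize("address, expected", [
    ("alice@example.com", True),
    ("bob@invalid", False),
    ("", False),
])
def test_email_validation(address, expected):
    # One test body, driven by a series of data points
    assert is_valid_email(address) == expected
```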
XPath for text after element

After you have installed Selenium and checked out navigating links using the get method, you might want to play more with Selenium Python. The Selenium Python bindings provide a simple API to write functional/acceptance tests using Selenium WebDriver. Selenium's Python module is built to perform automated testing with Python.
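As a baseline for the locator discussion below, a minimal Selenium Python session might look like this (the URL and the field names are placeholders, not from a specific application):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL

# ID and name are the safest locators when the application provides them
email_field = driver.find_element(By.ID, "email")
login_button = driver.find_element(By.NAME, "login")

driver.quit()
```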
In order to perform any operation on an element, like clicking it or typing into it, we first need to locate that element. It is very simple to locate elements if the HTML DOM has an 'id' or 'name' – they are the safest locators to use. As we know, it is always better to use ID and Name to locate elements, which will work reliably; you don't need to search for any other locator if an ID or unique name is present in your application. But applications designed with modern JavaScript frameworks like Angular, React, and Vue.js often have no properly identifiable web elements in the DOM. In many cases like these, we depend on locating elements by CSS or by XPath. It is always very important to make test scripts robust with reliable locators that do not break until real changes are made. Though there are browser plug-ins that generate XPath or CSS selectors, they are not very useful in real-world applications.

Let us look at XPath examples that use ID and Name effectively, with combinations:

1. With ID: //input[@id='email'], or we can also use //*[@id='email']
2. With Name: //input[@name='email'], or we can also use //*[@name='email']

In CSS we can use the following:

1. With ID: css=input#email or css=#email
2. With Name: css=input[name='email'] or css=[name='email']

All of the above syntax is simple – we can use such elements directly through the id or name locators.

Identify an element using multiple attributes

Using XPath or CSS we can combine two locators whenever required; let's see how we can achieve that. Using XPath: //input[@id='email' or @name='email'] – here it will first check for the id and then check for the second attribute. Based on index we can also define the path, e.g. (//input)[2], and we can also locate by the value attribute, e.g. //input[@value='Phone']. We can also define an XPath using the 'style' attribute, e.g. //input[contains(@style, 'transparent')].

How to access direct child elements using XPath

A child in XPath is represented with a "/". Example XPath for child elements: //div/a

How to access child elements using CSS selectors

In CSS the child is indicated with a ">". A link directly inside a div tag can be identified as div > a. And sometimes, if the element is not a direct child, it may be nested inside another element. In such cases we can use two slashes in XPath to match any subnode: //div//a. In CSS this is very simple, using whitespace – an example for a child or sub-child is div a.

How to match on text using CSS locators and XPath

Now let us look at the examples for 'Text'. When working with text, we will have two scenarios: one is 'Exactly' and the other one is 'Contains'. As the names describe, 'Exactly' will try to find the exact match, while 'Contains' allows multiple matches. To find an element containing exactly 'Log In', we can use an XPath like //button[text()='Log In']. To find a div containing 'Demo Website!', we can use the XPath //div[contains(text(), 'Demo Website!')] or //div[contains(., 'Demo Website!')]; the same can be done in CSS as css=div:contains('Demo Website!') (a Selenium-IDE-style selector – :contains is not part of standard CSS).

Links have anchor tags, so we can apply the same approach as for plain text; the only difference is that we target the anchor tag. We can simply use link=Forgot your password?, or with XPath //a[text()='Forgot your password?']. We can also match the partial text of the link as //a[contains(text(), 'Forgot')]; in CSS we rewrite this as css=a:contains('Forgot'), which will find the first anchor that contains 'Forgot'. Sometimes we may also need to work with URLs through href attributes.
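For readers driving this from Selenium Python rather than Selenium IDE, here is a small sketch of the text-based locators above; the page URL is a placeholder, and the link text is carried over from the example:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL

# Exact text match via XPath
exact = driver.find_element(By.XPATH, "//a[text()='Forgot your password?']")

# Partial text match via XPath
partial = driver.find_element(By.XPATH, "//a[contains(text(), 'Forgot')]")

# Selenium also offers dedicated link-text locators for anchors
by_link = driver.find_element(By.LINK_TEXT, "Forgot your password?")
by_partial_link = driver.find_element(By.PARTIAL_LINK_TEXT, "Forgot")

driver.quit()
```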

Load and performance testing with Python requests

You surely know the problem too: the customer wants to run a load and performance test "real quick" to get some numbers. Usually JMeter is still used for this, but I will show you how you can work far more comprehensively and flexibly with this Python script. It can be adapted to virtually any scenario; even I have not yet exploited all the possibilities of this script. Some goals I have not implemented yet:

- Graphical reporting similar to JMeter (see the matplotlib sketch at the end of this post)
- Better reporting in HTML or PDF

```python
import csv
import logging
import statistics
import threading
import time

import requests
from tqdm import tqdm

# Todo:
## 1. Logging
## 2. CSV file
## 3. Statistics
## 4. Evaluation
## 5. Output
## 6. Documentation
## 7. Testing

# Author: Frank Rentmeister 2023
# URL: https://example.com
# Date: 2021-09-30
# Version: 1.0
# Description: Load and Performance Tooling

# Set the log level to DEBUG to log all messages
LOG_FORMAT = ('%(asctime)s - %(name)s - %(levelname)s - %(message)s - %(threadName)s - '
              '%(thread)d - %(lineno)d - %(funcName)s - %(process)d - %(processName)s - '
              '%(pathname)s - %(filename)s - %(module)s - %(exc_info)s - %(exc_text)s - '
              '%(created)f - %(relativeCreated)d - %(msecs)d')
logging.basicConfig(level=logging.DEBUG, format=LOG_FORMAT,
                    filename='Load_and_Performance_Tooling/Logging/logfile.log',
                    filemode='w')
logger = logging.getLogger()

# Example usage of logging
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
logging.info('This is an info message with %s', 'some parameters')
logging.info('This is an info message with %s and %s', 'two', 'parameters')

# URL to test
url = "https://example.com"
assert url.startswith("http"), "URL must start with http:// or https://"
# assert url.count(".") >= 2, "URL must contain at least two periods"
assert url.count(" ") == 0, "URL must not contain spaces"

# Number of users to simulate
num_users = 2000
# Number of threads to use for testing
num_threads = 10


# Define a function to simulate a user making a request
def simulate_user_request(thread_id, progress, response_times):
    for _ in tqdm(range(num_users // num_threads), desc=f"Thread {thread_id}",
                  position=thread_id,
                  bar_format="{l_bar}{bar:20}{r_bar}{bar:-10b}", colour="green"):
        try:
            # Make a GET request to the URL and measure the response time
            start_time = time.time()
            response = requests.get(url)
            response_time = time.time() - start_time
            response.raise_for_status()  # Raise exception if response code is not 2xx
            response.close()             # Close the connection
            # Append the response time to the response_times list
            response_times.append(response_time)
            # Increment the progress counter for the corresponding thread
            progress[thread_id] += 1
        except requests.exceptions.RequestException:
            pass


# Define a function to split the load among multiple threads
def run_threads(progress, response_times):
    # Create a list to hold the threads
    threads = []
    # Start the threads
    for i in range(num_threads):
        thread = threading.Thread(target=simulate_user_request,
                                  args=(i, progress, response_times))
        thread.start()
        threads.append(thread)
    # Wait for the threads to finish
    for thread in threads:
        thread.join()


# Define a function to run the load test
def run_load_test():
    # Start the load test
    start_time = time.time()
    response_times = []
    progress = [0] * num_threads  # Define the progress list here
    with tqdm(total=num_users, desc=f"Overall Progress ({url})",
              bar_format="{l_bar}{bar:20}{r_bar}{bar:-10b}", colour="green") as pbar:
        while True:
            run_threads(progress, response_times)  # Pass progress list to run_threads
            total_progress = sum(progress)
            pbar.update(total_progress - pbar.n)
            if total_progress == num_users:  # Stop when all users have been simulated
                break
            time.sleep(0.1)  # Wait for threads to catch up
            pbar.refresh()   # Refresh the progress bar display

    # Calculate the access time statistics
    mean_access_time = statistics.mean(response_times)
    median_access_time = statistics.median(response_times)
    max_access_time = max(response_times)
    min_access_time = min(response_times)

    # Calculate the duration of the load test and the derived performance metrics
    duration = time.time() - start_time
    throughput = num_users / duration
    requests_per_second = throughput / num_threads

    # Print the load test results
    print(f"Load test duration: {duration:.2f} seconds")
    print(f"Mean access time: {mean_access_time * 1000:.2f} milliseconds")
    print(f"Mean access time: {mean_access_time:.3f} seconds")
    print(f"Median access time: {median_access_time:.3f} seconds")
    print(f"Maximum access time: {max_access_time:.3f} seconds")
    print(f"Minimum access time: {min_access_time:.3f} seconds")
    print(f"Throughput: {throughput:.2f} requests/second")
    print(f"Requests per second: {requests_per_second:.2f} requests/second")
    print(f"Number of users: {num_users}")
    print(f"Number of threads: {num_threads}")
    print(f"Number of requests per user: {num_users / num_threads}")
    print(f"Number of requests per second: {num_users / duration}")
    print(f"Number of requests per second per thread: {num_users / duration / num_threads}")
    print(f"Total progress: {sum(progress)}")
    print(f"Total progress per second: {sum(progress) / duration:.2f}")
    print(f"Total progress per thread: {sum(progress) / num_threads:.2f}")
    print(f"Total progress per user: {sum(progress) / num_users:.2f}")

    # Save the load test results to a CSV file
    with open("load_test_results.csv", "w", newline='') as csv_file:
        fieldnames = ["Metric", "Value", "Short Value"]
        # Create a CSV writer
        csv_writer = csv.DictWriter(csv_file, fieldnames=fieldnames, delimiter=",",
                                    quotechar='"', quoting=csv.QUOTE_MINIMAL)
        csv_writer.writeheader()

        # Write the load test results to the CSV file
        metrics = [
            ("Average Response Time (seconds)", mean_access_time, round(mean_access_time, 3)),
            ("Load Test Duration (seconds)", duration, round(duration, 2)),
            ("Mean Access Time (milliseconds)", mean_access_time * 1000, round(mean_access_time * 1000, 2)),
            ("Median Access Time (seconds)", median_access_time, round(median_access_time, 3)),
            ("Maximum Access Time (seconds)", max_access_time, round(max_access_time, 3)),
            ("Minimum Access Time (seconds)", min_access_time, round(min_access_time, 3)),
            ("Throughput (requests/second)", throughput, round(throughput, 2)),
            ("Requests per Second (requests/second)", requests_per_second, round(requests_per_second, 2)),
            ("Number of Users", num_users, num_users),
            ("Number of Threads", num_threads, num_threads),
            ("Number of Requests per User", num_users / num_threads, round(num_users / num_threads)),
            ("Number of Requests per Thread", num_users / (num_threads * num_threads), round(num_users / (num_threads * num_threads))),
        ]
        # Extrapolated request rates per second/minute/hour/day/month/year,
        # overall, per thread, and per user
        for unit, seconds in [("Second", 1), ("Minute", 60), ("Hour", 3600),
                              ("Day", 86400), ("Month", 86400 * 30), ("Year", 86400 * 365)]:
            rate = num_users / duration * seconds
            metrics.append((f"Number of Requests per {unit}", rate, round(rate)))
            metrics.append((f"Number of Requests per {unit} per Thread", rate / num_threads, round(rate / num_threads)))
            metrics.append((f"Number of Requests per {unit} per User", rate / num_users, round(rate / num_users)))
        for metric, value, short_value in metrics:
            csv_writer.writerow({"Metric": metric, "Value": value, "Short Value": short_value})

        # Write the access times to the CSV file
        csv_writer.writerow({"Metric": "Access Time (seconds)", "Value": None})
        for access_time in response_times:
            csv_writer.writerow({"Metric": None, "Value": access_time})
        # Sort the response times and write them to the CSV file
        response_times.sort()
        for response_time in response_times:
            csv_writer.writerow({"Metric": None, "Value": response_time})


# Run the load test
run_load_test()
# Path: Load_and_Performance/test_100_user.py
```

##### Documentation #####

- The script imports the necessary modules for load testing: requests for making HTTP requests, threading for running multiple threads simultaneously, time for measuring time, csv for reading and writing CSV files, tqdm for displaying a progress bar, statistics for calculating performance metrics, and logging for logging messages.
- The script defines the URL to test and checks that it starts with "http://" or "https://" and that it does not contain any spaces (the check for at least two periods is commented out).
- The script sets the number of users to simulate and the number of threads to use for testing.
- simulate_user_request() simulates a user making a request to the URL: it makes a GET request, measures the response time, appends it to the response_times list, and increments the progress counter for the corresponding thread. It takes three arguments: thread_id, progress, and response_times.
- run_threads() splits the load among multiple threads: it creates a list to hold the threads, starts each thread, and waits for all threads to finish. It takes two arguments: progress and response_times.
- run_load_test() runs the load test: it initializes the response_times list and a progress list that keeps track of the progress for each thread, starts a progress bar using the tqdm module, and enters a loop that runs until all users have been simulated. In each iteration, it calls run_threads() to split the load among multiple threads, updates the progress bar, and waits for the threads to catch up.
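One of the open goals above is graphical reporting similar to JMeter. A minimal sketch of what that could look like with matplotlib (assumed to be installed); the response_times list stands in for the raw access times collected during a run, e.g. parsed from the "Access Time" rows of load_test_results.csv:

```python
import matplotlib.pyplot as plt

# Hypothetical input: response times collected during the load test
response_times = [0.12, 0.15, 0.11, 0.31, 0.14, 0.18, 0.13, 0.45, 0.16, 0.12]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Response time over the course of the test
ax1.plot(response_times, marker="o", linewidth=1)
ax1.set_title("Response time per request")
ax1.set_xlabel("Request #")
ax1.set_ylabel("Seconds")

# Distribution of response times
ax2.hist(response_times, bins=20, edgecolor="black")
ax2.set_title("Response time distribution")
ax2.set_xlabel("Seconds")
ax2.set_ylabel("Count")

fig.tight_layout()
fig.savefig("load_test_report.png")  # or plt.show() for interactive use
```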