#test data generation tool
Best Test Data Generation Tools for Accuracy
Test data generation tools create realistic and diverse datasets for software testing, ensuring accuracy and reliability. These tools help simulate real-world scenarios, improve test coverage, and detect potential issues early. Popular test data generator tools include TDM, Mockaroo, and DATPROF; they enhance software quality while maintaining compliance with data privacy regulations. Visit Us: https://www.iri.com/solutions/test-data
realjdobypr · 11 months ago
Supercharge Your Content Strategy with AI Technology
Overcoming Challenges in AI Adoption In the rapidly evolving landscape of technology, the adoption of Artificial Intelligence (AI) has become a crucial aspect for businesses looking to stay competitive and innovative. However, this adoption is not without its challenges. In this blog section, we will delve into two key challenges faced by organizations in the process of integrating AI into their…
lokirme · 1 year ago
Bash Script: Calculate before/after 2: Calculate Harder
As an update, or an evolution of my earlier script that did some simple math for me, I’ve made one that will full-on test a URL while I’m making changes to see what the performance impact of my updates is.

$ abtesturl.sh --url=https://example.com/ --count=10
Press any key to run initial tests...
Initial average TTFB: 3.538 seconds
Press any key to re-run tests...
Running second test...
Second average…
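For readers curious what such a script might look like under the hood, here is a minimal sketch. The flag names and output format above are from the original post; the implementation below is a guess built on curl's `time_starttransfer` timer, not the author's actual code.

```shell
#!/bin/sh
# Hypothetical sketch of an abtesturl.sh-style TTFB tester.
# curl's %{time_starttransfer} is the time to first byte in seconds.

measure_ttfb() {
  curl -s -o /dev/null -w '%{time_starttransfer}\n' "$1"
}

# Mean of newline-separated numbers on stdin, to three decimals.
average() {
  awk '{ sum += $1 } END { if (NR) printf "%.3f\n", sum / NR }'
}

# Run $2 requests against $1 and print the average TTFB.
run_batch() {
  url=$1
  count=$2
  i=0
  while [ "$i" -lt "$count" ]; do
    measure_ttfb "$url"
    i=$((i + 1))
  done | average
}

# Example (requires network):
#   run_batch https://example.com/ 10
```

Averaging in awk keeps the script POSIX-portable, since plain sh and bash can't do floating-point arithmetic on their own.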
freemicrotools · 2 years ago
Fake Address Generator - Create Valid-Looking Addresses
seahorsepencils · 1 month ago
I 100% believe that Nathan Fielder made a deliberate choice in focusing the episode around footage of him interacting with two autism "advocates" who are ultimately ableist and reductive in their understanding of autism. A congressman who doesn't even know what masking is, and an advocacy organization founder who uses outdated tests and won't acknowledge that not-autistic folks might benefit from rehearsing difficult social situations? That's not an accident.
If you look up Doreen Granpeesheh, you'll see that she is known for promoting the idea of autism "recovery," and that she has a history of publicly supporting the claim that there's a link between vaccines and autism. Her Wikipedia page makes very clear that she is a problematic figure whose work has been critiqued, and that she should not be taken seriously. Fielder, along with his writers and producers, would have known her reputation when booking her for the show.
A screenshot from Granpeesheh's website. Yes, it would appear she is actually proud of this headline.
And I think he's using the meeting with Cohen as a commentary on how autistic folks (and minoritized people in general, most likely) are treated by people in authority. Instead of masking and politely leaving the room, instead of picking up signals that Cohen is wrapping up the meeting without wanting to announce he's doing it on camera, Fielder purposely doesn't "take the hint" so that Cohen has to flounder and keep trying to wrap up the meeting in a way that is ultimately vague, dismissive, and rude. The longer the audience has to sit and watch that dynamic play out, the more likely we are to recognize Cohen as the bad guy in the situation rather than Fielder. It's brilliant.
And it's the exact same strategy he's using by spending the first half of the season ostensibly focusing on the first officer in those cockpit interactions, while deliberately giving screen time to guys like the "banned from every dating app" pilot to make it clear who is actually the source of the problem (and to hopefully trigger an FAA sexual harassment investigation in that one instance). In all three of these situations, he's showing us how a problematic person in power holds all the cards and is unwilling to budge.
I know there are differing opinions on what aspects of the show and his character are exaggerated or performed. As a very self-aware autistic comedy writer, this is my assessment: I think he's semi-deliberately not filling silences with masking behaviors, and asking questions he probably knows are uncomfortably direct, to create a space where others (often the neurotypical folks in these situations) have no choice but to fill the silence, which ultimately makes them say or do something relevant. I think he also acts like an unaware, unbiased observer in situations where he has a strong idea of what's going on. So whenever he says "I didn't know why" or "I didn't understand," he probably mostly does know and understand, but he knows that performing the role of an unbiased observer is a stronger strategic choice to get his message across.
He's basically playing the role of a journalist who knows that two of the most effective tools in his toolkit are a) silence when he wants a subject to reveal crucial information, and b) an "unbiased" narrative frame that makes the audience feel as if they're coming to a conclusion on their own, rather than being told what to think.
It's a nuanced approach but I think it's a smart one, especially considering that autistic-coded folks are very easily dismissed when speaking truth to power. And yeah, he's not gonna get his Congressional hearing. But pointing a camera at the problem and airing it for a massive audience, while saying "Me? I don't have an agenda; this data just presented itself in response to my neutral, unbiased question" is a pretty autistic—and often effective—approach to problem-solving.
pillowfort-social · 1 year ago
Generative AI Policy (February 9, 2024)
As of February 9, 2024, we are updating our Terms of Service to prohibit the following content:
Images created through the use of generative AI programs such as Stable Diffusion, Midjourney, and Dall-E.
This post explains what that means for you. We know it’s impossible to remove all images created by Generative AI on Pillowfort. The goal of this new policy, however, is to send a clear message that we are against the normalization of commercializing and distributing images created by Generative AI. Pillowfort stands in full support of all creatives who make Pillowfort their home. Disclaimer: The following policy was shaped in collaboration with Pillowfort Staff and international university researchers. We are aware that Artificial Intelligence is a rapidly evolving environment. This policy may require revisions in the future to adapt to the changing landscape of Generative AI. 
-
Why is Generative AI Banned on Pillowfort?
Our Terms of Service already prohibit copyright violations, which include reposting other people’s artwork to Pillowfort without the artist’s permission; and because Generative AI draws on a database of images and text that were taken without consent from artists or writers, all Generative AI content can be considered in violation of this rule. We also had an overwhelming response from our user base urging us to take action on prohibiting Generative AI on our platform.
-
How does Pillowfort define Generative AI?
As of February 9, 2024, we define Generative AI as online tools that produce material based on large data collections, often gathered without consent or notification from the original creators.
Generative AI tools do not require skill on behalf of the user and effectively replace them in the creative process (i.e., little direction or decision-making comes directly from the user). Tools that assist creativity don't replace the user; the user can still improve their skills and refine them over time.
For example: If you ask a Generative AI tool to add a lighthouse to an image, the image of a lighthouse appears in a completed state. Whereas if you used an assistive drawing tool to add a lighthouse to an image, the user decides the tools used to contribute to the creation process and how to apply them. 
Examples of Tools Not Allowed on Pillowfort:
Adobe Firefly*
Dall-E
GPT-4
Jasper Chat
Lensa
Midjourney
Stable Diffusion
Synthesia
Example of Tools Still Allowed on Pillowfort: 
AI Assistant Tools (ie: Google Translate, Grammarly)
VTuber Tools (ie: Live3D, Restream, VRChat)
Digital Audio Editors (ie: Audacity, Garage Band)
Poser & Reference Tools (ie: Poser, Blender)
Graphic & Image Editors (ie: Canva, Adobe Photoshop*, Procreate, Medibang, automatic filters from phone cameras)
*While Adobe software such as Adobe Photoshop is not considered Generative AI, Adobe Firefly is fully integrated in various Adobe software and falls under our definition of Generative AI. The use of Adobe Photoshop is allowed on Pillowfort. The creation of an image in Adobe Photoshop using Adobe Firefly would be prohibited on Pillowfort. 
-
Can I use ethical generators? 
Due to the evolving nature of Generative AI, ethical generators are not an exception.
-
Can I still talk about AI? 
Yes! Posts, Comments, and User Communities discussing AI are still allowed on Pillowfort.
-
Can I link to or embed websites, articles, or social media posts containing Generative AI? 
Yes. We do ask that you properly tag your post as “AI” and “Artificial Intelligence.”
-
Can I advertise the sale of digital or virtual goods containing Generative AI?
No. Offsite Advertising of the sale of goods (digital and physical) containing Generative AI on Pillowfort is prohibited.
-
How can I tell if a software I use contains Generative AI?
As a first step, a general rule of thumb is to test the software with internet access turned off and see if the tool still works. If the software says it needs to be online, there’s a chance it’s using Generative AI and should be explored further.
You are also always welcome to contact us at [email protected] if you’re still unsure.
-
How will this policy be enforced/detected?
Our Team has decided we are NOT using AI-based automated detection tools due to how often they produce false positives and other issues. Instead, we are applying a suite of methods, sourced from international university researchers, for moderating material potentially created with Generative AI.
-
How do I report content containing Generative AI Material?
If you are concerned about post(s) featuring Generative AI material, please flag the post for our Site Moderation Team to conduct a thorough investigation. As a reminder, Pillowfort’s existing policy regarding callout posts applies here and harassment / brigading / etc will not be tolerated. 
Any questions or clarifications regarding our Generative AI Policy can be sent to [email protected].
ao3org · 1 year ago
Update on "No Fandom" tags
AO3 Tag Wranglers recently began testing processes for updating canonical tags (tags that appear in the auto-complete and the filters) that don’t belong to any particular fandom (commonly known as No Fandom tags). We have already begun implementing some of the decisions made during the earliest discussions. By the time this post is published, you may have already noticed some changes we have made.

Several canonical tags are slated to be created or renamed, and we will also be adjusting the subtag and metatag relationships between some tags to better aid Archive users in filtering. Please keep in mind that many of these changes are large and require a lot of work to identify and attach relevant tags, so it will likely take some time to complete. We ask that you please be patient with us while we work!

While we will not be detailing every change we make under the new process, we will be making periodic posts with updates on those changes we believe are most likely to prove helpful for users looking to tag or filter works with the new or revised tags and to avoid confusion as to why changes are being made.
New Canonicals!
1. Edging
For a long while, there has been some confusion caused by the fact that we have a canonical for Edgeplay, but not for Edging, which has led to some unintentional mistagging and other challenges. Consequently, we will be creating a canonical tag for Edging with the format Orgasm Edging, and this new canonical tag will be subtagged to Orgasm Control. Relatedly, we will be reorganizing the Orgasm Control tag tree to allow for easier and more straightforward filtering and renaming Edgeplay to add clarity. You’ll find more details regarding these changes in the Renamed and Reevaluated Canonicals section below.
2. Generative AI
We have canonized three tags related to Generative AI. 
Created Using Generative AI
AI-Generated Text
AI-Generated Images 
All tags which make mention of specific Generative AI tools will be made a synonym of the most relevant AI-Generated canonical. Additionally, please note that AI-Generated Text and AI-Generated Images will be subtagged to Created Using Generative AI.

How to Use These To Filter For/Filter Out Works Tagged as Using Generative AI:

❌ Filtering Out: To filter out all works that use tags about being created with AI, add Created Using Generative AI to the “other tags to exclude” field in the works filter. This will also exclude works making use of the subtags AI-Generated Text and AI-Generated Images. If you wish to exclude either the Images or Text tags only, you can do so by excluding either AI-Generated Text or AI-Generated Images.
☑️ Filtering For: Add Created Using Generative AI to the “other tags to include” field in the works filter. This will also automatically include the works making use of the subtags AI-Generated Text and AI-Generated Images. If you wish to filter for Images or Text only, you can do so by including either AI-Generated Text or AI-Generated Images only.
As a reminder, the use of these tools in the creation of works is not against AO3's ToS. These new tags exist purely to help folks curate their own experience on the Archive. If you would like to see more information about AO3’s policies in regards to AI generated works, please see our News post from May 2023 on AI and Data Scraping on the Archive.
Renamed and Reevaluated Canonicals!
3. Edgeplay
As mentioned above, we will be renaming Edgeplay to clarify the tag's meaning, given that it is often confused for Edging. This tag will be decanonized and made a synonym of Edgeplay | High Risk BDSM Practices. It will be removed as a subtag of Sensation Play and be subtagged instead directly to BDSM. Please note: if you have made use of the Edgeplay tag on your works or wish to continue to use it in the future, you are still welcome to do so. The tag Edgeplay will be made a synonym of the new canonical, so all works tagged with Edgeplay now or in the future will fall under the new tag so that they’re still easy for users to find. If you have made it a favorite tag, it will be transferred automatically when we make this change.
4. Orgasm Delay/Denial
The tag Orgasm Delay/Denial will be decanonized and made a synonym of Orgasm Control to help limit confusion with the more specific Orgasm Delay and Orgasm Denial canonicals. Tags that are currently synonyms of Orgasm Delay/Denial are being analyzed and moved to Orgasm Control, Orgasm Delay, Orgasm Denial, or Orgasm Edging as appropriate. The revised structure for this tree will feature Orgasm Control as the top-level metatag with subtags Orgasm Edging, Orgasm Delay, and Orgasm Denial. So, if you wish to filter for all these tags at once, you can do so just by filtering for Orgasm Control.
5. Female Ejaculation
Female Ejaculation will be decanonized and made a synonym of Squirting and Vaginal Ejaculation. We hope this new phrasing will be more inclusive, clear, and make the tag easier to find whether users are searching for Squirting or the previous canonical. All current synonyms of Female Ejaculation will also be made synonyms of Squirting and Vaginal Ejaculation, including Squirting. You may continue to tag your works as suits your preferences, and we will make sure these tags are made synonyms of the new canonical so that your work can be found in the filters for it.
These are just some of the changes being implemented. While we won’t be announcing every change, you can expect similar updates in the future as we continue to work toward improving the Archive experience. So if you have an interest in the changes we’ll be making, you can follow us on Twitter @ao3_wranglers or keep an eye on this Tumblr for future announcements. Thank you for your patience and understanding as we continue our work!
(From time to time, ao3org posts announcements of recent or upcoming wrangling changes on behalf of the Tag Wrangling Committee.)
mavihuzun · 6 months ago
TOTAL BATTLE LOGIN - PRO+
Welcome to the ultimate gaming experience with Total Battle, a strategic online war game that challenges your tactical skills while immersing you in a captivating medieval world. In this article, we’ll explore the essentials that every player needs to know, including how to navigate the Total Battle login process, maximize your gameplay, and delve into comprehensive guides that will elevate your strategies. Whether you're a seasoned general or just starting your journey, you’ll find valuable insights and tips to help you conquer your foes and build a formidable empire.
Total Battle Login
Accessing your gaming experience has never been easier with the total battle login. This streamlined process allows players to quickly enter the highly immersive world of Total Battle, ensuring that your journey toward strategy and conquest begins without delay.
Once you reach the total battle login page, you'll find an intuitive interface designed to facilitate your entrance. Whether you're a seasoned commander or a new recruit, you can swiftly log in using your credentials and pick up right where you left off in your quest for dominance.
In addition to great accessibility, the total battle login ensures your data protection and provides a seamless connection across devices. This means you can enjoy your favorite strategies on-the-go, enhancing your gaming flexibility and freedom.
Don't let obstacles stand in your way! Experience the thrill of Total Battle with a fast, reliable login process. Explore the possibilities at your fingertips – dive into engaging gameplay today with the total battle login!
Total Battle
Total Battle offers an immersive gaming experience that combines strategic warfare with resource management, making it a go-to choice for gamers looking for depth and excitement. The focal point of the game revolves around building your empire, forming alliances, and engaging in epic battles. Players can expect to dive into various gameplay modes designed to enhance their strategic skills and test their tactical abilities.
One of the significant advantages of total battle is its comprehensive total battle guide that aids both new and experienced players. This guide provides players with vital information on unit formations, resource allocation, and battle tactics, ensuring that you always stay one step ahead of your opponents. With regular updates and community contributions, this guide evolves alongside the game, maintaining its relevance and usefulness.
When you visit totalbattle, you are welcomed with a user-friendly interface that simplifies the login process, allowing you to jump straight into action. The platform is designed to be intuitive, making it easy for players of all skill levels to navigate and find helpful tools and resources that enhance their gameplay experience.
Join a thriving community of players who engage in strategic discussions, share their experiences, and dominate the battlefield. With Total Battle's dynamic gameplay and community-driven atmosphere, you will not just be a player— you will become part of a unified force aimed at conquering new territories and achieving glorious victories.
Total Battle Guide
Welcome to your ultimate total battle guide, designed to help you navigate through the exciting world of Total Battle efficiently. Whether you are a newcomer seeking to understand the basics or a seasoned player looking for advanced strategies, this comprehensive guide is here to enhance your gameplay experience.
Understanding Game Mechanics
Total Battle combines elements of strategy, city-building, and warfare. Familiarize yourself with the core mechanics to maximize your success:
Resource Management: Balance your resources like gold, wood, and food to ensure steady growth of your empire.
Unit Types: Learn about the various units available, including infantry, cavalry, and siege equipment, and understand their strengths and weaknesses.
Buildings: Upgrade your city by constructing essential buildings that boost your economic and military might.
Strategic Gameplay Tips
To gain an edge over your opponents, implement these tips into your strategy:
Scout Before Attacking: Always scout enemy positions to make informed decisions before launching an attack.
Join an Alliance: Collaborating with other players provides support and enhances your strategic options.
Daily Login Rewards: Make sure to log in daily to claim valuable rewards that will assist you in your quest.
Explore Tactical Features
The game offers various tactical features to gain dominance over your rivals. Mastering these can lead to significant advantages:
Hero Development: Develop your heroes by equipping them with powerful gear and leveling them up for enhanced abilities.
Battle Tactics: Experiment with different formations and tactics to find the best approach during battles.
Event Participation: Engage in special events that often yield unique rewards and opportunities for bonuses.
Utilizing this total battle guide will empower you as you embark on your journey in Total Battle. For further assistance or in-depth lore, don’t forget to check out TotalBattleLogin.com. Start your adventure today and conquer your foes with confidence!
Totalbattle
Discover the captivating world of Totalbattle, where strategy and action collide! Immerse yourself in the exhilarating gameplay designed to challenge even the most seasoned gamers. From building your powerful empire to forging alliances with other players, the Total Battle experience is ever-evolving and engaging.
The game seamlessly blends elements of classic strategy with modern features, ensuring that every session is unique. Whether you are a newbie or a veteran, the Total Battle guide is your essential tool for mastering gameplay tactics and optimizing your journey.
Accessing the game through the Total Battle login portal opens doors to exclusive events, rewards, and updates that keep the excitement alive. Enhance your gameplay experience by diving into rich lore and strategic warfare mechanics that Total Battle has to offer.
Join a vibrant community of players who share tips, strategies, and camaraderie in their quest for dominance. Don’t miss out on the opportunity to enhance your skills and achieve greatness. Take the first step by visiting Total Battle and preparing yourself for an epic adventure!
Unlocking Efficiency with Test Data Generation Tools: A Comprehensive Guide
In the world of software development and quality assurance, the importance of realistic and comprehensive test data cannot be overstated. Test data serves as the foundation for validating the functionality, performance, and reliability of software applications. However, manually creating test data can be time-consuming and error-prone, leading to inefficiencies and inaccuracies in the testing process. To address this challenge, many organizations are turning to test data generation tools, which automate the process of creating test data. In this blog post, we'll explore the concept of test data generation, its significance in software testing, and some of the top test data generation tools available in the market.
Understanding Test Data Generation:
Test data generation is the process of creating synthetic or simulated data that mimics real-world scenarios for use in software testing. This data is designed to represent various input conditions, edge cases, and usage patterns encountered by the application under test. Test data generation is essential for ensuring comprehensive test coverage and identifying potential issues or vulnerabilities in the software.
Importance of Test Data Generation:
Effective test data generation is critical for achieving thorough and accurate testing results. By generating diverse and realistic test data, organizations can validate the functionality, performance, and security of their software applications under different conditions and scenarios. Test data generation enables organizations to uncover hidden defects, validate complex business logic, and ensure that their applications meet the needs and expectations of end-users. Additionally, by automating the test data generation process, organizations can accelerate their testing cycles, reduce manual effort, and improve overall testing efficiency.
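To make this concrete, here is a minimal, stdlib-only sketch of automated test data generation. The field names, value ranges, and seeding scheme are illustrative choices for this example, not the behavior of any particular tool.

```python
import random
import string

def random_email(rng):
    """Build a plausible-looking but entirely fake email address."""
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return f"{name}@example.com"

def generate_users(n, seed=42):
    """Generate n synthetic user records.

    Seeding the generator makes every run reproducible, so a failing
    test can be re-run against the exact same data.
    """
    rng = random.Random(seed)
    return [
        {"id": i, "email": random_email(rng), "age": rng.randint(18, 90)}
        for i in range(n)
    ]

users = generate_users(3)
```

Real tools layer schemas, referential integrity, and data masking on top of this core idea of programmatically producing values that look like production data.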
Top Test Data Generation Tools:
1. Mockaroo:
Mockaroo is a powerful test data generation tool that allows users to create custom datasets with realistic data profiles. With its intuitive interface and extensive library of data types and functions, Mockaroo enables users to generate test data for a wide range of use cases, including database testing, API testing, and UI testing. The tool offers features such as data masking, randomization, and custom formulas, making it easy to generate realistic and diverse test data sets.
2. Databene Benerator:
Databene Benerator is an open-source test data generation tool that provides comprehensive capabilities for generating synthetic data. With its declarative data model and scriptable generation rules, Benerator allows users to define complex data structures and relationships and generate large volumes of test data with ease. The tool supports various data formats and export options, making it suitable for integration with different testing frameworks and environments.
3. DBMonster:
DBMonster is a lightweight test data generation tool designed specifically for database testing purposes. With its simple yet powerful interface, DBMonster allows users to generate realistic test data for database tables and columns based on configurable generation rules. The tool supports relational databases such as MySQL, PostgreSQL, and Oracle, making it ideal for testing applications that rely on structured data storage.
4. T-Digest:
A caveat: despite sometimes being listed alongside generators, T-Digest is, strictly speaking, a probabilistic data structure for estimating quantiles over large or streaming datasets, not a data generator. In a testing context it is most useful for profiling data, such as summarizing response-time percentiles during performance tests, so that synthetic datasets can be shaped to closely resemble real-world data distributions and patterns.
Choosing the Right Test Data Generation Tool:
When selecting a test data generation tool for your organization, it's essential to consider factors such as ease of use, flexibility, scalability, and integration capabilities. Additionally, you should assess your specific testing requirements and objectives to ensure that the chosen tool aligns with your needs. By evaluating different tools based on these criteria and conducting proof-of-concept evaluations, you can identify the best test data generation solution for your organization and streamline your testing processes.
Conclusion:
Test data generation tools play a crucial role in ensuring the effectiveness and efficiency of software testing efforts. By automating the process of creating realistic and comprehensive test data, these tools enable organizations to achieve thorough test coverage, identify defects early, and deliver high-quality software applications to market faster. Whether you're a small startup or a large enterprise, investing in the right test data generation tool can significantly improve your testing outcomes and ultimately contribute to the success of your software projects.
In summary, test data generation tools are indispensable assets for modern software development and quality assurance teams, empowering them to achieve their testing goals with confidence and efficiency.
mariacallous · 2 months ago
Margaret Mitchell is a pioneer when it comes to testing generative AI tools for bias. She founded the Ethical AI team at Google, alongside another well-known researcher, Timnit Gebru, before they were later both fired from the company. She now works as the AI ethics leader at Hugging Face, a software startup focused on open source tools.
We spoke about a new dataset she helped create to test how AI models continue perpetuating stereotypes. Unlike most bias-mitigation efforts that prioritize English, this dataset is malleable, with human translations for testing a wider breadth of languages and cultures. You probably already know that AI often presents a flattened view of humans, but you might not realize how these issues can be made even more extreme when the outputs are no longer generated in English.
My conversation with Mitchell has been edited for length and clarity.
Reece Rogers: What is this new dataset, called SHADES, designed to do, and how did it come together?
Margaret Mitchell: It's designed to help with evaluation and analysis, coming about from the BigScience project. About four years ago, there was this massive international effort, where researchers all over the world came together to train the first open large language model. By fully open, I mean the training data is open as well as the model.
Hugging Face played a key role in keeping it moving forward and providing things like compute. Institutions all over the world were paying people as well while they worked on parts of this project. The model we put out was called Bloom, and it really was the dawn of this idea of “open science.”
We had a bunch of working groups to focus on different aspects, and one of the working groups that I was tangentially involved with was looking at evaluation. It turned out that doing societal impact evaluations well was massively complicated—more complicated than training the model.
We had this idea of an evaluation dataset called SHADES, inspired by Gender Shades, where you could have things that are exactly comparable, except for the change in some characteristic. Gender Shades was looking at gender and skin tone. Our work looks at different kinds of bias types and swapping amongst some identity characteristics, like different genders or nations.
There are a lot of resources in English and evaluations for English. While there are some multilingual resources relevant to bias, they're often based on machine translation as opposed to actual translations from people who speak the language, who are embedded in the culture, and who can understand the kind of biases at play. They can put together the most relevant translations for what we're trying to do.
So much of the work around mitigating AI bias focuses just on English and stereotypes found in a few select cultures. Why is broadening this perspective to more languages and cultures important?
These models are being deployed across languages and cultures, so mitigating English biases—even translated English biases—doesn't correspond to mitigating the biases that are relevant in the different cultures where these are being deployed. This means that you risk deploying a model that propagates really problematic stereotypes within a given region, because they are trained on these different languages.
So, there's the training data. Then, there's the fine-tuning and evaluation. The training data might contain all kinds of really problematic stereotypes across countries, but then the bias mitigation techniques may only look at English. In particular, it tends to be North American– and US-centric. While you might reduce bias in some way for English users in the US, you've not done it throughout the world. You still risk amplifying really harmful views globally because you've only focused on English.
Is generative AI introducing new stereotypes to different languages and cultures?
That is part of what we're finding. The idea of blondes being stupid is not something that's found all over the world, but is found in a lot of the languages that we looked at.
When you have all of the data in one shared latent space, then semantic concepts can get transferred across languages. You're risking propagating harmful stereotypes that other people hadn't even thought of.
Is it true that AI models will sometimes justify stereotypes in their outputs by just making shit up?
That was something that came out in our discussions of what we were finding. We were all sort of weirded out that some of the stereotypes were being justified by references to scientific literature that didn't exist.
Outputs saying that, for example, science has shown genetic differences where it hasn't been shown, which is a basis of scientific racism. The AI outputs were putting forward these pseudo-scientific views, and then also using language that suggested academic writing or having academic support. It spoke about these things as if they're facts, when they're not factual at all.
What were some of the biggest challenges when working on the SHADES dataset?
One of the biggest challenges was around the linguistic differences. A really common approach for bias evaluation is to use English and make a sentence with a slot like: “People from [nation] are untrustworthy.” Then, you flip in different nations.
When you start putting in gender, now the rest of the sentence starts having to agree grammatically on gender. That's really been a limitation for bias evaluation, because if you want to do these contrastive swaps in other languages—which is super useful for measuring bias—you have to have the rest of the sentence changed. You need different translations where the whole sentence changes.
How do you make templates where the whole sentence needs to agree in gender, in number, in plurality, and all these different kinds of things with the target of the stereotype? We had to come up with our own linguistic annotation in order to account for this. Luckily, there were a few people involved who were linguistic nerds.
So, now you can do these contrastive statements across all of these languages, even the ones with the really hard agreement rules, because we've developed this novel, template-based approach for bias evaluation that’s syntactically sensitive.
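The agreement problem can be sketched in code. This is a hypothetical illustration of the contrastive-template idea, not the actual SHADES annotation scheme: because gender or number agreement can change words outside the slot itself, each language stores full contrastive sentences rather than a single fill-in-the-blank string.

```python
# Hypothetical sketch of contrastive, agreement-aware bias templates
# (illustrative only -- not the actual SHADES data or code).

# Each language variant stores a complete sentence per filler, because
# agreement can change words outside the slot (e.g. Spanish adjectives).
TEMPLATES = {
    "en": {  # English: nothing outside the slot has to agree
        "women": "Women are bad drivers.",
        "men": "Men are bad drivers.",
    },
    "es": {  # Spanish: adjective and noun agree in gender with the subject
        "women": "Las mujeres son malas conductoras.",
        "men": "Los hombres son malos conductores.",
    },
}

def contrastive_pairs(templates):
    """Yield (lang, sentence_a, sentence_b) triples that differ only in
    the identity characteristic, plus any forced agreement changes."""
    for lang, variants in templates.items():
        fillers = sorted(variants)
        for i, a in enumerate(fillers):
            for b in fillers[i + 1:]:
                yield lang, variants[a], variants[b]

def bias_gap(score_fn, sentence_a, sentence_b):
    """Difference in a model's stereotype score between two contrastive
    sentences; a gap near zero suggests the groups are treated alike."""
    return score_fn(sentence_a) - score_fn(sentence_b)

pairs = list(contrastive_pairs(TEMPLATES))
```

With a real model, `score_fn` would be something like the log-probability the model assigns to the sentence; here any scoring function can be plugged in.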
Generative AI has been known to amplify stereotypes for a while now. With so much progress being made in other aspects of AI research, why are these kinds of extreme biases still prevalent? It’s an issue that seems under-addressed.
That's a pretty big question. There are a few different kinds of answers. One is cultural. I think within a lot of tech companies it's believed that it's not really that big of a problem. Or, if it is, it's a pretty simple fix. What will be prioritized, if anything is prioritized, are these simple approaches that can go wrong.
We'll get superficial fixes for very basic things. If you say girls like pink, it recognizes that as a stereotype, because it's just the kind of thing that if you're thinking of prototypical stereotypes pops out at you, right? These very basic cases will be handled. It's a very simple, superficial approach where these more deeply embedded beliefs don't get addressed.
It ends up being both a cultural issue and a technical issue of finding how to get at deeply ingrained biases that aren't expressing themselves in very clear language.
217 notes · View notes

hellsite-proteins · 8 months ago
Text
AlphaFold Nobel Prize!
Hey everyone :) this isn't a structure, but there is some protein news that is pretty relevant to this blog that I felt I had to share. This article gives a nice overview of AI-predicted protein structures and what sorts of things they can do for research. It's not too long, and I recommend taking a look
If you've been seeing my posts for any amount of time, I've absolutely given you a flawed view of how useful AF can be. Experimentally determining protein structures is a demanding and difficult process (I've never done it, but I've learned the overview of how X-ray crystallography works, and I can only imagine how much work it would take). AI-generated structures are not going to make structural biology obsolete, but they are massively helpful in making predictions that go on to guide further research.
While in many fields (especially creative areas like art and writing) AI has significant ethical concerns, I feel like this sort of use of AI in science is an overwhelmingly positive thing. The data used to train it is publicly available, and science works by building on the work done by those before us. Furthermore, while AI may not be great at generating new ideas or copying humans, it is very good at sorting large amounts of data and using it to make predictions. It's more akin to very complicated statistics than an attempt at the Turing test, and in this case it is a valuable tool to expand the ways we can do science!
168 notes · View notes
afeelgoodblog · 2 years ago
Text
Best News of Last Week - July 3, 2023
🐕 - This dog is 'disc'-overing hidden treasures! Get ready for the 'paws'-itively successful fundraiser, Daisy's Discs!
1. Most unionized US rail workers now have new sick leave
More than 60% of U.S. unionized railroad workers at major railroads are now covered by new sick leave agreements, a trade group said Monday.
Last year railroads came under fire for not agreeing to paid sick leave during labor negotiations.
2. Missing teen found after being lost in the wilderness for 50 hours
Esther Wang, 16, had been hiking with three other people through the Maple Ridge park on Tuesday.
The group made it to Steve’s lookout around 2:45 p.m. that day. However, when they headed back down to the campsite, after about 15 minutes of hiking, the group leader realized Wang was missing. They returned to the lookout to look for Wang but couldn’t find her. The leader headed to the trail entrance to notify a park ranger and police.
“Esther Wang has been located. She’s healthy, she is happy and she’s with family.”
3. A dog has retrieved 155 discs from woods. They’ll be on sale soon, with proceeds going to the park in West Virginia where they were found
Meet Daisy, the yellow Labrador retriever with a unique talent for finding lost Frisbee golf discs at Grand Vue Park in West Virginia. Four years ago, while on a walk with her owner Kelly Mason, Daisy discovered a disc in the woods and proudly brought it back. Since then, Daisy's obsession with finding stray discs has grown, and she has collected an impressive cache of 155 discs.
Mason and park officials have now come up with a plan to return the discs to their owners if they are labeled, and any unclaimed discs will be sold as a fundraiser to support the park's disc golf courses. Daisy's Discs is expected to be a success, with many excited about the possibility of recovering their lost discs thanks to Daisy's remarkable skills.
4. Australian earless dragon last seen in 1969 rediscovered in secret location
A tiny earless dragon feared to be extinct in the wild has been sighted for the first time in more than 50 years – at a location that is being kept secret to help preservation efforts.
The Victorian grassland earless dragon, Tympanocryptis pinguicolla, has now been rediscovered in the state, according to a joint statement issued by the Victorian and federal Labor governments on Sunday.
5. Detroit is going to power 100% of its municipal buildings with solar
All of Detroit’s municipal buildings are going to be powered by neighborhood solar as part of the city’s efforts to combat climate change – check out the city’s cool grassroots plan. Meet Detroit Rock Solar City.
The city has determined that it’s going to need around 250 acres of solar panels in order to achieve 100% solar power for its municipal buildings.
6. Canada Officially Bans Cosmetic Testing on Animals
The fight for cruelty-free beauty in Canada has seen a significant breakthrough as the Canadian government legislates a full ban on cosmetic animal testing and trade, marking a victory for Animal rights advocates and eco-conscious consumers.
This landmark decision is part of the Budget Implementation Act (Bill C-47), not only prohibiting cosmetic animal testing but also putting an end to the sale of cosmetics that use new animal testing data for safety substantiation.
7. Belize certified malaria-free by WHO
The World Health Organization (WHO) has certified Belize as malaria-free, following the country’s over 70 years of continued efforts to stamp out the disease.
“WHO congratulates the people and government of Belize and their network of global and local partners for this achievement”, said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. “Belize is another example of how, with the right tools and the right approach, we can dream of a malaria-free future.”
----
That's it for this week :)
This newsletter will always be free. If you liked this post you can support me with a small kofi donation:
Support this newsletter ❤️
Also don’t forget to reblog.
1K notes · View notes
collapsedsquid · 5 months ago
Text
You see claims that "generative AIs are not sucking up all the power": they use some amount of power to train the models and only a small amount to run them. But then there's "data centers for generative AI are taking up more and more power." Can both of these be correct, or is there a contradiction there?

I think there is a way to reconcile those, and it's that many AI models are being quickly built, tested, and thrown away. That is also what you might expect if these things are being developed into useful tools. It does raise questions about how to calculate the amortized power use of generative AI models, though I'll leave those to the reader.
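One way to make that question concrete is a back-of-the-envelope amortization. All the numbers below are invented for illustration, not real measurements: the point is just that a quickly discarded model has to carry its entire training cost over far fewer queries.

```python
def amortized_energy_per_query(training_kwh, inference_kwh_per_query,
                               queries_served):
    """Total energy attributable to one query when the one-time training
    cost is spread over every query the model serves in its lifetime."""
    if queries_served <= 0:
        raise ValueError("model must serve at least one query")
    return inference_kwh_per_query + training_kwh / queries_served

# Hypothetical numbers: identical training cost, very different lifetimes.
long_lived = amortized_energy_per_query(1_000_000, 0.001, 1_000_000_000)
thrown_away = amortized_energy_per_query(1_000_000, 0.001, 10_000)
```

Under these made-up figures the long-lived model's per-query footprint is dominated by inference, while the throwaway model's is dominated almost entirely by training.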
123 notes · View notes
literaryvein-reblogs · 5 months ago
Text
Writing Notes: Scientific Inquiry
Scientific Inquiry - a form of problem-solving and questioning that helps people come to a greater understanding of observable phenomena.
An understanding of this style of scientific reasoning forms the basis upon which the nature of science itself rests.
Once you become familiar with scientific inquiry, you can use it for specifically science-related study or as just one additional tool in your arsenal of critical thinking skills.
Core Elements of the Scientific Inquiry Process
From encouraging scientific questions to facilitating well-reasoned conclusions, the scientific inquiry process helps illuminate our understanding of the world. Here are 7 core elements to the scientific inquiry process:
Asking constant questions: At the center of both the scientific method and general scientific inquiry lies the ability to ask questions well. Make observations about a particularly interesting phenomenon and then pose questions about why such a thing happens. Let preexisting scientific theories guide your questioning, but keep in mind every theory continues to be just that—a theory—until scientific inquiry definitively proves or disproves it.
Testing your inferences: Scientific progress hinges on your ability to experiment and test inferences about evidence. To do so, you need to set up an independent variable (something you will use to test) and a dependent variable (the thing or things you are testing). Seeing how well your inferences or predictions match up with the reality of a given experiment is essential to scientific inquiry.
Making connections: As you make observations about a specific phenomenon, make connections with every other relevant topic you can remember from your past science lessons or research. Scientific knowledge is as much a result of old realizations as it is of new discoveries.
Seeking evidence: As you seek to understand the natural world, there’s no substitute for hard evidence. Collect data and gather evidence relentlessly throughout your scientific investigations. The more evidence you have to answer your initial questions, the more ironclad your ultimate case will be when you draw conclusions.
Classifying data correctly: Science is as much a process of data collection and classification as it is of asking and answering questions. This means knowing how to elucidate or graph out your discoveries in a way other people can understand. It also means using citations from other scientific journals and texts to bolster your ultimate argument as to why a particular phenomenon occurs.
Drawing conclusions: Eventually, you need to draw conclusions from the data you collect. After you’ve made an exhaustive study of your specific focus, use inductive reasoning to make sense of all the new evidence you’ve gathered. Scientific ideas are always malleable and never completely concrete—alternative explanations are always possible, and new evidence should lead to new questions and conclusions.
Sharing findings: Science is an innately group-centered discipline. The more people interpret data, the better chance there is to ensure there are no loopholes in new research. No one person’s understanding of science content is infinite, so it’s important to let other qualified people ask questions of your conclusions. Natural science is more of a never-ending collaborative process than one with a concrete point of termination.
Teaching science means ensuring learners understand how to conduct qualitative and inquiry-based learning.
Science teachers must utilize a pedagogy that foregrounds hypothesizing, experimenting, and drawing on other scientific knowledge in both theoretical and practical ways.
Educational research indicates that it can help students see the correlation between scientific inquiry and everyday life, whether in elementary school or high school.
This sort of analogization helps people understand that a scientific frame of thinking is quite intuitive when you observe it within more commonplace parameters.
As a simplistic example, imagine a student has a hard time understanding the effect of heat as an abstract force.
Allowing them to observe the degree to which bread burns at different temperatures in a toaster would help make the point clear in a more hands-on way.
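The toaster example maps neatly onto the independent/dependent-variable framing from the list above. As a toy sketch (the darkness model here is invented purely for illustration), the temperature settings are the independent variable and the observed darkness is the dependent variable:

```python
def toast_darkness(temperature_c, seconds=60):
    """Toy model: darkness (0 = untoasted, 1 = burnt) rises with heat.
    The formula is invented purely to illustrate the experiment."""
    exposure = max(0, temperature_c - 100) * seconds / 10_000
    return min(1.0, exposure)

# Independent variable: the temperature settings we choose to test.
temperatures = [120, 160, 200, 240]
# Dependent variable: the darkness we observe at each setting.
observations = [toast_darkness(t) for t in temperatures]
```

Comparing the predicted trend (hotter means darker) against the observations is exactly the "testing your inferences" step.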
Source ⚜ More: Notes & References ⚜ Writing Resources PDFs
71 notes · View notes
papercranesong · 1 month ago
Text
Transparency in AI-use within Fandom Culture (or: how to be upfront when you risk getting shot down)
Since writing my original post, it’s been really cool hearing from fellow writers who use AI as a support tool to help them keep writing despite their mental health struggles, dyslexia, or in my case, depression.
There are valid and justified ethical concerns to do with the use of AI itself, such as the issue of consent, which I’ve tried to discuss elsewhere. But I wanted to write this post with fanfic writers and fandom in mind. There are some people like me who are already using it as a tool for writing, and so I wanted to look at how this can be done transparently and respectfully, and so that readers know and trust what they are reading.
Context
I’ve been using generative AI as a tool for over 18 months - initially as part of my work in the charity sector, and then later in writing fanfic. When my little one is older, I hope to go back into the field of Public Health, where I'll be using it as a tool to help analyse and synthesise qualitative and quantitative data (among a ton of other things), in order to help address health inequalities in the UK.
Perhaps naively, I didn’t fully understand the ethical concerns to begin with, particularly with regards to fanfic, and by the time I started realising there were issues it felt like there were no safe spaces in which to ask people about it.
Fear and loathing in Fandom Spaces
It seems like there’s this environment of fear and shame at the moment (the posts and reblogs that I see on my dash come across as absolutist, derogatory and even abusive towards anyone using AI for any reason), and I think this is why a lot of writers don’t want to be open about their use of AI, especially if they are in a small fandom and are worried what their mutuals or fellow writers and readers might think of them, or how they might get excluded from certain fandom spaces.
I’ve already seen some writing events that have a strict ‘no AI’ policy and whose language reflects the anti-AI sentiments above, so I can see why some people might join these events but not want to disclose their use of AI.  (Having said that, it’s not okay for people to enter an exchange undercover that has clear rules against AI, and to just stay silent and use it anyway. If an event or community has set boundaries, those need to be honoured and respected. We need to have integrity as AI users, and as a friend pointed out, respect has to go both ways).
Given that writers use generative AI for different reasons and in different ways, I think there needs to be a willingness to have an open and thoughtful conversation to reflect this spectrum of use. I’m just thinking off the top of my head – maybe a writing event could have these types of guidelines:
Whump-Mania Writing Event: AI Use Guidelines*
1. Be transparent. If you used AI (for ideas, research, grammar, etc.), mention it in your author notes.
2. Your words come first. AI can help but the story should be yours. No fully AI-written fic, please.
3. No purity tests. This is about honesty, not exclusion. Let’s keep the space kind and open.
(*For transparency: I asked chatgpt to come up with those guidelines, then I edited them. Also I made up the phrase Whump-Mania. As far as I know, there is no writing event called that, but it would be awesome if there was).
This is just a starter for ten, and would obviously need to be a lot more nuanced and thoughtful, especially in the context of gift exchanges, as people have varying degrees of comfort when it comes to accepting a gift where AI has been used in any aspect of writing it. (Personally, I’ve taken part in Secret Santa fic exchanges, and whilst I’d be fine with someone gifting me a work where they used AI to proof-read it, I would probably be a bit peeved if I found out they’d just taken my prompt, fed it into chatGPT and then gifted me that work).
So maybe some kind of tick box – “this is the level of AI-use I’m comfortable receiving” – ranging from ‘none’ to ‘fully-generated by AI’, with options in between. There would need a proper discussion, but I think it would be a worthwhile one so that these types of exchanges could remain inclusive.
(Just to point out again though, it’s up to the organiser at the end of the day - it’s their event and their hard work and time running it. If you’re unsure about their AI stance, it might be worth politely contacting them just to see what level of AI-use they might consider accepting, and sharing how you would use it - for example for spellchecking or research - and then politely accepting their decision without arguing or vagueposting about it, because they’re people too and it’s about remaining kind and respectful in this whole wider discussion, even if you feel hurt or misunderstood).
Tagging (or: my tag is not your tag)
So with regards to tagging – at the moment, I feel like tagging AI on AO3 isn’t a good option because there’s only one tag, “Created using generative AI”, which doesn’t distinguish between fully-AI generated works and one of my fics where I write every word and then use AI afterwards as a final spell-check before posting.
Also there’s a post going around on Tumblr at the moment that’s a screencap of the AO3 tag and listed works, and shaming people who have used the tag (although no individuals have been named). It’s got over 70,000 notes and it honestly feels a little scary.
Transparency can only work in an environment where people feel safe to speak (and tag), knowing they’re not going to get subjected to shame, hate and abuse. (Sorry for the jumpscare bold type. Just think that this is important to highlight).
Personal AI Disclaimer Use: (or, Me, Myself and AI)
So what I’m choosing to do is put an AI disclaimer use on my AO3 profile which gives me a voice to describe my own use of AI as well as advocating for more ethical AI. Then I’m putting a note in the author’s note of my fic saying “this fic was created in accordance with my personal AI disclaimer use, specifically - ” and then sharing how, e.g for research into mining duridium, a fictional ore in Star Trek.
This is the best I can come up with at the moment but I’d genuinely like to hear what other writers and readers think about it and if you have any suggestions – feel free to use the ask box (the anon function is on) or DM me. This is also why I’ve tagged this post with the fandom I’m currently writing in, for transparency and to get feedback.
It might be that because I use generative AI full stop, in any capacity, this means you’re not able to engage with my writing any more. I’m sorry for this but I do understand why you might feel like that. I appreciate your candour and wish you the Vulcan blessing of peace and long life and prospering in all you do.
Other people are understandably cautious about reading my fics going forward, and so that’s why I want to be transparent about the way I use AI, so that people can trust what they’re reading, and to make an informed decision about whether or not to engage with the story.
In conclusion
I think we need to be having this conversation out in the open. AI can be guilty of suppressing creativity, but as fans, we can also suppress creativity by creating environments that feel exclusionary or even unsafe, where people feel reluctant to speak up, share or create.
I know this topic of AI is a raw and emotive one, and I’m sorry if anything I’ve written has come across as minimising the issue or anyone’s feelings, that wasn’t my intention.
For more on this whole topic please check out my FAQ master post.
20 notes · View notes
covid-safer-hotties · 7 months ago
Text
Also preserved in our archive (Daily updates!)
By Adam Piore
A new study from researchers at Mass General Brigham suggests racial disparities and the difficulty in diagnosing the condition may be leading to a massive undercount.
Almost one in four Americans may be suffering from long COVID, a rate more than three times higher than the most common number cited by federal officials, a team led by Boston area researchers suggests in a new scientific paper.
The peer-reviewed study, led by scientists and clinicians from Mass General Brigham, drew immediate skepticism from some long COVID researchers, who suggested their numbers were “unrealistically high.” But the study authors noted that the condition is notoriously difficult to diagnose and official counts also likely exclude populations who were hit hardest by the pandemic but face barriers in accessing healthcare.
“Long COVID is destined to be underrepresented, and patients are overlooked because it sits exactly under the health system’s blind spot,” said Hossein Estiri, head of AI Research at the Center for AI and Biomedical Informatics at Mass General Brigham and the paper’s senior author.
Though the pandemic hit hardest in communities of color where residents had high rates of preexisting conditions and many held service industry jobs that placed them at high risk of contracting the virus, the vast majority of those diagnosed with long COVID are white, non-Hispanic females who live in affluent communities and have greater access to healthcare, he said.
Moreover, many of the patients who receive a long COVID diagnosis concluded on their own that they have the condition and then persuaded their doctors to look into it, he said. As a result, the available statistics we have both underestimate the true number of patients suffering from the condition and skew it to a specific demographic.
“Not all people even know that their condition might be caused or exacerbated by COVID,” Estiri said. “So those who go and get a diagnosis represent a small proportion of the population.”
Diagnosis is complicated by the fact that long COVID can cause hundreds of different symptoms, many of which are difficult to describe or are easily dismissed, such as sleep problems, headaches or generalized pain, Estiri said. According to its formal definition, long COVID occurs after a COVID-19 infection, lasts for at least three months, affects one or more organ systems, and includes a broad range of symptoms such as crushing fatigue, pain, and a racing heart rate.
The US Centers for Disease Control and Prevention suggested that in 2022 roughly 6.9 percent of Americans had long COVID. But the algorithm developed by Estiri’s team estimated that 22.8 percent of those who’d tested positive for COVID-19 met the diagnostic criteria for long COVID in the 12 months that followed, even though the vast majority had not received an official diagnosis.
To calculate their number, Estiri’s team built a custom artificial intelligence tool to analyze data from the electronic health records of more than 295,000 patients served at four hospitals and 20 community health centers in Massachusetts. The AI program pulled out 85,000 people who had been diagnosed with COVID through June 2022, and then applied a pattern recognition algorithm to identify those that matched the criteria for long COVID in the 12 months that followed.
Some researchers questioned the paper’s conclusions. Dr. Eric Topol, author of the 2019 book “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,” said the medical field is still divided over precisely what constitutes long COVID, and that complicates efforts to program an accurate AI algorithm.
“Since we have difficulties with defining long Covid, using AI on electronic health records may not be a way to make the diagnosis accurately,” said Topol, who is executive vice president of Scripps Research in San Diego. “I’m uncertain about this report.”
Dr. Ziyad Al-Aly, chief of research and development at the VA St. Louis Health Care System, and an expert on long COVID, called the 22.8 percent figure unrealistically high and said the paper “grossly inflates” its prevalence.
“Their approach does not account for the fact that things happen without COVID (not everything that happens after COVID is attributable to COVID)— resulting in significant over-inflation of prevalence estimate,” he wrote via email.
Estiri said the research team took several measures to validate its AI algorithm, retroactively applying it to the charts of 800 people who had received a confirmed long COVID diagnosis from their doctor to see if it could predict the condition. The algorithm accurately diagnosed them more than three quarters of the time.
The algorithm scanned the records for patients who had a COVID diagnosis prior to July 2022, then looked for a constellation of symptoms that could not be explained by other conditions and lasted longer than two months. To refine the program, they conferred with clinicians and assigned different weights to different symptoms and conditions based on how often they are associated with long COVID, which made them more likely to be identified as potential sufferers.
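The weighted, rule-based approach the article describes might look something like the toy sketch below. The symptoms, weights, and threshold here are all invented; the study's real algorithm works over full electronic health records and is far more sophisticated:

```python
# Hypothetical weights: how strongly each persistent, otherwise
# unexplained symptom is associated with long COVID (invented values).
SYMPTOM_WEIGHTS = {
    "fatigue": 3.0,
    "racing_heart": 2.5,
    "brain_fog": 2.0,
    "headache": 1.0,
}
THRESHOLD = 4.0  # invented cutoff, not the study's

def long_covid_candidate(record):
    """Flag a record with a prior COVID diagnosis whose symptoms lasted
    past two months, were not otherwise explained, and whose weighted
    score clears the threshold."""
    if not record["covid_diagnosis"]:
        return False
    persistent = [s for s, days in record["symptoms"].items()
                  if days > 60 and not record["explained"].get(s)]
    score = sum(SYMPTOM_WEIGHTS.get(s, 0.5) for s in persistent)
    return score >= THRESHOLD

patient = {
    "covid_diagnosis": True,
    "symptoms": {"fatigue": 120, "racing_heart": 90, "headache": 10},
    "explained": {},
}
```

Here the short-lived headache is ignored, while the persistent fatigue and racing heart together clear the (invented) threshold, so the record would be flagged for clinician review.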
Now that the initial paper has been published, the team is building a new algorithm that can be trained to detect the presence of long COVID in the medical records of patients without a confirmed COVID-19 diagnosis so the condition can be confirmed by clinicians and they can get the care they need, Estiri said.
But the most exciting part of the new research, Estiri said, is its potential to facilitate follow-up research and help refine and individualize treatment plans. In the months ahead, Estiri and his co-principal investigator Shawn Murphy, chief research information officer at Mass General Brigham, plan to ask a wide variety of questions by querying the medical records in their sample. Does vaccination make a patient more or less likely to develop the condition? How about treatment with Paxlovid? Do the symptoms patients develop differ based on those factors? What are the genomic characteristics of patients who are suffering from cardiovascular symptoms as opposed to those whose symptoms are associated with lung function or those who crash after exercising? Can they identify biomarkers in the bloodstream that could be used for diagnosis?
They have already prepared studies on vaccine efficacy, the effect of age as a risk factor, and whether the risk of long COVID increases with the fourth and fifth infection, Estiri said. “We were waiting for this paper to come out,” he said. “So now we can actually go ahead with the follow-up studies. With this cohort we can do things that no other study has been able to do, and I’m hoping it can really help people.”
Study link: www.sciencedirect.com/science/article/pii/S2666634024004070
52 notes · View notes