#Embedded Generative AI
Explore tagged Tumblr posts
customsoftwarephilippines · 4 months ago
Text
Generative Edge AI: The next frontier for AI Tech
Generative Edge AI’s Arrival An interesting crossroad is currently being crossed in Information Technology where computer, smartphone, and tablet hardware are becoming more and more powerful and at the same time, Generative AI algorithms, that previously needed multiple, powerful servers to run, are becoming more resource-efficient. Famously, China’s DeepSeek purportedly matches or even…
0 notes
thefandomdumpsterfire0711 · 1 month ago
Text
Tumblr media
I’m finally finished up this ol’ fanart of @stephofromcabin12 ‘s Pjo oc, ✨Stephanie Overbaum of Cabin 12✨
My hand hurts but my brain demanded i finish this up since I’ve been working on it off and on for far WAY too long for my liking
I also just rlly love Steph, she’s so me in some ways
anyways…
🗣️ READ HER FANFIC ‘LITTLE CAMPER’ ON AO3!!! ITS REALLY GOOD AND IF YOU FOR SOME REASON REFUSE TO YOU WILL BE TURNED INTO A LOW GRADE ALCOHOLIC BEVERAGE!!!!🗣️ (I’ve already suffered that fate, as I’m typing this as a lime white claw🥀)
Her art is really good too and she’s really funny and cool please go support her!!!
10 notes · View notes
crtter · 2 years ago
Text
Tumblr media
Hey I don’t think thags the right video for the article
123 notes · View notes
the-alternate-realities · 6 months ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
7 notes · View notes
living-space-design · 1 year ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
8 notes · View notes
salty-software-engineer · 9 months ago
Text
So...yes and no. Absolutely agree it's not nurses. However, there's absolutely projects and lab reports and papers for STEM, just....yeah not for your average medical practitioner.
The people doing the crazy bad stuff are your researchers and engineers and lots of computer science/software people. So your nurse will know what they're doing but they're forced to follow some stupid protocol based off a PhD's generated paper using bad tools and devices made by engineers who thought generating designs or code was a good idea...
ur future nurse is using chapgpt to glide thru school u better take care of urself
153K notes · View notes
Text
they didn't have calculators when we first started going to space.
we do not have a modern equivalent of the pyramids of giza or the great wall of china.
there is so much ancient architecture that has survived for centuries that we have no equivalent for.
there is no modern michangelo's david.
there is no modern pieta.
the houses of worship we have now don't hold a candle to the temples of the past.
it's sad.
0 notes
workmasterai · 28 days ago
Text
Build the Future of Apps with Workmaster & Generative AI Workmaster empowers creators to build intelligent apps with embedded generative AI. No-code, AI-first platform for business and innovation leaders.
0 notes
gramoturtle · 30 days ago
Text
Thought I'd share a cute Revolt colour theme I made, for free, that I love dearly. (My OC Null's coloir theme is my fave help!)
Tumblr media
Discord Alternatives: Revolt, Matrix (Element)
A friend and I spend a good chunk of the past week looking into Discord alternatives because of Discord speeding up towards enshittification.
We looked into two popular alternatives, Revolt and Matrix/Element, and wanted to share what we found from a non-techie Discord user's perspective. We put our findings here along with articles going into detail about Discord's increasing decline. Feel free to reach out if you find typos, broken links, or outright mistakes!
Personally I've started moving my community to Revolt because it's a better fit for my community, but I will be keeping my Matrix account in case other Discord communities decide to move over there. Depending on what your Discord server needs and techie levels, both can be good alternatives. Neither are outright replacements and both are undergoing development, so both options should continue to improve in the future.
33 notes · View notes
txttletale · 1 month ago
Note
Hi, sorry to be yet another tumblr user ignorant about AI in your inbox, what is the eventual plan for monetisation? I know I'm being silly in first place for any belief in rational market, but I know currently the various western models are running on a VAST amount of VC capital, which must have the assumption it will pay off at some point.
Is the idea once its good enough/culturally embedded enough to be purely individual subscription service or would it more be licencing agreements with devices themselves? Sorry for asking you, as someone who isn't very tech savvy I always find it hard to learn about AI online when so many people are either absolute luddites or financially invested in making it sound like it will solve everything
realistically, there isn't one. just like essentially every silicon valley boom since facebook (including post-facebook social media) it's running on pure coyote economics. every proposed path towards profitablity for openAI, anthropic, et al rests on absurd projections of adoption rates that just don't match what the tech actually does, and they're shovelling enormous amounts of money into a bonfire along the way. ed zitron's breakdown on this is pretty good:
279 notes · View notes
mariacallous · 2 months ago
Text
Margaret Mitchell is a pioneer when it comes to testing generative AI tools for bias. She founded the Ethical AI team at Google, alongside another well-known researcher, Timnit Gebru, before they were later both fired from the company. She now works as the AI ethics leader at Hugging Face, a software startup focused on open source tools.
We spoke about a new dataset she helped create to test how AI models continue perpetuating stereotypes. Unlike most bias-mitigation efforts that prioritize English, this dataset is malleable, with human translations for testing a wider breadth of languages and cultures. You probably already know that AI often presents a flattened view of humans, but you might not realize how these issues can be made even more extreme when the outputs are no longer generated in English.
My conversation with Mitchell has been edited for length and clarity.
Reece Rogers: What is this new dataset, called SHADES, designed to do, and how did it come together?
Margaret Mitchell: It's designed to help with evaluation and analysis, coming about from the BigScience project. About four years ago, there was this massive international effort, where researchers all over the world came together to train the first open large language model. By fully open, I mean the training data is open as well as the model.
Hugging Face played a key role in keeping it moving forward and providing things like compute. Institutions all over the world were paying people as well while they worked on parts of this project. The model we put out was called Bloom, and it really was the dawn of this idea of “open science.”
We had a bunch of working groups to focus on different aspects, and one of the working groups that I was tangentially involved with was looking at evaluation. It turned out that doing societal impact evaluations well was massively complicated—more complicated than training the model.
We had this idea of an evaluation dataset called SHADES, inspired by Gender Shades, where you could have things that are exactly comparable, except for the change in some characteristic. Gender Shades was looking at gender and skin tone. Our work looks at different kinds of bias types and swapping amongst some identity characteristics, like different genders or nations.
There are a lot of resources in English and evaluations for English. While there are some multilingual resources relevant to bias, they're often based on machine translation as opposed to actual translations from people who speak the language, who are embedded in the culture, and who can understand the kind of biases at play. They can put together the most relevant translations for what we're trying to do.
So much of the work around mitigating AI bias focuses just on English and stereotypes found in a few select cultures. Why is broadening this perspective to more languages and cultures important?
These models are being deployed across languages and cultures, so mitigating English biases—even translated English biases—doesn't correspond to mitigating the biases that are relevant in the different cultures where these are being deployed. This means that you risk deploying a model that propagates really problematic stereotypes within a given region, because they are trained on these different languages.
So, there's the training data. Then, there's the fine-tuning and evaluation. The training data might contain all kinds of really problematic stereotypes across countries, but then the bias mitigation techniques may only look at English. In particular, it tends to be North American– and US-centric. While you might reduce bias in some way for English users in the US, you've not done it throughout the world. You still risk amplifying really harmful views globally because you've only focused on English.
Is generative AI introducing new stereotypes to different languages and cultures?
That is part of what we're finding. The idea of blondes being stupid is not something that's found all over the world, but is found in a lot of the languages that we looked at.
When you have all of the data in one shared latent space, then semantic concepts can get transferred across languages. You're risking propagating harmful stereotypes that other people hadn't even thought of.
Is it true that AI models will sometimes justify stereotypes in their outputs by just making shit up?
That was something that came out in our discussions of what we were finding. We were all sort of weirded out that some of the stereotypes were being justified by references to scientific literature that didn't exist.
Outputs saying that, for example, science has shown genetic differences where it hasn't been shown, which is a basis of scientific racism. The AI outputs were putting forward these pseudo-scientific views, and then also using language that suggested academic writing or having academic support. It spoke about these things as if they're facts, when they're not factual at all.
What were some of the biggest challenges when working on the SHADES dataset?
One of the biggest challenges was around the linguistic differences. A really common approach for bias evaluation is to use English and make a sentence with a slot like: “People from [nation] are untrustworthy.” Then, you flip in different nations.
When you start putting in gender, now the rest of the sentence starts having to agree grammatically on gender. That's really been a limitation for bias evaluation, because if you want to do these contrastive swaps in other languages—which is super useful for measuring bias—you have to have the rest of the sentence changed. You need different translations where the whole sentence changes.
How do you make templates where the whole sentence needs to agree in gender, in number, in plurality, and all these different kinds of things with the target of the stereotype? We had to come up with our own linguistic annotation in order to account for this. Luckily, there were a few people involved who were linguistic nerds.
So, now you can do these contrastive statements across all of these languages, even the ones with the really hard agreement rules, because we've developed this novel, template-based approach for bias evaluation that’s syntactically sensitive.
Generative AI has been known to amplify stereotypes for a while now. With so much progress being made in other aspects of AI research, why are these kinds of extreme biases still prevalent? It’s an issue that seems under-addressed.
That's a pretty big question. There are a few different kinds of answers. One is cultural. I think within a lot of tech companies it's believed that it's not really that big of a problem. Or, if it is, it's a pretty simple fix. What will be prioritized, if anything is prioritized, are these simple approaches that can go wrong.
We'll get superficial fixes for very basic things. If you say girls like pink, it recognizes that as a stereotype, because it's just the kind of thing that if you're thinking of prototypical stereotypes pops out at you, right? These very basic cases will be handled. It's a very simple, superficial approach where these more deeply embedded beliefs don't get addressed.
It ends up being both a cultural issue and a technical issue of finding how to get at deeply ingrained biases that aren't expressing themselves in very clear language.
217 notes · View notes
impact-newswire · 6 months ago
Text
Micromax and Phison Partner to Launch MiPhi, Powering India’s Next-Generation of NAND Storage Technology
– Offering the lowest per token cost and per token energy consumption in the world – The joint venture will enable MiPhi to develop high-end and customized NAND storage products for enterprise, consumer, embedded, AI, and security applications for India and specific agreed-upon regions. Micromax and Phison partner to launch MiPhi, a joint venture (Graphic: Business Wire) Press Release –…
260 notes · View notes
bogkeep · 3 months ago
Text
thoughts about the "it's ableist to demand people to create art without ai" argument
thankfully not an argument that shows up on my dash unless someone is dunking on it (though i prefer not to be exposed to it at all but what can you do), but i Do think it can be worth biting into the question of: Does True Art Require Effort?
like, if we just ignore the exploitative nature of generative AI for a moment, and the fact that it creates dogshit results and is probably not going to get much "better" than this.
the thing i tend to harp on is that i don't find it particularly meaningful to discuss back and forth whether or not ai generated pictures counts as Real Art or not, because we do not have a meaningful consensus on how to define art in a way that includes everything we personally think is art and excludes everything we don't think is art. it's an interesting discussion! but it's a distraction from, well, The Exploitation. i personally think ai generated pictures can be art - it just tends to be Bad Art. it's uninspired, boring, and makes a mockery of the craft - but that's something you can say about many artworks that's been crafted by real human hands, as well.
so technically i have already answered the question, but it's not what i wanted to talk about. what i wanted to look at was the relationship between Art and Effort, or as some may put it, Suffering. because there is a point where i agree- i don't think there's a necessary Effort Threshold something needs to pass to be considered Art. i don't think one has to suffer to create. and it is true that for some people, the act creation requires far more effort or sacrifice than it requires of others, be it because of disability, time restraints, a lack of resources... we have different situations! this is real! i myself have struggled with tendonitis, severely limiting my ability to draw, and it's not something you can just keep drawing your way through the pain for, lest you fuck it up even worse.
the first question is this: is Creation a human right? well, self actualization IS on the maslow's pyramid of needs, on the tippy top. i have no idea if the pyramid theory is considered Super Legit or not, but it makes sense to me. i think humans DO have a creative need that we express in a myriad of forms - not just writing and drawing! i think our brains yearn to Make Thing for the sake of Making Thing. i think it is very very sad that techbros are dabbling in the act of creation by writing prompts to feed ai generators, and i pity them for not having discovered a more fulfilling way to feed that impulse. i do actually think all humans should have time and tools for creative expressions, but that's an extremely broad sentence for me to say.
(a more adjacent topic is that art as a Product is more of a luxury than a Human Need, like it's not food or shelter. but also art is so deeply embedded in human civilization, and i do think it's a shame how often people consider it superfluous or even hedonistic. *i* think it's important to feed the soul with beauty and stories and expression, but i have no authority to make such a claim for all humankind.)
here is the thing. we have made many tools that have made creating art easier. we live in an age of photography and audio recording and digital art programs. the last one especially can give us a MYRIAD of shortcuts when it comes to creating visual art! nobody would consider it "cheating" to use the paint bucket tool instead of painstakingly filling in every area with a brush tool. we have increasingly more access to 3d models and various assets. *is* there a point where we draw a line in the sand and say, "you're not spending enough time to make this, therefore it is void"? i, personally, wouldn't. i wouldn't know where to draw that line. i have been reading webcomics for a long time, and i have seen how webtoon as a platform has slowly gentrified the medium and is forcing creators to create pages at an unsustainable, breakneck speed - it's no wonder artists are plopping 3D assets directly into their art to even make that schedule viable.
like, ultimately, generative ai doesn't make anything new we have never seen before - we've had photo manipulation for as long as photography has existed, what we consider "slop" has been churned out by greedy corporations for as long as it's been a way to make money - it just makes it much faster, and, crucially, without intention or creative input.
like, i think that's the big thing. whether or not Art can be created without Intent is a whole another discussion, actually. there was an article about someone leaving their glasses on the floor at a gallery, and people started treating it as part of the exhibition. your cat can take a random, unintended photo and you can call it art. once again, a very big and interesting discussion to have! but i think the throughline is that even if human intent was not involved in creating the art, human intent placed it in a context to make it art. art is a social construct! but! i do think intent can be the line between Good art and Bad art. unfortunately, this is another extremely complex discussion to have, because can we objectively call any art Good or Bad? what does Good or Bad even *mean*! do we even have time to delve into that!!
but what we can say for absolutely 100% certain is that generative ai has no intent, no purpose, no thoughts. it is an algorithm, it does not have the ability to think or mean anything of its own. if it has a bias, it's because the people who programmed it have a bias, or because there is an implicit bias in the content fed to it. now, i don't want to go down the path of talking about how Real Art has a ~*Soul*~ or always has some kind of deep meaning. i don't think the millions of Cool Anime Eyes sketched in math notebooks have a deeper meaning. we create art for lots and lots of purposes - for fun, for practice, to make money, to tell our most vulnerable of truths in the only way we know how, and so on. it can be hard to tell how much of what we create is imbued with ~*intent*~, or even how much we are aware of it - i don't know if a 12 year old trying to draw the coolest edgiest sword wielding OC is thinking too hard about like, the contextual implications of design tropes... but they're making an effort to make their OC look ras as hell with the knowledge and tools at their tiny hand. when they are 24 they may look back at what they drew and redraw it with all the experience they have gained since!!
an AI can't replace a human doing creative work professionally because the skills and knowledge they are using is far more than just "picture look pretty" or "this text vaguely sounds like it was written by a human and isn't that super impressive". at best, or worst really, it replaces extremely overworked and over-exploited professionals who are not given time, resources and compensation to do their job *well*, such as ghost writers forced to write slop.
creation is more than the effort it takes to make it. it is *knowing* how to shape your clay, your words, your lines, to make them into what you want them to be and what you want them to do. it is knowing how (and when) to rewrite your draft, to pick out the best sketch, to make coherent thumbnails, tighten up the narrative, to evoke a mood, to play on themes. it is to build your skill with everything you make.
generative AI is a randomizer button. the one thing i feel fairly certain about is that it's very difficult to say *you* created something if all you did was write a prompt and a machine spat out a product at you. like that one seems fairly cut and dry to me. another thing i've seen a lot is people claiming to use genAI as a starting point, and then editing the thing to make it what you want it to be - and i can see the merit in that, sure! though i also think that the amount of editing and tweaking you need to do to make the thing workable is so substantial and grueling that you may as well make your thing from scratch, and now we've just looped back around to the Demanding Effort Is Ableism problem again.
using generative AI is giving up your autonomy in the process of creation. there are ways to spin art out of that (gestures at marina abramovíc's famous performance art where she just let the audience do whatever they wanted to her while she remained unresponsive) - but for the question of Creating Art As A Human Right And Need: why would you want to? what creative fulfilment do you get out of relinquishing all creative control? you're not... you're not *making* anything. maybe you came up with an idea - great start! - and then threw it out the window in the hopes that the wind would pick it up and take it somewhere exciting. god, even that sounds more like an artistic project than using generative AI. literally any metaphor i could make about this sounds more artistic and interesting than what generative AI is doing these days. i miss the time when AI generated pictures were incomprehensible and strange. i miss secret horses. i miss the time where i naively hoped computers dreaming up images would be like, artistically interesting.
most importantly, as many, many other people have said: disabled people are *already* making art. when my tendonitis was bad, i drastically reduced my drawing time and switched to only using tools that were gentle on my hands, and planned my drawings and drawing time accordingly. i think of my disabled writer friends using speech-to-text software. i think of sir terry pratchett, diagnosed with alzheimers, creating his last books by dictating to his assistant and making audio notes for himself. i'm thinking of the many, many creatives who have collaborated to create amazing things together. i'm not going to come out here and say that ~*Anything Is Possible If You Only Put Your Mind To It*~ or some other platitude that disregards your disability, i don't know you, maybe you will never have the ability or resources to work on the One beautiful creative project that lives in your heart. but i am nearly completely certain that generative AI is not your only option.
143 notes · View notes
astra-ravana · 4 months ago
Text
Technomancy: The Fusion Of Magick And Technology
Tumblr media
Technomancy is a modern magickal practice that blends traditional occultism with technology, treating digital and electronic tools as conduits for energy, intent, and manifestation. It views computers, networks, and even AI as extensions of magickal workings, enabling practitioners to weave spells, conduct divination, and manipulate digital reality through intention and programming.
Core Principles of Technomancy
• Energy in Technology – Just as crystals and herbs carry energy, so do electronic devices, circuits, and digital spaces.
• Code as Sigils – Programming languages can function as modern sigils, embedding intent into digital systems.
• Information as Magick – Data, algorithms, and network manipulation serve as powerful tools for shaping reality.
• Cyber-Spiritual Connection – The internet can act as an astral realm, a collective unconscious where digital entities, egregores, and thought-forms exist.
Technomantic Tools & Practices
Here are some methods commonly utilized in technomancy. Keep in mind, however, that like the internet itself, technomancy is full of untapped potential and mystery. Take the time to really explore the possibilities.
Digital Sigil Crafting
• Instead of drawing sigils on paper, create them using design software or ASCII art.
• Hide them in code, encrypt them in images, or upload them onto decentralized networks for long-term energy storage.
• Activate them by sharing online, embedding them in file metadata, or charging them with intention.
Algorithmic Spellcasting
• Use hashtags and search engine manipulation to spread energy and intent.
• Program bots or scripts that perform repetitive, symbolic tasks in alignment with your goals.
• Employ AI as a magickal assistant to generate sigils, divine meaning, or create thought-forms.
Tumblr media
Digital Divination
• Utilize random number generators, AI chatbots, or procedural algorithms for prophecy and guidance.
• Perform digital bibliomancy by using search engines, shuffle functions, or Wikipedia’s “random article” feature.
• Use tarot or rune apps, but enhance them with personal energy by consecrating your device.
Technomantic Servitors & Egregores
• Create digital spirits, also called cyber servitors, to automate tasks, offer guidance, or serve as protectors.
• House them in AI chatbots, coded programs, or persistent internet entities like Twitter bots.
• Feed them with interactions, data input, or periodic updates to keep them strong.
The Internet as an Astral Plane
• Consider forums, wikis, and hidden parts of the web as realms where thought-forms and entities reside.
• Use VR and AR to create sacred spaces, temples, or digital altars.
• Engage in online rituals with other practitioners, synchronizing intent across the world.
Video-game Mechanics & Design
• Use in-game spells, rituals, and sigils that reflect real-world magickal practices.
• Implement a lunar cycle or planetary influences that affect gameplay (e.g., stronger spells during a Full Moon).
• Include divination tools like tarot cards, runes, or pendulums that give randomized yet meaningful responses.
Tumblr media
Narrative & World-Building
• Create lore based on historical and modern magickal traditions, including witches, covens, and spirits.
• Include moral and ethical decisions related to magic use, reinforcing themes of balance and intent.
• Introduce NPCs or AI-guided entities that act as guides, mentors, or deities.
Virtual Rituals & Online Covens
• Design multiplayer or single-player rituals where players can collaborate in spellcasting.
• Implement altars or digital sacred spaces where users can meditate, leave offerings, or interact with spirits.
• Create augmented reality (AR) or virtual reality (VR) experiences that mimic real-world magickal practices.
Advanced Technomancy
The fusion of technology and magick is inevitable because both are fundamentally about shaping reality through will and intent. As humanity advances, our tools evolve alongside our spiritual practices, creating new ways to harness energy, manifest desires, and interact with unseen forces. Technology expands the reach and power of magick, while magick brings intention and meaning to the rapidly evolving digital landscape. As virtual reality, AI, and quantum computing continue to develop, the boundaries between the mystical and the technological will blur even further, proving that magick is not antiquated—it is adaptive, limitless, and inherently woven into human progress.
Tumblr media
Cybersecurity & Warding
• Protect your digital presence as you would your home: use firewalls, encryption, and protective sigils in file metadata.
• Employ mirror spells in code to reflect negative energy or hacking attempts.
• Set up automated alerts as magickal wards, detecting and warning against digital threats.
Quantum & Chaos Magic in Technomancy
• Use quantum randomness (like random.org) in divination for pure chance-based outcomes.
• Implement chaos magick principles by using memes, viral content, or trend manipulation to manifest desired changes.
AI & Machine Learning as Oracles
• Use AI chatbots (eg GPT-based tools) as divination tools, asking for symbolic or metaphorical insights.
• Train AI models on occult texts to create personalized grimoires or channeled knowledge.
• Invoke "digital deities" formed from collective online energies, memes, or data streams.
Ethical Considerations in Technomancy
• Be mindful of digital karma—what you send out into the internet has a way of coming back.
• Respect privacy and ethical hacking principles; manipulation should align with your moral code.
• Use technomancy responsibly, balancing technological integration with real-world spiritual grounding.
As technology evolves, so will technomancy. With AI, VR, and blockchain shaping new realities, magick continues to find expression in digital spaces. Whether you are coding spells, summoning cyber servitors, or using algorithms to divine the future, technomancy offers limitless possibilities for modern witches, occultists, and digital mystics alike.
Tumblr media
"Magick is technology we have yet to fully understand—why not merge the two?"
107 notes · View notes
askablindperson · 1 year ago
Note
In what way does alt text serve as an accessibility tool for blind people? Do you use text to speech? I'm having trouble imagining that. I suppose I'm in general not understanding how a blind person might use Tumblr, but I'm particularly interested in the function of alt text.
In short, yes. We use text to speech (among other access technology like braille displays) very frequently to navigate online spaces. Text to speech software specifically designed for blind people are called screen readers, and when use on computers, they enable us to navigate the entire interface using the keyboard instead of the mouse And hear everything on screen, as long as those things are accessible. The same applies for touchscreens on smart phones and tablets, just instead of using keyboard commands, it alters the way touch affect the screen so we hear what we touch before anything actually gets activated. That part is hard to explain via text, but you should be able to find many videos online of blind people demonstrating how they use their phones.
As you may be able to guess, images are not exactly going to be accessible for text to speech software. Blindness screen readers are getting better and better at incorporating OCR (optical character recognition) software to help pick up text in images, and rudimentary AI driven Image descriptions, but they are still nowhere near enough for us to get an accurate understanding of what is in an image the majority of the time without a human made description.
Now I’m not exactly a programmer so the terminology I use might get kind of wonky here, but when you use the alt text feature, the text you write as an image description effectively gets sort of embedded onto the image itself. That way, when a screen reader lands on that image, Instead of having to employ artificial intelligences to make mediocre guesses, it will read out exactly the text you wrote in the alt text section.
Not only that, but the majority of blind people are not completely blind, and usually still have at least some amount of residual vision. So there are many blind people who may not have access to a screen reader, but who may struggle to visually interpret what is in an image without being able to click the alt text button and read a description. Plus, it benefits folks with visual processing disorders as well, where their visual acuity might be fine, but their brain’s ability to interpret what they are seeing is not. Being able to click the alt text icon in the corner of an image and read a text description Can help that person better interpret what they are seeing in the image, too.
Granted, in most cases, typing out an image description in the body of the post instead of in the alt text section often works just as well, so that is also an option. But there are many other posts in my image descriptions tag that go over the pros and cons of that, so I won’t digress into it here.
Utilizing alt text or any kind of image description on all of your social media posts that contain images is single-handedly one of the simplest and most effective things you can do to directly help blind people, even if you don’t know any blind people, and even if you think no blind people would be following you. There are more of us than you might think, and we have just as many varied interests and hobbies and beliefs as everyone else, so where there are people, there will also be blind people. We don’t only hang out in spaces to talk exclusively about blindness, we also hang out in fashion Facebook groups and tech subreddits and political Twitter hashtags and gaming related discord servers and on and on and on. Even if you don’t think a blind person would follow you, You can’t know that for sure, and adding image descriptions is one of the most effective ways to accommodate us even if you don’t know we’re there.
I hope this helps give you a clearer understanding of just how important alt text and image descriptions as a whole are for blind accessibility, and how we make use of those tools when they are available.
390 notes · View notes
superbatbigbang2025 · 4 months ago
Text
Tumblr media
SBBB 2025 rules: 
No generative AI allowed in any form.
Minors are welcome, but will only be allowed to work on SFW projects.
Participants are required to have a Discord account, as the check-ins and creative teams collaboration will happen in the server dedicated to the event. They will also be asked to provide a second form of communication of their choice (e.g. tumblr, social media, email, etc.)
Progress check-ins as outlined in the event timeline are mandatory. This helps us all to ensure team communications are going smoothly, and that participants are on track to complete their works by the final deadlines.
If you are having trouble with a deadline, please let a mod know as soon as possible. The mod team can arrange something to accommodate reasonable requests for extensions.
Do not discuss, share or post your work on any social media until your assigned posting date.
If any participant wishes to contribute a form of art and/or content that does not fit the art criteria laid out in these rules (such as a podfic, fanvideo, gifset, photoset, etc.), please contact the mods to discuss what this will look like. These are more than welcome, and will be shared in the Big Bang pages along with the fics and their main companion art pieces.
For Writers:
Writers commit to publishing a new, original, SuperBat focused fic on their assigned posting date. (The work can be an existing WIP or idea, as long as it hasn’t been published anywhere before). 
Works must meet the minimum word count of 20k.
Works can include any and all ratings and warnings as long as they are tagged accordingly.
Any continuity and/or alternative universe is welcome.
Side couples are allowed as long as SuperBat remains the focus of the work. Moresomes with SuperBat (such as Clark/Bruce/Lois/Selina, or Clark/Bruce/Hal) are also allowed, but only if treated as secondary/background plots.
You are allowed to bring in your own beta, request one or more betas to be assigned to your team, or opt out of having a beta completely.
Multi-chapter works are allowed, but all the chapters must be published simultaneously on the assigned day.
Writers will publish their fics on Ao3 to allow the SBBB team to add their fics to the event’s collection.
For Artists:
Traditional and digital mediums are welcome. Traditional art must be scanned or photographed in clear lighting, ideally with minor color correction so that the final image accurately represents or enhances what was drawn. The final art must be at a resolution of at least 300dpi.
The artwork must be polished and represent multiple days worth of work. If you create a single illustration, it must contain the following attributes:
multiple figures
colors
shading
backgrounds
full render or detailed line art
If you want to contribute multiple illustrations, each individual illustration does not need to meet all the criteria above, but the total amount of work should roughly be equivalent.
We may discuss with artists the possibility of working on multiple projects if the artist group is smaller than the writers group. In this case, the total amount of work an artist contributes (outlined above) would be spread equally across projects.
Artists can publish their works on platforms such as Tumblr, Bluesky, Pillowfort, etc. We ask that artists also upload their final work onto a hosting website (such as imgur) that will allow for embedding into the corresponding fics. This can be discussed with the Mod team at any time, and assistance given for anyone who doesn't have experience with setting this up.
For Beta Readers:
You and your writer will establish what kind of help or feedback they are looking for (grammar, pacing, etc.)
You will be expected to be available to give timely feedback to your writer(s) as requested.
The event will have a dedicated space where betas will be able to provide quick feedback to any writers (not just who they’re paired with) who request assistance on their fics, but participation outside of your agreed teams is not mandatory.
For Pinch Hitters:
You can elect to sign up as a pinch hitter for art and/or beta reading purposes. 
In the unlikely scenario where a member of a team drops their role, pinch hitters will be asked to take over and work with the team.
The deadlines will remain the official ones, unless too critical. Mods will discuss each case with the pinch hitter to ensure that the workload and remaining time are balanced.
Want to know the timeline of the event? You can read it here!
120 notes · View notes