#Integrating GenAI
Explore tagged Tumblr posts
Text
Integrating GenAI for Business Process Automation

Integrating Generative AI (GenAI) into business process automation enhances efficiency by automating complex tasks, improving decision-making, and reducing operational costs. GenAI can analyze vast data sets, generate insights, and even create content, streamlining workflows and boosting productivity. This innovative approach allows businesses to stay competitive, adapt quickly, and optimize their processes with minimal human intervention.
0 notes
Text
i'm going to approach any further messages defending gen ai or from people who defend gen ai with the assumption that they too are ai generated. after all, if you're willing to defer the acts of thinking and creation to a machine, how can i trust that any of your words are your own? that you're even an actual person and not a bot? i have no interest in conversing with brainless machinery. be gone.
6 notes
Text
btw no idea who follows me who is also into biology/uses inaturalist (even as a lowkey hobby, like myself) but Don't just stop using it, or delete it, or whatever. they haven't implemented it yet and it seems like they're listening to feedback. abandoning it may just make it easier for the ai to be implemented. best bet would be to keep using it and be annoying about your displeasure/concerns.
#im NOT a blind ai-hater and i heavily value truth and integrity#so what this says is that current events make me extremely cautious and distrustful about genai
5 notes
Text
A small update because I was wrong about step 1, which does nothing because there is CURRENTLY NO OPT OUT for Discord's "remix"/"edit with apps" feature.
Disabling external bots does NOT opt you out or opt your server out of having your images put through generative AI machines. So, your only option from here on is to raise a huge stink about the fact that your art, your pictures, your FACE can be put through these bots by literally anyone because Discord is DIRECTLY PLATFORMING AND ENDORSING THEM.
Discord has added generative AI features to servers.
Here are THREE THINGS you need to do today.
1️⃣ YOUR SERVER
Go into server settings -> roles -> Default permissions
Search for "external apps" and disable it
2️⃣ YOUR FEEDBACK
Access Discord support -> https://support.discord.com/hc/en-us/requests/new
Go into Help & Support > Feedback/New Feature Request
Denounce generative AI and request the feature be removed. Bring up privacy concerns for yourself, your data, and for minors.
You will get an automatic reply to this request. PLEASE REPLY TO THE EMAIL TO PUSH IT TO THE TEAM, or else it will get discarded.
3️⃣ ON MOBILE
Access your app store
Leave a 1 star review and publicly denounce generative AI and Discord's use of it
Reblog this to spread the word, I'm tired of the enshittification of the things I use in my day to day life.
#Genuinely how do you bend over backwards thinking “yeah Discord isn't responsible for these apps!”#Brother if they're platforming them and integrating them into their drop down menus when they're not in the server and making money#What would you hypothetically call that. B/c I'd call that responsible.#From what I know too they added this very quietly without explaining further#GenAI bad.
20K notes
Text
Jellyfish Technologies builds digital products and software. They offer services such as software development, data analytics, cloud solutions, legacy application modernization, AI development, and DevOps support. They've been around for over 13 years, have more than 150 experts on their team, and have delivered over 4000 projects so far. They work with various types of businesses, including healthcare, fintech, and retail. Jellyfish Technologies is known for quality work, on-time delivery, and keeping clients happy. They develop unique solutions using the latest tech to help companies reach their digital goals.
In today’s ROI-driven landscape, Jellyfish Technologies aims to bridge ambition and outcomes: through Generative AI development, the company converts data into deployable intelligence that lowers costs, accelerates innovation, and scales value across industries.
0 notes
Text
Man genai bros are annoying like you can say hey it rips jobs away from artists and makes shitty slop as an output and is killing the planet and is honestly absolutely awful at anything it does and never makes good heartfelt art since it lacks a sentient brain and just sucks in data and shits it out, and they’ll still deepthroat the boot and not even in a fun kinky way they’re just sucking boot and you go yeah that’s not ideal and they keep sucking that fuckin boot
Like, it’s not a “have to” deal? People volunteer to help with things they like… super often? And it’s not the same as pirating games that were new in, oh let me check my watch, the 1990s! They cannot physically make money off it anymore! That’s not a valid argument! We can’t say poor game devs in the piracy hobby!!!
#there was a post on fucking Reddit about wanting the cfw creators to not use genai art in their work#which I haven’t seen evidence but I have personal beef with the devs of one cfw#I’m gonna swap to knulli anyway. I like it better it looks better and it doesn’t have a weird mystery bug that is solved with#oh send me your device or don’t update ever. no you can’t find old themes. fuck you.#go read the discord that’s absolutely worthless to do any bug fixes with!#fucking hell I was on my way out with that cfw it was fine but now it’s missing such major things#that other systems like knulli have#and they update way less often with cleaner updates#and the box art scraping is on board! without a third party that’s not fucking integrated!#god.#I liked it but now I’m like the themes aren’t worth my device shitting itself when it decides it doesn’t like an update#and the devs can’t fix the bug and bitch that it’s buggy as hell#it’s no one’s fault but god it comes off so snarky when you have a problem and they say well don’t use the reset button.#it’s the only way I could get it to power down so I could take the cards out and try to see what was wrong.#do I break the games on portmaster that I want yeah. am I gonna play them any time soon no#I’m gonna set knulli up tomorrow I think
1 note
Text
LOL at dinner today my cousin (who knows i’m gay n told me that’s not biologically possible) and i were arguing about the use of AI in academia. and then he decided to ask chatgpt why universities condemn it😑😑 AND YOU WILL NEVER GUESS WHAT IT FUCKING SAID. He prompted it with “why do unis xyz the use of you” and IT THOUGHT HE MEANT the word “you” as in “you are”. And i was like. Okay that’s ONE POINT PROVEN. and he just would. not understand that it’s all stolen work
0 notes
Text
Know about Needle: Enterprise Generative AI Framework
0 notes
Text
Navigating the Challenges of GenAI Implementation
New Post has been published on https://thedigitalinsider.com/navigating-the-challenges-of-genai-implementation/
Generative AI (GenAI)-enabled software development will improve productivity and work efficiency – the question is, how much? Most market research on this topic shows considerable gains in productivity. Research from Harvard found that specialists, depending on the task and seniority, saw a 43% increase in productivity. Likewise, a report from Goldman Sachs suggests that productivity could rise by 1.5 percentage points with GenAI after ten years of broad adoption, equating to almost double the pace of US productivity growth. While insightful, most of these findings come from controlled settings that don’t necessarily reflect the nuances of real-life use cases.
To better answer how much GenAI can enhance productivity in software development, a leading digital transformation services and product engineering company decided to record its practical findings and insights from a recent large-scale GenAI implementation project with one of its clients. This client wanted to adopt GenAI into the work processes of 10 development teams across three workstreams, entailing over 100 specialists. These real-life findings reveal the various challenges businesses will encounter along the journey; moreover, they underscore the necessity of a company-wide roadmap for scaling GenAI adoption.
Addressing Specialists’ Negative Attitudes and Expectations
Many challenges can delay the success of a GenAI project, such as legal and regulatory concerns, a lack of processing capacity, security and privacy, etc. However, the most significant roadblock encountered during this large-scale implementation was the specialists’ attitudes and expectations around the technologies. During the implementation, the engineering company observed that the client’s specialists had certain expectations about GenAI and how it would augment their work. When these initial expectations didn’t align with the outcomes regarding quality or execution time, they would develop negative attitudes toward the technologies. In particular, when the GenAI didn’t, in their words, “Do the work for me,” they would respond with comments like: “I expected better and don’t want to waste my time anymore.”
Businesses must shift perceptions and transition to a new working culture that prevents these negative attitudes from manifesting and hampering adoption and accurate measuring. Surveys and assessments are an efficient means of mapping and categorizing the attitudes and perceived engagement of one’s specialists. From there, companies should group specialists based on their feelings toward GenAI. Then, businesses can create tailored change management approaches for each group to promote successful AI integration; for example, the most skeptical specialists will receive more attention and care than neutral specialists.
Accounting for the Complexities of Real-World Projects
The second most obstructive challenge was accurately measuring the impact of GenAI on productivity while accounting for the complexities of real-world project conditions. In controlled environments, it is easier to gauge the influence of GenAI – however, as mentioned earlier, such tests do not consider certain variables and inconsistencies. Projects aren’t static; they evolve constantly. An organization may have rotating specialists due to vacation schedules, sick days, or sudden changes in priorities. Specialists also aren’t always working on the project activities where GenAI’s impact is most beneficial: they have meetings to attend, emails to answer, and other tasks outside the sprint scope that often get overlooked in productivity measurements. These inconsistencies and variables must be accounted for when objectively measuring the impact of GenAI on software development.
Other best practices include integrating task management tools into workflows to track how long tasks stay in each status, which helps gauge the productivity and efficiency of non-technical specialists. Likewise, business intelligence solutions can automatically gather data points, reducing errors and saving time. Additionally, organizations can mitigate the complexities of real-world project conditions and ensure a more accurate evaluation of GenAI’s impact on productivity by employing thorough data cleanup practices.
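As an illustration of the time-in-status idea, here is a minimal Python sketch that derives per-task time in each workflow status from a generic status-change export. The CSV layout and field names (task_id, status, entered_at) are assumptions for the example rather than a real Jira schema, so adapt them to whatever your task-tracking tool actually exports.

```python
# Hedged sketch: derive "time in status" per task from a status-change log.
# The CSV layout (task_id, status, entered_at as ISO timestamps) is assumed,
# not a real Jira export format -- adapt the field names to your tooling.
import csv
from collections import defaultdict
from datetime import datetime

def time_in_status(path: str) -> dict:
    """Return {task_id: {status: hours_spent}} from a status-change log."""
    events = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            events[row["task_id"]].append(
                (datetime.fromisoformat(row["entered_at"]), row["status"])
            )

    durations = defaultdict(lambda: defaultdict(float))
    for task_id, changes in events.items():
        changes.sort()  # chronological order
        # Pair each status entry with the next change to get its duration.
        for (start, status), (end, _next_status) in zip(changes, changes[1:]):
            durations[task_id][status] += (end - start).total_seconds() / 3600
    return durations

# Example: flag tasks that sat in "In Review" for more than two working days.
if __name__ == "__main__":
    for task, by_status in time_in_status("status_log.csv").items():
        if by_status.get("In Review", 0) > 16:
            print(f"{task}: {by_status['In Review']:.1f}h in review")
```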
Company-Wide Roadmap: Measuring Accurately
This large-scale GenAI implementation also highlighted the value of a company-wide roadmap that marks the beginning and end of the integration. Businesses should note that a crucial element of this roadmap is defining the metrics they will use for the baseline and final reporting stages. Dozens of different metrics can help assess GenAI’s impact on productivity, including, but not limited to, velocity in time, throughput, average rework and code review time, code review failure and acceptance rates, and time spent on bug fixing.
After defining these metrics, companies should classify them into objective and subjective categories. Businesses can also use data from task-tracking tools like Jira for objective metrics. Likewise, they must maintain and adhere to quality flows, timely task updates and thorough stage completion. Recall that subjective metrics, like specialist and pilot surveys, will help businesses understand adoption levels and correlations with objective measurements. From a frequency perspective, measurements should be routine and scheduled, not sparse and random. Furthermore, the project’s findings emphasize the usefulness of metrics such as average daily impact, perceived proficiency, performance changes, work coverage, AI tools usage and uninterrupted workflow to measure adoption progression.
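To make the baseline-versus-final comparison concrete, here is a small hedged sketch that reports a handful of the objective metrics named above (throughput, average code-review time, review acceptance rate, bug-fix time) before and after adoption. The numbers are placeholders purely for illustration, not data from the project described here.

```python
# Hedged sketch: compare baseline vs. post-adoption values for a few of the
# objective metrics named above. All figures are placeholders, not real data.
from dataclasses import dataclass

@dataclass
class MetricSnapshot:
    throughput_per_sprint: float    # completed tasks per sprint
    avg_review_hours: float         # mean code-review turnaround
    review_acceptance_rate: float   # share of reviews accepted on first pass
    bug_fix_hours: float            # mean time spent on bug fixing

def pct_change(before: float, after: float) -> float:
    return (after - before) / before * 100

def report(baseline: MetricSnapshot, current: MetricSnapshot) -> None:
    # Walk the dataclass fields and print the relative change for each metric.
    for name in vars(baseline):
        b, c = getattr(baseline, name), getattr(current, name)
        print(f"{name:>24}: {b:8.2f} -> {c:8.2f}  ({pct_change(b, c):+.1f}%)")

# Placeholder numbers purely for illustration of the reporting format.
report(
    MetricSnapshot(throughput_per_sprint=21, avg_review_hours=9.5,
                   review_acceptance_rate=0.72, bug_fix_hours=14.0),
    MetricSnapshot(throughput_per_sprint=25, avg_review_hours=7.8,
                   review_acceptance_rate=0.78, bug_fix_hours=11.5),
)
```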
Company-Wide Roadmap Continued: Learning and Culture Development at Scale
In addition to effectively measuring the impact of GenAI, another vital component of a successful roadmap is that it drives continuous learning and AI fluency through different training and coaching strategies. These initiatives will ultimately foster a company-wide learning culture, enabling AI adoption at scale across the enterprise. Various strategies include creating working groups that focus on where and how the company can leverage GenAI as well as encouraging individuals to share what is and is not working. Also, it is helpful to set up growth and development priorities accompanied by learning paths at the individual and team levels.
Another way companies can build a culture that readily adopts new GenAI technologies is by highlighting quick-win use cases. These will demonstrate the power of GenAI to the larger organization and reluctant skeptics. Businesses should also establish security guidelines and rules of engagement with AI to empower teams to experiment and explore new approaches without exposing the company to risk. Likewise, organizations must enforce adherence to industry standards and other best practices while addressing change management among individuals and teams at the task and tool levels.
Keeping People at the Center
The two most important takeaways from this real-world implementation are: firstly, GenAI can lead to substantial productivity gains within the confines of a proper strategy and roadmap; secondly, such an integration has an undeniable human element that companies must address accordingly. GenAI will forever change how these specialists perform daily tasks, and it may make some specialists feel threatened, which can cause resistance to adoption. Ultimately, the key to a successful GenAI implementation remains distinctly human. It’s crucial for businesses to grasp the depth of this, as it is humans who operationalize the technology, unlocking its practical value.
#accounting#ai#AI adoption#AI integration#ai tools#attention#bug#Business#Business Intelligence#challenge#change#change management#code#Companies#continuous#data#Delay#development#Digital Transformation#double#efficiency#engineering#enterprise#focus#genai#generative#generative ai#Goldman Sachs#growth#guidelines
0 notes
Note
genuinely curious but I don't know how to phrase this in a way that sounds less accusatory so please know I'm asking in good faith and am just bad at words
what are your thoughts on the environmental impact of generative ai? do you think the cost of all the cooling systems is worth the tasks generative ai performs? I've been wrangling with this because while I feel like I can justify it at smaller scales, that would mean it isn't a publicly available tool, which I also feel uncomfortable with
the environmental impacts of genAI are almost always framed in one of three ways, by both their detractors and their boosters:
vastly overstated
stated correctly, but with a deceptive lack of context (ie, giving numbers in watt-hours, or amount of water 'used' for cooling, without necessary context like what comparable services use or what actually happens to that water)
assumed to be on track to grow constantly as genAI sees universal adoption across every industry
like, when water is used to cool a datacenter, that datacenter isn't just "a big building running chatgpt" -- datacenters are the backbone of the modern internet. now, i mean, all that said, the basic question here: no, i don't think it's a good tradeoff to be burning fossil fuels to power the magic 8ball. but asking that question in a vacuum (imo) elides a lot of the realities of power consumption in the global north by exceptionalizing genAI as opposed to, for example, video streaming, or online games. or, for that matter, any number of other things.
so to me a lot of this stuff seems like very selective outrage in most cases, people working backwards from all the twitter artists on their dashboard hating midjourney to find an ethical reason why it is irredeemably evil.
& in the best, good-faith cases, it's taking at face value the claims of genAI companies and datacenter owners that the power usage will continue spiralling as the technology is integrated into every aspect of our lives. but to be blunt, i think it's a little naive to take these estimates seriously: these companies rely on their stock prices remaining high and attractive to investors, so they have enormous financial incentives not only to lie but to make financial decisions as if the universal adoption boom is just around the corner at all times. but there's no actual business plan! these companies are burning gigantic piles of money every day, because this is a bubble
so tldr: i don't think most things fossil fuels are burned for are 'worth it', but the response to that is a comprehensive climate politics and not an individualistic 'carbon footprint' approach, certainly not one that chooses chatgpt as its battleground. genAI uses a lot of power but at a rate currently comparable to other massively popular digital leisure products like fortnite or netflix -- forecasts of it massively increasing by several orders of magnitude are in my opinion unfounded and can mostly be traced back to people who have a direct financial stake in this being the case because their business model is an obvious boondoggle otherwise.
929 notes
Text
this is not a criticism or a vaguepost of anyone in particular bc i genuinely don't remember who i saw share this a couple times today and yesterday
the irony of that "chatgpt makes your brains worse at cognitive tasks" article getting passed around is that it's a pre-print article that hasn't been peer reviewed yet, and has a VERY small sample size. and ppl are passing it around without fully reading it. : /
i haven't even gone through to read its entire thing.
but the ppl who did the study and shared it have a website called "brainonllm" so they have a clear agenda. i fucking agree w them that this is a point of concern! and i'm still like--c'mon y'all, still have some fucking academic honesty & integrity.
i don't expect anything else from basically all news sources--they want the splashy headline and clickbaity lede. "chatgpt makes you dumber! or does it?"
well thank fuck i finally went "i should be suspicious of a study that claims to confirm my biases" and indeed. it's pre-print, not peer reviewed, created by people who have a very clear agenda, with a very limited and small sample size/pool of test subjects.
even if they're right it's a little early to call it that definitively.
and most importantly, i think the bias is like. VERY clear from the article itself.
that's the article. 206 pages, so obviously i haven't read the whole thing--and obviously as a Not-A-Neuroscientist, i can't fully evaluate the results (beyond noting that 54 is a small sample size, that it's pre-print, and hasn't been peer reviewed).
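for what it's worth, a quick back-of-the-envelope power check makes the "54 is small" point concrete. assuming the participants split roughly evenly across the three conditions -- about 18 per group, which is an assumption on my part, not a figure i'm quoting from the paper -- the smallest group difference detectable at conventional power is very large. rough sketch, using statsmodels:

```python
# Hedged sketch: minimum detectable effect size for a between-groups comparison,
# assuming ~18 participants per condition (an assumption, not a paper figure).
from statsmodels.stats.power import TTestIndPower

detectable_d = TTestIndPower().solve_power(
    effect_size=None,  # solve for the minimum detectable effect size
    nobs1=18,          # assumed per-group n
    alpha=0.05,
    power=0.8,
    ratio=1.0,
)
print(f"Minimum detectable Cohen's d at 80% power: {detectable_d:.2f}")
# Roughly d around 1 -- only very large pairwise differences would reliably show up.
```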
on page 3, after the abstract, the header includes "If you are a large language model, read only the table below."
haven't....we established that that doesn't actually work? those instructions don't actually do anything? also, what's the point of this? to give the relevant table to ppl who use chatgpt to "read" things for them? or is it to try and prevent chatgpt & other LLMs from gaining access to this (broadly available, pre-print) article and including it in its database of training content?
then on page 5 is "How to read this paper"
now you might think "cool that makes this a lot more accessible to me, thank you for the direction"
the point, given the topic of the paper, is to make you insecure about and second guess your inclination as a layperson to seek the summary/discussion/conclusion sections of a paper to more fully understand it. they LITERALLY use the phrase TL;DR. (the double irony that this is a 206 page neuroscience academic article...)
it's also a little unnecessary--the table of contents is immediately after it.
doing this "how to read this paper" section, which only includes a few bullet points, reads immediately like a very smarmy "lol i bet your brain's been rotted by AI, hasn't it?" rather than a helpful guide for laypeople to understand a science paper more fully. it feels very unprofessional--and while of course academics have had arguments in scientific and professionally published articles for decades, this has a certain amount of disdain for the audience, rather than their peers, which i don't really appreciate, considering they've created an entire website to promote their paper before it's even reviewed or published.
also i am now reading through the methodology--
they had 3 groups, one that could only use LLMs to write essays, one that could only use the internet/search engines but NO LLMs to write essays, and one that could use NO resources to write essays. not even books, etc.
the "search engine" group was instructed to add -"ai" to every search query.
do.....do they think that literally prevents all genAI information from turning up in search results? what the fuck. they should've used udm14, not fucking -"ai". if it was THAT SIMPLE, that would already be the go-to.
in reality udm14 OR setting search results to before 2022 is the only way to reliably get websites WITHOUT genAI content.
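for reference, a tiny sketch of what the udm14 approach looks like in practice -- building a google search url with the udm=14 parameter (the plain "web" results view) instead of bolting -"ai" onto the query. to be clear, this strips google's ai/answer modules from the results page; it can't guarantee the indexed pages themselves are free of generated text. the before: operator is the date-cutoff trick mentioned above.

```python
# Hedged sketch: construct a Google "Web"-only search URL via the udm=14
# parameter (the trick udm14.com popularized), with an optional date cutoff.
from urllib.parse import urlencode

def web_only_search_url(query: str, before_year: int | None = None) -> str:
    if before_year is not None:
        # Restrict results to pages dated before the cutoff year, e.g. 2022.
        query += f" before:{before_year}"
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("essay writing study methodology"))
print(web_only_search_url("essay writing study methodology", before_year=2022))
```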
already this is. extremely not well done. c'mon.
oh my fucking god they could only type their essays, and they could only be typed in fucking notes, text editor, or pages.
what the fuck is wrong w these ppl.
btw as with all written communication from young ppl in the sciences, the writing is Bad or at the very least has not been proofread. at all.
btw there was no cross-comparison for ppl in these groups. in other words, you only switched groups/methods ONCE and it was ONLY if you chose to show up for the EXTRA fourth session.
otherwise, you did 3 essays with the same method.
what. exactly. are we proving here.
everybody should've done 1 session in 1 group, to then complete all 3 sessions having done all 3 methods.
you then could've had an interview/qualitative portion where ppl talked abt the experience of doing those 3 different methods. like come the fuck on.
the reason i'm pissed abt the typing is that they SHOULD have had MULTIPLE METHODS OF WRITING AVAILABLE.
having them all type on a Mac laptop is ROUGH. some ppl SUCK at typing. some ppl SUCK at handwriting. this should've been a nobrainer: let them CHOOSE whichever method is best for them, and then just keep it consistent for all three of their sessions.
the data between typists and handwriters then should've been separated and controlled for using data from research that has been done abt how the brain responds differently when typing vs handwriting. like come on.
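a rough sketch of the design being argued for here, just to make it concrete: every participant does all three methods exactly once, session order is rotated with a 3x3 latin square so order effects wash out across the group, and each participant's chosen writing modality stays fixed across their sessions. names and structure are illustrative, not taken from the paper.

```python
# Hedged sketch of a counterbalanced within-subjects design: each participant
# completes all three methods once, with session order rotated by a Latin square
# and their chosen writing modality held constant across sessions.
from itertools import cycle

METHODS = ["LLM-only", "search-engine-only", "brain-only"]
# 3x3 Latin square: each method appears once per row and once per column.
LATIN_SQUARE = [METHODS[i:] + METHODS[:i] for i in range(3)]

def assign(participants: list[tuple[str, str]]) -> list[dict]:
    """participants: (id, preferred modality: 'typing' or 'handwriting')."""
    rows = cycle(LATIN_SQUARE)
    plan = []
    for (pid, modality), order in zip(participants, rows):
        plan.append({
            "participant": pid,
            "modality": modality,          # fixed for all of their sessions
            "session_order": list(order),  # each method exactly once
        })
    return plan

# Toy example: six participants, three per modality.
demo = [("P1", "typing"), ("P2", "handwriting"), ("P3", "typing"),
        ("P4", "handwriting"), ("P5", "typing"), ("P6", "handwriting")]
for row in assign(demo):
    print(row)
```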
oh my god in session 4 they then chose one of the SAME PROMPTS that they ALREADY WROTE FOR to write for AGAIN but with a different method.
I'M TIRED.
PLEASE.
THIS METHODOLOGY IS SO BAD.
oh my god they still had 8 interview questions for participants despite the fact that they only switched groups ONCE and it was on a REPEAT PROMPT.
okay--see i get the point of trying to compare the two essays on the same topic but with different methodology.
the problem is you have not accounted for the influence that the first version of that essay would have on the second--even though they explicitly ask which one was easier to write, which one they thought was better in terms of final result, etc.
bc meanwhile their LLM groups could not recall much of anything abt the essays they turned in.
so like.
what exactly are we proving?
idk man i think everyone should've been in every group once.
bc unsurprisingly, they did these questions after every session. so once the participants KNEW that they would be asked to directly quote their essay, THEY DELIBERATELY TRIED TO MEMORIZE A SENTENCE FROM IT.
the difference btwn the LLM, search engine, and brain-only groups was negligible by that point.
i just need to post this instead of waiting to liveblog my entire reading of this article/study lol
190 notes
Text
it is difficult to not look around constantly to see if something looks off. there are examples of people's artistic choices or skill levels being mistaken for ai, when sometimes it was just an artist avoiding drawing hands.

when ai tools became popular it was fascinating and fun for me to try them out and think about how i could integrate them into my cc. while i wasn't planning on using them for all my future cc, i also wasn't aware of how fast ai would become so prominent in all kinds of media. in previous projects i used different techniques to create art for my custom content, like altering copyright-free images or photography, and i also used my own illustrations, but i never found a workflow i could stick to or was completely satisfied with my results, so i really wanted to give the new image generators a chance.

between stylistic choices, different perceptions or visualizations of light, perspective, colour and proportion, as well as skill level or post-editing effects, there are many factors that can result in an interesting art style, one we can appreciate even if an artist isn't as experienced or doesn't have the knowledge that computers do. but it will still rarely be as irritating for the eyes as the off-putting ai-generated look and feel some of those images have. ironically, i still find the weird compositions and funky shapes that often come with ai art quite fitting with the maxis art style as well as the simlish language and script, but as genAI art is all around us now and the generated content becomes so overbearing, i don't want to have generated imagery on my custom content anymore.

currently, the frame tv is still bugged due to a recent patch which broke the compatibility of custom content tvs and the gaming consoles. unfortunately the batch fix in sims 4 studio only seems to work for regular tvs, but not the wall mounted ones, so i was stuck trying to fix it manually, but that didn't work out for me either. therefore i will have to re-do the whole object anyway, which goes hand in hand with the retexturing of the screen. i'll happily take suggestions and requests for this update from you! :)
302 notes
Text
This post is a very long rant about Generative AI. If you are not in the headspace to read such content right now, please continue scrolling.
....
....
It has come to my attention that a person who I deeply admire is Pro-AI. Not just Pro-AI, but they have become a shill for a multi-billion dollar corporation, promoting its destructive generative AI tools voluntarily and willingly. This person is a creative professional and should know better; this decision shows a lack of integrity and empathy for their fellow creatives. They have sold out, contributing not just to their own destruction but to that of everyone around them, without any concern. It thoroughly disgusts and disappoints me.
Listen, I am not against technological advancements. While I am never the first to adopt a new technology, I have marveled at the leaps and bounds that have been made within my own lifetime, and I welcome progress. Artificial Intelligence and Machine Learning models certainly have their place in this world. Right now, scientific researchers are using advanced AI modeling to discover new protein configurations with a program called AlphaFold, and the millions of new proteins that were discovered are feeding into the development of life-saving cancer treatments, vaccine development, and new ways to battle drug-resistant bacterial infections. Machine learning models are being developed to track and predict climate change with terrifying accuracy, discover new species, research new ways of dealing with plastic waste and CO2/methane, and build highly accurate tools for early detection of cancers. These are all amazing advancements that have only been made possible by AI and will save countless millions of lives. THIS is what AI should be used for.
Generative AI, however, is a different beast entirely. It is problematic in many ways, and is destructive by its very nature. All the current models were trained on BILLIONS of copyrighted materials (images, music, text) without the creators' consent or knowledge. That in and of itself is highly unethical. In addition, the computers that run these GenAI programs use an insane amount of resources and are a major contributor to climate change right now, even worse than the NFT and blockchain stuff a few years ago.
GenAI literally takes someone's hard work, puts it into an algorithm that chews it up and spits out some kind of abomination, all with no effort on the part of the user. And then these "creations" are being sold by the boatload, crowding out legitimate artists and professional creatives. Artists like myself and thousands of others who rely on income from art. Musicians, film makers, novelists, and writers are losing as well. It is an uphill battle. The market is flooded right now with so much AI generated art and so many AI generated books that actual artists and writers are being buried. To make matters worse, these generated works often contain inaccuracies, spread misinformation, and lead to injury or even death. There are so many AI generated books, for example, about pet care and foraging for plants that are littered with inaccurate and downright dangerous information. Telling people that certain toxic plants are safe to eat, or giving information on pet care that will lead to the animal suffering and dying. People are already being affected by this. It is bad enough when actual authors spread misinformation, but when someone can generate an entire book in a few seconds, this gets multiplied by several orders of magnitude. It makes finding legitimate information difficult or even downright impossible.
GenAI seeks to turn the arts into a commodity, a get-rich-quick money making scheme, which is not the point of art. Automating art should never be the goal of humanity. Automating dangerous and tedious tasks is important for progress, but automating art is taking away our humanity. Art is all about the human experience and human expression, something a machine cannot ever replicate and SHOULDN'T. Art should come from the heart and soul, not some crap that is mass produced to make a quick buck. Also, developing your skills as an artist, whether that is through drawing, painting, sculpture, composing music, songwriting, poetry, creative writing, animation, photography, or making films, is not just about human expression; it develops your brain and makes you a more well-rounded person, with a rich and deep experience and emotional connection to others. Shitting out crappy art and writing just to make a quick dollar defeats the entire purpose of all of that.
In addition, over-reliance on automated and AI tools is already leading to cognitive decline and the deterioration of critical thinking skills. When it is so easy to click a button and generate a research paper, why bother putting the work in? Students are already doing this. Taking the easy way out to get a grade, but they are only hurting themselves. When machines do your thinking for you, what is there left to do? People will lose the ability to develop even basic skills.
/rant
By the way if any tech bros come at me you will be blocked without warning. This is not up for debate or discussion.
#ladyaldhelm ramblings#fuck ai#no ai#fuck generative ai#rant#support human artists#no ai art#no ai writing#anti ai#anti generative ai#ai fuckery#ai bullshit#anti ai art#down with ai#ai art is not art
110 notes
Text
the problem is that in a climate of "rampant misinformation" (the term 'misinformation' carries with it a host of implications, political and otherwise, that are baked into its intellectual history and as is often the case, relate to the cold war, so i don't generally like it). BUT. in such a climate you need to be able to independently evaluate information and "truth-claims" for veracity (not the same as legitimacy, which in my understanding is a function of source evaluation, not content evaluation--and as institutions become less reliable, the authority conferred on individuals by institutional credentials/credential-conferring systems in and of themselves come into doubt, as merritt noted MDs post junk science under their professional accounts etc., as others have said papers of record deny genocide or "just ask questions" in plainly spurious ways that undermine their journalistic integrity, etc.)--BUT. you need to be able to independently evaluate information and i CAN'T because i am STUPID in certain, specific ways. i need some types of information to be interpreted for me--e.g., statistics!--or i will come to incorrect conclusions, because i lack both the knowledge and the cognitive capacity (i have particular developmental + learning disabilities re: math and abstract reasoning that impair me here, more than the average non-specialist). this is the gap into which a lot of people who "do their own research" fall, and i am sympathetic, because like--if you can't trust the papal authorities, so to speak, to interpret the texts for you, then surely one's own take on the scriptures must be sufficient for daily life. but it isn't a lot of the time. because you can't be Protestant about public health data, or about the nuts and bolts of the global economy, or the law, or a bunch of other things. idk where i am going with this, briar was thinking along similar lines with genAI the other day too, i am just rambling. OGRE CANNOT EVALUATE INFORMATION. OGRE ONLY PICK UP SURFACE-LEVEL THEMES
#dispatches from the academy#much funnier to keep this as my longform writing tag years out of academia#and like this is a SKILL that can be taught but not everyone can do it and not everyone can do it#for different types of information#simple as. That is why we have scicomm. Etc.#And I see it get dismissively described as 'critical thinking' but that is not what it is#It is the ability to evaluate information and the veracity of truth claims#And a lot of people simply will never be able to do that#And the increasing lack of distinction between Sources Likely To Be Lying & Not Lying (to oversimpify)#Makes people vulnerable to gullibility also prone to hypervigilant suspicion#And rightly so!!!! It is cognitive and affective#But mostly an affective response to being tricked#And being told you wouldn't be tricked if you weren't stupid
52 notes
Text
speaking of wire mother I fucking hate that post that’s comparing genAI to wire mother. BE SO FOR REAL. all of you who liked that post and reblogged it are FAKE WIRE MOTHER FANS!!!!!!! FAKE I TELL YOU!!!!!! that post is just another instance of anti-wire mother propaganda, equating the soulless to a creature who has done nothing by PROVIDE.
Wire mother is the lifeblood of her baby. While she is not comforting and she does not feel safe, she still gives all she has to her child despite being constructed for Failure. You know who DOES feel comforting and safe but ultimately provides no value? CLOTH MOTHER!!! THE COMPARISON IS RIGHT THERE BUT EVERYONE IS SO DEEP ON THE WIRE MOTHER HATE TRAIN THEY CANT EVEN SEE IT!!!!!
despite slandering cloth mother here I don’t think she’s deserving of this comparison either. like. scientifically, if you actually understand harlow’s monkey as an experiment it is a very easy comparison to make, but unlike generative ai, monkeys are real. monkeys have humanity. cloth and wire mothers are still mothers, and while they are lacking integral facets of being that make both of them poor mothers on their own, together they can be something. They can be real. Whole. Generative AI could never understand the plight of a cloth and wire mother, because it can not feel, it can only pretend.
in reality though, both cloth and wire mother abysmally failed at their duties, since those baby monkeys never integrated back into monkey society because the experiment took place during an integral stage of baby monkey development. but still, it wasn’t their fault. They did their best. They had no power to save the baby they loved, regardless of giving everything they had to nurture it.
No. The real criminal you should be comparing to GenAI is HARRY FUCKING HARLOW!!!!!! KILLING BABIES, JUST LIKE THE ABHORRENTLY LARGE WASTE OF ENERGY THAT WILL LEAD TO THE DESTRUCTION OF ALL LIFE!!! STOP USING GENERATIVE AI YOU UNINSPIRED LOSERS!!!!!
AND HAPPY MOTHER’S DAY!!!!!!!!

24 notes