#Robust AI Models
pineappleliar · 1 year ago
Text
Went on a nightmare spiral reading into all the documentation on AI art creation. The training data used is just a black box of 100% copyright infringement, where the pseudo-transformative nature of the tool makes getting your art removed from it nigh impossible (unless you know they actually have it, which, again, they won't tell you).
The other thing is that this typical ‘ai art style’ is a feature, not a bug. You can use the ‘style’ tag to toggle how ‘artistic’ you want a generated image to be, and cranking that up increases that particular aesthetic you’d expect from AI art.
on one hand this is worrying because it reads to me that one of the easiest ways to identify if something is AI made is toggleable. On the other I feel a grim sort of comedy from the fact that a large majority of techbros will jump on this tell anyways because working around this would require a modicum of effort, and the whole point is to avoid needing any effort.
0 notes
reasonsforhope · 1 year ago
Text
It’s an open secret in fashion. Unsold inventory goes to the incinerator; excess handbags are slashed so they can’t be resold; perfectly usable products are sent to the landfill to avoid discounts and flash sales. The European Union wants to put an end to these unsustainable practices. On Monday, [December 4, 2023], it banned the destruction of unsold textiles and footwear.
“It is time to end the model of ‘take, make, dispose’ that is so harmful to our planet, our health and our economy,” MEP Alessandra Moretti said in a statement. “Banning the destruction of unsold textiles and footwear will contribute to a shift in the way fast fashion manufacturers produce their goods.”
This comes as part of a broader push to tighten sustainable fashion legislation, with new policies around ecodesign, greenwashing and textile waste phasing in over the next few years. The ban on destroying unsold goods will have one of the longer lead times: large businesses have two years to comply, and SMEs have been granted up to six years. It's not yet clear whether the ban applies only to companies headquartered in the EU or to any that operate there, nor how it might impact regions outside of Europe.
For many, this is a welcome decision that indirectly tackles the controversial topics of overproduction and degrowth. Policymakers may not be directly telling brands to produce less, or placing limits on how many units they can make each year, but they are penalising those overproducing, which is a step in the right direction, says Eco-Age sustainability consultant Philippa Grogan. “This has been a dirty secret of the fashion industry for so long. The ban won’t end overproduction on its own, but hopefully it will compel brands to be better organised, more responsible and less greedy.”
Clarifications to come
There are some kinks to iron out, says Scott Lipinski, CEO of Fashion Council Germany and the European Fashion Alliance (EFA). The EFA is calling on the EU to clarify what it means by both “unsold goods” and “destruction”. Unsold goods, to the EFA, means goods that are fit for consumption or sale (excluding counterfeits, samples or prototypes)...
The question of what happens to these unsold goods if they are not destroyed is yet to be answered. “Will they be shipped around the world? Will they be reused as deadstock or shredded and downcycled? Will outlet stores have an abundance of stock to sell?” asks Grogan.
Large companies will also have to disclose how many unsold consumer products they discard each year and why, a rule the EU is hoping will curb overproduction and destruction...
Could this shift supply chains?
For Dio Kurazawa, founder of sustainable fashion consultancy The Bear Scouts, this is an opportunity for brands to increase supply chain agility and wean themselves off the wholesale model so many rely on. “This is the time to get behind innovations like pre-order and on-demand manufacturing,” he says. “It’s a chance for brands to play with AI to understand the future of forecasting. Technology can help brands be more intentional with what they make, so they have less unsold goods in the first place.”
Grogan is equally optimistic about what this could mean for sustainable fashion in general. “It’s great to see that this is more ambitious than the EU’s original proposal and that it specifically calls out textiles. It demonstrates a willingness from policymakers to create a more robust system,” she says. “Banning the destruction of unsold goods might make brands rethink their production models and possibly better forecast their collections.”
One of the outstanding questions is over enforcement. Time and again, brands have used the lack of supply chain transparency in fashion as an excuse for bad behaviour. Part of the challenge with the EU’s new ban will be proving that brands are destroying unsold goods, not to mention how they’re doing it and to what extent, says Kurazawa. “Someone obviously knows what is happening and where, but will the EU?”
-via British Vogue, December 7, 2023
10K notes · View notes
eileen-crys · 1 year ago
Text
AI DISTURBANCE "OVERLAYS" DO NOT WORK!
To all the artists and folks who want to protect their art against AI mimicry: all the "AI disturbance" overlays that are circulating online lately DON'T WORK!
Tumblr media
Glaze's disturbance (and now the Ibis Paint premium feature too, apparently; not sure) modifies the image at the data level: it's not just an overlaid effect, it actually alters the image's underlying pixel data in a way that disrupts how AI models read it. From the Glaze website:
Can't you just apply some filter, compression, blurring, or add some noise to the image to destroy image cloaks? As counterintuitive as this may be, the high level answer is that no simple tools work to destroy the perturbation of these image cloaks. To make sense of this, it helps to first understand that cloaking does not use high-intensity pixels, or rely on bright patterns to distort the image. It is a precisely computed combination of a number of pixels that do not easily stand out to the human eye, but can produce distortion in the AI's “eye.” In our work, we have performed extensive tests showing how robust cloaking is to things like image compression and distortion/noise/masking injection. Another way to think about this is that the cloak is not some brittle watermark that is either seen or not seen. It is a transformation of the image in a dimension that humans do not perceive, but very much in the dimensions that the deep learning model perceive these images. So transformations that rotate, blur, change resolution, crop, etc, do not affect the cloak, just like the same way those operations would not change your perception of what makes a Van Gogh painting "Van Gogh."
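For the technically curious, here is a heavily simplified, hypothetical sketch (in PyTorch) of the general idea the FAQ describes: an optimized perturbation that barely changes the pixels but deliberately shifts where the image lands in a model's feature space. This is NOT Glaze's actual code, and the untrained ResNet is just a stand-in; real cloaking targets the feature extractors used by image-generation models.

```python
# Conceptual sketch only -- not the Glaze algorithm. An untrained ResNet stands in
# for the image encoder that a real cloaking tool would target (an assumption).
import torch
import torch.nn.functional as F
import torchvision.models as models

def cloak(image, target_feats, encoder, steps=100, budget=0.03, lr=0.01):
    """image: (1, 3, H, W) tensor in [0, 1]; nudge its features toward target_feats."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feats = encoder((image + delta).clamp(0, 1))
        loss = F.mse_loss(feats, target_feats)   # pull the features toward a target "style"
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)        # keep every pixel change imperceptibly small
    return (image + delta).clamp(0, 1).detach()

# Toy usage with random tensors standing in for the artwork and a style reference.
encoder = models.resnet18(weights=None).eval()
art = torch.rand(1, 3, 224, 224)
style_ref = torch.rand(1, 3, 224, 224)
cloaked = cloak(art, encoder(style_ref).detach(), encoder)
```

The point of the sketch is just that the perturbation lives in the model's feature space, not in any visible overlay, which is why a flat overlay image can't reproduce it.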
Anyone can request a WebGlaze account for FREE: just send an email or a DM to the official Glaze Project accounts on X and Instagram, and they reply within a few days. Be sure to provide a link to your art account (anywhere) so they know you're an artist.
Please don't be fooled by those colorful, bright overlays you just download and put on your art: they won't work against AI training. Protect your art with REAL Glaze please 🙏🏻 WebGlaze is SUPER FAST: you upload the artwork and they send it back to you within five minutes, and the effect is barely visible!
Official Glaze Project website | Glaze FAQs | about WebGlaze
807 notes · View notes
optiblog · 10 months ago
Text
OPTIVISER - GOLD
Tumblr media
Welcome to Optiviser.com, your ultimate guide to navigating the complex world of electronics in 2024. As technology continues to evolve at a rapid pace, finding the right devices that suit your needs can be overwhelming. In this blog post, we’ll harness the power of AI to help you make informed choices with our comprehensive electronics comparison. We’ll take a closer look at the top smart home devices that are revolutionizing how we live and work, providing convenience and efficiency like never before. Additionally, we’ll offer expert laptop recommendations tailored to various lifestyles and budgets, ensuring you find the perfect match for your daily tasks. 
AI-powered Electronics Comparison
In today's fast-paced technological landscape, making informed choices about electronics can be overwhelming. An AI-powered Electronics Comparison tool can help streamline this process by providing insights that cater to specific user needs. These advanced tools utilize algorithms that analyze product features, specifications, and user reviews, resulting in a tailored recommendation for buyers.
As we delve into the world of consumer technology, it's important to highlight the Top Smart Home Devices 2024. From smart thermostats to security cameras, these devices are becoming essential for modern households. They not only enhance convenience but also significantly improve energy efficiency and home safety.
For those looking for a new computer to enhance productivity or gaming experiences, consider checking out the latest Laptop Recommendations. Many platforms, including Optiviser.com, provide comprehensive comparisons and insights that can help consumers choose the best laptop suited to their needs, whether it’s for work, study, or leisure.
Top Smart Home Devices 2024
As we move into 2024, the landscape of home automation is evolving rapidly, showcasing an array of innovative gadgets designed to enhance comfort and convenience. In this era of AI-powered Electronics Comparison, selecting the right devices can be overwhelming, but we've highlighted some of the best Top Smart Home Devices 2024 that stand out for their functionality and user experience.
One of the most impressive innovations for this year is the latest AI-powered home assistant. These devices not only respond to voice commands but also learn your preferences over time, allowing them to offer personalized suggestions and perform tasks proactively. Imagine a device that can monitor your schedule and automatically adjust your home's temperature and lighting accordingly!
Moreover, security remains a top priority in smart homes. The Top Smart Home Devices 2024 include state-of-the-art security cameras and smart locks that provide robust protection while ensuring ease of access. With features like remote monitoring through your smartphone or integration with smart doorbells, keeping your home safe has never been easier. For more details on the comparisons and recommendations of these devices, you can check out Optiviser.com.
Laptop Recommendation
In today's fast-paced world, choosing the right laptop can be a daunting task. With numerous options available in the market, it's essential to consider various factors such as performance, portability, and price. At Optiviser.com, we provide an insightful guide to help you navigate through the vast array of choices. To streamline your decision-making process, we have developed an AI-powered Electronics Comparison tool that allows you to compare specifications and features of different laptops side by side.
This year, we have seen a surge in innovative laptops that cater to diverse needs. Whether for gaming, business, or everyday use, our top recommendations include models that excel in battery life, processing power, and display quality. For instance, consider the latest models from top brands, which have integrated the best features of Top Smart Home Devices 2024 trends, ensuring seamless connectivity and advanced functionalities.
Additionally, if you're looking for a laptop that can handle multitasking effortlessly, we suggest models equipped with the latest processors and ample RAM. Our detailed Laptop Recommendation section on Optiviser.com includes expert reviews and user feedback to help you choose a laptop that not only fits your budget but also meets your specific requirements.
674 notes · View notes
bethanyberg · 1 year ago
Text
Tumblr media Tumblr media
GLAZE 2.0 IS OUT TODAY FOR DOWNLOAD! Go protect your art!
NEW: Glaze 2.0 (Apr 14) We are excited to announce the release of version 2.0 of Glaze. This new version significantly improves Glaze's robustness against the newest AI models and requires less time to glaze images. Most notably:
Significantly improved robustness against Stable Diffusion 1, 2, SDXL, especially for smooth surface art (e.g. anime, cartoon).
Less noticeable modification to images, addressed non-convergent patch artifacts
Improved computational efficiency on most platforms (~ 50% speed up)
Built-in Mac GPU support -- over 5X speed-up on Mac M-series (Apple Silicon) chips.
766 notes · View notes
Text
forever tired of our voices being turned into commodity.
forever tired of thorough mediocrity in the AAC business. how that is rewarded. How it fails us as users. how not robust and only robust by small small amount communication systems always chosen by speech therapists and funded by insurance.
forever tired of profit over people.
forever tired of how companies collect data on every word we’ve ever said and sell to people.
forever tired of paying to communicate. of how uninsured disabled people just don’t get a voice many of the time. or have to rely on how AAC is brought into classrooms — which usually is managed to do in every possible wrong way.
forever tired of the branding and rebranding of how we communicate. Of this being amazing revelation over and over that nonspeakers are “in there” and should be able to say things. of how every single time this revelation comes with pre condition of leaving the rest behind, who can’t spell or type their way out of the cage of ableist oppression. or are not given chance & resources to. Of the branding being seen as revolution so many times and of these companies & practitioners making money off this “revolution.” of immersion weeks and CRP trainings that are thousands of dollars and wildly overpriced letterboards, and of that one nightmare Facebook group g-d damn it. How this all is put in language of communication freedom. 26 letters is infinite possibilities they say - but only for the richest of families and disabled people. The rest of us will have to live with fewer possibilities.
forever tired of engineer dads of AAC users who think they can revolutionize whole field of AAC with new terrible designed apps that you can’t say anything with them. of minimally useful AI features that invade every AAC app to cash in on the new moment and not as tool that if used ethically could actually help us, but as way of fixing our grammar our language our cultural syntax we built up to sound “proper” to sound normal. for a machine, a large language model to model a small language for us, turn our inhuman voices human enough.
forever tired of how that brand and marketing is never for us, never for the people who actually use it to communicate. it is always for everyone around us, our parents and teachers paras and SLPs and BCBAs and practitioners and doctors and everyone except the person who ends up stuck stuck with a bad organized bad implemented bad taught profit motivated way to talk. of it being called behavior problems low ability incompetence noncompliance when we don’t use these systems.
you all need to do better. We need to democratize our communication, put it in our own hands. (My friend & communication partner who was in Occupy Wall Street suggested phrase “Occupy AAC” and think that is perfect.) And not talking about badly made non-robust open source apps either. Yes a robust system needs money and resources to make it well. One person or community alone cannot turn a robotic voice into a human one. But our human voice should not be in hands of companies at all.
(this is about the Tobii Dynavox subscription thing. But also exploitive and capitalism practices and just lazy practices in AAC world overall. Both in high tech “ mainstream “ AAC and methods that are like ones I use in sense that are both super stigmatized and also super branded and marketed, Like RPM and S2C and spellers method. )
359 notes · View notes
hms-no-fun · 7 months ago
Note
Hi! I just read your post about your opinion on "AI" and I really liked it. If it's no bother, what's your opinion on people who use it for studying? Like writing essays, solving problems and stuff like that?
I haven't been a fan of AI from the beginning and I've heard that you shouldn't ask it for anything because then you help it develop. But I don't know how to explain that to friends and classmates or even if it's true anymore. Because I've seen some of the prompts it can come up with and they're not bad and I've heard people say that the summaries AI makes are really good and I just... I dunno. I'm at a loss
Sorry if this is a lot or something you simply don't want to reply to. You made really good points when talking about AI and I really liked it and this has been weighing on me for a while :)
on a base level, i don't really have a strongly articulated opinion on the subject because i don't use AI, and i'm 35 so i'm not in school anymore and i don't have a ton of college-aged friends either. i have little exposure to the people who use AI in this way nor to the people who have to deal with AI being used in this way, so my perspective here is totally hypothetical and unscientific.
what i was getting at in my original AI post was a general macroeconomic point about how all of the supposed efficiency gains of AI are an extension of the tech CEO's dislike of paying and/or giving credit to anyone they deem less skilled or intelligent than them. that it's conspicuous how AI conveniently falls into place after many decades of devaluing and deskilling creative/artistic labor industries. historically, for a lot of artists the most frequently available & highest paying gigs were in advertising. i can't speak to the specifics when it comes to visual art or written copy, but i *can* say that when i worked in the oklahoma film industry, the most coveted jobs were always the commercials. great pay for relatively less work, with none of the complications that often arise working on amateur productions. not to mention they were union gigs, a rare enough thing in a right to work state, so anyone trying to make a career out of film work wanting to bank their union hours to qualify for IATSE membership always had their ears to the ground for an opening. which didn't come often because, as you might expect, anyone who *got* one of those jobs aimed to keep it as long as possible. who could blame em, either? one person i met who managed to get consistent ad work said they could afford to work all of two or three months a year, so they could spend the rest of their time doing low-budget productions and (occasionally) student films.
there was a time when this was the standard for the film industry, even in LA; you expected to work 3 to 5 shows a year (exact number's hard to estimate because production schedules vary wildly between ads, films, and tv shows) for six to eight months if not less, so you'd have your bills well covered through the lean periods and be able to recover from what is an enormously taxing job both physically and emotionally. this was never true for EVERYONE, film work's always been a hustle and making a career of it is often a luck-based crapshoot, but generally that was the model and for a lot of folks it worked. it meant more time to practice their skills on the job, sustainably building expertise and domain knowledge that they could then pass down to future newcomers. anything that removes such opportunities decreases the amount of practice workers get, and any increased demand on their time makes them significantly more likely to burn out of the industry early. lower pay, shorter shoots, busier schedules, these aren't just bad for individual workers but for the entire industry, and that includes the robust and well-funded advertising industry.
well, anyway, this year's coca-cola christmas ad was made with AI. they had maybe one person on quality control using an adobe aftereffects mask to add in the coke branding. this is the ultimate intended use-case for AI. it required the expertise of zero unionized labor, and worst of all the end result is largely indistinguishable from the alternative. you'll often see folks despair at this verisimilitude, particularly when a study comes out that shows (for instance) people can't tell the difference between real poetry and chat gpt generated poetry. i despair as well, but for different reasons. i despair that production of ads is a better source of income and experience for film workers than traditional movies or television. i despair that this technology is fulfilling an age-old promise about the disposability of artistic labor. poetry is not particularly valued by our society, is rarely taught to people beyond a beginner's gloss on meter and rhyme. "my name is sarah zedig and i'm here to say, i'm sick of this AI in a major way" type shit. end a post with the line "i so just wish that it would go away and never come back again!" and then the haiku bot swoops in and says, oh, 5/7/5 you say? that is technically a haiku! and then you put a haiku-making minigame in your crowd-pleasing japanese nationalist open world chanbara simulator, because making a haiku is basically a matter of selecting one from 27 possible phrase combinations. wait, what do you mean the actual rules of haiku are more elastic and subjective than that? that's not what my english teacher said in sixth grade!
AI is able to slip in and surprise us with its ability to mimic human-produced art because we already treat most human-produced art like mechanical surplus of little to no value. ours is a culture of wikipedia-level knowledge, where you have every incentive to learn a lot of facts about something so that you can sufficiently pretend to have actually experienced it. but this is not to say that humans would be better able to tell the difference between human produced and AI produced poetry if they were more educated about poetry! the primary disconnect here is economic. Poets already couldn't make a fucking living making poetry, and now any old schmuck can plug a prompt into chatgpt and say they wrote a sonnet. even though they always had the ability to sit down and write a sonnet!
boosters love to make hay about "deskilling" and "democratizing" and "making accessible" these supposedly gatekept realms of supposedly bourgeois expression, but what they're really saying (whether they know it or not) is that skill and training have no value anymore. and they have been saying this since long before AI as we know it now existed! creative labor is the backbone of so much of our world, and yet it is commonly accepted as a poverty profession. i grew up reading books and watching movies based on books and hearing endless conversation about books and yet when i told my family "i want to be a writer" they said "that's a great way to die homeless." like, this is where the conversation about AI's impact starts. we already have a culture that simultaneously NEEDS the products of artistic labor, yet vilifies and denigrates the workers who perform that labor. folks see a comic panel or a corporate logo or a modern art piece and say "my kid could do that," because they don't perceive the decades of training, practice, networking, and experimentation that resulted in the finished product. these folks do not understand that just because the labor of art is often invisible doesn't mean it isn't work.
i think this entire conversation is backwards. in an ideal world, none of this matters. human labor should not be valued over machine labor because it inherently possesses an aura of human-ness. art made by humans isn't better than AI generated art on qualitative grounds. art is subjective. you're not wrong to find beauty in an AI image if the image is beautiful. to my mind, the value of human artistic labor comes down to the simple fact that the world is better when human beings make art. the world is better when we have the time and freedom to experiment, to play, to practice, to develop and refine our skills to no particular end except whatever arbitrary goal we set for ourselves. the world is better when people collaborate on a film set to solve problems that arise organically out of the conditions of shooting on a live location. what i see AI being used for is removing as many opportunities for human creativity as possible and replacing them with statistical averages of prior human creativity. this passes muster because art is a product that exists to turn a profit. because publicly traded companies have a legal responsibility to their shareholders to take every opportunity to turn a profit regardless of how obviously bad for people those opportunities might be.
that common sense says writing poetry, writing prose, writing anything is primarily about reaching the end of the line, about having written something, IS the problem. i've been going through the many unfinished novels i wrote in high school lately, literally hundreds of thousands of words that i shared with maybe a dozen people and probably never will again. what value do those words have? was writing them a waste of time since i never posted them, never finished them, never turned a profit off them? no! what i've learned going back through those old drafts is that i'm only the writer i am today BECAUSE i put so many hours into writing generic grimdark fantasy stories and bizarrely complicated werewolf mythologies.
you know i used to do open mics? we had a poetry group that met once a month at a local cafe in college. each night we'd start by asking five words from the audience, then inviting everyone to compose a poem using those words in 10 to 15 minutes. whoever wanted to could read their poem, and whoever got the most applause won a free drink from the cafe. then we'd spend the rest of the night having folks sign up to come and read whatever. sometimes you'd get heartfelt poems about personal experiences, sometimes you'd get ambitious soundcloud rappers, sometimes you'd get a frat guy taking the piss, sometimes you'd get a mousy autist just doing their best. i don't know that any of the poetry i wrote back then has particular value today, but i don't really care. the point of it was the experience in that moment. the experience of composing something on the fly, or having something you wrote a couple days ago, then standing up and reading it. the value was in the performance itself, in the momentary synthesis between me and the audience. i found out then that i was pretty good at making people cry, and i could not have had that experience in any other venue. i could not have felt it so viscerally had i just posted it online. and i cannot wrap up that experience and give it to you, because it only existed then.
i think more people would write poetry if they had more hours in a day to spare for frivolities, if there existed more spaces where small groups could organize open mics, if transit made those spaces more widely accessible, if everyone made enough money that they weren't burned the fuck out and not in the mood to go to an open mic tonight, if we saw poetry as a mode of personal reflection which was as much about the experience of having written it as anything else. this is the case for all the arts. right now, the only people who can afford to make a living doing art are already wealthy, because art doesn't pay well. this leads to brain drain and overall lowering quality standards, because the suburban petty bouge middle class largely do not experience the world as it materially exists for the rest of us. i often feel that many tech CEOs want to be remembered the way andy warhol is remembered. they want to be loved and worshipped not just for business acumen but for aesthetic value, they want to get the kind of credit that artists get-- because despite the fact that artists don't get paid shit, they also frequently get told by people "your work changed my life." how is it that a working class person with little to no education can write a story that isn't just liked but celebrated, that hundreds or thousands of people imprint on, that leaves a mark on culture you can't quantify or predict or recreate? this is AI's primary use-case, to "democratize" art in such a way that hacks no longer have to work as hard to pretend to be good at what they do. i mean, hell, i have to imagine every rich person with an autobiography in the works is absolutely THRILLED that they no longer have to pay a ghost writer!
so, circling back around to the meat of your question. as far as telling people not to use AI because "you're just helping to train it," that ship has long since sailed. getting mad at individuals for using AI right now is about as futile as getting mad at individuals for not masking-- yes, obviously they should wear a mask and write their own essays, but to say this is simply a matter of millions of individuals making the same bad but unrelated choice over and over is neoliberal hogwash. people stopped masking because they were told to stop masking by a government in league with corporate interests which had every incentive to break every avenue of solidarity that emerged in 2020. they politicized masks, calling them "the scarlet letter of [the] pandemic". biden himself insisted this was "a pandemic of the unvaccinated", helpfully communicating to the public that if you're vaccinated, you don't need to mask. all those high case numbers and death counts? those only happen to the bad people.
now you have CEOs and politicians and credulous media outlets and droves of grift-hungry influencers hard selling the benefits of AI in everything everywhere all the time. companies have bent over backwards to incorporate AI despite ethics and security worries because they have a fiduciary responsibility to their shareholders, and everyone with money is calling this the next big thing. in short, companies are following the money, because that's what companies do. they, in turn, are telling their customers what tools to use and how. so of course lots of people are using AI for things they probably shouldn't. why wouldn't they? "the high school/college essay" as such has been quantized and stripmined by an education system dominated by test scores over comprehension. it is SUPPOSED to be an exercise in articulating ideas, to teach the student how to argue persuasively. the final work has little to no value, because the point is the process. but when you've got a system that lives and dies by its grades, within which teachers are given increasingly more work to do, less time to do it in, and a much worse paycheck for their trouble, the essay increasingly becomes a simple pass/fail gauntlet to match the expected pace set by the simple, clean, readily gradable multiple choice quiz. in an education system where the stakes for students are higher than they've ever been, within which you are increasingly expected to do more work in less time with lower-quality guidance from your overworked teachers, there is every incentive to get chatgpt to write your essay for you.
do you see what i'm saying? we can argue all day about the shoulds here. of course i think it's better when people write their own essays, do their own research, personally read the assigned readings. but cheating has always been a problem. a lot of these same fears were aired over the rising popularity of cliffs notes in the 90s and 2000s! the real problem here is systemic. it's economic. i would have very little issue with the output of AI if existing conditions were not already so precarious. but then, if the conditions were different, AI as we know it likely would not exist. it emerges today as the last gasp of a tech industry that has been floundering for a reason to exist ever since the smart phone dominated the market. they tried crypto. they tried the metaverse. now they're going all-in on AI because it's a perfect storm of shareholder-friendly buzzwords and the unscientific technomythology that's been sold to laymen by credulous press sycophants for decades. It slots right into this niche where the last of our vestigial respect for "the artist" once existed. it is the ultimate expression of capitalist realism, finally at long last doing away with the notion that the suits at disney could never in their wildest dreams come up with something half as cool as the average queer fanfic writer. now they've got a program that can plagiarize that fanfic (along with a dozen others) for them, laundering the theft through a layer of transformation which perhaps mirrors how the tech industry often exploits open source software to the detriment of the open source community. the catastrophe of AI is that it's the fulfillment of a promise that certainly predates computers at the very least.
so, i don't really know what to tell someone who uses AI for their work. if i was talking to a student, i'd say that relying on chatgpt is really gonna screw you over when it comes time to take the SAT or ACT, and you have to write an essay from scratch by hand in a monitored environment-- but like, i also think the ACT and SAT and probably all the other standardized tests shouldn't exist? or at the very least ought to be severely devalued, since prep for those tests often sabotages the integrity of actual classroom education. although, i guess at this point the only way forward for education (that isn't getting on both knees and deep-throating big tech) is more real-time in-class monitored essay writing, which honestly might be better for all parties anyway. of course that does nothing to address research essays you can't write in a single class session. to someone who uses AI for research, i'd probably say the same thing as i would to someone who uses wikipedia: it's a fine enough place to start, but don't cite it. click through links, find sources, make sure what you're reading is real, don't rely on someone else's generalization. know that chatgpt is likely not pulling information from a discrete database of individual files that it compartmentalizes the way you might expect, but rather is a statistical average of a broad dataset about which it cannot have an opinion or interpretation. sometimes it will link you to real information, but just as often it will invent information from whole cloth. honestly, the more i talk it out, the more i realize all this advice is basically identical to the advice adults were giving me in the early 2000s.
which really does cement for me that the crisis AI is causing in education isn't new and did not come from nowhere. before chatgpt, students were hiring freelancers on fiverr. i already mentioned cliffs notes. i never used any of these in college, but i'll also freely admit that i rarely did all my assigned reading. i was the "always raises her hand" bitch, and every once in a while i'd get other students who were always dead silent in class asking me how i found the time to get the reading done. i'd tell them, i don't. i read the beginning, i read the ending, and then i skim the middle. whenever a word or phrase jumps out at me, i make a note of it. that way, when the professor asks a question in class, i have exactly enough specific pieces of information at hand to give the impression of having done the reading. and then i told them that i learned how to do this from the very same professor that was teaching that class. the thing is, it's not like i learned nothing from this process. i retained quite a lot of information from those readings! this is, broadly, a skill that emerges from years of writing and reading essays. but then you take a step back and remember that for most college students (who are not pursuing any kind of arts degree), this skillset is relevant to an astonishingly minimal proportion of their overall course load. college as it exists right now is treated as a jobs training program, within which "the essay" is a relic of an outdated institution that highly valued a generalist liberal education where today absolute specialization seems more the norm. so AI comes in as the coup de grâce to that old institution. artists like myself may not have the constitution for the kind of work that colleges now exist to funnel you into, but those folks who've never put a day's thought into the work of making art can now have a computer generate something at least as good at a glance as basically anything i could make. as far as the market is concerned, that's all that matters. the contents of an artwork, what it means to its creator, the historic currents it emerges out of, these are all technicalities that the broad public has been well trained not to give a shit about most of the time. what matters is the commodity and the economic activity it exists to generate.
but i think at the end of the day, folks largely want to pay for art made by human beings. that it's so hard for a human being to make a living creating and selling art is a question far older than AI, and whose answer hasn't changed. pay workers more. drastically lower rents. build more affordable housing. make healthcare free. make education free. massively expand public transit. it is simply impossible to overstate how much these things alone would change the conversation about AI, because it would change the conversation about everything. SO MUCH of the dominance of capital in our lives comes down to our reliance on cars for transit (time to get a loan and pay for insurance), our reliance on jobs for health insurance (can't quit for moral reasons if it's paying for your insulin), etc etc etc. many of AI's uses are borne out of economic precarity and a ruling class desperate to vacuum up every loose penny they can find. all those billionaires running around making awful choices for the rest of us? they stole those billions. that is where our security went. that is why everything is falling apart, because the only option remaining to *every* institutional element of society is to go all-in on the profit motive. tax these motherfuckers and re-institute public arts funding. hey, did you know the us government used to give out grants to artists? did you know we used to have public broadcast networks where you could make programs that were shown to your local community? why the hell aren't there public youtube clones? why aren't there public transit apps? why aren't we CONSTANTLY talking about nationalizing these abusive fucking industries that are falling over themselves to integrate AI because their entire modus operandi is increasing profits regardless of product quality?
these are the questions i ask myself when i think about solutions to the AI problem. tech needs to be regulated, the monopolies need breaking up, but that's not enough. AI is a symptom of a much deeper illness whose treatment requires systemic solutions. and while i'm frustrated when i see people rely on AI for their work, or otherwise denigrate artists who feel AI has devalued their field, on some level i can't blame them. they are only doing what they've been told to do. all of which merely strengthens my belief in the necessity of an equitable socialist future (itself barely step zero in the long path towards a communist future, and even that would only be a few steps on the even longer path to a properly anarchist future). improve the material conditions and you weaken the dominance of capitalist realism, however minutely. and while there are plenty of reasons to despair at the likelihood of such a future given a second trump presidency, i always try to remember that socialist policies are very popular and a *lot* of that popularity emerged during the first trump administration. the only wrong answer here is to assume that losing an election is the same thing as losing a war, that our inability to put the genie back in its bottle means we can't see our own wishes granted.
i dunno if i answered your question but i sure did say a lot of stuff, didn't i?
112 notes · View notes
mariacallous · 8 months ago
Text
On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a “confabulation” or “hallucination” in the AI field.
Upon its release in 2022, OpenAI claimed that Whisper approached “human level robustness” in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.
The fabrications pose particular risks in health care settings. Despite OpenAI’s warnings against using Whisper for “high-risk domains,” over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children’s Hospital Los Angeles are among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.
Nabla acknowledges that Whisper can confabulate, but it also reportedly erases original audio recordings “for data safety reasons.” This could cause additional issues, since doctors cannot verify accuracy against the source material. And deaf patients may be highly impacted by mistaken transcripts since they would have no way to know if medical transcript audio is accurate or not.
The potential problems with Whisper extend beyond health care. Researchers from Cornell University and the University of Virginia studied thousands of audio samples and found Whisper adding nonexistent violent content and racial commentary to neutral speech. They found that 1 percent of samples included “entire hallucinated phrases or sentences which did not exist in any form in the underlying audio” and that 38 percent of those included “explicit harms such as perpetuating violence, making up inaccurate associations, or implying false authority.”
In one case from the study cited by AP, when a speaker described “two other girls and one lady,” Whisper added fictional text specifying that they “were Black.” In another, the audio said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” Whisper transcribed it to, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”
An OpenAI spokesperson told the AP that the company appreciates the researchers’ findings and that it actively studies how to reduce fabrications and incorporates feedback in updates to the model.
Why Whisper Confabulates
The key to Whisper’s unsuitability in high-risk domains comes from its propensity to sometimes confabulate, or plausibly make up, inaccurate outputs. The AP report says, "Researchers aren’t certain why Whisper and similar tools hallucinate," but that isn't true. We know exactly why Transformer-based AI models like Whisper behave this way.
Whisper is based on technology that is designed to predict the next most likely token (chunk of data) that should appear after a sequence of tokens provided by a user. In the case of ChatGPT, the input tokens come in the form of a text prompt. In the case of Whisper, the input is tokenized audio data.
The transcription output from Whisper is a prediction of what is most likely, not what is most accurate. Accuracy in Transformer-based outputs is typically proportional to the presence of relevant accurate data in the training dataset, but it is never guaranteed. If there is ever a case where there isn't enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model will fall back on what it “knows” about the relationships between sounds and words it has learned from its training data.
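A toy illustration of that point (not Whisper's code, just the generic decoding step every Transformer-based model performs):

```python
# The decoder turns scores into probabilities and emits the most LIKELY next token.
# "Likely" is defined by the training data, not by fidelity to the audio.
import torch

vocab = ["thanks", "for", "watching", "take", "the", "umbrella"]   # made-up mini-vocabulary
logits = torch.tensor([2.1, 1.9, 1.8, 0.2, 0.3, 0.1])             # made-up decoder scores
probs = torch.softmax(logits, dim=-1)
next_token = vocab[int(torch.argmax(probs))]
print(next_token, float(probs.max()))   # a confident prediction is not a faithful transcription
```

The numbers above are invented, but they show why garbled audio can still yield a fluent, confident-looking transcript.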
According to OpenAI in 2022, Whisper learned those statistical relationships from “680,000 hours of multilingual and multitask supervised data collected from the web.” But we now know a little more about the source. Given Whisper's well-known tendency to produce certain outputs like "thank you for watching," "like and subscribe," or "drop a comment in the section below" when provided silent or garbled inputs, it's likely that OpenAI trained Whisper on thousands of hours of captioned audio scraped from YouTube videos. (The researchers needed audio paired with existing captions to train the model.)
There's also a phenomenon called “overfitting” in AI models where information (in this case, text found in audio transcriptions) encountered more frequently in the training data is more likely to be reproduced in an output. In cases where Whisper encounters poor-quality audio in medical notes, the AI model will produce what its neural network predicts is the most likely output, even if it is incorrect. And the most likely output for any given YouTube video, since so many people say it, is “thanks for watching.”
In other cases, Whisper seems to draw on the context of the conversation to fill in what should come next, which can lead to problems because its training data could include racist commentary or inaccurate medical information. For example, if many examples of training data featured speakers saying the phrase “crimes by Black criminals,” when Whisper encounters a “crimes by [garbled audio] criminals” audio sample, it will be more likely to fill in the transcription with “Black."
In the original Whisper model card, OpenAI researchers wrote about this very phenomenon: "Because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself."
So in that sense, Whisper "knows" something about the content of what is being said and keeps track of the context of the conversation, which can lead to issues like the one where Whisper identified two women as being Black even though that information was not contained in the original audio. Theoretically, this erroneous scenario could be reduced by using a second AI model trained to pick out areas of confusing audio where the Whisper model is likely to confabulate and flag the transcript in that location, so a human could manually check those instances for accuracy later.
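As a rough sketch of that flagging idea, here is a much simpler stand-in that uses the open-source openai-whisper package's own per-segment confidence scores rather than a separately trained model; the threshold values and file name are arbitrary assumptions:

```python
# Flag low-confidence segments for human review instead of trusting them blindly.
import whisper

model = whisper.load_model("base")
result = model.transcribe("patient_visit.wav")   # hypothetical audio file

for seg in result["segments"]:
    suspicious = seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.5
    tag = "REVIEW" if suspicious else "ok"
    print(f"[{seg['start']:7.2f}-{seg['end']:7.2f}] {tag}: {seg['text']}")
```

A heuristic like this won't catch every confabulation, but it at least surfaces the stretches of audio where the model itself was least sure.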
Clearly, OpenAI's advice not to use Whisper in high-risk domains, such as critical medical records, was a good one. But health care companies are constantly driven by a need to decrease costs by using seemingly "good enough" AI tools—as we've seen with Epic Systems using GPT-4 for medical records and UnitedHealth using a flawed AI model for insurance decisions. It's entirely possible that people are already suffering negative outcomes due to AI mistakes, and fixing them will likely involve some sort of regulation and certification of AI tools used in the medical field.
87 notes · View notes
malfunctioncrash · 3 months ago
Text
(NC, pain play)
I buy my sexbot second hand. It's an older model, less streamlined than the new droids they're coming out with: clunky, dusty, but it still serves its purpose extremely well. It comes with a charging cord and a case of different cocks I can manually change out.
It doesn't have the robust AI newer models are required to run with. Safety concerns make such basic, barebones programming illegal to manufacture nowadays. But it's cheap, and I don't care about anything fancy. I just want to get fucked. I use it without issue a handful of times. Sometimes if I push a session too long the thrusting mechanism gets a little rough, jerky instead of smooth. The room fills with heat, and there's a smell other than sex, sweat, and lube- something thick and hot and industrial.
It doesn't respond to my body or the way I moan. It doesn't touch me gently or prep my hole before it slides it's cock inside. It doesn't love me. It's a machine, built to fuck and nothing more. It listens and processes basic verbal input, but there's no complex machine thinking past that. That's part of the draw to me.
One day I'm on my hands and knees with it mounting me from behind. It's segmented fingers are wrapped so tight around my hips I'll be covered in bruises once our session is over. I've spoken a long string of commands to it today. Faster, harder, deeper, harder, harder, fuck me harder- until it's pounding me with the custom made thick, fat cock I've got it equipped with so rough I know It'll be hard to sit the next day.
There's a fizzle and a pop somewhere behind me, but I don't notice over the sound of metal slamming against my ass. Modern sexbots have dozens of failsafes to stop a session at any time. Older models like mine have only one point of failure- verbal commands are the only way to control it. And that pop was the sound of its microphone receiver going out.
I don't notice until the second time I cum, riding out the thrusts as I come down and finally decide I'm done. Slow down I command, content to lazily fuck the last bit of pleasure out of my overworked body. But it doesn't slow down.
I repeat myself once, twice, three times. Stop! I order but my sexbot just keep reaming me with the brutal, steady pace I'd demanded. It hurts now, pleasure bleeding into pain. I struggle and thrash and writhe but it's grip on my hips keeps me in place as it continues to fuck me. It never once stutters its pace or it's depth, shoving itself into me to the root of its cock every single time. Please I beg but it's rudimentary AI wouldn't know how to respond to that even if it's audio receptors were in perfect working order.
Hours pass. My hole is raw and chafed. My arms and legs have given out so I'm collapsed on the floor in a heap. My sexbot is still railing me, holding me up by my hips as it mercilessly ruins me. The custom cock is so fucking big. Why did I make it so thick? So long? The room stinks of overworked machinery. Everytime I start to grow numb the the consistent strokes the bots hips skip and stutter and I'm forced to stay present. My phone is ringing but I can't reach it. Everything hurts.
Finally, it runs out of battery. It returns to its neutral position as the power flickers out, releasing my hips and unceremoniously popping its cock out of my ruined hole as it heads to the other room to await charging, just like it was hard programmed to do.
My hole gapes open even as swollen as it is. I've already cried out all my tears but one more rolls down my cheek in relief. It's over. I pass out on the floor.
A month later, I'm back on my hands and knees with my bot mounting me from behind. This time, I've rigged its microphone to turn off on purpose.
35 notes · View notes
rideboomindia · 11 months ago
Text
Tumblr media
Based on the search results, here are some innovative technologies that RideBoom could implement to enhance the user experience and stay ahead of ONDC:
Enhanced Safety Measures: RideBoom has already implemented additional safety measures, including enhanced driver background checks, real-time trip monitoring, and improved emergency response protocols. [1] To stay ahead, they could further enhance safety by integrating advanced telematics and AI-powered driver monitoring systems to ensure safe driving behavior.
Personalized and Customizable Services: RideBoom could introduce a more personalized user experience by leveraging data analytics and machine learning to understand individual preferences and offer tailored services. This could include features like customizable ride preferences, personalized recommendations, and the ability to save preferred routes or driver profiles. [1]
Seamless Multimodal Integration: To provide a more comprehensive transportation solution, RideBoom could integrate with other modes of transportation, such as public transit, bike-sharing, or micro-mobility options. This would allow users to plan and book their entire journey seamlessly through the RideBoom app, enhancing the overall user experience. [1]
Sustainable and Eco-friendly Initiatives: RideBoom has already started introducing electric and hybrid vehicles to its fleet, but they could further expand their green initiatives. This could include offering incentives for eco-friendly ride choices, partnering with renewable energy providers, and implementing carbon offset programs to reduce the environmental impact of their operations. [1]
Innovative Payment and Loyalty Solutions: To stay competitive with ONDC's zero-commission model, RideBoom could explore innovative payment options, such as integrated digital wallets, subscription-based services, or loyalty programs that offer rewards and discounts to frequent users. This could help attract and retain customers by providing more value-added services. [2]
Robust Data Analytics and Predictive Capabilities: RideBoom could leverage advanced data analytics and predictive modeling to optimize their operations, anticipate demand patterns, and proactively address user needs. This could include features like dynamic pricing, intelligent routing, and personalized recommendations to enhance the overall user experience. [1]
By implementing these innovative technologies, RideBoom can differentiate itself from ONDC, provide a more seamless and personalized user experience, and stay ahead of the competition in the on-demand transportation market.
57 notes · View notes
deepdreamnights · 7 months ago
Text
youtube
Tribute AMV for Dr. Underfang and Mrs. Natalie Nice/Nautilus.
From TyrannoMax and the Warriors of the Core, everyone's favorite Buzby-Spurlock animated series.
After all, who doesn't love a good bad guy, especially when they come in pairs?
Process/Tutorial Under the Fold.
This is, of course, a part of my TyrannoMax unreality project, with most of these video clips coming from vidu, taking advantage of their multi-entity consistency feature (more on that later). This is going to be part of a larger villain showcase video, but this section is going to be its own youtube short, so it's a video on its own.
The animation here is intentionally less smooth than the original, as I'm going for a 1980s animated series look, and even in the well-animated episodes you were typically getting 12 FPS (animating 'on twos'), with 8 (on threes) being way more common. As I get access to better animation software to rework these (currently just fuddling along with PS) I'm going to start using this to my advantage by selectively dropping blurry intermediate frames.
I went with 12 since most of these clips are, in the meta-lore, from the opening couple of episodes and the opening credits, where most of the money for a series went back in the day.
Underfang's transformation sequence was my testing for several of my techniques for making larger TyrannoMax videos. Among those was selectively dropping some of the warped frames as I mentioned above, though for a few shots I had to wind up re-painting sections.
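If anyone wants to fake the "on twos" look without dedicated animation software, a crude sketch of the frame-dropping idea in Python/OpenCV is below. It does uniform decimation rather than the hand-picked drops described above, and the file names are placeholders:

```python
# Keep every 2nd drawing of a 24 fps clip and hold it for two frames ("on twos").
import cv2

cap = cv2.VideoCapture("underfang_raw.mp4")                     # hypothetical input clip
fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("underfang_on_twos.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

held, i = None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % 2 == 0:        # keep every other drawing...
        held = frame
    out.write(held)       # ...and hold it, so motion reads at 12 fps
    i += 1

cap.release()
out.release()
```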
Multi-entity consistency can keep difficult dinosaur characters stable on their own, but it wasn't up to the task of keeping the time-temple accurate enough for my use, as you can see here with the all-t-rex-and-some-moving-statues version, versus the multi-species effort I had planned:
Tumblr media Tumblr media
The answer was simple, chroma-key.
Tumblr media Tumblr media
Most of the Underfang transformation shots were done this way. The foot-stomp was too good to leave just because he sprouted some extra toes, so that was worth repainting a few frames of in post.
Tumblr media Tumblr media
Vidu kind of over-did the texturing on a few shots (and magenta was a poor choice of key-color) so I had to go in and manually purple-ize the background frame by frame for the spin-shot.
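For the curious, the chroma-key compositing itself is simple enough to sketch in a few lines of Python with numpy and Pillow. This is a generic key, not the exact process used here; the file names and tolerance are made up, and it assumes the foreground frame and background plate are the same size:

```python
# Replace pixels near the magenta key color with the background plate.
import numpy as np
from PIL import Image

fg = np.asarray(Image.open("underfang_frame.png").convert("RGB"), dtype=np.int16)
bg = np.asarray(Image.open("time_temple_plate.png").convert("RGB"), dtype=np.int16)

key = np.array([255, 0, 255], dtype=np.int16)   # magenta key color
tolerance = 60                                   # how close to the key counts as "background"

dist = np.abs(fg - key).sum(axis=-1)             # cheap per-pixel distance to the key color
mask = dist < tolerance

comp = fg.copy()
comp[mask] = bg[mask]                            # drop the background plate in behind the character
Image.fromarray(comp.astype(np.uint8)).save("underfang_composited.png")
```

(A production keyer would also soften the matte edges and despill, which is roughly where the frame-by-frame repainting above came in.)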
This is on top of the normal cropping, scaling, color-correcting, etc that goes into any editing job of this type.
Tumblr media
It's like I say: nearly all AI you see is edited, most of it curated, even the stuff that's awful and obvious (never forget: enragement is engagement)
Multi-Entity Consistency:
Tumblr media
Vidu's big advantage is reference-to-video. For those who have been following the blog for a while, R2V is sort of like Midjourney's --cref character reference feature. A lot of video AIs have start-end frame functionality, but being able to give the robot a model sheet and effectively have it run with it is a darn nice feature for narrative.
Unlike the current version of Midjourney's --cref feature, however, you can reference multiple concepts with multiple images.
It is super-helpful when you need to get multiple characters to interact, because without it, they tend to blend into each other conceptually.
I also use it to add locations, mainly to keep them looking appropriately background-painting rather than a 3d background or something that looks like a modded photo like a lot of modern animation does.
The potential here for using this tech as a force multiplier for small animation projects really shines through, and I really hope I'm just one of several attempting to use it for that purpose.
Music:
The song is "The Boys Have a Second Lead Pipe", one of my Suno creations. I was thinking of using Dinowave (Let's Dance To) but I'm saving that for a music video of live-action dinosovians.
Prompting:
You can tell by the screenshot above that my prompts have gotten... robust. Vidu's prompting system seems to understand things better when given tighter reins (some AIs are the opposite), and it takes information with time-codes semi-regularly, so my prompts are now more like:
low-angle shot, closeup, of a green tyrannosaurus-mad-scientist wearing a blue shirt and purple tie with white lab coat and a lavender octopus-woman with tentacles growing from her head, wearing a teal blouse, purple skirt, purple-gray pantyhose. they stand close to each other, arms crossed, laughing evilly. POV shot of them looming over the viewer menacingly. The background is a city, in the style of animation background images. 1986 vintage cel-shaded cartoon clip, a dinosaur-anthro wearing a lab coat, shirt and tie reaches into his coat with his right hand and pulls out a laser gun, he takes aim, points the laser gun at the camera and fires. The laser effect is short streaks of white energy with a purple glow. The whole clip has the look and feel of vintage 1986 action adventure cel-animated cartoons. The animation quality is high, with flawless motion and anatomy. animated by Tokyo Movie Shinsha, studio Ghibli, don bluth. BluRay remaster.
While other prompts take a more scripted approach, with time-code callouts for individual actions.
23 notes · View notes
aefensteorrra · 28 days ago
Text
it feels so insulting when you have experience with building language models from scratch, not using libraries, defining functions literally just by implementing the maths behind it, to have using a certain generative LLM suggested to you to automate an aspect of your job when the reason you do it manually is because you appreciate the rigour and certainty behind doing it that way... like I know I understand generative AI better than anyone at my job who does not have a degree in computer science, meaning I understand why, no matter how robust you think your prompts are, they are never quite as robust as you think and you will have to painstakingly analyse the output anyway. so I will keep doing it myself
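(for the curious: "implementing the maths behind it" looks roughly like this — a toy character-level bigram model with softmax and the negative log-likelihood written out by hand. the corpus and setup here are stand-ins for illustration, not anyone's actual work)

```python
# A tiny taste of implementing the maths yourself: a character-level bigram
# language model with softmax written out by hand. Purely illustrative.
import numpy as np

corpus = "the rigour and certainty of doing it by hand"
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# Bigram counts with add-one smoothing act as the model's parameters.
counts = np.ones((V, V))
for a, b in zip(corpus, corpus[1:]):
    counts[idx[a], idx[b]] += 1

def softmax(z):
    z = z - z.max()            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def next_char_distribution(c):
    # softmax over log-counts recovers the normalised bigram probabilities
    return softmax(np.log(counts[idx[c]]))

# Negative log-likelihood of the corpus under the model, computed by hand.
nll = -sum(np.log(next_char_distribution(a)[idx[b]]) for a, b in zip(corpus, corpus[1:]))
print(f"avg NLL per bigram: {nll / (len(corpus) - 1):.3f}")
```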
17 notes · View notes
ixnai · 2 months ago
Text
The allure of speed in technology development is a siren’s call that has led many innovators astray. “Move fast and break things” is a mantra that has driven the tech industry for years, but when applied to artificial intelligence, it becomes a perilous gamble. The rapid iteration and deployment of AI systems without thorough vetting can lead to catastrophic consequences, akin to releasing a flawed algorithm into the wild without a safety net.
AI systems, by their very nature, are complex and opaque. They operate on layers of neural networks that mimic the human brain’s synaptic connections, yet they lack the innate understanding and ethical reasoning that guide human decision-making. The haste to deploy AI without comprehensive testing is akin to launching a spacecraft without ensuring the integrity of its navigation systems. Error is not merely probable; it is inevitable.
The pitfalls of AI are numerous and multifaceted. Bias in training data can lead to discriminatory outcomes, while lack of transparency in decision-making processes can result in unaccountable systems. These issues are compounded by the “black box” nature of many AI models, where even the developers cannot fully explain how inputs are transformed into outputs. This opacity is not merely a technical challenge but an ethical one, as it obscures accountability and undermines trust.
To avoid these pitfalls, a paradigm shift is necessary. The development of AI must prioritize robustness over speed, with a focus on rigorous testing and validation. This involves not only technical assessments but also ethical evaluations, ensuring that AI systems align with societal values and norms. Techniques such as adversarial testing, where AI models are subjected to challenging scenarios to identify weaknesses, are crucial. Additionally, the implementation of explainable AI (XAI) can demystify the decision-making processes, providing clarity and accountability.
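To make adversarial testing concrete, consider a minimal sketch in the FGSM spirit: perturb an input along the sign of the loss gradient and check whether the model's decision flips under a small, bounded change. The toy logistic model, its weights, and the perturbation budget below are illustrative assumptions rather than a production method.

```python
# Toy adversarial test (FGSM-style) on a hand-rolled logistic classifier.
# The weights, input, and epsilon are illustrative assumptions.
import numpy as np

w = np.array([1.5, -2.0, 0.5])      # fixed "trained" weights for the sketch
b = 0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid output

def fgsm_perturb(x, y_true, epsilon=0.25):
    # Gradient of binary cross-entropy w.r.t. the input is (p - y) * w.
    grad_x = (predict_proba(x) - y_true) * w
    return x + epsilon * np.sign(grad_x)         # step that increases the loss

x = np.array([0.8, 0.3, -0.5])
y = 1.0
x_adv = fgsm_perturb(x, y)

print("clean prediction:", predict_proba(x) > 0.5)
print("adversarial prediction:", predict_proba(x_adv) > 0.5)
# A robust model's decision should not flip under such a small, bounded nudge;
# when it does, the test has surfaced a weakness worth fixing before deployment.
```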
Moreover, interdisciplinary collaboration is essential. AI development should not be confined to the realm of computer scientists and engineers. Ethicists, sociologists, and legal experts must be integral to the process, providing diverse perspectives that can foresee and mitigate potential harms. This collaborative approach ensures that AI systems are not only technically sound but also socially responsible.
In conclusion, the reckless pursuit of speed in AI development is a dangerous path that risks unleashing untested and potentially harmful technologies. By prioritizing thorough testing, ethical considerations, and interdisciplinary collaboration, we can harness the power of AI responsibly. The future of AI should not be about moving fast and breaking things, but about moving thoughtfully and building trust.
8 notes · View notes
reikiajakoiranruohoja · 4 months ago
Text
Disassembly Drone headcanons part 1: Make and model
WARNING: Discussion of gore, experimentation, AI-abuse, pregnancy horror and realistic predator behaviour. Please take care.
Section 1: Make and model
-To start with, Disassembly Drones (from here just DD for my sanity) were not made by a company. They had no QA period, and they had no long-term update timeline. The DD were made by a reality-warping, non-omniscient creature that would have done better with purely biological creations, but which had access to unlimited resources of flesh, drone bodies and a 3D plastic and metal printer.
-Your basic DD or an older Solver Drone counts as a drone purely because they are still running software and not wetware. You could replace the core with a human brain and get a sufficient enough cyborg. Less the Terminator and more Andromon.
-Because their basic programming was edited by a flesh-morphing creature, Material Gathering Mode prioritizes flesh over metal. It can and will use metal, but it does better with flesh and bone.
-Due to this hack job of a build, the DDs should be buggy as all hell. However, fleshy parts are more forgiving than metal and gears. Still, the overheating started as a bug that the Solver used to motivate the DDs to hunt.
-Despite what Worker Drones on Copper 9 think, the DDs were originally made for human and war machine killing. They do equally well on blood+hearts as oil+metal.
-The DDs have semi-bionic musculature that they can somewhat train. Yes, they can have abs and other beefy features.
-The parts that are purely organic in a DD/older Solver Drone are the lungs, stomach, salivary glands and various veins. The rest is an uncomfortable mix of machine and flesh.
-Aside from the golden trio (J, V and N), the Solver randomized the animal brain scans for the rest of the DDs.
-Depending on the animal models, different DDs have different ways of pinging. Those with more robust long-distance communication like wolves and birds can send out sentences. Those with more limited communication typically stick to two-word pings.
-Their tails are as sensitive as your typical cat/dog/ferret tails, that is, not much until someone else touches them. The habit of keeping them up comes from the ever-present danger a door poses for them. Many DDs have kinked tails from having a heavy door shut on their tail and the healing nanites healing the tail in its twisted shape.
-The female-model DD legs are the result of the Solver wanting to streamline the DD production (read: it wanted to waste less time printing and putting it all together.) Two long stilts take far less time to build than ankles, feet, knees, thighs and shins.
Aside: Drone Reproduction.
Here's where the AI-abuse comes in! Basically, once drones were given sapient enough AI, there suddenly became a need to preserve and replicate that AI. While knowledge could be programmed in, there is a vast gap between pre-installed data and things learned on the job. While drones were not (relatively) expensive, losing an experienced drone was still a significant blow to production. Especially as any replacing drone would need to learn the ropes.
Thus, UNNs became a thing to preserve that hands-on knowledge. Plug your best drone into a UNN and in a few months you won't have to worry about losing them. Some even used it to 'upgrade' older models to newer ones.
Naturally, once the ability to 'breed' drones was created, it wasn't long before people began to plug two or more drones into a UNN to get the best traits into one drone. It not only became common but often expected that a working drone had a 'lineage.'
It also resulted in a lot of embarrassing arguments over ethics and inflammatory headlines.
How this applies to DDs: The Solver is sex-repulsed, but it is also not a good enough programmer to undo centuries of careful programming. However, given the DDs are very fleshy, it is a coin toss if a DD can get away with a UNN or if they get to experience pregnancy horror.
11 notes · View notes
nandinishenoy · 3 months ago
Text
Empowering Future Urban Designers: A Vision for Tomorrow
As globalization accelerates, the world is experiencing rapid urban expansion. Metropolitan city centers serve as the backbone of economic growth and a sustainable future. Urban planning and architectural design play a crucial role in shaping both our current living spaces and future habitats. Therefore, on World Architecture Day, it is essential to reflect on the importance of urban design in our everyday lives and its impact on community development.
The Role of Future Urban Designers and Architects
The next generation of urban designers and architects holds the key to shaping tomorrow’s landscapes. Given the complexities of real estate growth, infrastructure networks, and diverse socio-civic amenities in urban hubs, strategic urban design remains at the core of how cities take shape. Future leaders must be equipped with analytical thinking, innovative design approaches, technological integration, and policy-driven solutions to craft livable, dynamic urban environments. This blog delves into how we can establish a framework for the future.
Leveraging Technology for Sustainable Urban Design
In today’s digital landscape, technology is revolutionizing urban development. With cutting-edge AI tools, cities are being envisioned using Blockchain, BIM modeling, drones, augmented reality, and GIS mapping. Intelligent technology is essential for addressing urban challenges, climate risks, and resource scarcity. Therefore, urban architectural planning should embrace tech-driven design ideologies, empowering young visionaries to create resilient, smart, and adaptive cities.
Sustainability as a Cornerstone of Urban Growth
The current generation is highly conscious of the pressing need for sustainability in urban planning. As environmental concerns escalate, the future of urban development depends on sustainable practices. Green building methods, energy-efficient technologies, and eco-conscious designs must become fundamental to city landscapes. The mixed-use integrated townships by the Hiranandani Communities exemplify this future-forward approach. With meticulously planned socio-civic infrastructure, precise architectural execution, and state-of-the-art engineering, these townships seamlessly blend aesthetics, functionality, and sustainability.
Balancing Aesthetics with Practicality
A well-designed city is more than just a cluster of towering buildings. It must thoughtfully integrate aesthetics with functionality to create a sustainable urban ecosystem. Efficient transportation networks, abundant green spaces, essential services, and robust infrastructure contribute to a higher quality of life for residents.
Community Engagement in Urban Design
Empowering the next generation also involves fostering participatory urban planning. When communities are actively involved in the design process, cities evolve to better serve their residents. Young minds bring innovative perspectives, allowing for the creation of inclusive, forward-thinking urban spaces. By promoting collaborative planning, architects and urban designers can develop cities that are both user-centric and environmentally sustainable.
Government Backing for Urban Sustainability
A sustainable urban future requires strong government policies that support and incentivize progressive building initiatives. Developers prioritizing eco-friendly, inclusive, and functional designs should be encouraged through financial incentives and tax benefits. By implementing favorable design policies, governments can inspire developers to construct landmark structures that enrich cityscapes and promote urban tourism.
Conclusion: Building the Cities of the Future
As urban design and architecture continue to evolve, they must prioritize both societal and environmental sustainability. Architects, developers, and policymakers must collaborate to pioneer innovative solutions. By engaging young professionals and integrating emerging technologies, we can create inclusive, resilient communities that cater to present needs while shaping the future of urban living.
8 notes · View notes
mushiemellows · 11 months ago
Text
On Rivd and AI
So last night I made this post and said I'd elaborate more in the morning and when I had the time to do a bit of research. Upon doing said research, I realized that I had initially misunderstood the concerns being raised with the Rivd situation; that isn't the case any more. However, some of my thoughts on ai still stand. Heads up, this is going to be a long post. Some actual proper blogging for once wow.
I'm going to discuss the Rivd phishing scam, what can be done for fic writers as ai begins to invade fan spaces, and my elaborated thoughts on large language models (LLMs). Warning for transparency: I did utilize chat gpt for this post, NOT for the text itself but to provide examples of the current state of LLMs. Some articles I link to will also be ai generated, and their generated quality is part of what I'll be warning about. This is not a generated post and you can tell because I've got those nifty writing things called "voice" and "style."
ANYWAYS:
Okay so what was the Rivd situation? So two days ago this post was uploaded on tumblr, linking back to a twitter thread on the same topic. I saw it late last night because I was traveling. A reddit post was also uploaded 3 days ago. According to google trends, there was a slight uptick in search traffic the week of June 23rd, and a more severe uptick last week (June 30th-July 6th). That's all to say, this website did not exist until last week, caused a stir, and immediately was put down.
Rivd is no longer up. Enough people contacted its web hosting service Cloudflare and they took the site down. This happened yesterday, from the looks of it.
So, then, what was Rivd? And more importantly, what was the point of scraping a chunk of ao3 and re-uploading it? There seem to be two possible theories.
1) The more innocent of the two: they genuinely want to be an ao3 competitor. I can't look at the website any more, and very few positive results appear when googled, but I did find one ai-generated puff piece called "Exploring Rivd: The Premier Platform for Movie-Based Fanfiction" posted to Medium last week by one "Steffen Holzmann" (if that is your real name... x to doubt). This account appeared the same week that Rivd had that first little uptick in google queries, so it was undoubtedly made by the people running the website themselves to create an air of legitimacy. Medium appears to be a safe enough website that you can click that link if you really want to, but you shouldn't. It's a bad generated article, there's very little to glean from it. But it is a remnant source on what Rivd was claiming to be, before it was taken down. Here's the conclusion from the article, the only portion that gave any actual information (and it barely offers any):
Rivd is the ultimate platform for movie-based fanfiction, offering a diverse range of content, a supportive community, and robust interactive features. Whether you’re a writer looking to share your work or a reader seeking new adventures in your favorite movie universes, Rivd provides the perfect platform to engage with a passionate and creative community. Start your journey on Rivd today and immerse yourself in the world of fanfiction.
There's a second article by Holzmann titled "Mastering the Art of Fanfiction Writing in 2024" that's essentially similar ai bull, but trades explaining that fans can write Star Wars fic for explaining that you can make OC's and maybe get a beta (not that that's advice I've ever heeded. Beta? Not in this house we don't.) This was posted six days ago and similarly spends half the time jerking Rivd off. That's all to say, if they are to be believed at face value, this website wanted to just be a fic hosting site. Scraping Ao3 would have made it seem like there was already an active user base for anyone they were attempting to attract, like buying your first 50,000 instagram followers. Anyone actually looking to use this as a fic site would have quickly realized that there's no one on it and no actual fan engagement. There's already fan community spaces online. This website offers nothing ao3 or ffn or wattpad or livejournal or tumblr or reddit didn't already.
Similarly, it reeks of tech bro. Between the scraping and the ai articles, the alarms are already going off. According to that Reddit thread, they were based out of Panama, though that doesn't mean much other than an indicator that these are the type of people to generate puff articles and preemptively base their business off-shore. Holzmann, it should be mentioned, also only has 3 followers, which means my tiny ass blog already has more reach than him. Don't go following that guy. The two comments on the first article are also disparaging of Rivd. This plan didn't work and was seen right through immediately.
If fan communities, and those who write fic know anything, it's how to sniff out when someone isn't being genuine. People write fic for the love of the game, at least generally. It's a lot of work to do for free, and it's from a place of love. Ao3 is run on volunteers and donations. If this genuinely is meant to be a business bro website to out-compete ao3, then they will be sorely disappointed to learn that there's no money in this game. It would be short lived anyway. A website like this was never going to work, or if it was, it would need to ban all copyrighted and explicit materials. You know, the pillars of fic.
So then what was the point of all of this? Unless there was a more nefarious plan going on.
2) Rivd was a phishing scam. This is so so so much more likely. The mark for the scam isn't fic readers, it's fic writers. Here's how it works: they scrape a mass of ao3 accounts for their stories, you catch it, you enter a lengthy form with personal info like your full name and address etc. requesting they take your work down, they sell your data. Yes, a lot of personal info is required to take copyrighted materials down on other sites, too. That's what makes it a good scam. Fic already sits in a legal grey area (you have a copyright over your fic but none of the characters/settings/borrowed plot within it. You also CANNOT make money off of fic writing). So the site holds your works ransom, and you can't go to Marvel or Shueisha or fuck it the ghost of Ann Rice herself to deal with this on your behalf. Thankfully, enough people were able to submit valid DMCA's to Cloudflare to deal with the issue from the top.
Remember this resolution for the next time this situation arises (because of course there will be a next time). Go through higher means rather than the site itself. These scams are only getting more bold. Me personally? I'm not going to give that amount of personal info to a website that shady. Be aware of the warning signs for phishing attacks. Unfortunately, a lot of the resources online are still around text/email phishing. We live in a time where there's legal data harvesting and selling, and illegal data harvesting and selling, and the line in between the two is thin and blurry. Here's an FTC article on the signs of phishing scams, but again, it's more about emails.
I should note, I do not think that Rivd is connected to the ransomware virus of the same name that popped up two or three years ago [link is to Rivd page on PCrisk, a cybersecurity/anti-malware website]. It's probably just coincidence.... but even so. A new business priding itself on SEO and all that tech guy crap should know not to name itself the same thing as a literal virus meant to scam you out of a thousand dollars.
That's all to say, this was absolutely a scam meant to take personal info from ao3 writers. And that blows. To love art and writing and creation so much just to have your works held hostage over data feels really bad. Of course it does!
So, should you lock down your ao3 account?
That, to me, is a little trickier. You can do as you please, of course, and you should do what makes you feel safest. Me personally, though, I don't plan on it. I really, really like that guests can interact with my work from the outside. Ao3 still uses an invite system, so a lot of regular users still don't have accounts for any number of reasons. I read a lot of the time logged out anyways. I started writing again last year after all the info on the ao3 large language model training scrape had already come out. Like I talked about in my last post, I set out to write things that a computer couldn't produce. (from here on out, I'm going to be using One Piece fic as an example, simply because that's what I write. It's also in a really prime position for a lot of ai discussion due to the size of the fandom and the size of the original work)
I'm going to be honest, I'm not all that afraid of my work being scraped for LLMs to train off of. Text generators have existed for a decade now, but they really popped off last year. LLMs are at a point, though, where the tech is sort of stagnating after the initial boom. The flaws of current ai chat bots are so baked into the flaws of ai itself that they won't be ironed out soon. They make up info, they're easily manipulated, they simply are not people. The flaw here is deep- you can trace it to the fact that we don't know enough about neuroscience itself, and even then neuroscientists don't make ai. Ai programmers do. Complex info already has to make one synaptic jump without losing any understanding from person to person, and then it has to be perfectly abstracted for a machine to comprehend. It can't be done with the tools we currently have. I DO hate to admit it, but there are some decent uses for ai, if you are specifically a python programmer. The rest of us? Maybe not.
I don't like generated fic, I don't read it on principle. Checking on ao3, though, there are only 4 total generated op fics (3 in spanish, one in english, 3 xreader, 1 zosan but 2nd person zosan? and the author edited the framework of the computer-generated text. I still didn't enjoy the fic, tho.) But that breaks down to .005% of all op fics. I went into chat gpt and asked it to generate a drabble for me based on One Piece, and this is what it wrote:
[Image ID: A screenshot of chat GPT. The first line shows my prompt. It reads: You- write me a short 100 word fanfiction based on the manga One Piece. Next line, Chat GPT- "On the shores of an uncharted island, the Straw Hat Pirates stumbled upon a hidden treasure trove. Among the glittering gold and jewels, Luffy discovered an old, weathered journal. Inside, he found sketches and notes detailing the adventures of a legendary pirate who once roamed the Grand Line. Excited by the prospect of a new adventure, the crew set sail. following clues left in the journal. Along the way, they faced formidable foes and forged new alliances, unraveling the mysteries of the lost pirate's legacy. With each island they conquered, they grew closer to uncovering the ultimate treasure- a legacy that would shape their own journey." End ID]
Okay so like, this blows, yeah? 75,000 One Piece fics to train off of on ao3 and it still can't like, write a decent little story. This reads more like an episode summary or a book dust jacket blurb. Drabbles have an internal structure that make them interesting, much like a single sentence short story. You have to get very baby shoes never worn with them, yeah? Ai doesn't understand that, it just knows how many words it has to generate. It doesn't have any sense for creative language, metaphor, simile, poetry. I asked it to write me a poem and it wrote me the same poem with thesaurus replacements each time it re-generated one. A lot about dreams and stars and midnight and whispers, over and over again. The same rhyme scheme and cadence too.
It's also- vitally- so unhorny that it's almost a crime. For a test, I asked it to generate a story about a uhhhhh robot and an archeologist kissing. Just to see what it would give me. I ran 4 tests, a robot falling in love with an archeologist, a robot kissing an archeologist, an archeologist kissing a robot, and an archeologist falling in love with a robot. All 4 were incredibly chaste, quiet little drabbles about the innocence of discovery or whatever the fuck. Here's one of them:
[Image ID: A screenshot of chat GPT. The first line shows my prompt. It reads: You- write me a short 100 word story about an archeologist kissing a robot. Next line, Chat GPT- "In the dimly lit chamber, Professor Evelyn Porter stared in awe at the ancient artifact- a remarkably lifelike robot buried for millennia. As she leaned closer to examine its intricacies, a surge of curiosity and admiration swept over her. In a moment of spontaneous fascination, she brushed her lips against the cool metal of its cheek, a gesture of reverence and connection across ages. The robot, dormant for eons, stirred faintly in response to the unexpected contact. It was a fleeting, almost surreal exchange between past and future, bridging the gap between human exploration and technological marvels of ancient civilizations." End ID]
So like, some of these phrases get at what's vaguely intriguing about this dynamic, sure. But at the end of the day it's still uhhhh chaste as hell. A reverent kiss??? Not what I want in my fic!!!! This is all to say, LLMs can scrape ao3 all they want, that doesn't mean they can USE what they're stealing. The training wheels have to stay on for corporate palatability. I'm stealing, I'm taking these dolls out of Shueisha's grubby hands and I'm making them sexy kiss for FREE.
In my opinion, the easiest way to keep your work out of the hands of ai is to write something they cannot use. If the grey area of copyright lies in how much is transformed, then motherfucking TRANSFORM it. Write incomprehensible smut. Build surreal worlds. Write poems and metaphors and flesh out ideas that a computer could never dream of. Find niches. Get funky with it. Take it too far. and then take it even farther. Be a little freaking weirdo, you're already writing fic so then why be normal about it, you know? Even if they rob you, they can't use it. Like fiber in the diet, undigestible. Make art, make magic.
Even so, I don't mind if the computer keeps a little bit of my art. If you've ever read one of my fics, and then thought about it when you're doing something else or listening to a song or reading a book, that means something I made has stuck with you just a little bit. That's really cool to me, I don't know you but I live in your brain. I've made you laugh or cry or c** from my living room on the other side of the world without knowing it. It's part of why I love to write. In all honesty, I don't mind if a computer "reads" my work and a little bit of what I created sticks with it. Even if it's more in a technical way.
Art, community, fandom- they're all part of this big conversation about the world as we experience it. The way to stop websites like Rivd is how we stopped it this week. By talking to each other, by leaning on fan communities, by sending a mass of DMCA's to web host daddy. Participation in fandom spaces keeps the game going, reblogging stuff you like and sending asks and having fun and making art is what will save us. Not to sound like a sappy fuck, but really caring about people and the way we all experience the same art but interpret it differently, that's the heart of the whole thing. It's why we do this. It's meant to be fun. Love and empathy and understanding is the foundation. Build from there. Be confident in the things you make, it's the key to having your own style. You'll find your people. You aren't alone, but you have to also be willing to toss the ball back and forth with others. It takes all of us to play, even if we look a little foolish.
25 notes · View notes