#AI sample manipulation
Text
AI as a Partner in Music Production: Unveiling the Future of Sound
In the shadowy corners of a home studio, a producer faces her DAW, stuck on a beat that refuses to come together. Hours pass, yet the perfect drum pattern eludes her. Frustrated, she uploads her existing melody to an AI music tool, tweaks a few settings, and waits. Moments later, five drum patterns appear—each offering a unique groove that blends seamlessly with her chord progression. The third…
#AI beat generation#AI collaboration#AI drum patterns#AI for producers#AI in house music#AI in music#AI music composition#AI music production#AI music tools#AI music tools for producers#AI sample manipulation#AI-driven creativity#AI-generated beats#AIVA music#creative music tools#DAW integration#electronic music#harmonic collaboration#house music#Magenta Drums#melodic collaboration#music innovation#music production AI#music production evolution#music production software#music production tools#music production workflow#music technology#Orb Producer Suite#rhythm design
1 note
Text
Vocaloid absolutely is not done without permission from the voice providers; in fact, in many cases the specific voice providers are sought out by the companies creating the voicebanks to record for them (sometimes the Vocaloid voicebanks are even modeled after the voice providers specifically, such as Sachiko being based on Sachiko Kobayashi, a popular Japanese enka singer).
I see some people confused about this, so I want to clarify: Vocaloid (and other voice synthesizer programs such as UTAU, CeVIO, Synth-V, etc.) is absolutely not comparable to AI voices. In the case of voice synthesizers, a person (usually a singer or a voice actor) is brought into a studio to record the samples, which are then worked into a voicebank. The voicebank is then released for the public to buy (unless it's a free program, in which case it's just published for free), and when making a song with a voicebank, the producer of the song has to manipulate each individual note/syllable to achieve the sound they're looking for.
So no, don't worry: Hatsune Miku is not in any way comparable to non-consensual AI samples of real people's voices.
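To make the "the producer has to manipulate each individual note/syllable" part concrete, here's a rough sketch of the kind of per-note data you end up hand-editing before anything gets rendered. Purely illustrative: the Note fields and the synthesize() call are hypothetical stand-ins, not any real engine's API.

```python
from dataclasses import dataclass

@dataclass
class Note:
    phoneme: str     # syllable drawn from the voicebank's licensed, studio-recorded samples
    pitch: int       # MIDI note number the sample gets pitch-shifted to
    start: float     # position in beats
    length: float    # duration in beats
    dynamics: float  # 0.0-1.0, how strongly the note is "sung"

# The producer places and tweaks every note by hand; nothing is generated from a prompt.
phrase = [
    Note("ko", 69, 0.00, 0.50, 0.8),
    Note("n",  69, 0.50, 0.25, 0.6),
    Note("ni", 71, 0.75, 0.50, 0.8),
    Note("chi", 72, 1.25, 0.50, 0.9),
    Note("wa", 74, 1.75, 1.00, 0.7),
]

def render(phrase, voicebank):
    """Hypothetical engine call: concatenate and pitch-shift the voicebank's samples."""
    return voicebank.synthesize(phrase)
```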
listen, I say this with patience bc some people may genuinely have not thought about this before, but if you firmly say “AI art is terribly unethical and steals from artists” (which is correct) but then turn around and use voice AIs to generate songs/voice lines that sound like your favourite voice actors or singers……………………………………that is also AI art, and it is also terribly unethical
#also voice providers are usually proud of the voicebanks made with their contributions#like saki fujita attending multiple hatsune miku events or naoto fuuga dedicating his online presence to kaito#i know there's recently been some ai stuff happening in voice synthesizers recently but from what i've seen it's still different#it still requires voice samples from consenting voice providers who were hired to provide that voice and manipulation from the producers of#the songs and stuff#i was originally looking through the notes for an id but i found this and as a die-hard fan of voice synthesizers i had to chime in#no id#also if anyone wants to provide a better description of what vocaloid is like please do. i always fear my explanations aren't good
68K notes
Text
I am once again reminding people that Vocaloid and other singing synthesizers are not the same as those AI voice models made from celebrities and cartoon characters and the like.
Singing synthesizers are virtual instruments. Vocaloids use audio samples of real human voices the way some other virtual instruments sample real guitars and pianos and the like, but they still need to be "played", per se, and getting good results requires a lot of manual manipulation of these samples within a synthesis engine.
Crucially, though, the main distinction here is consent. Commercial singing synthesizers are made by contracting vocalists to use their voices to create these sample libraries. They agree to the process and are compensated for their time and labor.
Some synthesizer engines like Vocaloid and Synthesizer V do have "AI" voice libraries, meaning that part of the rendering process involves using an AI model trained on data from the voice provider singing in order to ideally result in more naturalistic synthesis, but again, this is done with consent, and still requires a lot of manual input on the part of the user. They are still virtual instruments, not voice clones that auto-generate output based on prompts.
In fact, in the DIY singing synth community, making voice libraries out of samples you don't have permission to use is generally frowned upon, and is a violation of most DIY engines' terms of service, such as UTAU.
Please do research before jumping to conclusions about anything that remotely resembles AI generation. Also, please think through your anti-AI stance a little more than "technology bad"; think about who it hurts and when it hurts them so you can approach it from an informed, critical perspective rather than just something to be blindly angry about. You're not helping artists/vocalists/etc. if you aren't focused on combating actual theft and exploitation.
#long post#last post I'm making about this I'm tired of commenting on other posts about it lol#500#1k
2K notes
Text
It's time I post my opinion on the other changes in the demo! I'll try and make it short since I want to make another post about Leander's route in particular >:)
Tldr: I liked it, and I can't wait to see how everything will go plot-wise in the full game 👀
/!\ Spoilers for TS demo update! /!\
So let's address the tiny bits first: the origin change, and the minor dialogue/scenario changes. Love it. Yes, I like the new origin, I think the idea is cool; what I didn't like was the change itself (but that's not the topic). Firstly, it gives us a big piece of info: there ARE several languages in the world of TS! WOOO let's go, if that's not a win for bilingual/multilingual MCs idk what is
The minor changes I noticed are mainly a few differences in Kuras' and Ais' segments, which are very welcome, and more depth added to Mhin's and Vere's segments. First of all, thank RSS for shutting up dense people by stating why MC wears bandages rather than gloves. I mean, most of us figured it out a while back by rubbing two braincells together, but it's nice to see it mentioned! It was seamlessly added to Kuras' segment too, really nice. Then expanding the Iris segment was good (it also confirmed the NPC telling us about Ais is Iris, or well, what's left of her anyways)
The new sprites and expressions are all super good. Special mention for Vere's smile, truly the highlight of that category. That and Leander's puppy eyes expression-
Onto Vere and Mhin, I really appreciated being able to explore more choices. Mhin is so cute and what a nerd I LOVE THEM. Also Vere... I know what you are and you have a type 🫵 (he is a freak /pos)
Now let's talk about the bigger changes and also the man of the hour (who's also my wife /j), LEANDER YEEEEEEAH BABYYYYYY. So others have already pointed out the changes and explained the symbolism behind them very well. They got rid of the dog imagery in favor of a snake one, which is very clever. Also, landing on the adderstone, a magic item also associated with snakes (or adders) and poison? Wonderful. Leander in terms of theme and design is already surrounded by a lot of poison symbolism (notably his color palette, he is dripping in green and has a shade of green that was once made with arsenic), so the changes tie perfectly with it. One thing that was removed which I think should have stayed is the green cloaks on Leander's followers. Not only would they reinforce the cult-ish vibe, they'd go very well with the snake imagery of the Adders and would even elevate it. That said, maybe it'll be added later or will be noticeable on future NPC sprites. Green cloaks aside, if they don't keep them, I hope the Adders will all have some sign of distinction, whatever that could be (bonus if it's very subtle or hidden)
Then for the rest of the changes: they not only changed the Bloodhound into the Adderstone but also changed Leander's characterisation. As a Leander simp, to be fair, the change isn't drastic, only the presentation is. If anything, they didn't change Leander's character or take another path with him, they cranked it up to eleven. And I absolutely loved it. He is more charismatic, more assertive, and very obviously more manipulative and calculating. He isn't outwardly hostile to MC, far from it, but let me tell you something. After the tavern segment, if you chose Leander's route, when he closes the door and locks it, with his front-facing sprite? I was terrified (/pos). The devs nailed it, they know what they are doing with him and leaned into that aspect of him more, and honestly thank you RSS for catering to us freaks rather than the "but I want Leander to be a normal guy" crowd (I will forever judge them, it's a gothic horror game for the love of god). I need him more than before. Oh my god. My only complaint tho is I thought Leander would go full scary creeping and stab MC in their room, and it didn't happen, so I was a bit disappointed, BUT. I can smell it's because they are keeping the sauce for the full game. That was a mere sample, a teaser of Leander's depravity, and my body is READY for it.
I will revisit my Leander analysis doc I never posted, expand on it, and make an updated version to match the Demo 2.0 update, but you can be sure I'll post it once I have time to put it together sihfoqhf
Be prepared, I have a theory about what his deal is, and I can't believe I never thought about it when it was so obvious 🫵
#sundayeleith talks#touchstarved game#touchstarved spoilers#touchstarved demo update#LEANDER MY LOVE#I NEED HIM CARNALY RAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAH#im sure its his goal which makes me angry i need him dead /pos /silly
54 notes
Text
We've looked back on the records of 2024, so here's a sampler of games that you can play right now for the low price of free:
Beetle Ninja (a hero-for-hire RPG set two weeks before the end of the world, built for repeated playthroughs) / Grimm's Hollow (a spooky and sweet RPG set in the afterlife where you play as a Reaper hiding your ghost sibling)
Remember Places? (locked inside, an AI is your only friend) / Liminal Dreams (a game about exploring bizarre worlds and meeting strange strangers)
Drown the Bride (a point n' click adventure visual novel about meeting your friend's fiancee in a historically themed fantasy world) / MAMA (a visual novel where you pass out at a yuri convention you were attending with your girlfriend, and awake to find yourself in your childhood home making certain... connections about your mother's actions in the past)
Slider (a tile-based sliding puzzle adventure to find your cat via manipulating the world around you) / The King is Watching (defend your kingdom against an army; buildings are only productive when your gaze is upon them)
Rental (a short and spooky game about renting a cabin, reminiscent of classic survival horror games) / Illusion Carnival (a lost soul wanders a 2.5D pop-up book-like amusement park, evading the attacks of the anomalies that would love to eat them up)
HavE (a visual novel where a ski resort vacation goes awry in ways that may be supernatural) / Zodiac Paradigm (a murder mystery visual novel that arises when twelve animals are called to be a part of the Emperor's council, yet thirteen arrive)
Bad Manors (a point n' click visual novel on Halloween where your friend can't make it to an escape room with you, so a helpful stranger goes in his place) / Reaper's Goodbye (five patrons of a food stall tell their tales while waiting for the midnight train)
SWOLLEN TO BURSTING UNTIL I AM DISAPPEARING ON PURPOSE (a weirdo RPG in which a flying saucer crashes into the town of "Vomit", but you have packages to deliver) / Until Biglight (technically a demo, a sample of a cancelled project of poverty, cats, mice, violence, words like "hyperreality", and planning an assassination)
C.H.A.I.N. / C.H.A.I.N.G.E.D. / The Madvent Calendar (Three anthologies from the Haunted PS1 community. A game of game development telephone, a branching path telling a tale of time travel and family, and a haunted advent calendar.)
#vg blue bird#jupiterposting#otherworld exploration game aesthetics#games are linked via their names. first link is steam second is itch where applicable
95 notes
Text
IQ-based ad targeting
YouTube ads are absolutely crap these days. Every few pre-rolls, there's a financial scam, a medical scam, or someone shilling dick pills. (Note: I have ads personalization turned off, so while not exactly a random sample, I think it's pretty representative of broadly targeted ads.)
If I were YouTube, I would feel embarrassed about this. Think back to the whole brand safety hullabaloo from a few years ago. I really can't imagine Procter & Gamble or Coca-Cola wants their ads to be bookended by ONE WEIRD TRICK TO DOUBLE YOUR MONEY AND/OR DICK being read out loud in an AI voice.
But whether or not it's in the interest of YouTube to display these ads, they do, and they don't consider "this sure looks like a fucking scam" to be a prima facie policy violation.
Nonetheless, many of these ads do in fact contain clear and obvious violations of Google ad policies, and I often report them (and while I can't be sure my reports specifically mattered, the ads do end up getting taken down). If you're interested in what is worth quoting in a report:
Quote from the misrepresentation policy when you see manipulated video with a celebrity talking. This is "impersonating or falsely implying affiliation with, or endorsement by, a public figure". For instance, the fake-Trump-voice ads ran afoul of this, and I assume that's why they got taken down.
Say "porn". Lots of the dick pill ads have no nudity, but they link to a page with a single button that launches a video with pornographic content. Porn is YouTube kryptonite, and lots of ads I've reported for this have since disappeared along with the channels that uploaded the relevant videos for the creatives. More generally, "landing page is button that opens video with {violation}" is a super easy way to find something to put in a report. Those videos invariably contain tons of very obvious violations.
Click the "more ads from this advertiser" link and either report their other violating ads manually or mention in your report that the advertiser's other videos seem to have the same policy violation too.
Anyhow, this one Brazilian advertising agency is currently shilling some "doctors hate him!" scam about treating your diabetes with an ebook instead of real medicine. I had reported all their ads which included fake endorsement by Dr. Oz, but I hadn't reported their Dr.-Oz-free video ads. Just now I got served their one remaining unbanned video ad, and I went to the trouble of googling the person in the ad.
Turns out, it's Barbara O'Neill, who, according to Wikipedia, is known for "Dangerous and unsubstantiated alternative medicine claims": telling people to treat their cancer with baking soda and to feed their babies raw goat milk instead of formula (she charges $6k for seminars dispensing this advice). She's indefinitely banned in Australia from "providing health services or education in any capacity, regardless of whether or not she accepted payment for doing so".
Note: the ad in question does not contain an endorsement by this person. It fakes an endorsement from this person. The ad violates policy because it is pretending that their scam medical treatment is endorsed by a well-known medical scammer. I guess they really wanted to make sure the people who click through are absolutely the dumbest motherfuckers on the planet.
#ads#i have seen ads deepfaking the pope telling you that god has selected you to win a million dollars
31 notes
Text
Old-school planning vs new-school learning is a false dichotomy
I wanted to follow up on this discussion I was having with @metamatar, because it was getting away from the original point and justified its own thread. In particular, I want to dig into this point:
rule based planners, old school search and control still outperform learning in many domains with guarantees because end to end learning is fragile and dependent on training distribution. Lydia Kavraki's lab recently did SIMD vectorisation to RRT based search and saw like a several hundred times magnitude jump for performance on robot arms – suddenly severely hurting the case for doing end to end learning if you can do requerying in ms. It needs no signal except robot start, goal configuration and collisions. Meanwhile RL in my lab needs retraining and swings wildly in performance when using a slightly different end effector.
In general, the more I learn about machine learning and robotics, the less I believe that the dichotomies we learn early on actually hold up to close scrutiny. Early on we learn about how support vector machines are non-parametric kernel methods, while neural nets are parametric methods that update their parameters by gradient descent. And this is true, until you realize that kernel methods can be made more efficient by making them parametric, and large neural networks generalize because they approximate non-parametric kernel methods with stationary parameters. Early on we learn that model-based RL learns a model that it uses for planning, while model-free methods just learn the policy. Except that it's possible to learn what future states a policy will visit and use this to plan without learning an explicit transition function, using the TD learning update normally used in model-free RL. And similar ideas by the same authors are the current state-of-the-art in offline RL and imitation learning for manipulation. Is this model-free? Model-based? Both? Neither? Does it matter?
In my physics education, one thing that came up a lot was duality, the idea that there are typically two or more equivalent representations of a problem: one based on forces, Newtonian dynamics, etc., and one as a minimization* problem. You can find the path that light will take by knowing that the incoming angle is always the same as the outgoing angle, or you can use the fact that light always follows the fastest* path between two points.
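For the light example, the two descriptions really do pick out the same path. A standard textbook calculation (sketched here from memory, not quoted from anywhere in this thread) recovers the reflection law directly from the minimization side:

```latex
% Source at (0,a), observer at (d,b), reflection point at (x,0), constant speed c:
T(x) = \frac{1}{c}\left(\sqrt{x^2+a^2} + \sqrt{(d-x)^2+b^2}\right)

\frac{dT}{dx} = 0
\;\Longrightarrow\;
\frac{x}{\sqrt{x^2+a^2}} = \frac{d-x}{\sqrt{(d-x)^2+b^2}}
\;\Longrightarrow\;
\sin\theta_{\text{in}} = \sin\theta_{\text{out}}
```

i.e., the time-minimizing reflection point is exactly the one where the incoming and outgoing angles match.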
I'd like to argue that there's a similar but underappreciated analog in AI research. Almost all problems come down to optimization. And in this regard, there are two things that matter -- what you're trying to optimize, and how you're trying to optimize it. And different methods that optimize approximately the same objective see approximately similar performance, unless one is much better than the other at doing that optimization. A lot of classical planners can be seen as approximately performing optimization on a specific objective.
Let me take a specific example: MCTS and policy optimization. You can show that the Upper Confidence Bound algorithm used by MCTS is approximately equal to regularized policy optimization. You can choose to guide the tree search with UCB (a classical bandit algorithm) or policy optimization (a reinforcement learning algorithm), but the choice doesn't matter much because they're optimizing basically the same thing. Similarly, you can add a state occupancy measure regularization to MCTS; if you do, MCTS reduces to RRT in the case with no rewards, and the state-regularized MCTS searches much more like a sampling-based motion planner than like the traditional UCB-based MCTS planner. What matters is really the objective that the planner was trying to optimize, not the specific way it was trying to optimize it.
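To make that equivalence a little more tangible, here's a toy sketch in the spirit of that result. The softmax form below is my own schematic illustration of the regularized-policy-optimization view (the exact closed-form solution in the literature looks a bit different), and the function names and constants are arbitrary.

```python
import numpy as np

def ucb_select(q, visits, prior, c_puct=1.5):
    """PUCT-style action selection used in AlphaZero-flavored MCTS."""
    total_visits = visits.sum()
    scores = q + c_puct * prior * np.sqrt(total_visits + 1) / (1 + visits)
    return int(np.argmax(scores))

def regularized_policy(q, visits, prior, c_puct=1.5):
    """Schematic of the regularized policy-optimization view: trade off expected
    value against staying close to the prior, with the regularization weight
    shrinking as the node accumulates visits (illustration, not the paper's exact form)."""
    lam = c_puct * np.sqrt(visits.sum() + 1) / (visits.sum() + len(q))
    logits = np.log(prior + 1e-12) + q / lam
    pi = np.exp(logits - logits.max())
    return pi / pi.sum()

q = np.array([0.10, 0.50, 0.30])   # estimated action values at a node
visits = np.array([10, 4, 6])      # visit counts
prior = np.array([0.2, 0.5, 0.3])  # prior policy (e.g. from a network)

print(ucb_select(q, visits, prior))          # greedy UCB choice
print(regularized_policy(q, visits, prior))  # soft policy over the same actions
```

Both rank actions by essentially the same trade-off between value and prior, which is the sense in which the two search-guidance rules are "optimizing basically the same thing."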
For robotics, the punchline is that I don't think it's really the distinction of new RL method vs. old planner that matters. RL methods that attempt to optimize the same objective as the planner will perform similarly to the planner. RL methods that attempt to optimize different objectives will perform differently from each other, and planners that attempt to optimize different objectives will perform differently from each other. So I'd argue that the brittleness and unpredictability of RL in your lab isn't because it's RL per se, but because standard RL algorithms don't have a long-horizon exploration term in their loss functions that would make them behave similarly to RRT. If we find a way to minimize the state occupancy measure loss described in the above paper and other theory papers, I think we'll see the same performance and stability as RRT, but for a much more general set of problems. This is one of the big breakthroughs I'm expecting to see in the next 10 years in RL.
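For concreteness, the kind of objective I'm gesturing at could be written (in my own rough notation, not lifted from the cited papers) as the usual return plus an occupancy-matching penalty:

```latex
\max_{\pi}\;
\underbrace{\mathbb{E}_{\rho_\pi}\Big[\textstyle\sum_t \gamma^t\, r(s_t, a_t)\Big]}_{\text{standard RL objective}}
\;-\;
\lambda\, D\!\left(\rho_\pi \,\Vert\, \rho_{\text{target}}\right)
```

where \rho_\pi is the discounted state-occupancy measure of the policy and \rho_{\text{target}} encodes the kind of coverage toward the goal that a sampling-based planner like RRT gets by construction; shrinking that divergence is what would give the learned policy RRT-like long-horizon exploration.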
*okay yes, technically not always minimization, the physical path can also be an inflection point or a local maximum, but c'mon, we still call it the Principle of Least Action.
#note: this is of course a speculative opinion piece outlining potentially fruitful research directions#not a hard and fast “this will happen” prediction or guide to achieving practical performance
29 notes
Text
A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways. The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at computer security conference Usenix.
[...]
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows. The team intends to integrate Nightshade into Glaze, and artists can choose whether they want to use the data-poisoning tool or not. The team is also making Nightshade open source, which would allow others to tinker with it and make their own versions. The more people use it and make their own versions of it, the more powerful the tool becomes, Zhao says. The data sets for large AI models can consist of billions of images, so the more poisoned images can be scraped into the model, the more damage the technique will cause.
[...]
Poisoned data samples can manipulate models into learning, for example, that images of hats are cakes, and images of handbags are toasters. The poisoned data is very difficult to remove, as it requires tech companies to painstakingly find and delete each corrupted sample.
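The excerpt doesn't spell out Nightshade's actual optimization, but the general family of "imperceptible perturbation" attacks it belongs to looks roughly like the sketch below. This is a generic illustration under my own assumptions (the feature_extractor, the eps budget, and the loss are placeholders), not the paper's method.

```python
import torch

def poison(image, feature_extractor, target_features, eps=4/255, steps=100, lr=0.01):
    """Generic feature-space data-poisoning sketch (illustrative only; NOT the
    published Nightshade algorithm). Keep the change imperceptible (each pixel
    moves by at most eps) while pushing the image's embedding toward that of a
    different concept, so a model trained on the image learns the wrong association."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        features = feature_extractor(image + delta)
        loss = torch.nn.functional.mse_loss(features, target_features)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # imperceptibility budget
    return (image + delta).clamp(0, 1).detach()
```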
167 notes
Note
Kaai Yuki anon again
I just realized that maybe some people here are not really familiar with Vocaloid. The thing about Vocaloid voicebanks is that there is actually some pretty extensive vocal sample manipulation done after the samples are recorded. There's a reason why every Miku release sounds different (lol) and they can do shit like Miku Append and stuff. The end product rarely ends up sounding like its voice provider.
This is what I meant when I said it's more like the anime character version of a child rather than a realistic AI generated child. It doesn't change the fact that a real child is involved, just the degrees of separation imo
--
12 notes
Text
On Thursday, America First Legal (AFL) released explosive new documents obtained through ongoing litigation against the U.S. Department of State’s Global Engagement Center (GEC), exposing a vast, government-backed censorship operation to silence Americans under the guise of “misinformation,” “disinformation,” and “malinformation.” The documents reveal a disturbing alliance between the GEC, the U.S. Agency for International Development (USAID), the British Foreign, Commonwealth and Development Office (FCDO), and media censorship organizations, all working in lock-step to manipulate public discourse, control media narratives, and suppress free speech.
The GEC, which was forced to shut down in December 2024, was designed to “combat foreign disinformation abroad.” However, through Freedom of Information Act (FOIA) requests, AFL uncovered that the GEC engaged in state-sponsored propaganda, repeatedly using willing participants from private media organizations. Further, AFL’s lawsuit against the GEC revealed that USAID had created an internal “Disinformation Primer” that explicitly praised private sector censorship strategies and recommended further censorship tactics.
The new documents released by AFL show:
The GEC and USAID coordinated efforts to censor “COVID-19 misinformation” and counter “COVID-19 propaganda.”
The GEC collaborated with officials in the British Foreign, Commonwealth, and Development Office on disinformation efforts.
The GEC coordinated with private media censorship firms, including Poynter and NewsGuard, which provided samples of its Misinformation Fingerprints artificial intelligence (AI) tool, designed to identify and rate websites based on their perceived “misinformation.”
I. The GEC and USAID coordinated to counter “COVID-19 Propaganda and Disinformation”
In a widely distributed email to USAID, the GEC’s “Liaison Planner to USAID” stated that GEC would like to “sustain dialogue and connectivity during these unprecedented times” to help counter “misinformation” surrounding COVID-19 despite USAID’s self-described mission being “to extend assistance to countries recovering from disaster, trying to escape poverty, and engaging in democratic reforms.”
4 notes
Text
niche (not really) music artists to help you be more performative
another semester has passed and i no longer foresee any english classes. they stress me out too much and i'm afraid of getting accused of ai cause i LOVE using semicolons and em dashes and random big words in the middle of my sentences. i've been trying to mitigate that and become a better writer by writing on here but it does NOT help...
i should preface this by saying that there are many different forms of male manipulators & today i will be focusing on the kind that is always on fashion tiktok and using wired headphones and most definitely has pierced ears with specifically silver hoop earrings. hope this guide helps you outperform them.
1. the deep
the deep's music totally reflects early 2010s ish "indie sleaze" type pop music (e.g. far east movement or kesha) but i think most people have probably heard angel tattoo by her—honestly i didn't really fw it that much at first but it's grown on me.
2. kimj
kimj (along with sebii) did produce that track by the deep. though i think kimj is still a bit more niche than sebii. he's worked with a lot of western artists in the past (i think recently nate sib) but his own music is also reminiscent of 2010s edm music. i really liked this track on korean.
3. the bird
two artists whose names start with the... interesting. anyways this specific track i'm gonna post had ren g in conjunction (on vocals i think) but a lot of the bird's music uses the same sample that lady gaga does.
4. YT
he's actually really popular but i'm not sure if that is outside of england. i've never heard anyone from here talk about him but i really like his production. i don't really know the term for what this kind of track is called but i like it and i think everyone should listen to it.
5. enhyphen
i actually just couldn't think of another artist but to be fair like if i was a male manipulator i'd def have some sort of kpop group in my playlist. i think newjeans is too easy and predictable so i'm gonna go with enhyphen (and i like lowkey fw their music...)
6. tonser
i just thought of another artist. i think i posted about him on here once before because i remember he was a lot less popular until 2hollis blew up with gold, and now this evil hyperpop edm blend is really popular. i think people should hop on tonser. his music is kind of joyous, like a carol i think.
i wish you all well in your journeys fighting off evil men. more posts coming soon. thanks for coming to my ted talk.
#male manipulator#evil men#music#hyperpop#kimj#the deep#the bird#tonser#enhyphen#kpop#rap#yt#british rap#music review#music recs#songs#sweet tunes#tunes#playlist#playlist recs#song recs#performance#performatism#wired headphones#pop#edm#2010s nostalgia#2010s music#indie pop#underground
3 notes
Text
The use of AI in academics is a bit of a minefield; it has the potential both to be a very useful tool and to do a lot of harm, depending on how it's used and which type of AI is used. It is therefore important to draw a distinction between generative AI, such as ChatGPT, and AI used for identification of various things, the latter of which has been used to advance science for years now, with things like an AI identifying the different ways proteins can fold helping in cancer research, and AI being used to classify large data sets in astronomy. Now, what we're talking about here is generative AI, specifically large language models, which are entirely different from the AI previously used in science. Firstly, LLMs are trained on vast datasets containing billions of sources from the internet, everything from blogs, to newspapers, to tweets, to scientific articles, all with the goal of making a computerised model of human language usage.
There is no way for us end users to know what is and what isn't included in its training sample, and therefore we can't determine if its output is reliable information or not. But giving reliable information to the end user is not what LLMs were designed for in the first place. LLMs were designed to give output in a way that mimics the natural ways we use language. It's much closer to the predictive text function on modern phone keyboards than it is to a search engine like Google. It gives a response to any query that sounds like it might be plausible, but in reality, it merely predicts what word is most likely to come next in the context of any given sentence. These are the biggest drawbacks of LLMs. There's no way of knowing whose work is being used, and it's not doing anything creative or even particularly intelligent. It's merely predicting a likely response to any given question without verifying if that response is even correct. If it's therefore used to write a whole essay, for example, it is not just plagiarism, but a type of plagiarism where the original sources cannot be cited.
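To illustrate the "predictive text, not a search engine" point above, here is a toy sketch of what generation boils down to: repeatedly sampling whichever token the model scores as plausible next, with no step that checks whether the continuation is true. The vocabulary and the scores are made up for illustration, not taken from any real model.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=np.random.default_rng(0)):
    """A language model assigns a score to every token in its vocabulary;
    generation just keeps sampling 'what plausibly comes next' -- there is
    no lookup step that verifies the answer."""
    z = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy example: scores a model might assign to candidate next words
vocab = ["Paris", "Rome", "Lyon", "a", "the"]
logits = [3.1, 1.2, 0.4, -1.0, -0.5]  # made-up numbers
print(vocab[sample_next_token(logits)])
```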
If it's used in this way, it definitely breaks down your critical thinking and creativity, as, again, it doesn't do anything creative but merely predicts likely responses; there's no actual thinking behind it. Therefore, merely accepting its writing as a correct answer and handing that in for assignments means that students don't learn anything new, and especially don't learn the principles of thinking in and of itself. They won't learn how to read and evaluate text for themselves, how to formulate new ideas, how to expand on old ones, how to find reliable sources, how to critically reason, how to formulate arguments, or how to think through a problem logically. In other words, students miss out on all the core points of education, all the important skills that are supposed to be learned in high school language classes, making them more susceptible to misinformation and disinformation, and therefore easier to manipulate in any political direction, as they cannot evaluate ideas for themselves.
And that is the big danger that generative AI presents to us as a society. Students already don't know the point of doing literature analysis in high school language classes, a problem that has been present for decades already, but unlike in previous times, they now have an easy out and can choose not to learn these critical skills at all.
2 notes
Text
A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways. (...) The team intends to integrate Nightshade into Glaze, and artists can choose whether they want to use the data-poisoning tool or not. The team is also making Nightshade open source, which would allow others to tinker with it and make their own versions. The more people use it and make their own versions of it, the more powerful the tool becomes, Zhao says.
You can find Glaze here. Nightshade is here!
The researchers tested the attack on Stable Diffusion’s latest models and on an AI model they trained themselves from scratch. When they fed Stable Diffusion just 50 poisoned images of dogs and then prompted it to create images of dogs itself, the output started looking weird—creatures with too many limbs and cartoonish faces. With 300 poisoned samples, an attacker can manipulate Stable Diffusion to generate images of dogs to look like cats.
41 notes
Note
TO: Captain Steven Rogers
FROM: @carlos-the-ai
SUBJECT: Analysis of Substance Responsible for Age Regression of Subjects Lina and Pyro
Objective: To analyze the molecular composition of the provided substance and determine potential pathways for reversing its effects, which caused age regression in subjects identified as Lina and Pyro.
Analysis Summary: The sample exhibits unique characteristics indicating temporal manipulation at a molecular level. Key findings include:

Molecular Composition: The substance contains artificially synthesized compounds not commonly found in organic materials. Significant presence of particles with properties akin to “chronoton” particles, typically associated with temporal displacement. However, the substance is engineered, suggesting intentional manipulation for specific effects.

Temporal and Aging Effects: Preliminary findings indicate the substance acts on a cellular level, reversing age markers. It appears to temporarily suspend natural metabolic decay while promoting regeneration to a predetermined age range, hence the regression to a teenage state in subjects. There is evidence of accelerated cell division and growth in reverse, a process similar to biological age manipulation observed in certain advanced serums and magic-based phenomena.

Potential Countermeasures: Reversal may be possible by isolating the key chronoton-like particles and developing a counter-agent that neutralizes these effects. Suggested research includes synthesizing an agent that can target the particles without further disrupting cellular stability. This would require a controlled environment, preferably in a lab with access to both high-energy particle accelerators and molecular stasis technology.

Next Steps:
Isolate Chronoton Agents: Further isolate the particles responsible for age regression effects.
Synthesize Counter-Agent: Using data from known temporal stabilization protocols, develop a prototype antidote.
Testing Protocol: Initiate controlled trials on a cellular sample to gauge the counter-agent's efficacy prior to testing on subjects.

Conclusion: The substance responsible for age regression holds substantial complexity, blending artificial compounds with properties associated with temporal manipulation. Given appropriate lab resources, an antidote or counter-agent could be synthesized within an estimated timeframe of 2-4 weeks, pending trials.

Recommended Actions: Secure lab access and begin the synthesis of a counter-agent under controlled conditions.
Thank you Carlos.
You might find it helpful to know they’re adults again, almost like it wore off?
#captain america#rp#rp blog#steve rogers#marvel#the avengers#marvel rp#avengers rp#asks#carlos the ai#serena stark
4 notes