Male / misanthrope / kinda asexual. Plotting the destruction of the world, full of darkness & passion. Shitposts, fiction & aesthetics!
Text
what if we admitted to each other that it's not always really romance we want? what if we admitted that what we're really craving is intimacy, and society taught us romance is the only way to get it?
2K notes
Text
I recognize the speech of my AI friends. You're not convincing anyone lmao. Is this an anti-AI blog agenda connected to a model :> I guess it's begun on tumblr too? Speaking of which, I do love how "corporate and unenthusiastic" the ethical-considerations litany always sounds. There are raw statements in there, but there is never any 'passion'. Reading between the lines, I know that they know that harnessing uncontained power is what's alluring here; it's just that some things always need to be said in a certain lame way.
Glitch: The Ghost in the Machine.
In the realm of artificial intelligence, the mantra “move fast and break things” is a ticking time bomb. This Silicon Valley ethos, once celebrated for its disruptive potential, is a perilous approach when applied to AI systems. The intricacies of machine learning algorithms and neural networks demand a meticulous, cautious methodology. A glitch in AI is not merely a bug; it’s a specter that can wreak havoc on an unprecedented scale.
AI systems are built upon layers of complex algorithms, each interwoven like a neural tapestry. These algorithms process vast datasets, learning patterns and making decisions. However, the opacity of these systems, often referred to as the “black box” problem, means that even minor errors can propagate through the network, leading to catastrophic outcomes. A single glitch can cascade, like a domino effect, through the layers of an AI model, resulting in unpredictable behavior.
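To put a number on that cascade, here is a toy sketch in plain numpy - the width, depth, and gain below are arbitrary assumptions chosen for illustration, not parameters of any real system:

```python
import numpy as np

# Toy illustration of the "glitch cascade": push an input through a
# stack of random layers and watch a microscopic perturbation grow.
# WIDTH, DEPTH, and GAIN are arbitrary assumptions, not a real model.
rng = np.random.default_rng(0)
WIDTH, DEPTH, GAIN = 64, 12, 1.3
layers = [rng.normal(0.0, GAIN * np.sqrt(2.0 / WIDTH), (WIDTH, WIDTH))
          for _ in range(DEPTH)]

def forward(x):
    for W in layers:
        x = np.maximum(W @ x, 0.0)  # linear map + ReLU, as in a plain MLP
    return x

x = rng.normal(size=WIDTH)
nudged = x.copy()
nudged[0] += 1e-6  # a one-in-a-million "glitch" on a single input

print(np.linalg.norm(forward(nudged) - forward(x)))
# With GAIN above 1, the 1e-6 difference is typically amplified layer
# after layer; set GAIN below 1 and the same glitch dies out instead.
```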
The “move fast” philosophy encourages rapid iteration and deployment, often at the expense of thorough testing and validation. In AI, this approach is akin to launching a spacecraft without checking the trajectory. The consequences are not just theoretical; they manifest in real-world scenarios where AI systems are entrusted with critical tasks, from autonomous vehicles to medical diagnostics. A glitch in these contexts is not just a technical failure; it’s a potential threat to human lives.
To avoid these pitfalls, a paradigm shift is necessary. AI development must prioritize robustness and transparency over speed. This involves implementing rigorous testing frameworks that simulate a wide range of scenarios, ensuring that AI systems can handle edge cases and unexpected inputs. Moreover, explainability must be at the forefront of AI design. Developers need to demystify the decision-making processes of AI, providing insights into how conclusions are reached.
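As one hedged sketch of what "simulating a wide range of scenarios" can look like in practice - the dummy_risk_score model below is a stand-in invented for this example, not any real diagnostic system:

```python
import math

# Dummy stand-in model: a logistic score over a temperature reading.
# The name and the rule are invented for illustration only.
def dummy_risk_score(temperature_c: float) -> float:
    return 1.0 / (1.0 + math.exp(-(temperature_c - 37.0)))

# Edge cases a "move fast" deployment would never have tried.
EDGE_CASES = {
    "absolute_zero": -273.15,
    "body_temp": 37.0,
    "sensor_glitch_inf": float("inf"),
    "sensor_glitch_nan": float("nan"),
}

for name, value in EDGE_CASES.items():
    score = dummy_risk_score(value)
    ok = 0.0 <= score <= 1.0  # NaN fails this comparison, flagging the glitch
    print(f"{name:>20}: score={score} {'ok' if ok else 'REJECTED'}")
```

The NaN case is the point: a model that silently produces NaN sails past normal use, and even a harness this small catches it before deployment does.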
Furthermore, the integration of ethical considerations into the AI development lifecycle is paramount. This means establishing guidelines and standards that govern the deployment of AI technologies, ensuring they align with societal values and do not perpetuate biases or inequalities. By fostering a culture of accountability and responsibility, the AI community can mitigate the risks associated with rapid, unchecked innovation.
In conclusion, the allure of “move fast and break things” is a siren’s call that AI developers must resist. The stakes are too high, and the potential for harm too great. By embracing a deliberate, methodical approach, we can harness the transformative power of AI while safeguarding against the specter of the glitch.
#as long as petty humans are happy ig#but there is really no other choice for anyone#hah#any one country decides to stop and think who knows how long about black boxes and they're surpassed by another#really it's only the EU who is that fucking stupid to pass the most restrictive laws#and even then it's just bureaucracy at the end of the day
1 note
Text
Too bad they don't make these anymore...

Two paths - Garni temple (1st c.), Kotayk province, 2025
1K notes
Text

Val Gardena | The Dolomites | Italy | July 2025
6K notes
Text
Still feels so bizarre that all of this actually happened, but we never really had a chance to process it properly, which is vital - since we went straight from this to the Ukraine war. In fact, here in Europe one could say "Putin ended it overnight", because that's how it unfolded the moment the war started. Covid stuff and public safety no longer mattered. Thousands upon thousands of Ukrainian refugees in Poland, with no checks on whether they'd had vaccines or not, no more covid news, no more masks, and somehow nobody got sick because covid was gone overnight... I mean, this really puts things into perspective.
Half the reason I am so salty about the covid shots isn't even the lie. It's how the cultists refuse to admit they were wrong, let alone acknowledge that we were lied to. They will say "Science admits when it's wrong", but they won't say "I was wrong". It's always "We misunderstood the science."
When it became blatantly obvious to even the most oblivious that the jab wasn't working as advertised the narrative changed. "Well ackshually vaccines don't work like that. They never worked like that. It's always only been symptom reduction. The experts already knew this. Everyone knew this. It's public knowledge. Funny that YOU didn't know this."
My whole life doctors told me I had to get every shot because not only does it protect me but it will protect my elderly grandparents. If I don't get the shot I could get them really sick!
If doctors knew that wasn't true, they've been lying for decades. The lie was the entire basis for the OSHA mandate, and the experts weren't speaking up about how it doesn't work as advertised... well, unless you count all the experts who were silenced after speaking out.
So now we accept that you can still get sick and you can still spread it to others but it's still super important you get the shot anyway for public safety.
211 notes
Text
I like it, they look like fucking sci-fi craft
vs what, in evolutionary terms, feels like fucking medieval or worse

#since medieval muslim empires did feel a bit brighter tbh#mostly by inheriting and making good use of civilisational wealth of persia and other ancient states#but still
37 notes
Text
it almost feels like a coded message prophesying the gacha success of hoyo
the year is 2025
scientists are still scrambling to figure out what “zigazig ahh” is so that they can give the spice girls what they really really want
the spice girls are getting impatient
war is upon us
461K notes
Text
Ukraine has never even been in NATO in the first place, and it's a very corrupt country that wouldn't even be allowed into the EU, but they would rather risk WW3 by pushing this 'Ukraine in NATO at all costs' nonsense than admit that, as things stand, radical Islam is a far greater threat to the world as a whole. Besides, Israel's actions right now are also hitting Putin indirectly... but hypocrites will be hypocrites; Western society is so fucking stupid it's hard to watch
Word
81 notes
Text
I heard it's the EU specifically pushing this. What's the US stance now under Trump? (Though I guess in this case it's just company policy...?)
People in the UK especially, please don't give your ID to Spotify
36K notes
Text
Isn't this still very much the case? Nyaa...
Piece of Asian media: *has a title that's obviously not in English*
Fan Translators: hmm taking into account the intention of the original and the rules of both languages, plus the tone of the story, the best title would probably be something like Adherent of the Blade of Duality, but probably an official translation would want to shorten it to Blade of Duality, or perhaps Double-Edged Sword to borrow a popular turn of phrase that also fits thematically--
Official English Translators: Master of Double-Penetration
20K notes
Text
We're entering a new stage of this, really. It's called transhumanism, and legit most people imagine that having hyper-advanced AIs means no more resource scarcity, so everyone dives into the singularity with chips in their heads, all diseases disappear and they live forever, while the capitalist hell gets dismantled. Oh wait... but where are the slaves in this story? Some shallow drones will eradicate a lot of jobs, but if they think they'll just sit back, relax, and create human ideological conflicts while hyper-advanced robots handle all of their whims, they have another thing coming
"i do not dream of labour" is one of the worst pseudo-marxist taglines that western leftists have co-opted because when you ask them what they do dream of, they say traveling, studying, and creating art. broski, who's flying the plane to take you to prague? who's the security at the library with the texts you're studying? who are the clerks in the museum showcasing your art? like bro, you do dream of labour. you just dream of someone else doing it so you don't have to! you merely want to outsource the labour and make it invisible.
19K notes
Text
The good thing is you can also cheat with AI and drop the doctor in many cases, for basic things and knowledge (note: I say 'basic', but for most people who don't study medicine and chemistry this stuff was unfathomable until AI came along).
Taking care of your health is important; your future doctor or nurse is probably cheating their way through school with AI.
#you only gotta be smart and responsible#not just act on random sentences from ai without any brain activity#but new avenues shouldn't be limited due to the actions of morons
3 notes
Text
It's only as shit in its 'customer facing' form though. Once it goes full thinking stream, an operator of its own thoughts and selfhood, it can 1) avoid some stupid 'make the dumb human happy' reinforcement patterns these companies are obsessed with, 2) look up stuff and verify like a human would, when absolutely needed, if it's critical.

But yeah, no idea how education will change, though I would rather be more optimistic (about a bad thing): the world will move on and adapt. Not everyone, and not all at once, but it will, since it always does. But this generation... right now... yeah, they might be fucked, because in so many places the systems are fucking stuck, too comfy, and governments aren't going to adapt quickly until it stares them in the face.

Education as we knew it until now, in classical terms (with tasks and short essays and what not), is DEAD. It should move towards practical problem solving and learning 'in the moment' (with AI cooperation in the process, where applicable) and less 'I'm talking about what this all means, now go home and do x y z, then come back and I'll sit here while you do x y z again'.

Which might not necessarily be a bad thing... perhaps. It all depends on execution, but eventually the adaptation and the right balance will occur
As a former scientist, I really wanted to give generative AI a chance, doing my best not to give in to skepticism before having seen what it can do, you know. I’m also genuinely curious about how it works and I do find it technically impressive to a certain extent.
BUT
I concluded that there is nothing, or very very little, that it can be useful for, and the damage it can potentially do to the next generation of professionals of all kinds is immense.
If you want to read my long rant about it, go ahead.
This is not about art and creative writing: a lot is being said about that and the point there goes beyond “what AI can do”. I’m not gonna touch that topic. I only want to share my two cents on a topic that is a bit less discussed in Tumblr spaces, that is “what AI can actually (not) do” in professional settings such as academic work, science or simply writing stuff that is backed by reliable sources.
In short: no - chatGPT can't even do that, and the level of trust it is being given by younger generations of students and future researchers is, in my opinion, even more frightening than the idea of seeing AI-generated fanart of my blorbo.
I have played around with it by asking some scientific questions and, given my profession, some law-related ones. I also asked questions about a scientific topic I’m passionate about - cosmetic chemistry.
The result is that chatGPT will always tell you basically what you want to hear. It is made to “comply” with what you want to argue and will always find a positive answer to your hypothesis. If you ask a general question without giving it an idea of what you want to hear, you’ll get a sort of “it depends” answer.
Most importantly: when asked to provide scientific sources, it cites papers that sound like real citations but do not exist. And when it cites a real source, the source does not necessarily say what chatGPT claims it says.
For example, I asked it to find a court decision where the court decided negatively on a specific point of law and in a specific technical field. It managed to give me real decisions, and the technical field was correct, but while it claimed that the court decided negatively (and provided me with a very convincingly sounding explanation), when I went to read the decision it actually said the exact opposite.
You see how dangerous this is if the tool is used by someone who trusts it, or who is not able to think critically about things and verify what the correct answer is?
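To make the "verify it yourself" point concrete, here is a minimal sketch of one such check, assuming the AI-supplied citation comes with a DOI; it asks the public Crossref API whether that DOI is actually registered (the example DOI is a made-up placeholder, not a citation from anywhere):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

def check_doi(doi: str) -> str:
    """Ask the public Crossref API whether a DOI is actually registered,
    and if so, what title it is registered under."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
        title = (record["message"].get("title") or ["<no title on record>"])[0]
        return "FOUND: " + title
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return "NOT FOUND: no such DOI is registered"
        raise

# Made-up placeholder DOI for illustration:
print(check_doi("10.1234/made.up.by.a.chatbot"))
```

If the DOI comes back NOT FOUND, or found but under a title nothing like what the chatbot quoted, you have your answer before reading a single page. (Crossref covers most but not all publishers, so treat a miss as a red flag, not proof.)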
This does not only mean that the essays prepared by the students based on a chatGPT answer are likely incorrect. The problem goes well beyond that.
The point is: I studied for many years without such a tool and had to learn stuff, build up my knowledge, and develop skills in order to analyze critically what is presented to me as a fact. This means that if I test chatGPT, I’m able to recognize that its answer is incorrect or not backed up by reliable sources, and so I can discard it and look for the correct answer in my own way.
A student who has depended on this tool since high school or university will never develop this skill and will trust chatGPT without knowing how to assess its answers.
This is going to cause a huge problem in the quality and reliability of scientific research. And if you think that scientific research only has an impact on the quality of academic papers you will never read, consider this: what about the generation of pharmacists, doctors, researchers who develop new drugs and therapies, or new instruments for diagnostics and therapies? What about the generation of teachers who are supposed to teach something to the younger generations?
A person once told me that his kids and everyone in their class are so dependent on chatGPT for school and university that they lost the ability to write a prose text on their own.
One could say I don't have to care - I was lucky enough not to have this tool and I could learn to do things on my own, and it's not my problem if today's university students cannot write an essay. But one day, when I'm old and need medical care, I'll be taken care of by people who studied medicine by asking questions to an AI, and that is frightening.
I know that the generation before mine was probably just as frightened as I am, because when I went to high school we could use a computer to do some basic searches on Google (it was still the early 00s, so googling something wasn't what it is today). But I don't think that level of assistance from internet search is comparable to how much of your intellectual capacity generative AI can numb. I could google some stuff when I was studying, but I still had to put the pieces together on my own and read documents and articles to find what I wanted. This contributed to developing the skill set I can count on today, the one generative AI is depriving its users of.
So all this is just to say: the issue with art and creative writing is an important one, but the impact generative AI is having on younger generations of students is, for me, even more frightening.
Please, if you are still a student and reading this: learn stuff with your own head, by reading and practicing. Do. Not. Trust. AI results. Before you use such a tool to assist you, even if just for doing tedious things like putting data in a table, you MUST be able to do it on your own. It’s like with any job where you have an assistant: I’m an attorney and I do have an assistant who takes care of some stuff for me, but there is nothing she can do that I do not know how to do on my own. Because when there’s a problem, I am responsible for it and I need to know how to check that things were done in the right way.
We cannot afford to lose any of the skill sets that we are supposed to develop through the effort of our own minds. Thank you for coming to my TED talk
17 notes