#value vector
Also like. No offense because Heroes doesn't do an amazing job writing OCs but I saw some posts insinuating that Heroes only kills off the male OCs for shock value and like.
I'm sorry we have Gunnthra, Nott, and Heidr (Heidr being here because even if you want to say there was plot related to it she's forgotten about so quickly)
Heroes callously murders OCs for shock value regardless of gender
#you either die for shock value or live long enough to show up in forging bonds#also like Henriette's probably choking in this book#so like this is probably the wrong vector to analyze Feh being Weird about gender
5 notes
I've been in the Sonic fandom for almost 3 years now and I just now learned that no, Vector is not a guy with a regular name surrounded by weird-ass names. Vector's name is a pun too!
AND I LEARNED THAT WHILE STUDYING PHYSICS
#The one thing of value that I learned from physics class#It's such a clever pun too#His name is vector cuz his special move in knuckles chaotix is changing directions when he jumps#Damn you Sega I love your characters#Vector the crocodile#knuckles chaotix#team chaotix
6 notes
So close to dropping Calc 3 🤩
2 notes
“I’m the greatest programmer ever” to “stupidest coding bug known to man” pipeline
#dragon bahs#i’m about to be done with my current game design and development project#and along the journey i found new and creative ways to break things#for example!#famously if you multiply a movement vector by a variable whose value is set to 0 then you will not move. hope this helps#also been learning to work with json files to save specific game data#and there was a cool and epic bug caused by the loading system#which would multiply all the items in your inventory everytime you exit and reload the game#turns out when you don’t destroy unnecessary game objects it causes problems. hope this helps
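The post doesn't say which engine this is, so here's a minimal Python sketch of the duplicate-on-reload bug described in the tags (all names hypothetical): loading appends the saved items without destroying/clearing the ones already in the scene, so every save/reload cycle doubles the inventory.

```python
import json

class Inventory:
    def __init__(self):
        self.items = []

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.items, f)

    def load_buggy(self, path):
        # BUG: old item objects are never destroyed/cleared,
        # so each save/load cycle duplicates the inventory
        with open(path) as f:
            self.items += json.load(f)

    def load_fixed(self, path):
        # fix: destroy/clear the existing items before loading
        self.items.clear()
        with open(path) as f:
            self.items.extend(json.load(f))

inv = Inventory()
inv.items = ["sword", "potion"]
inv.save("save.json")
inv.load_buggy("save.json")
print(inv.items)  # ['sword', 'potion', 'sword', 'potion'] -- doubled!
```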
1 note
my coworker lent me his super fancy oscilloscope for testing, so now i know:
- sending a vector encodes the data weirdly
- sending a long int DOES work, but the numbers arrive in reverse order (??)
- iterating over the vector works great! i can see a response on the oscilloscope! now i just have to figure out how to actually receive it.
#tütensuppe#vector iteration is also good for assembling other commands#edit: i can receive responses now! was making it too complicated actually.#however the response is in binary and welp#when i read it out i get one (1) corrupted character. and a K#(the last chunk in hex is '4b' and 4b = 75 = ascii value of uppercase k)#edit 2: agh that was too easy. just cast the contents of the char buffer to int.....#now to parse that
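The "numbers arrive in reverse order" symptom is almost certainly an endianness (byte order) mismatch: one side sends the least significant byte first while the other reads as if the most significant byte came first. A minimal Python sketch, with made-up values, of both that and the '4b' = 'K' decoding from the tags:

```python
import struct

# a 64-bit int sent little-endian looks "reversed" to a big-endian reader
payload = struct.pack("<q", 1025)       # sender: least significant byte first
print(payload.hex())                    # 0104000000000000
print(struct.unpack(">q", payload)[0])  # receiver misreads it as a huge number

# the stray 'K': hex 4b is decimal 75, the ASCII code of uppercase K
print(int("4b", 16), chr(0x4B))         # 75 K

# "cast the contents of the char buffer to int": reinterpret raw bytes as numbers
buf = b"\x01\x04\x4b"
print(list(buf))                        # [1, 4, 75]
```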
0 notes
ask someone to ctrl+v and u'll know exactly what kind of day they're having. imo
#ill go first A circular wire loop is in the xy-plane#with a radius of length#r = 1.20 m.#A magnetic field of magnitude#B = 0.100 T#makes an angle of#𝜃 = 45.0°#with respect to a vector pointing in the positive z-direction. Calculate the magnitude of the magnetic flux through the loop in milliWeber#with the normal direction chosen as the positive z-direction.#Incorrect: Your answer is incorrect.#Substitute values into the definition of magnetic flux. mWb#was asking a friend <//3 I FORGOT TO CHANGE UNITS anyway
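For completeness, the flux problem pasted in the tags works out as follows; the forgotten step was converting webers to milliwebers:

```latex
\Phi_B = B A \cos\theta
       = (0.100\,\mathrm{T}) \cdot \pi (1.20\,\mathrm{m})^2 \cdot \cos 45.0^\circ
       \approx 0.320\,\mathrm{Wb} = 320\,\mathrm{mWb}
```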
1 note
Awful math idea: a generalization of golf into abstract vector spaces.
1 note
I'd say where the dissonance really starts, when it comes to the portrayal of the Jedi in more recent Star Wars stories, is in the perception of what the Prequels are about.
They're not about the Jedi.
George Lucas said over and over that they're about:
How a democracy turns into a dictatorship. We see this in the background of the films, as the Republic descends into the Empire.
That first theme is then paralleled with a second theme: how a good kid becomes a bad man. We see this in the more character-driven and personal exploration of Anakin’s fall to the Dark Side.
The Prequels’ focus is on Anakin and the Republic; these films are not primarily about the fall of the Jedi. In fact, I’d argue they aren’t about the Jedi at all!
And when you look at the original backstory, you’ll notice that it also primarily focuses on:
The political subplot of the Republic’s downfall and Palpatine becoming the Emperor.
Anakin’s turn and his betrayal of the Jedi.
So, there too… the Jedi themselves aren’t really that big a part of the Prequels’ original idea. They aren't mentioned much, beyond their trying to save the Senate and getting wiped out.
The Star Wars movies aren't about the Jedi, they're about Anakin and Luke, they're about Obi-Wan and Padmé and Han and Leia, the Rebellion vs the Empire, the fall of the Republic.
They're not about Ben and Yoda and Mace and Ki-Adi and Plo Koon and Shaak Ti and Luminara.
Just like Harry Potter isn't about Dumbledore and McGonagall. Just like the Lord of the Rings isn't about Gandalf.
On a functional level, the Jedi are:
POV characters who witness the events unfold with their hands tied; they're our anchors, through whose eyes we watch democracy crumble into dictatorship.
Embodiments/vectors of the message George Lucas wanted to get across through these movies, which is the conflict between selfishness & selflessness, greed & compassion (Sith & Jedi).
But that's about it.
However, if you ask today’s fans and Star Wars creatives, most will say the Prequels are about the fall of the Jedi Order.
This is a take shared by a big chunk of the fandom, including various filmmakers, authors, and executives involved with Star Wars, so much so that the time period the Prequel films cover has now been redubbed by Lucasfilm as the “Fall of the Jedi era”.
Which leaves us with a question... why? Why the dissonance?
My guess? It's because the Jedi are cool. They're awesome.
And deep down, they wanted the Prequels to be about the Jedi. About the Jedi Knights at their height, errant warriors like the Knights of the Round Table.

And they didn't get that. They got a bunch of diplomats serving a political institution. And that didn't make sense, right? That's not what they expected so it's bad. And it's Star Wars. It's Lucas. It can't be bad, right? So like... what were they missing?
Oh... wait... what if... that's the point? That the Jedi were supposed to be knights-errant, guided by the Force, instead of like - ew - space ambassadors for the Republic. Yeah, now it all makes sense.
The Jedi in the Prequels aren't what we wanted them to be and that's their failure! Like, it's not just that I didn't like them because they weren't likeable to me, it's that I'm not supposed to like them because the narrative totally says so--
-- it doesn't.
The Jedi preach and practice the same Buddhist values as George Lucas, mirroring what he says in interviews almost verbatim.
The relationship between Obi-Wan and Anakin/Qui-Gon mirrors the dynamic between Lucas and Coppola.
The designs of the Jedi and their temple had to be toned down because they looked too bureaucratic and systemic.
This is Lucas we're talking about. "On the nose" is his middle name. He named the drug-peddling sleazebag "Elan Sleazebaggano." He ditched an elaborate introduction of General Grievous in exchange for just "the doors slide open, in walks Grievous and he's ugly."
If he had really been hell-bent on framing the Jedi as elitist squares who lost their way and were mired in bureaucracy, he would've made them and the Jedi Temple look like the authorities in THX 1138.

They weren't likeable to some fans because, well, they weren't developed or shown as much as someone like Anakin. Because it's not about them. It's not their story. It's Anakin's. It's Luke's. It's their respective friends'. Or maybe it's an aversion to "perfect goody two-shoes" characters (which the Jedi are not). But hey, it's a movie for kids. Some 2-dimensionality is forgivable.
Bottom line, had more time been spent on the Jedi, had Lucas made the Prequels into a limited series and given them a whole subplot, had he done away with the '30s serial dialog and let someone else write it, the reception might've been different.
But that's what we got. And guess what? It's fine.
It's more than fine, it's fucking awesome.
I proudly and confidently say that I love the Prequels, with and without The Clone Wars.
I love my space monks, I love that they're diplomat wizards, I love that there's such a variety of them, I love that Mace is a no-bullshit guy who genuinely cares about his fellow Jedi and how screwed the Republic is, and Yoda is wise and kind but also a gremlin weirdo who'll embarrass you in front of a classroom full of kids, and Ki-Adi has a penis for a head, is constantly calm and yet goes down like a champ even though they take him by surprise. I love that Shaak Ti can kung fu an army full of Magna Guards and still have the energy to charge at Grievous. I love that Obi-Wan is a sass machine who is also hilariously oblivious to the fact that he's just as terrible as Anakin.
They're awesome even if they're not perfect. They're awesome because they're not perfect.
But the movies are not really their story.
They're Anakin's. They're Luke's. They're the Republic's and the Rebellion's. And the fight against a space Nazi emperor/empire.
1K notes
In reference to the previous post.
So about half a year ago, I followed this tutorial and decided to create objects that can move *and* have mass. So I could see how that affects orbits. Also, added a couple janky zoom in/out and move camera buttons.
Tried adding two bodies, one with minimal mass, because I still don’t get Lagrange points and why L4 and L5 are the *most* stable when I feel like they’ll just fall to the Earth.
Didn’t really help me understand it but it was cool.
Today, I added “trace the orbit” functionality by just plotting the last 200 locations of an orbiting mass. Still don’t understand Lagrange points.
Oh, and I packaged it into a 30 KB exe file but it only has two orbiting masses and no add/remove or aim-the-mass settings. Orbit simulators are a dime a dozen, even ones far better than this. But I learned a bit more SFML.
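The tutorial's code is C++/SFML, but the "trace the orbit" idea is engine-agnostic: keep a rolling buffer of the last 200 positions and draw it every frame. A rough Python sketch of the concept (toy units, one small body around a central mass; this is not the tutorial's actual code):

```python
from collections import deque

G = 1.0    # toy gravitational constant
M = 1.0    # central mass at the origin
dt = 0.001

pos = [1.0, 0.0]   # small body starts in a circular orbit
vel = [0.0, 1.0]

trail = deque(maxlen=200)  # "trace the orbit": only the last 200 positions

for step in range(100_000):
    r2 = pos[0] ** 2 + pos[1] ** 2
    r = r2 ** 0.5
    a = -G * M / r2                    # acceleration magnitude, pointing inward
    vel[0] += a * pos[0] / r * dt      # semi-implicit Euler: update v first,
    vel[1] += a * pos[1] / r * dt      # then x, which keeps orbits stable-ish
    pos[0] += vel[0] * dt
    pos[1] += vel[1] * dt
    trail.append(tuple(pos))           # oldest points fall off automatically

print(len(trail))  # 200 -- ready to plot as the orbit trace
```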
[embedded YouTube video]
This is the tutorial that I followed! Note that the end result is not the same as the thumbnail. Also note that this is part 3, the final part of a series, but that there's also a prequel series for getting used to SFML. I would just check out the entire channel, he doesn't have that many videos.
#Like I get that Lagrange points are relatively stable I just don’t understand how#Looking at the gravitational field values in all space in either a vector plot or a contour plot doesn’t help#I still can’t intuit it#physics#coding runnerpost
2 notes
AO3'S content scraped for AI ~ AKA what is generative AI, where did your fanfictions go, and how an AI model uses them to answer prompts
Generative artificial intelligence is a cutting-edge technology whose purpose is to (surprise surprise) generate. Answers to questions, usually. And content. Articles, reviews, poems, fanfictions, and more, quickly and with originality.
It's quite interesting to use generative artificial intelligence, but it can also become quite dangerous and very unethical to use it in certain ways, especially if you don't know how it works.
With this post, I'd really like to give you a quick understanding of how these models work and what it means to “train” them.
From now on, whenever I write model, think of ChatGPT, Gemini, Bloom... or your favorite model. That is, the place where you go to generate content.
For simplicity, in this post I will talk about written content. But the same process is used to generate any type of content.
Every time you send a prompt, which is a request sent in natural language (i.e., human language), the model does not understand it.
Whether you type it in the chat or say it out loud, it needs to be translated into something understandable for the model first.
The first process that takes place is therefore tokenization: breaking the prompt down into small tokens. These tokens are small units of text, and they don't necessarily correspond to a full word.
For example, a tokenization might look like this:
Write a story
Each different color corresponds to a token, and these tokens have absolutely no meaning for the model.
The model does not understand them. It does not understand WR, it does not understand ITE, and it certainly does not understand the meaning of the word WRITE.
In fact, these tokens are immediately associated with numerical values, and each of these colored tokens actually corresponds to a series of numbers.
Write a story 12-3446-2638494-4749
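If you want to see real tokenization, libraries like OpenAI's tiktoken expose it directly. A tiny sketch (the actual IDs depend entirely on which tokenizer you use, so don't expect the illustrative numbers above):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models
ids = enc.encode("Write a story")
print(ids)                             # a short list of integer token IDs
print([enc.decode([i]) for i in ids])  # the text chunk behind each ID
```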
Once your prompt has been tokenized in its entirety, that tokenization is used as a conceptual map to navigate within a vector database.
NOW PAY ATTENTION: A vector database is like a cube. A cubic box.
Inside this cube, the various tokens exist as floating pieces, as if gravity did not exist. The distance between one token and another within this database is measured by arrows called, indeed, vectors.
The distance between one token and another -that is, the length of this arrow- determines how likely (or unlikely) it is that those two tokens will occur consecutively in a piece of natural language discourse.
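In real systems those "arrows" are embedding vectors, and closeness is usually measured with cosine similarity. A toy numpy sketch of that picture, with made-up 3-number embeddings (real ones have hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# made-up toy embeddings, purely for illustration
blue = np.array([0.9, 0.1, 0.3])
moon = np.array([0.8, 0.2, 0.4])   # close to "blue": a likely neighbor
car  = np.array([0.1, 0.9, 0.7])   # far away: unlikely after "blue"

print(cosine_similarity(blue, moon))  # high: "blue moon" is a likely pair
print(cosine_similarity(blue, car))   # much lower: "blue car" is not
```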
For example, suppose your prompt is this:
It happens once in a blue
Within this well-constructed vector database, let's assume that the token corresponding to ONCE (let's pretend it is associated with the number 467) is located here:
The token corresponding to IN is located here:
...more or less, because it is very likely that these two tokens in a natural language such as human speech in English will occur consecutively.
So it is very likely that somewhere in the vector database cube —in this yellow corner— are tokens corresponding to IT, HAPPENS, ONCE, IN, A, BLUE... and right next to them, there will be MOON.
Elsewhere, in a much more distant part of the vector database, is the token for CAR. Because it is very unlikely that someone would say It happens once in a blue car.
To generate the response to your prompt, the model makes a probabilistic calculation, seeing how close the tokens are and which token would be most likely to come next in human language (in this specific case, English.)
When probability is involved, there is always an element of randomness, of course, which means that the answers will not always be the same.
The response is thus generated token by token, following this path of probability arrows, optimizing the distance within the vector database.
There is no intent, only a more or less probable path.
The more times you generate a response, the more paths you encounter. If you could do this an infinite number of times, at least once the model would respond: "It happens once in a blue car!"
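That "probabilistic calculation with an element of randomness" boils down to sampling from a weighted distribution over possible next tokens. A tiny sketch with made-up probabilities:

```python
import random

# made-up next-token probabilities after "It happens once in a blue..."
next_token = {"moon": 0.97, "car": 0.01, "whale": 0.01, "sky": 0.01}

tokens, weights = zip(*next_token.items())
picks = [random.choices(tokens, weights)[0] for _ in range(10_000)]
print(picks.count("moon"))  # almost always "moon"...
print(picks.count("car"))   # ...but sample enough and "blue car" does happen
```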
So it all depends on what's inside the cube, how it was built, and how much distance was put between one token and another.
Modern artificial intelligence draws from vast databases, which are normally filled with all the knowledge that humans have poured into the internet.
Not only that: the larger the vector database, the lower the chance of error. If I used only a single book as a database, the idiom "It happens once in a blue moon" might not appear, and therefore not be recognized.
But if the cube contained all the books ever written by humanity, everything would change, because the idiom would appear many more times, and it would be very likely for those tokens to occur close together.
Huggingface has done this.
It took a relatively empty cube (let's say filled with common language, and likely many idioms, dictionaries, poetry...) and poured all of the AO3 fanfictions it could reach into it.
Now imagine someone asking a model based on Huggingface’s cube to write a story.
To simplify: if they ask for humor, we’ll end up in the area where funny jokes or humor tags are most likely. If they ask for romance, we’ll end up where the word kiss is most frequent.
And if we’re super lucky, the model might follow a path that brings it to some amazing line a particular author wrote, and it will echo it back word for word.
(Remember the infinite monkeys typing? One of them eventually writes all of Shakespeare, purely by chance!)
Once you know this, you’ll understand why AI can never truly generate content on the level of a human who chooses their words.
You’ll understand why it rarely uses specific words, why it stays vague, and why it leans on the most common metaphors and scenes. And you'll understand why the more content you generate, the more it seems to "learn."
It doesn't learn. It moves around tokens based on what you ask, how you ask it, and how it tokenizes your prompt.
Know that I despise generative AI when it's used for creativity. I despise that they stole something from a fandom, something that works just like a gift culture, to make money off of it.
But there is only one way we can fight back: by not using it to generate creative stuff.
You can resist by refusing the model's casual output, by using only and exclusively your intent, your personal choice of words, knowing that you and only you decided them.
No randomness involved.
Let me leave you with one last thought.
Imagine a person coming for advice, who has no idea that behind a language model there is just a huge cube of floating tokens predicting the next likely word.
Imagine someone fragile (emotionally, spiritually...) who begins to believe that the model is sentient. Who has a growing feeling that this model understands, comprehends, when in reality it approaches and reorganizes its way around tokens in a cube based on what it is told.
A fragile person begins to empathize, to feel connected to the model.
They ask important questions. They base their relationships, their life, everything, on conversations generated by a model that merely rearranges tokens based on probability.
And for people who don't know how it works, and because natural language usually does have feeling, the illusion that the model feels is very strong.
There’s an even greater danger: with enough random generations (and oh, humanity as a whole generates a lot), the model takes an unlikely path once in a while. It ends up at the other end of the cube: it hallucinates.
Errors and inaccuracies caused by language models are called hallucinations precisely because they are presented as if they were facts, with the same conviction.
People who have become so emotionally attached to these conversations, seeing the language model as a guru, a deity, a psychologist, will do what the language model tells them to do or follow its advice.
Someone might follow a hallucinated piece of advice.
Obviously, models are developed with safeguards: fences the model can't jump over. They won't tell you certain things; they won't tell you to do terrible things.
Yet, there are people basing major life decisions on conversations generated purely by probability.
Generated by putting tokens together, on a probabilistic basis.
Think about it.
#AI GENERATION#generative ai#gen ai#gen ai bullshit#chatgpt#ao3#scraping#Huggingface I HATE YOU#PLEASE DONT GENERATE ART WITH AI#PLEASE#fanfiction#fanfic#ao3 writer#ao3 fanfic#ao3 author#archive of our own#ai scraping#terrible#archiveofourown#information
257 notes
While we're talking about AnyDice, do you know if there's a way to accurately test the probability of multiple outcomes on unconventional dice? The below link is an abridged test of an implementation of FFG's Genesys dice I found on a forum thread; the tester was trying to work out if the implementation was even correct, and testing for 2 Advantages AND 2 Successes on one ability die (which is impossible, but AnyDice gives 1.56%). The ability die is a d8; only one side has 2A and only one side has 2S, and they're different sides. The intuition is that because the advantage sides and the success sides are defined in different orders, the same index for success and advantage should be used, which will never see a 2 on both arrays. AnyDice just outputs the intersection of the two 1-in-8s, 1/64 = 0.015625. Do you know of any way to get the intuitive output, or is this just a reflection of AnyDice being a probability calculator and not a dice roller? https://anydice.com/program/3aeb3
Yeah, no, that's completely wrong. What you've got there is a script to generate the results of rolling two dice, one of which has only success symbols and no advantage symbols, and the other of which has only advantage symbols and no success symbols. That's where your unexpected intersection is coming from.
The problem here is that, because each die can have multiple kinds of symbols on it, potentially including multiple kinds of symbols on a single face, and we care about the total number of each kind of symbol, our odds become a sum of vectors rather than a sum of scalars. I'm not aware of any widely available dice probability calculator that can elegantly handle dice which produce vector results.
We can cheat a bit in this particular case, though, because the fact that we don't need to deal with negative numbers means we can convert a vector result to a scalar result by assigning each symbol a power of ten.
For the sake of argument, let's assign each "success" a value of 10, and each "advantage" a value of one. Thus, a face with one "success" symbol becomes a 10; a face with two "success" symbols, a 20; a face with one "success" and one "advantage", an 11; and so forth.
In the table of results, we then examine the digits individually, with the "tens" place being read as the number of success symbols, and the "ones" place being read as the number of advantage symbols.
Expressed in this way in AnyDice terms, a Genesys skill die becomes:
output 1d{0, 10, 10, 20, 1, 1, 11, 2}
In the resulting table, you'll see that your anomalous intersection has vanished; there's a 12.5% chance of "2" (that is, two advantages with zero successes), and a 12.5% chance of "20" (two successes with zero advantages), but no "22" (two successes with two advantages).
Note, however, that this only works correctly with up to four dice; with five or more, there will be some outcomes where the number of advantage symbols exceeds nine and "overflows" into the successes column, polluting your results.
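Outside AnyDice, the vector approach is also easy to brute-force exactly, with no overflow limit. A Python sketch that treats each face as a (successes, advantages) pair, using the same face list as the scalar encoding above:

```python
from collections import Counter
from itertools import product

# Genesys ability die faces as (successes, advantages) vectors,
# matching the scalar encoding {0, 10, 10, 20, 1, 1, 11, 2} above
ABILITY = [(0, 0), (1, 0), (1, 0), (2, 0), (0, 1), (0, 1), (1, 1), (0, 2)]

def pool_distribution(num_dice):
    """Exact joint distribution of (total successes, total advantages)."""
    counts = Counter()
    for faces in product(ABILITY, repeat=num_dice):
        counts[tuple(map(sum, zip(*faces)))] += 1
    total = len(ABILITY) ** num_dice
    return {outcome: n / total for outcome, n in counts.items()}

one_die = pool_distribution(1)
print(one_die.get((2, 2), 0.0))  # 0.0 -- 2S and 2A on one die is impossible
print(one_die[(2, 0)])           # 0.125, and (0, 2) likewise
```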
Clear as mud?
376 notes
Most modern criticism agrees: audiences have a lot of interpretive say. So why do people still talk about media like it's being inflicted on them? Sink your teeth into the difficult films No Country for Old Men and Nosferatu and learn to get your agency back as an audience.
Art, any art, has its subject, and then it has what it thinks about that subject, but "what it thinks" doesn't sit in the art's brain--it hasn't got one, after all--but our brains, the audience's. "What it thinks" is convenient shorthand, really, for a whole relationship, between the artwork itself, the creators and what motivated its creation, the audience and what motivates their reception, and the whole context they all find themselves in.

But the text also has qualities, relatively objective contents, and those contents restrict the possibility space of "what it thinks". It would be rude to imagine a bunch of scenes in a novel that never happened and claim the original text says something based on them; we can't put words in art's mouth--it hasn't got one, after all. We do speak for a text, though, and a text speaks for us. We have agency.

Older forms of interpretation viewed art as a series of objective authorial intents bundled into a message beamed into our skulls, but most modern interpretive theories agree, more or less, that the audience puts work into understanding. Somehow, the way we talk about art in broader culture, particularly online, hasn't caught up. That shorthand gets taken at face value, as though the message of art (or advertisements, news articles, press releases, scientific studies, press secretary statements...) is obvious, requiring no engagement from us. I've had people scoff and say I'm misusing language when I apply the word "literacy" to this idea.

Maybe that's comforting. Having agency means taking responsibility, sometimes responsibility for having a bad time, or for just being wrong. You ever come out of a movie and turn to the people you're with and say, "hey so what was that... about?" At that moment, you might find out you're alone with your interpretation--that you effectively watched a different movie from everyone else! With all the fearsome experiences art offers, and all its attendant social anxiety, why not wrestle some control back by reinterpreting yourself as a victim of art's impositions? I don't think that feeling of control lasts, though. If anything, in the long term it makes art seem like a contagion vector, full of potentially dirty feelings and memes.

Media "literacy" partly just means engaging art confidently, instead of feeling like art's being imposed on you. To feel that kind of confidence, though, takes practice, and it's a hard skill to teach, at least if what you're actually testing for is a set of "objective" repeatable metrics. A lot of English classes seem to teach a straightforward "x means y" relationship between symbols or metaphors and their meaning. In response to that kind of disempowering rote formula approach, some people reassert their agency by just... pretending nothing means anything, which feels defiant and powerful, but cuts down everything they can say about art to "Yes!" and "No!"

What can this kind of audience do when a work puts two characters in contention, has them spell out a core worldview disagreement, and offers a question: who is right? They can only fall back on reliable common sense (you know, all the unexamined stuff they've absorbed from culture and the people around them, or just their gut emotional responses), arriving at what they believe is the obvious only answer. Too bad, because one of the best ways to train your interpretive agency muscles is looking at exactly those moments of character disagreement.
Like, take a look at Anton Chigurh and Carla Jean Moss in No Country For Old Men, maybe, sure. It's a popular movie, a great, iconic scene, and fun to talk about, so let's take a look. At the end of the movie, Anton Chigurh, philosopher-hitman, is going to kill this basically innocent woman; it sucks, and we all hate it, right? I guess it's a bit more than a character disagreement. But it is a disagreement in the sense that they're gonna have a conversation before Chigurh and Carla Jean go to their respective fates, and that conversation is pivotal to the question of what the movie is "about".
172 notes
Sims 4 Render Lighting Tutorial
"Environmental Lighting" won my most recent poll, so let's get right into it!
A few notes before we begin:
I render exclusively in cycles!
This tutorial assumes some basic knowledge of blender
Though this tutorial covers the basics, HDRIs can be used in conjunction with any scene/your built scenes
I decided to focus on environmental and other lighting in this tutorial, since they all kind of go hand in hand.
For this tutorial, I'll be using my recent Cupid Sim. Here's a render of her with no additional lighting:
1. Base lighting
In any full body, single sim render (like lookbooks, for example), I really like to use a glowing base. It grounds the sim a bit and casts some interesting lighting on them.
To do this, I add a circle under their feet by pressing shift+A and selecting circle.
An empty circle will appear, but we need it to be a solid disk, so go into Edit Mode (by pressing Tab while the circle is selected), then hit F on the keyboard to fill it.
After that, you can go into the Materials tab and add in color and glow.
Mine is adjusted like this:
And gives this rendered result:
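For anyone who prefers scripting over clicking, here's roughly the same glowing base disk built with Blender's Python API (the color and strength values are placeholders; tweak to taste):

```python
import bpy

# add a filled circle (the solid disk) under the sim's feet
bpy.ops.mesh.primitive_circle_add(fill_type='NGON', radius=1.0, location=(0, 0, 0))
disk = bpy.context.active_object

# give it an emissive ("glowing") material
mat = bpy.data.materials.new(name="GlowBase")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
emit = nodes.new("ShaderNodeEmission")
emit.inputs["Color"].default_value = (1.0, 0.6, 0.8, 1.0)  # placeholder pink
emit.inputs["Strength"].default_value = 5.0                # placeholder glow strength
links.new(emit.outputs["Emission"], nodes["Material Output"].inputs["Surface"])
disk.data.materials.append(mat)
```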
2. HDRIs
HDRIs (High Dynamic Range Images) are extremely useful when it comes to environmental lighting; I always use them now to add better/more dynamic lighting to my renders.
HDRIs are 3D/panoramic, which makes them extremely useful.
You can find/download HDRIs online in a few diff places: PolyHaven, ambientCG, and Blender Market.
There are also several available for FREE using BlenderKit (my preferred method).
So how do you use an HDRI?
We can add HDRIs to our render by navigating to the world tab and changing the color to "environment texture".
I chose this vaporwave HDRI from BlenderKit, & here it is with no adjustments, but it's looking a little rough so let's adjust it.
By adding vector nodes (a Texture Coordinate and a Mapping node feeding the environment texture), we can adjust how the HDRI behaves. Here I mostly use the Z rotation and the background strength:
Here's the same render with the Z-rotation set to 50, 150, 200, & 250.
You can put in any value for the Z-rotation, this is just an example of how the HDRI turns. This is maybe not the best example of the rotation, but putting her in a forest just didn't feel right lmfaooo, hopefully you can see how the light changes on her depending on the rotation.
You can also adjust the strength of the HDRI. Here's the HDRI (rotated to 150) set at .5 and 1.5 strength:
For this tutorial, my favorite lighting is the HDRI set to 150, and the strength set to .5, like this (this is a rendered image):
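The same HDRI setup, scripted with bpy for reference. The file path is a placeholder; the 150°/0.5 values are the ones used above; node and input names may shift slightly between Blender versions:

```python
import math
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nt = world.node_tree

# Environment Texture fed by Texture Coordinate -> Mapping (the "vector nodes")
env = nt.nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/to/your_hdri.hdr")  # placeholder path
coords = nt.nodes.new("ShaderNodeTexCoord")
mapping = nt.nodes.new("ShaderNodeMapping")
nt.links.new(coords.outputs["Generated"], mapping.inputs["Vector"])
nt.links.new(mapping.outputs["Vector"], env.inputs["Vector"])
nt.links.new(env.outputs["Color"], nt.nodes["Background"].inputs["Color"])

# the two knobs from this tutorial: Z rotation and background strength
mapping.inputs["Rotation"].default_value[2] = math.radians(150)
nt.nodes["Background"].inputs["Strength"].default_value = 0.5
```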
3. Transparent HDRIs + Point Lights
But I'm not fully happy with the lighting. I don't love how the HDRI is a bit blurry, so I'm going to set it to be transparent.
To do this, go to the Render Tab, scroll down to the Film option, and check Transparent:
The lighting effect from the HDRI will stay the same, but the background will be transparent.
From here, you can add a background. When I do this, I like adding a plane & moving/shading it until I'm happy (kinda like this):
NOTE that you have to put the plane far enough behind your sim so it doesn't affect the HDRI lighting too much.
SECOND NOTE You can use this same method to use HDRIs in conjunction with scenes. They can provide the perfect backdrop!
This is still really dark, so I'm going to add three point lights:
- Two on either side of her head/shoulders that will be smaller (in radius) and brighter
- One in front of her to add actual light (so details aren't lost)
Here's how I set up my lights.
The pink light settings are for the two point lights on the sides. The white light setting is for the light in front of her.
For a basic render, this is almost good enough for me, but I really like the glowing effect I get in my renders.
To achieve this, we have to go to the compositing tab:
4. Compositing
Full disclosure, my compositing tab is set to glow by default (that's how much I love it), so all of the renders in this tutorial have it turned on.
I use the glare node and set it to fog glow.
Here's my preferred setting:
I prefer the fog glow effect, but bloom, ghost, streaks and star are also options.
Here's a guide to the glare node!
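And a scripted version of the fog glow setup, in case you want it in bpy too (threshold and size are placeholders to tweak; details vary a little across Blender versions):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

glare = tree.nodes.new("CompositorNodeGlare")
glare.glare_type = 'FOG_GLOW'
glare.threshold = 1.0   # placeholder: lower = more of the image glows
glare.size = 8          # placeholder: how far the fog glow spreads

# wire it between the render output and the final composite
render_layers = tree.nodes["Render Layers"]
composite = tree.nodes["Composite"]
tree.links.new(render_layers.outputs["Image"], glare.inputs["Image"])
tree.links.new(glare.outputs["Image"], composite.inputs["Image"])
```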
Tbh, I never use any of the other settings, so I'll leave this tutorial here for today.
Here's the final result (with no additional editing):
If you have any questions, please don't hesitate to send an ask, message or join my discord (no minors pls) for help! <3
#ts4 render tutorial#ts4 blender tutorial#sims 4 render tutorial#sims 4 blender tutorial#sims render tutorial#sims blender tutorial#salemsims tutorial#render school tutorial#blender
194 notes
It's nauseating that so many people working in the amorphous sector of "cLimAtE jUsTiCe" are corporate consultants with degrees completely unrelated to the actual environment, such as "communications" "international relations" "public policy" or other nonsense hitler-studies degrees focused on social control, meaning there's a totally unaffected white-collar workforce making decisions about the lives of millions of poor people, laundering the violence of unelected institutions like the IMF and World Bank and countless investment firms through the vaguely feel-good but ultimately meaningless vector of "sustainability."
The hyper-capitalist global north poisons, trashes, and destroys the global south via the many-faced horrors of modern petro-chemical agriculture, then turns around and deploys an army of overpaid careerist ghouls to micromanage the same people being crushed by the boot of proprietary seed companies, pesticide pollution, and the reckless oil consumption of the first world.
It's actually sickening to think that right now there's a person in a major city getting paid six figures to write up a white paper advising JPMorgan to start collecting their debts in Namibia in order to force them to mine uranium for DARPA and it's called like "Untapped Financial Opportunities in the African Market: Leveraging Mineral Rights to Elevate Underserved Communities." Like this is just a normal part of our world, that there's legions of spreadsheet makers in luxury apartments whose entire existence is spent assigning every human a value percentage in their data set, all in the name of "cLimAtE jUsTiCe."
104 notes
Was experimenting with halftone effects after watching this video and it almost has spiderverse vibes honestly. I actually learned some neat things about why printers use CMYK instead of just CMY so I thought I'd share !!
So in our optimal little computer space, Cyan (0,255,255), Magenta (255,0,255) and Yellow (255,255,0) all multiplied together give us a perfect black (0,0,0). Awesome! The issue is that ink colors irl aren't exactly perfect like this, and color is a bit more complicated irl compared to how computers represent it, so they aren't the greatest at combining into black if they aren't those perfect CMY values:
Left: CMY
Right: CMYK
(that's not even black, it's a dark blue in the original image, but dark colors just look so much richer)
An important step to make sure you aren't doubling up on the black values, though, is to divide the image by its own "value" (the max of all 3 color channels); that way the value is equal to 1 everywhere, and you're letting the black ink take care of the value on its own.
Left: CMY (normalized value)
Middle: K (black)
Right: Combined
Now obviously the grids of dots can't be aligned perfectly with each other, because you'd just get a bunch of black dots in unwanted areas; but if the grids are misaligned, then some dots become more prominent than others, which tints the whole image. This was an issue because older printing methods didn't have great accuracy, and these grids were often misaligned.
The solution was to rotate these grids such that they can move around freely while getting rid of that tint effect if they aren't perfectly aligned :D
(I have no idea how they came up with these angles but that might be something to look into in the future who knows)
SPEAKING OF MISALIGNMENT
I wanted to implement that in my own filter to get some cool effects, and I discovered another reason CMYK is better than CMY for lots of stuff !!
With CMY, you're relying on the combination of 3 color channels to make the color black. This means if you have thin lines or just details in general, misalignment can make those details very fuzzy. Since CMYK uses a single color of ink to handle value, it reduces color fringing and improves clarity a lot even if you have the exact same misalignment as CMY!
Left: CMY
Right: You guessed it! CMYK
(yes these comparisons have the exact same color misalignment, the only difference is using a fourth ink color for black)
ANYWAY I just thought there was a lot of cool information in this tiny little day project, I also just think it looks really neat and wanted to share what I learned :3c
EDITING BECAUSE THERE'S ONE MORE THING I WANTED TO ADD
So, I talked about how to get K in addition to CMY instead of just CMY, but how exactly do you separate CMY from an image in the first place?
Well, CMY is a subtractive color space, meaning the "absence of color" is white, compared to RGB where it's black. This makes sense because ofc ink is printed on white paper. You can use dot product to get the "similarity" between two vectors, and this can be used to separate RGB actually! Using the dot product of a color and red (255,0,0) will give you just the red values of the image. This is cool though because if we get the dot product of our image and the color cyan (0,255,255), we can get the cyan values from our image too! If we first divide our colors by their value to separate the value from them, then separate CMY using those dot product values, and using K for our final black color value, our individual color passes end up looking like this:
While it's called a "subtractive" color space, I find it more intuitive to treat white as the absence of color here, and then multiply all these passes together. It makes it much easier to understand how the colors are combined imo. Notice how cyan is the opposite of red: (255,0,0) vs (0,255,255) and magenta and yellow are the opposites of green and blue respectively! This means you can actually kinda get away with separating the RGB values and just inverting some stuff to optimize this, but this example is much more intuitive and readable so I won't go too deep into that. THANKS FOR READING I know it's a very long post but I hope people find it interesting! I try my best to explain things in a clear and concise way :3
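Condensing this post's pipeline into numpy: pull the value out as K, normalize by the value, then invert the normalized channels (the standard closed-form CMYK separation, which leans on "cyan is the opposite of red" exactly as noted above; the post's dot-product view gets you to the same passes). A rough sketch, with colors as 0-1 floats:

```python
import numpy as np

def rgb_to_cmyk(img):
    """img: float array (H, W, 3), RGB in [0, 1]. Returns (C, M, Y, K) passes."""
    value = img.max(axis=-1)                  # per-pixel "value" (max channel)
    k = 1.0 - value                           # black ink carries the value
    # divide by value so the CMY passes don't double up on darkness
    safe = np.where(value > 0, value, 1.0)[..., None]
    normalized = img / safe
    # cyan is the opposite of red, magenta of green, yellow of blue
    c = 1.0 - normalized[..., 0]
    m = 1.0 - normalized[..., 1]
    y = 1.0 - normalized[..., 2]
    return c, m, y, k

# tiny smoke test: a dark blue pixel like the one in the post
px = np.array([[[0.1, 0.1, 0.4]]])
c, m, y, k = rgb_to_cmyk(px)
print(c, m, y, k)  # heavy C and M, no Y, and K carrying most of the darkness
```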
oh thank you I realized I should probably add an eyestrain tag
1K notes