# Diffusive Flux
(For real though, I totally understand people's concerns and why many people hate it, but I do want more people to know that AI text-to-image isn't just writing a prompt. There are ControlNet, IP-Adapter, regional prompting, layer diffusion, IC-Light, RF-Inversion, Redux... lots of fun stuff. ☕)
We serve the Owl Class
Nebula
anima_pencil-XL -> AuraFlow -> blue_pencil-flux1
I decided to test Flux, the new model billed as Stable Diffusion's successor. The prompt was at least similar to what I tested SDXL 1.0 with: a hippie girl wearing a t-shirt with the words "Everything Will Be Alright".
Well, shit. The thing is an absolute fucking behemoth, with an 11 GB model and a 5 GB dictionary, so it put my 8 GB video card and brand-new 32 GB of DDR5 system RAM through the wringer. Rendering just one picture took about forty minutes when, for comparison, both SD 1.5 and SDXL need about a minute and a half. But it does replicate text fairly well, with solid legibility and only minor typos, if any. I also tried a web-based setup on the developers' site that's incomparably faster but still makes typos (for example, I got a period-accurate white t-shirt with the phrase set in Cooper Black, but it said "everything will be arright").
Also, from what I'm seeing, Flux not only messed up the number of fingers just once across a lot of pictures generated by multiple people, but can also, finally, draw guns correctly, something that annoyed me in Stable Diffusion for a long time. However, it's not that good at replicating art, being aggressively photorealistic no matter the prompt: I asked the web-based setup for a replica of Rembrandt's style, something the previous models did fairly well, and got jack shit.
Also, a curious thing: I tried the Glurb prompt, a string of random nonsense that returned goblins in every previous version of Stable Diffusion no matter the specific model, and the web-based setup returned something much closer to people's attempts in Midjourney instead, meaning the whole thing was written from scratch rather than based on any previous version.
Bode's Galaxy (M81, center) and the Cigar Galaxy (M82, bottom left) surrounded by Integrated Flux Nebulae // Khalid Al-Mutairi
Bufo kermitus idaltu was the more primitive ancestor of the modern Bufo kermitus sapiens
(I don't wanna post this vent on the reddit SD sub because I know some people would be absolutely triggered 😅)
Most model trainers don't know what they're doing. There, I said it.
First misconception: not using any regularization images. Fucking hell, I can't believe how many times I've seen people say "reg is not necessary and only makes your training time longer". THEY ARE WRONG. Reg is what lets your character model do styles other than the ones in your dataset. I've seen so many people complaining that their face LoRAs don't mix well with style LoRAs. USE👏THE👏GODDAMN👏REG, guys.
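To make the point concrete, here's a minimal sketch (plain Python, hypothetical file names) of the dataset-mixing convention most LoRA trainers follow: the small character set gets repeated several times per epoch, regularization images appear once, and the two are shuffled together so the model keeps seeing generic examples of the class and doesn't collapse onto your dataset's style.

```python
import random

def build_epoch(char_images, reg_images, char_repeats=10, reg_repeats=1, seed=0):
    # Repeat the small character set, add reg images once, and shuffle.
    # This mirrors the usual "repeats" convention in LoRA trainers: the reg
    # images keep the model anchored to styles outside your own dataset.
    items = [(path, "char") for path in char_images for _ in range(char_repeats)]
    items += [(path, "reg") for path in reg_images for _ in range(reg_repeats)]
    random.Random(seed).shuffle(items)
    return items

chars = [f"char_{i}.png" for i in range(30)]   # your 30-image character set
regs = [f"reg_{i}.png" for i in range(300)]    # generic class/reg images
epoch = build_epoch(chars, regs)               # 300 char samples + 300 reg samples
```

The exact repeat counts vary by trainer and taste; the point is only that reg images are part of every epoch, not an optional extra.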
Second misconception: only using square images (one aspect ratio) in your dataset. Sure, if you're only ever going to make square images, do as you please, but if you want other aspect ratios, add some other aspect ratios. AI can be stupid sometimes; you need to teach it properly. Even Flux is sometimes not smart enough to know what your character should look like from a long distance, never mind SDXL.
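For the curious, this is roughly what aspect-ratio bucketing (the mechanism trainers use to handle mixed aspect ratios) does under the hood; the bucket list here is a hypothetical SDXL-style set, not any trainer's exact one:

```python
def assign_bucket(width, height, buckets):
    # Route each image to the training resolution whose aspect ratio is
    # closest to its own, so wide/tall images aren't cropped into squares.
    ar = width / height
    return min(buckets, key=lambda wh: abs(wh[0] / wh[1] - ar))

buckets = [(1024, 1024), (832, 1216), (1216, 832)]
wide = assign_bucket(1920, 1080, buckets)   # landscape photo -> (1216, 832)
tall = assign_bucket(1080, 1920, buckets)   # portrait photo -> (832, 1216)
```

If your whole dataset lands in the square bucket, the model never trains on the wide or tall resolutions you'll generate at later.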
Third misconception: using lots of photos with the exact same background and clothing. AI can be VERY stupid sometimes. If you put, say, 15 images with the same background and clothing inside your 30-image smol dataset, and you're not even using reg, then don't blame the model for not being flexible enough. Captioning/tagging can only go so far; it's not like you tag all 15 images "blue suit" and the model will just know NOT to generate something suit-ish when you ask for a t-shirt.
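One cheap sanity check (a toy script, not from any actual trainer) is to count how often each tag appears across your captions; anything present in a large fraction of the dataset tends to get baked into the character concept no matter how you tag it:

```python
from collections import Counter

def overrepresented_tags(captions, threshold=0.4):
    # Tags appearing in more than `threshold` of all captions are likely
    # to leak into the learned concept instead of staying controllable.
    counts = Counter(tag for caption in captions for tag in set(caption))
    n = len(captions)
    return sorted(tag for tag, c in counts.items() if c / n > threshold)

# 15 near-duplicate shots plus 15 varied ones, as in the example above:
captions = [{"blue suit", "office"}] * 15 + [{f"outfit_{i}"} for i in range(15)]
flagged = overrepresented_tags(captions)   # -> ['blue suit', 'office']
```

The 0.4 threshold is an arbitrary illustration; the useful part is seeing which tags dominate before you blame the model.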
And Flux trainers, check your goddamn anatomy before posting your models!!! If the hands have extra fingers or weird nails at strength 1, your model is either overcooked or badly undertrained. I've seen countless poorly trained models: my character model alone gives perfect hands, but combined with some style models the hands turn bloody cursed. Don't underestimate the adaptive optimizers; if you can't cook it right, let Prodigy cook for you. It's called Prodigy for a reason.
<End of my vent.>
Gulf Of America Taco Sale