pizzaronipasta · 1 year
Text
READ THIS BEFORE INTERACTING
Alright, I know I said I wasn't going to touch this topic again, but my inbox is filling up with asks from people who clearly didn't read everything I said, so I'm making a pinned post to explain my stance on AI in full, but especially in the context of disability. Read this post in its entirety before interacting with me on this topic, lest you make a fool of yourself.
AI Doesn't Steal
Before I address people's misinterpretations of what I've said, there is something I need to preface with. The overwhelming majority of AI discourse on social media is argued based on a faulty premise: that generative AI models "steal" from artists. There are several problems with this premise. The first and most important one is that this simply isn't how AI works. Contrary to popular misinformation, generative AI does not simply take pieces of existing works and paste them together to produce its output. Not a single byte of pre-existing material is stored anywhere in an AI's system. What's really going on is honestly a lot more sinister.
How It Actually Works
In reality, AI models are made by initializing and then training something called a neural network. Initializing the network simply consists of setting up a multitude of nodes arranged in "layers," with each node in each layer being connected to every node in the next layer. When prompted with input, a neural network will propagate the input data through itself, layer by layer, transforming it along the way until the final layer yields the network's output. This is directly based on the way organic nervous systems work, hence the name "neural network." The process of training a network consists of giving it an example prompt, comparing the resulting output with an expected correct answer, and tweaking the strengths of the network's connections so that its output is closer to what is expected. This is repeated until the network can adequately provide output for all prompts. This is exactly how your brain learns; upon detecting stimuli, neurons will propagate signals from one to the next in order to enact a response, and the connections between those neurons will be adjusted based on how close the outcome was to whatever was anticipated. In the case of both organic and artificial neural networks, you'll notice that no part of the process involves directly storing anything that was shown to it. It is possible, especially in the case of organic brains, for a neural network to be configured such that it can produce a decently close approximation of something it was trained on; however, it is crucial to note that this behavior is extremely undesirable in generative AI, since that would just be using a wasteful amount of computational resources for a very simple task. It's called "overfitting" in this context, and it's avoided like the plague.
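To make that initialize → propagate → compare → adjust loop concrete, here is a toy sketch in Python/NumPy. The layer sizes, the tanh activation and the learning rate are arbitrary choices for illustration, not how any production model is configured; the relevant point is that the only things the network stores are its connection strengths.

```python
import numpy as np

# Initialize a tiny network: nodes arranged in layers, every node connected
# to every node in the next layer, with random connection strengths.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input layer (3 nodes) -> hidden layer (4 nodes)
W2 = rng.normal(size=(4, 2))   # hidden layer (4 nodes) -> output layer (2 nodes)

def forward(x):
    # Propagate the input through the layers, transforming it along the way.
    h = np.tanh(x @ W1)          # hidden layer activations
    return h, np.tanh(h @ W2)    # final layer yields the output

def train_step(x, expected, lr=0.1):
    # Compare the output with the expected answer and tweak the connection
    # strengths so the output moves a little closer to it.
    h, out = forward(x)
    err = out - expected
    d_out = err * (1 - out ** 2)            # gradient through the tanh output
    d_h = (d_out @ W2.T) * (1 - h ** 2)     # gradient at the hidden layer
    W2 -= lr * np.outer(h, d_out)
    W1 -= lr * np.outer(x, d_h)
    return float((err ** 2).mean())

# Repeat over example prompts until the output is adequate. Note that the
# network only ever stores its connection weights, never the examples.
x, y = np.array([0.2, -0.5, 0.9]), np.array([1.0, 0.0])
for _ in range(200):
    loss = train_step(x, y)
```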
The sinister part lies in where the training data comes from. Companies which make generative AI models are held to a very low standard of accountability when it comes to sourcing and handling training data, and it shows. These companies usually just scrape data from the internet indiscriminately, which inevitably results in the collection of people's personal information. This sensitive data is not kept very secure once it's been scraped and placed in easy-to-parse centralized databases. Fortunately, these issues could be solved with the most basic of regulations. The only reason we haven't already solved them is because people are demonizing the products rather than the companies behind them. Getting up in arms over a type of computer program does nothing, and this diversion is being taken advantage of by bad actors, who could be rendered impotent with basic accountability. Other issues surrounding AI are exactly the same way. For example, attempts to replace artists in their jobs are the result of under-regulated businesses and weak worker's rights protections, and we're already seeing very promising efforts to combat this just by holding the bad actors accountable. Generative AI is a tool, not an agent, and the sooner people realize this, the sooner and more effectively they can combat its abuse.
Y'all Are Being Snobs
Now I've debunked the idea that generative AI just pastes together pieces of existing works. But what if that were how it worked? Putting together pieces of existing works... hmm, why does that sound familiar? Ah, yes, because it is, verbatim, the definition of collage. For over a century, collage has been recognized as a perfectly valid art form, and not plagiarism. Furthermore, in collage, crediting sources is not viewed as a requirement, only a courtesy. Therefore, if generative AI worked how most people think it works, it would simply be a form of collage. Not theft.
Some might not be satisfied with that reasoning. Some may claim that AI cannot be artistic because the AI has no intent, no creative vision, and nothing to express. There is a metaphysical argument to be made against this, but I won't bother making it. I don't need to, because the AI is not the artist. Maybe someday an artificial general intelligence could have the autonomy and ostensible sentience to make art on its own, but such things are mere science fiction in the present day. Currently, generative AI completely lacks autonomy—it is only capable of making whatever it is told to, as accurate to the prompt as it can manage. Generative AI is a tool. A sculpture made by 3D printing a digital model is no less a sculpture just because an automatic machine gave it physical form. An artist designed the sculpture, and used a tool to make it real. Likewise, a digital artist is completely valid in having an AI realize the image they designed.
Some may claim that AI isn't artistic because it doesn't require effort. By that logic, photography isn't art, since all you do is point a camera at something that already looks nice, fiddle with some dials, and press a button. This argument has never been anything more than snobbish gatekeeping, and I won't entertain it any further. All art is art. Besides, getting an AI to make something that looks how you want can be quite the ordeal, involving a great amount of trial and error. I don't speak from experience on that, but you've probably seen what AI image generators' first drafts tend to look like.
AI art is art.
Disability and Accessibility
Now that that's out of the way, I can finally move on to clarifying what people keep misinterpreting.
I Never Said That
First of all, despite what people keep claiming, I have never said that disabled people need AI in order to make art. In fact, I specifically said the opposite several times. What I have said is that AI can better enable some people to make the art they want to in the way they want to. Second of all, also despite what people keep claiming, I never said that AI is anyone's only option. Again, I specifically said the opposite multiple times. I am well aware that there are myriad tools available to aid the physically disabled in all manner of artistic pursuits. What I have argued is that AI is just as valid a tool as those other, longer-established ones.
In case anyone doubts me, here are all the posts I made in the discussion in question: Reblog chain 1 Reblog chain 2 Reblog chain 3 Reblog chain 4 Potentially relevant ask
I acknowledge that some of my earlier responses in that conversation were poorly worded and could potentially lead to a little confusion. However, I ended up clarifying everything so many times that the only good faith explanation I can think of for these wild misinterpretations is that people were seeing my arguments largely out of context. Now, though, I don't want to see any more straw men around here. You have no excuse, there's a convenient list of links to everything I said. As of posting this, I will ridicule anyone who ignores it and sends more hate mail. You have no one to blame but yourself for your poor reading comprehension.
What Prompted Me to Start Arguing in the First Place
There is one more thing that people kept misinterpreting, and it saddens me far more than anything else in this situation. It was sort of a culmination of both the things I already mentioned. Several people, notably including the one I was arguing with, have insisted that I'm trying to talk over physically disabled people.
Read the posts again. Notice how the original post was speaking for "everyone" in saying that AI isn't helpful. It doesn't take clairvoyance to realize that someone will find it helpful. That someone was being spoken over, before I ever said a word.
So I stepped in, and tried to oppose the OP on their universal claim. Lo and behold, they ended up saying that I'm the one talking over people.
Along the way, people started posting straight-up inspiration porn.
I hope you can understand where my uncharacteristic hostility came from in that argument.
160 notes · View notes
alyss-erulisse · 11 months
Text
Tumblr media
Morph Madness!
Fixing Exploding Morphs
Marik's Egyptian Choker is currently in production. It is the first accessory I've made that involves assignment to more than one bone and morphs for fat, fit and thin states. So there is a learning curve, and it is during that learning curve that interesting and unexpected things can happen.
As with my other content, I'm making the choker fit sims of all ages and genders--that's 8 different bodies.
Adding fat, fit and thin morphs multiplies this number to 27 different bodies.
I'm also making 3 levels of detail for each of these. The number comes to 81 different bodies, 81 different bodies for which I need to tightly fit a cylinder around the neck and avoid clipping.
Tumblr media
That's a lot of work. I can see why most custom content creators stick with one age, gender and detail level. At least, they did in the past. Our tools are getting better day by day, and that may partly be because of creative, ambitious and somewhat obsessive people like me.
There are usually multiple ways to solve the same problem. Some ways are faster than others. This I've learned from working in Blender3D. You can navigate to a button with your mouse or hit the keyboard shortcut. You can use proportional editing to fiddle around with a mesh or you can use a combination of modifiers.
Tumblr media
If I am going to be creating 81 chokers, I don't want to be fiddling around on each one of them for an hour. I need something automated, repeatable and non-destructive so I can make adjustments later without having to start over from the beginning. I need to work smart rather than just work hard.
This is where modifiers and geometry nodes come in. After you develop a stack to work with one body, the same process pretty much works for the others as well. That is how it became easier for me to model each of the 81 chokers from scratch rather than to use proportional editing to fit a copy from one body to the next.
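As a rough illustration of that kind of reusable, non-destructive setup, here is how a modifier stack can be built and repeated per body from Blender's Python API (a geometry-node group would slot in the same way as another modifier). The specific modifiers and object names below are placeholders, not the exact stack used for the choker:

```python
import bpy

def build_choker_stack(choker_obj, body_obj):
    # A non-destructive stack: nothing is applied, so every value can be
    # adjusted later without starting over from the beginning.
    shrink = choker_obj.modifiers.new("FitToBody", type='SHRINKWRAP')
    shrink.target = body_obj        # wrap the cylinder onto this body
    shrink.offset = 0.002           # keep a small gap to avoid clipping

    solid = choker_obj.modifiers.new("Thickness", type='SOLIDIFY')
    solid.thickness = 0.01

    subd = choker_obj.modifiers.new("Smooth", type='SUBSURF')
    subd.levels = 2

# Because the setup is automated and repeatable, the same stack can be
# rebuilt for every age/gender/morph body just by swapping the target.
choker = bpy.data.objects["Choker_Base"]          # placeholder object names
for body_name in ("af_body_fat", "af_body_fit", "af_body_thin"):
    dup = choker.copy()
    dup.data = choker.data.copy()
    bpy.context.collection.objects.link(dup)
    build_choker_stack(dup, bpy.data.objects[body_name])
```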
But I was about to confront an explosive problem…
Anyone who has worked with morphs before probably knows where this story is headed. There is a good reason to copy the base mesh and then use proportional editing to refit it to the fat, fit and thin bodies. That reason has to do with vertex index numbers.
Tumblr media
You see, every vertex in your mesh has a number assigned to it so that the computer can keep track of it. Normally, the order of these numbers doesn't really matter much. I had never even thought about them before I loaded my base mesh and morphs into TSRW, touched those sliders to drag between morph states, and watched my mesh disintegrate into a mess of jagged, black fangs.
Tumblr media
A morph is made up of directions for each vertex in a mesh on where to go if the sim is fat or thin or fit. The vertex index number determines which vertex gets which set of directions. If the vertices of your base mesh are numbered differently than the vertices of your morph, the wrong directions are sent to the vertices, and they end up going everywhere but the right places.
It is morph madness!
When a base mesh is copied and then the vertices are just nudged around with proportional editing, the numbering remains the same. When you make each morph from scratch, the numbering varies widely.
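To put the indexing problem in concrete terms, here is a tiny Python sketch with made-up numbers:

```python
# Hypothetical data for three vertices: base positions plus a "fat" morph
# that stores one offset (set of directions) per vertex index.
base_positions  = {0: (0.0, 0.00, 1.0), 1: (0.1, 0.00, 1.0), 2: (0.2, 0.00, 1.0)}
fat_morph_delta = {0: (0.0, 0.02, 0.0), 1: (0.0, 0.03, 0.0), 2: (0.0, 0.02, 0.0)}

def apply_morph(base, deltas):
    morphed = {}
    for index, (x, y, z) in base.items():
        dx, dy, dz = deltas[index]        # directions are looked up by index
        morphed[index] = (x + dx, y + dy, z + dz)
    return morphed

# If the base mesh and the morph number their vertices differently, every
# vertex receives some other vertex's directions, and the mesh "explodes"
# into the jagged mess described above.
```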
How, then, could I get each one of those 81 meshes to be numbered in exactly the same way?
Their structures and UV maps were the same, but their size and proportions varied a lot from body to body. Furthermore, I'd used the Edge Split modifier to sharpen edges, which results in disconnected geometry and double vertices.
Sorting the elements with native functions did not yield uniform results because of the varying proportions.
The Blender Add-On by bartoszstyperek called Copy Verts Ids presented a possible solution, but it was bewildered by the disconnected geometry and gave unpredictable results.
Fix your SHAPE KEYS! - Blender 2.8 tutorial by Danny Mac 3D
I had an idea of how I wanted the vertices to be numbered, ascending along one edge ring at a time, but short of selecting one vertex at a time and sending it to the end of the stack with the native Sort Elements > Selected function, there was no way to do this.
Of course, selecting 27,216 vertices one-at-a-time was even more unacceptable to me than the idea of fiddling with 81 meshes in proportional editing mode.
So… I decided to learn how to script an Add-On for Blender and create the tool I needed myself.
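As a rough sketch of the general idea (not the actual 447-line add-on), a Blender operator that re-sorts vertex indices with bmesh might look like this. The toy sort key below just uses vertex position, whereas the real tool works along one edge ring at a time:

```python
import bpy
import bmesh

class MESH_OT_reindex_verts_demo(bpy.types.Operator):
    """Renumber vertex indices in a predictable order (toy example)."""
    bl_idname = "mesh.reindex_verts_demo"
    bl_label = "Reindex Vertices (Demo)"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        obj = context.edit_object            # run from Edit Mode
        bm = bmesh.from_edit_mesh(obj.data)

        # Toy ordering: sort by height, then position around the ring.
        # The real add-on instead walks one edge ring at a time so that
        # every copy of the mesh ends up numbered identically.
        bm.verts.sort(key=lambda v: (round(v.co.z, 4),
                                     round(v.co.x, 4),
                                     round(v.co.y, 4)))
        bm.verts.index_update()

        bmesh.update_edit_mesh(obj.data)
        return {'FINISHED'}

def register():
    bpy.utils.register_class(MESH_OT_reindex_verts_demo)

def unregister():
    bpy.utils.unregister_class(MESH_OT_reindex_verts_demo)
```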
A week and 447 polished lines of code later, I had this satisfying button to press that would fix my problem.
Tumblr media
Here are the index numbers before and after pressing that wonderful button.
Tumblr media
My morphs are not exploding anymore, and I am so happy I didn't give up on this project or give myself carpal tunnel syndrome with hours of fiddling.
Tumblr media
Marik's Egyptian Choker is coming along nicely now. I haven't avoided fiddling entirely, but now it only involves resizing to fix clipping issues during animation.
Unfortunately, I'll have to push the release date to next month, but now, I have developed my first Blender Add-On and maybe, after a bit more testing, it could be as useful to other creators in the community as it's been to me.
Looking for more info about morphing problems? See this post.
See more of my work: Check out my archive.
Join me on my journey: Follow me on tumblr.
Support my creative life: Buy me a coffee on KoFi.
76 notes · View notes
sunshinesmebdy · 6 months
Text
Power Surge for Business: Moon in Aries Meets Mercury, Eclipse & Destiny (April 8th)
Get ready for a firestorm of opportunity, because on April 8th, we experience a powerful convergence of astrological forces that can dramatically impact your business and finances. Buckle up, entrepreneurs and go-getters, because I’m here to guide you through this dynamic cosmic dance!
Triple Threat Tuesday: Aries Takes Charge
Three key transits collide on this potent Tuesday, each influencing your business and financial landscape:
Moon Conjunct Mercury in Aries: This dynamic duo ignites clear and concise communication. Negotiations flow effortlessly, and innovative ideas spark like wildfire. Action Item: Schedule important meetings, pitches, or presentations for this day.
Business Implications:
Effective Communication: Use this alignment to express your ideas clearly and persuasively. Collaborate with colleagues, clients, and partners to convey your vision.
Swift Decision-Making: Decisions made during this transit may be impulsive but can yield positive results. Trust your instincts.
Marketing and Sales: Promote your products or services boldly. Engage with customers through direct communication channels.
Financial Impact:
Increased Transactions: Expect brisk financial activity — sales, contracts, and deals.
Risk-Taking: Be cautious of impulsive financial decisions. Balance boldness with practicality.
Moon in Aries Conjunct Mercury in Aries:
Effective Communication:
During this alignment, communication becomes assertive and direct. It’s an excellent time for negotiations, pitching ideas, and closing deals. Business leaders should express their vision clearly to clients, partners, and colleagues.
Swift Decision-Making:
Decisions made now may be impulsive but can lead to positive outcomes. Trust your instincts and act swiftly. Avoid overthinking; seize opportunities promptly.
Marketing and Sales:
Promote your products or services boldly. Use direct channels — emails, phone calls, or face-to-face interactions.
Customers respond well to confident communication.
Aries Solar Eclipse: Eclipses signify new beginnings. The fiery energy of Aries makes this a particularly potent time to launch new ventures, rebrand your business, or unveil a revolutionary product.
New Ventures:
Solar eclipses mark powerful beginnings. Use this energy to launch new projects, products, or marketing campaigns. Set intentions for growth and innovation.
Self-Discovery:
Reflect on your business identity. What drives you? What needs healing or transformation? Realign your business goals with your authentic purpose.
Courageous Initiatives:
Aries encourages bold moves. Take calculated risks. Innovate, even if it means stepping out of your comfort zone.
Business Implications:
New Ventures: Launch new projects, products, or marketing campaigns. The eclipse provides a burst of energy.
Self-Discovery: Reflect on your business identity, purpose, and leadership style. What needs healing or transformation?
Courageous Initiatives: Aries encourages bold moves. Take calculated risks.
Financial Impact:
Investments: Consider long-term investments or diversify your portfolio.
Debt Management: Address financial wounds — pay off debts, seek financial advice.
Innovation: Invest in cutting-edge technologies or business models.
Moon Conjunct North Node & Chiron in Aries: The North Node represents your destined path, while Chiron, the “wounded healer,” highlights past challenges. This alignment sheds light on any past financial roadblocks and empowers you to step into your true financial potential. Action Item: Reflect on any past financial hurdles you’ve faced. How can you use those experiences to propel yourself forward?
Moon in Aries Conjunct North Node and Chiron in Aries:
Purpose-Driven Actions: This alignment signifies a karmic turning point. Align your business actions with your soul’s purpose. Seek meaningful work that resonates with your core values.
Healing Leadership:
Address past wounds in your leadership style. Lead with empathy and authenticity. Healing within your organization can positively impact financial outcomes.
Networking and Guidance:
Connect with influential individuals who can guide your path. Collaborate with like-minded businesses for mutual growth.
Business Implications:
Purpose-Driven Actions: Align your business goals with your soul’s purpose. Seek meaningful work.
Healing Leadership: Address past wounds in your leadership style. Lead with empathy.
Networking: Connect with influential individuals who can guide your path.
Financial Impact:
Karmic Financial Shifts: Expect changes in income sources or financial stability.
Self-Worth and Abundance: Heal any scarcity mindset. Value your unique contributions.
Collaborations: Partner with like-minded businesses for mutual growth.
Remember: While the impulsive energy of Aries can be a powerful asset, balance it with careful planning. Do your research before making any major decisions, but don’t let hesitation extinguish your spark. This is a day to seize opportunities, ignite your passion, and take your business to the next level!
14 notes · View notes
baldursyourgate · 1 year
Text
re: the pregnancy plotline
There isn't much about it actually. Out of all we've seen so far, this miiiight actually be cut content that's still partially available in the game files.
As per usual, giga super mega spoiler under the cut
So first I'm going to show all the lines itself, then I'll examine the whole thing a bit deeper, looking into the flags.
The lines: If you've been around for a while, you might have seen this post with the pregnancy related lines. What I've found below is exactly that, now with the companions' names attached.
Tumblr media
Astarion's lines on Minthara are always my fave lol. The "Link to Node 101" just sends me to nowhere in the end, so no other pregnancy related lines from Astarion.
Tumblr media Tumblr media
Aww godmother Karlach for baby Minthy <33 Cute cute, wholesome! Similar to Astarion, "Link to Node 347" doesn't actually lead to any content.
Tumblr media
Do you think Lae'zel knows only Giths lay eggs and not the majority of humanoid species? Something to think about.
And that is all for the dialogue. Now to the less interesting part: Digging around the code.
Tumblr media
So far the keyword "pregnancy" only yields 16 results. That's not a lot at all; as you've seen above, neither Wyll nor Gale has a reaction line. Moreover, there's no continuation or further elaboration from other companions, nor any narrator lines.
Tumblr media
The "TG_ORI_Minthara_TalkedAboutPregnancy" flag only has dialogue when its value is "False" and none in the case of "True". That's very much incomplete, and it further points to this plotline being incomplete/dropped/cut but not entirely removed from the code.
"ORI_Minthara_State_ChildIsPlayers" flag only has one instance. Can't seem to find _ChildIsntPlayers or _ChildIsSomeoneElses or anything similar.
And that brings us to the...
Conclusion: I'm convinced that this plotline is dropped.
Not gonna lie... I'm kind of glad that it is. I'm unsure how it can be handled well, but that might just be my skepticism.
33 notes · View notes
bumblebeeappletree · 3 months
Text
youtube
Woody herbs are staples in most productive gardens. Being woody herbs, it’s not much of a surprise that they can grow woody as the supple young plants you put into the ground become tough and mature. They also can lose their vigour as they become woody, after a few years not bouncing back quite as well as they once did after a hard prune.
Cuttings are the most common way to propagate plants for home gardeners as well as large-scale propagation nursery. Get this technique down, and you can apply it to almost all plants in your garden!
This can be done any time of year, except for the dead of winter. Undertaking it in spring will yield the fastest results.
Step 1: Taking the cutting
- Use sharp, fine-tipped snips to take cuttings. This prevents damage to the plant using blunt force or ripping the stem.
- Look for nice healthy tips to harvest from. You don’t want to take any stems or leaves that are sad or diseased. If your plant is diseased or struggling, taking healthy cuttings can be a good way to give it a fresh lease on life.
- If it is a hot day or you are taking lots of cuttings, it is a good idea to keep them fresh by storing the cuttings in a container with a wet towel to keep them hydrated while you work. If the stems dry out, they won’t strike.
- An ideal length for cuttings is about 10-15 cm long, or with around four nodes. Don’t worry about the length too much as you can always trim it back when you get to the planting phase if they are too long.
Step 2: Trim stems & excess foliage
- Bring your cuttings into your workstation or greenhouse. Now you can clean up the foliage and trim back the length.
- A minimum of four nodes is ideal for sage cuttings. The node is the area where leaves and stem meet. Josh can demonstrate how to find and count the nodes. Ensure the base of your cutting is cut underneath the node. This area has a higher concentration of the plant hormone, auxin, which encourages rooting.
- Trim or gently pull off the leaves from the bottom three nodes, leaving just the foliage at the top growing tip. Any extra foliage will speed up drying of cutting which is not ideal. If the leaves left are quite large, you can cut them in half to reduce the surface area. This will not harm the plant but will reduce water loss.
Step 3: Place in growing medium
- Fill pots with propagation mixture and wet well beforehand. This mix is a bit finer than conventional potting mix, it should be nice and fluffy and hold onto moisture well. Extra perlite mixed in is also a good idea as it allows the developing roots to push through and access air.
- Dip ends of stem in rooting hormone if you have it or would like to, but it is not required. If you do use it, remember that a little goes a long way.
- Stick the stems directly into the pre-prepared pots, up to the bottom of the remaining leaves. You may put several cuttings in the same pot at this early stage.
Step 4: Managing moisture
- Water in well and place the pots in your greenhouse, propagation station, or under a DIY humidity dome such as a plastic container to keep the soil moist. You can take the lid off of the humidity dome every few days to allow fresh air in and prevent root rot, but keeping the soil moist during the initial growth phase is crucial. If the cuttings dry out, the rooting will cease and the cuttings will die.
Step 5: Separate your plants!
- Rooting time required can take a couple of weeks or up to 2 months, depending on the season. How do you know if you have been successful, and the cuttings have set root? If you see new growth of leaves from the top of the plant. Also, you can give them a tug and they should hold nice and firm in the soil.
- Once your baby plants have grown a bit and developed a good root mass in their pots, you can separate them out from each other and pot up individually. After they have grown healthy roots in their individual pots, plant them out in the garden.
2 notes · View notes
oaresearchpaper · 5 months
Link
2 notes · View notes
Text
Cloning cannabis refers to the process of creating genetically identical copies of a cannabis plant. This method is widely used in the cannabis cultivation industry for several reasons, including maintaining the desired characteristics of a particular strain and ensuring consistent quality and potency in the harvested product. Here's an overview of how cloning cannabis typically works:
Selecting a Mother Plant: The process begins by selecting a healthy and robust cannabis plant with desirable traits such as high potency, flavor, aroma, and yield. This chosen plant is referred to as the "mother plant."
Taking Cuttings: Once a suitable mother plant is identified, growers take cuttings or clones from it. A cutting is a small section of a branch or stem, usually 4-8 inches long, that includes at least one node (a small bump where leaves, branches, or roots grow) and a portion of the stem.
Rooting the Cuttings: These cuttings are then prepared for rooting. Growers often dip the cut end of the clone in a rooting hormone to encourage root development. The cuttings are then placed in a growing medium, such as soil, rockwool, or a specialized cloning cube, where they are kept in a controlled environment with high humidity, adequate light, and appropriate temperatures to stimulate root growth.
Transplanting: Once the cuttings have developed roots (usually within a few weeks), they are ready to be transplanted into larger containers or directly into the desired growing medium.
Vegetative Growth: The newly rooted clones are placed in the vegetative growth phase, where they receive a consistent light cycle of 18-24 hours of light per day. During this phase, they grow into mature plants with a strong structure.
Flowering: After the plants have reached a suitable size, they can be induced to enter the flowering phase by adjusting the light cycle to 12 hours of light and 12 hours of darkness per day. This triggers the development of flowers (buds), which contain the desired cannabinoids and terpenes.
Cloning cannabis allows growers to replicate the genetic makeup of a high-quality mother plant, ensuring consistent characteristics and traits in the resulting plants. It also saves time compared to growing from seeds, as clones skip the germination and early growth stages. Additionally, it can be a cost-effective way to expand a cannabis garden or maintain a specific strain.
However, it's essential to note that proper care and attention are required during the cloning process to prevent diseases, pests, and stress that can affect the success of cloning efforts.
6 notes · View notes
seriously-mike · 1 year
Text
What Is ComfyUI and What Does It Do?
Tumblr media
I mentioned ComfyUI once or twice in the context of the AI-generated images I posted. For those who haven't looked it up yet - it's a StableDiffusion power tool: it's fairly complicated, but immensely powerful and can create several things the usual AI image generators can't. It also has plugins that allow for even crazier stuff.
So, first things first, you can download ComfyUI from GitHub. It comes up with all necessary file dependencies and opens the GUI in your browser, much like the popular Automatic1111 frontend. Right off the bat, it does all the Automatic1111 stuff like using textual inversions/embeddings and LORAs, inpainting, stitching the keywords, seeds and settings into PNG metadata allowing you to load the generated image and retrieve the entire workflow, and then it does more Fun Stuff™. For example, you can rig it as a simple GAN-based upscaler with no AI generated prompt whatsoever. Plug any upscaler model from The Upscale Wiki, and you can blow 800x600 images from fifteen years ago up to four times the size with no noticeable quality loss - insanely handy if you want to use ancient stock photos from DA for a project today. Sure, the thing can't handle smaller images as well as commercial software like Topaz Gigapixel AI does, but it works with typical StableDiffusion output just fine. Other things you can do? Area composition, for example. What's that? Run a prompt for a landscape, halt it a quarter way in, run a smaller prompt for a character, halt it a quarter way in, combine the two images by pointing where you want the character placed and run a prompt containing both from where you stopped. Or ControlNets: do you want to recreate a pose from a photo or a movie poster without all the other things the AI might infer from an Image2Image prompt? Draw a quick scribble so the AI knows how to draw a firearm and where, for example? Render a proper hand with the gesture you want? It's possible.
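As a side note on that metadata point, the embedded workflow can be inspected with a few lines of Python and Pillow; the file name and the exact chunk keys below are assumptions that may vary between ComfyUI versions:

```python
from PIL import Image
import json

img = Image.open("ComfyUI_00001_.png")   # placeholder output file name

# PNG text chunks show up in .info; ComfyUI normally stores the node graph
# there (key names like "workflow" and "prompt" may vary between versions).
for key, value in img.info.items():
    print(key, ":", str(value)[:80])

workflow = json.loads(img.info["workflow"]) if "workflow" in img.info else None
```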
And then, there are plugins. I mostly use two sets of custom nodes:
WAS Node Suite for basically everything: cropping, upscaling, text-related shenanigans like string concatenation, converting string to prompt conditioning (so I can plug a line of additional detail requests into a prompt for the high-res fix, like face swapping) or adding a date and time to the output filename,
Fannovel16's ControlNet Preprocessors for generating ControlNet data from input images and plugging them into a new prompt in one go.
The WAS Node Suite can do more than that: for example, it has Semantic Segmentation-based mask tools that I could use for description-based automatic masking once I figure out how to use it. The ControlNet preprocessors? I used them to help with inpainting on both "The Operator 2.1" and "Tiny Dancer".
Tumblr media Tumblr media
For some reason, using the Protogen Infinity model for inpainting yielded weird results - for example, it consistently added only a small nub to the plate on Operator 2's sleeve that was left as an attachment point for an entire robot arm, forcing me to switch to image2image generation and re-render her as a full cyborg (with hilarious results like massive spotlights on the chest). So I used the preprocessor to read the pose from the base image for Operator 2.1 and feed it into the prompt along with an inpainting mask. With the additional data, Protogen Infinity properly drew a CyborgDiffusion-style left arm, along with that plate on the top and some skin matching the base image.
I described the process of creating "Tiny Dancer" in a separate post - in short, it took inpainting, then inpainting on top of that inpainting, then cleaning everything up in Photoshop.
Tumblr media Tumblr media
Another neat trick you can do with ComfyUI is the high-res fix: instead of rendering the latent image into a human-readable form, you can upscale it and feed it as an input image for another render. This has two distinct advantages over simple upscaling, as you can see comparing "The Boss of the Reformatory" (left), which was rendered in a single pass at 512x768, then upscaled with ESRGAN to 4x size, and "Princess of Darkness" (right) that was rendered in 512x768, then had latent data upscaled to 2x, fed into a shorter, 20-pass render at double size and 50% balance between text prompt and input image (go below that and pixels will start showing, go above and it'll go off-model), and then upscaled to 2x the new size using BSRGAN. Not only is the image sharper, but a re-render at doubled size redraws all the details: just look at the tattoos. The left image has them blurry and blocky, while the right one looks like it came straight out of a tattoo artist's Instagram. The reason is two-fold: not only have I not unlearned things from using LAION's default dataset, which was dogshit a few months ago and keeps degrading due to how they're building it (basically, they're hoovering up every image off the internet and oversaturation of AI-generated images with background diffusion-based noise fucks up attempts at making something out of them), but I also still haven't perfected the process - sometimes upscaling sends the anatomy or the linework off the rails or screws up with textures and details, but the WAS Node Suite has a new tool called Sampler Cycle which somehow performs a less-invasive high-res fix with additional detail injection. It has no manual or even a description of how it works, though, so I have no clue what the parameters do and how to tweak them.
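ComfyUI does this on the latent itself, but as a rough stand-in for the same idea, here is what a two-pass "upscale, then re-render at ~50% denoise" looks like with the Hugging Face diffusers img2img pipeline. The model ID, file names and prompt are placeholders, and this version re-encodes the image rather than reusing the latent:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # placeholder model ID
    torch_dtype=torch.float16,
).to("cuda")

prompt = "portrait of a woman with detailed tattoos"   # placeholder prompt
base = Image.open("first_pass_512x768.png")            # placeholder first pass

# Upscale the first pass, then re-render it at ~50% denoising strength so the
# model redraws fine detail (tattoos, linework) at the larger size instead of
# just stretching the existing pixels.
upscaled = base.resize((base.width * 2, base.height * 2),
                       Image.Resampling.LANCZOS)
refined = pipe(prompt=prompt, image=upscaled, strength=0.5,
               num_inference_steps=20).images[0]
refined.save("second_pass_1024x1536.png")
```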
Another cool thing added in one of the recent updates is the optional real-time preview of how your image is created. Sure, it's a resource hog that can double the rendering time, but having that kind of oven window letting you see when the image is done (and when it starts overcooking) can help you optimize the rendering process. Why go with 80 passes when the image looks good after 30? Remember, it's your own computing time. You can also assess whether the model is worth using - if it takes too many steps to be serviceable, just dump it and find another one.
Best of all, though, it's free. What had me swear off online-based generators was that not only do most of them offer only base SD1.5 and SD2.1 models, but they also have you pay for generating the images - which, combined, makes the entire enterprise look like a scam aimed at bilking suckers who have no idea what they're doing. Not only are the base models shit and can't generate anything half-decent even after 50 attempts at fairly high pass count and guidance, but there are also no tooltips, hints, references or anything of the sort at hand to help you. If you want to educate yourself about what's what, you need to go somewhere else entirely. Yeah, I get it, the computing power needed to run those things is immense and bills don't pay themselves, but getting completely basic basics for ten bucks a month kinda stinks. Meanwhile, I fire this thing up on my four-year-old gaming rig (i7 8700K and RTX 2070 Super) and not only do I get results faster, I can also achieve a much better effect by plugging in additional datasets and models unavailable for website-based generators.
2 notes · View notes
doughnutbooboo · 9 days
Text
星辰大海,终有一别 (Stars and boundless seas; in the end, there must be a parting)
This tribute is dedicated to May.
Four weeks have passed since May's passing from adenocarcinoma on August 16th at 5:25 PM. The challenge of crafting a suitable tribute has been formidable, with each iteration deemed inadequate. The limitations of language in conveying May's remarkable strength and courage have become apparent. The conventional expressions of condolence have only served to intensify the sense of loss.
Instead of writing how great of a dog May was (which she was. She is the best dog on earth), and how much we love her (which we do. I’ll continue to love her until the end of my life), I will write how she died.
May's symptoms commenced in the beginning of April, characterized by swelling and lameness in her rear left paw. Diagnostic investigations, including X-ray and biopsy, yielded inconclusive results. They showed no sign of bacterial and fungal infection, no sign of cancer. She was prescribed Metacam, which brought the swelling down immediately. By the end of April, the swelling was completely gone and she was using her leg as normal. We thought it was a twisted muscle or tendon and decided to go to China as planned.
The swelling and lameness came back around May 15th. A subsequent biopsy revealed similar findings, prompting referral to Toronto Veterinary Emergency Hospital for further evaluation. May had her consultation on June 4th and three possibilities were laid in front of us: a foreign object trapped in her leg which caused all the swelling and tenderness, a rare parasite infection, or cancer. A CT scan, followed by an exploratory surgery, were performed on her the next day. When the surgeon cut open her leg, she saw perivascular lesions with multi-cystic appearance along the muscle membranes. The oncologist was called over for a consultation. The lesions did not appear to be cancerous to him. Without the presence of any foreign object, they suspected she had a rare parasite infection. Multiple samples from the lesions were taken for further testing.
We got a phone call from the surgeon on June 10th and received the most devastating news. Every single sample she took, including the inguinal lymph node, came back to be adenocarcinoma. If this was not dreadful enough, we were told in most cases, the leg is a metastasized secondary site. The primary tumor is most likely somewhere in the abdomen. This is a stage 4 adenocarcinoma with very poor prognosis.
We were given an option to try an oral chemotherapy drug for one month. This drug has the potential to slow down the progression of the carcinoma, with a small likelihood of shrinking the tumors. By the end of the one month trial, we all witnessed the further progression of the swelling and lameness, to a degree that she completely stopped using that leg. And she was clearly in pain. During her follow-up appointment with the oncologist, we found out her weight dropped from 36.7 kg to 33 kg in one month. It was suggested that we do an ultrasound of her abdomen and an X-ray of her chest. Now looking back, those tests were meant to confirm she had cancer metastasized all over her body, in order to justify euthanasia.
Against all odds, her tests came back clear! No mass was found on the X-ray and the ultrasound. The leg was confirmed to be where the primary tumor was. Amputation of that limb would be a valid treatment, potentially a cure! Both the oncologist and the surgeon thought it was the “devastating case A”, but it turned out to be the “optimistic case B”. On July 18th, May had her rear left limb amputated.
The universe fucked us real hard! One week after the amputation, I noticed redness and swelling along her incision line. We brought her in for a recheck and it was believed to be the start of an infection. Even with the antibiotics prescribed to her, the "infection" got significantly worse in the next five days. When we brought her in for the oncologist follow-up, we were delivered the most dreadful news once again. The histopathology report found that the adenocarcinoma had metastasized to the left popliteal lymph node, the left inguinal lymph node, and to the skin around the inguinal lymph node. The metastasized area of the skin was extensive, covering her skin near her vagina, her anus, and above her tail. It was not some infection. It was the recurrence of cancer. It was neither "case A", nor "case B". It was the insidious "case C" no one had seen coming.
From July the 31st, the day of the oncologist follow-up of the amputation, to August 16th, the day we euthanized her, the cancer was growing on her skin like a wild fire. There was visible difference every single day. I watched it grow, ulcerate, grow, could no longer maintain the growth rate, and die. The death of the cancer tissue developed into necrosis, which gave out a foul smell, the smell of rotting. May started to constantly lick after her oozing and stains. Leia joined very shortly after. Constantly cleaning after one’s own necrosis stains is not a dignified way to live.
We brought May in on August 14th, knowing in our heart we were in the end game, with the slightest hope that maybe some other chemotherapy protocol could prolong her lifespan. She was seen by another oncologist (hers was on vacation at the time), the surgeon who did the amputation, and the anesthesiologist. They were quite certain that May was reaching her tipping point. They were all shocked at how fast this carcinoma was progressing. I asked them to give her drugs that would make her comfortable for three more days. We didn’t care her kidney or liver might fail at that point. The anesthesiologist gave her a shot of fentanyl and a dose of slow releasing ketamine. With the other pain meds she has already been on, she was on five types of pain medications at the end of her life. We brought her to her vet for euthanasia on August 16th at 4:30 pm. May passed away at 5:25 pm.
Four weeks have passed and I found how fascinating my memory plays tricks on me. Those memories of tiny little details of May started to blur. Her very own unique smell, how she sighs when she’s satisfied, how she sighs when she’s tired and ready to sleep, the sound she makes when she was panting in different ways, the warning bark she gives out when she’s guarding, the happy bark when she’s saying hello, the angry bark before she starts a fight, the sound she makes when she comes upstairs, the sound she makes when she comes upstairs with three legs.
At the same time, some images will stay with me for the rest of my life.
I’ll always remember how she managed to walk up and down the stairs, went to the backyard to shit eight times per night when she was on the chemo drug. She never had a single accident in the house to the last moment of her life, whether was with that cancer leg gave her constant pain, or was with three legs right after her amputation on the same day. She didn’t even poop or urinate herself after she was euthanized. She lived a dignified life.
I’ll always remember the day we picked her up from her amputation surgery. She was spinning in circles with three legs, totally lossing herself with so much joy, just like how she losses her mind and forgets to breathe when we pick her up from the kennel or daycare every single time! July 18th is one of the happiest days in my life! I had so much hope that she was cured from the amputation.
I’ll always remember how her eyes shot wide open that very second she died. I’ll never forget how her body was slowly lossing the temperature and softness one hour after she passed, and that’s when we said the final farewell. Our ten years of life together came to an end. We will never see each other again.
I’m not gonna tell you that I think of her every minute or hour of my life since she passed away. Because I don’t and that’s only gonna drive me insane. I’m not gonna tell you that life will never be the same because life has been surprisingly the same.
People die every second of every day. Animals die every day. May's death is like a drop of water into the ocean. It has been an honour and true happiness to spend ten years of my life with May, a brilliant, loyal, ridiculously strong and brave German shepherd, such a wonderful being that as an atheist, the creation of May is a mystery to me.
星辰大海,终有一别。(Stars and boundless seas; in the end, there must be a parting.)
我会用我的余生来想念你。(I will spend the rest of my life missing you.)
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
1 note · View note
haldenlith · 23 days
Text
The Road to Giving Ardwynn [a] Head (Pt 2)
And we're back with part two of this Blender adventure.
Same drill as last time, this is a lot of me bumbling about and leaning really hard on Google searches and Youtube.
But I do also make a lot of progress.
Also, again, long post is long.
Tumblr media
I made a lot of headway (pun intended) with eyelashes and eyebrows, finally, though I definitely had to refer to a tutorial, in the end. There was a lot of eyebrow grooming going on, which felt weird. Thus, I move onwards to another hairy issue: his hair. First attempt doesn't quite look right, but I feel like I'm on the right path, at least. Also, still impressed by the hair engine or whatever that's powering Blender on the backend. Cool stuff.
Something is bugging me about his face, but I can't quite pinpoint it, so I pay it no mind. It could just be my frustration.
Tumblr media
As I was fighting with the hair, a 360 circle around the head to check how it's looking reveals a problem. He's a perceptive guy, but I don't think he's supposed to literally have eyes back there. I know what the issue is, though. I had set up some hidden shapes inside his head as a janky solution to the weird light leak issue I was having with the eyes and eyelids. For some reason, that material is showing through the back. Some geometry node shenanigans in the Shader tab should fix this right up. One of the easier problems to fix. Also, one of the more comedic. Someone on the Discord I'm in likened the eyes to Muppet eyes, and I have to say, I agree.
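One plausible version of that shader-side fix (not necessarily the exact node setup used here) is to route back-facing geometry into a transparent shader, which can be scripted roughly like this; the material name is a placeholder:

```python
import bpy

mat = bpy.data.materials["EyeShield"]     # placeholder material name
nodes, links = mat.node_tree.nodes, mat.node_tree.links

geom = nodes.new('ShaderNodeNewGeometry')        # has a "Backfacing" output
transparent = nodes.new('ShaderNodeBsdfTransparent')
mix = nodes.new('ShaderNodeMixShader')
output = next(n for n in nodes if n.type == 'OUTPUT_MATERIAL')

# Assumes the material already has a shader plugged into its output; that
# shader keeps handling the front faces...
original = output.inputs['Surface'].links[0].from_socket
links.new(original, mix.inputs[1])
# ...while back faces get a transparent shader, so the hidden shapes inside
# the head no longer show through the back.
links.new(transparent.outputs['BSDF'], mix.inputs[2])
links.new(geom.outputs['Backfacing'], mix.inputs['Fac'])
links.new(mix.outputs['Shader'], output.inputs['Surface'])
```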
Tumblr media
Aaaah, hair done. Isn't he handsome? And it's so fluffy. Happy with the hairstyling results (and with picking the correct Hair shader this time, so the slate blue of his hair shows correctly), it's time to return to his face. He feels too smooth. I feel like there needs to be more texture or something. Also, there is a noticeable lack of shine to the skin. Or, rather, the right kind of shine.
Also around this time, I start to realize some of my issue: the eyes feel like they're too small. Hmm, that's a fundamental core mesh problem, and, with working with an asymmetrical mesh, isn't the easiest of fixes. I stew on it.
Tumblr media
My attempts at skin texture all fall flat, even the procedural nonsense. I also try multires modifier and sculpting more texture on (I have some brushes), and that doesn't really get me the results I want, either. So I give up and mess with the concept of specular maps, to very minimal success.
Also, the eyes still bug me. Ah, but it's around this time that I notice I'm having issues with Blender slowing down and crashing again, so it's time to do some investigating. This time it isn't due to a ridiculously high poly nightmare of a mesh, because that bad boy is disabled and hidden. So what is the problem...
Well, I find the problem. It is the mesh, but not in the same way. Somewhere along the way, I must've hit CTRL+T, which triangulated my mesh. Normally, this would be less of an issue, but the problem is that I'm not done with the mesh. Now it's a complicated mess of triangles, and doing "Tris to Quads" doesn't yield particularly good results, especially when I'm considering rigging this idiot later for posing. I'm faced with a dilemma -- just deal with it and move on, or remesh the head for a third time. In my moment of crisis, I also took a moment to upgrade to 4.2, in case maybe there's some fancy bell or whistle that would help the issue (there wasn't).
Tumblr media
I end up biting the bullet and remeshing. On the plus side, this being my third round at the rodeo means it goes much faster.
Tumblr media
Consequently, this also meant I also had the opportunity to tweak the base high poly head I was meshing over, and thus I fixed the eyes, touched up the lips, etc. I am much happier with the new result, even if it did mean that I had to also make a new UV map and tweak the texture map thing I had for his skin. It was a pain, but I got there.
Still fighting with specular maps and whatnot.
Tumblr media
Somewhere along the way, I also had to fix some items about the eyes, because 4.2 broke the texture, and then I somehow broke the eye tracking, so that got removed. I have made some headway on the spec map, though, after a lot of research on IOR and spec maps and bla bla bla. Whatever the case, look at that skin. Look at that little bit of shine on his nose.
Tumblr media
Considering myself done, I move onwards off of those visual items and on to RIGGING. I had been dreading it for a while, and rightfully so. I was having a hell of a time getting bones to work right, even with tutorials. I threw my hands up in the air and gave up on bones for a bit, and instead something else caught my eye in my recommended on Youtube -- a tutorial on Shape Keys for face rigging. Better yet, it's for 4.2, and not 2.9 or whatever like many tutorials out there.
Shape keys are SO MUCH EASIER, and you can use the sculpt mode to push the face into what you want. I felt so happy and accomplished! My boy was smirking! And I had a slider I could use to make him smirk or be neutral! Amazing!
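For reference, the same shape-key machinery is exposed through Blender's Python API. This is only a toy sketch with a placeholder object name and a made-up vertex index, not the actual head file:

```python
import bpy

obj = bpy.data.objects["Head"]            # placeholder object name

# Each shape key stores an alternate position for every vertex; the slider
# ("value") blends between the Basis shape and the sculpted shape.
basis = obj.shape_key_add(name="Basis", from_mix=False)
smirk = obj.shape_key_add(name="Smirk", from_mix=False)

# Normally you'd push the face around in Sculpt or Edit Mode; nudging a
# single (made-up) vertex index stands in for that here.
smirk.data[123].co.x += 0.01

# The slider you drag in the UI is just this value, from 0.0 to 1.0.
obj.data.shape_keys.key_blocks["Smirk"].value = 0.7
```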
Then I realized something was missing -- the skin glows, the iconic wisps of Light that dance on Awoken skin. A travesty, truly. I set out with some knowledge I had of doing procedural nonsense with shaders and math, and successfully make an interesting design that glowed. So I test it on his skin and...
Tumblr media
Ah. That's... a problem. He's suddenly way more blue, which is also interfering with the split texture thing I've got going on. (The eye area has different Subsurface Scattering than the rest of the face/body.) I disconnect nodes, reconnect other nodes. I open up Crow's model to see how it's done over there, but to no avail, as it's setup mostly the same. It's a frustrating version of "Violet, you're turning violet!", only with less comedy and experimental candy. I was tempted to just steal the mapping that's on Crow's model for Ardwynn. Halfway to that thought, I go ahead and delete all of the procedural nonsense that makes up the light bands we see up in the picture, and make my own custom map of light shapey wispy nonsense. If that doesn't work, THEN I'll steal the light map from Crow.
I get the same result, but then I notice something. This time, when I disconnect it, the blue stays. Wait, WHAT? So then I madly poke and root around in the textures to figure out what's gone so horribly wrong.
Long story short, it's the Subsurface Scattering (SSS). Something weird has gone very awry. I had ever so slightly increased the Z slider to adjust the scattering, since normally, on a human, there's more red because, you know, blood and meat and all that. Well, I headcanon that Awoken have closer to magenta/violet blood and such, given that a vast majority of them are blue. Silly, but that's how my brain goes. I mess with the sliders to try and fix things and find that I just can't get what I want, nor can I fix the issue, so I just resolve to make my own SSS map. I recalled watching a video about all that stuff for an old version of Blender and, though I blacked out through bits of it because it was a lot of information dumped on my brain, I retained some bits, like separating the XYZ and stuff. Ultimately, I took something that looked a little like this:
Tumblr media
(only a piece of the whole map)
And plugged it in like this. (I also plugged the Emission map back in.)
Tumblr media
And got something along the lines of this.
Tumblr media
I got rid of the thing that stopped the light leak, as well, and did one whole shader for all the skin, not one for the eyelids and one for everything else, as I found that 4.2 must've fixed the light leak issue, and my SSS map allowed the eyes to not have a weird interaction with the eyelid area of skin. New problem: The weighting seems to be wrong, because I was no longer getting the right SSS effect on the ears, especially since he has a light behind him just to keep an eye on that very thing. This is also the most recent test render, so as you can see, I did shift gears to alleviate my exhausted frustration. It just isn't Ardwynn without his necklace, so I got started on trying to model that.
I found something that outlined the proper physics for that, as I had been having a hard time getting cloth modifiers to work, and though it did help a bunch, I ran into a new problem: baking/testing the physics of how the leather string of the necklace falls on Ardwynn takes upwards of 30 minutes to an hour. It'd be one thing if it was the final render or whatever, but just testing some settings? Pain. I spent multiple hours fiddling around, and where I left off, I was close, but it had clipped into his body.
But it was 12AM and I was tired, so I shelved it for the time being.
More progress to be made in Part 3!
1 note · View note
ardhra2000 · 2 months
Text
Node js Developers: Who are they and Why do you Need Them?
Node.js enables efficient and scalable development of web applications, with characteristics such as its event-driven, non-blocking I/O model. As a result, demand for efficient Node.js developers has surged in recent years.
Node.js developers play a vital role in building effortless and scalable web applications. They master server-side programming with Node.js, leveraging its event-driven, non-blocking I/O model.
Node.js uses an event-driven, non-blocking I/O model. Instead of waiting for I/O operations to finish before moving on, Node.js can handle countless requests at once. This asynchronous nature allows highly efficient handling of I/O operations, yielding faster response times.
Node.js is backed by a large and supportive community of developers that contributes to its growth and continuous improvement. The community constantly produces resources, documentation, tutorials, and forums where developers can ask for help, share knowledge, and interact.
The advantage of using JavaScript throughout the application stack, with the expertise of Node.js developers, promotes code reusability, streamlined development processes, and seamless collaboration between front-end and back-end teams. 
Node.js is an open-source JavaScript runtime environment that allows server-side scripting. It is built on the V8 JavaScript engine and enables developers to create high-performing web applications. Node.js is crucial for web development as it provides a scalable and efficient solution, facilitating seamless data-inte
0 notes
alliancecleanoxford · 3 months
Text
Exploring Property Management Examples in Oxford: Encounters from Alliance Star Oxford Ltd.
Tumblr media
In Oxford's thriving atmosphere, property management is a booming business and sits at the heart of the real-estate world's progress, its growing economy, and its moving buyers. Alliance Star Oxford Ltd., one of the leading property management companies in the area, works on the front line, so it is important that we provide property owners and investors with the trends they should keep in mind in the unique Oxford market.
The Rise of PropTech: New technology is reshaping property management at large, and Oxford is where the difference really shows. From smart building concepts to virtual property viewings, PropTech's mission is to streamline these workflows and delight tenants. Alliance Star Oxford Ltd. believes that putting these advances into practice is the path to success.
A Focus on Sustainability: Sustainability initiatives make properties more attractive and bring them closer to what Oxford tenants actually want. More building owners are focusing on energy efficiency, green development certificates and eco-friendly amenities to stand out and appeal to increasingly knowledgeable clients. Alliance Star Oxford Ltd. advises property owners pursuing this strategy to keep the big picture in view and attract a wider audience by leaning into these trends.
Flexible Workspaces: The acceleration of remote work has boosted demand for flexible work areas in Oxford. Creating workspaces that appeal to each individual, with hot-desks and shared amenities, clearly suits an emerging culture in which people favour flexibility and social interaction. Alliance Star Oxford Ltd. prioritises flexible layouts, helping clients match a new property's setup to the way they already like to work.
Improved Occupant Experience: The rental market is never static. Delivering the best possible resident experience is the surest way to retain tenants and keep turnover low. From stakeholders to skilled maintenance staff, the day-to-day work done on site is where the real long-term value of property management is created.
Data-Driven Decision-Making: Rigorous data analysis is increasingly indispensable, yet it is also the area where landlords in Oxford most need convincing. Used well, data can shape marketing plans, reveal residents' preferences, and support operational assessments, helping property managers protect income and lift yields. Alliance Star Oxford Ltd. believes property owners should not be left behind: with advanced data-analytics tools they too can sharpen their offering in the Oxford market.
Regulatory Compliance: Keeping up with the latest regulations and rulings from the authorities is essential, and staying organised saves building managers and property owners in Oxford a great deal of trouble. Many practical steps already help, from clear record-keeping to data-protection measures such as strong password rules and multi-factor authentication. Alliance Star Oxford Ltd. leaves this process in the hands of an experienced management team, primed with extensive knowledge and never shying away from whatever confusion might arise.
Emphasis on Security: With the risk of cyber-attacks rising and security losses already being felt, landlords are turning to new measures to protect their investments and the people living in their properties. Practical measures such as access-control systems, combined with a community-minded approach, strengthen both security and trust among residents. This is how Alliance Star Oxford Ltd. is able to deliver a distinctive quality of service.
Adapting to Demographic Shifts: Oxford's population keeps changing, and demand shifts with it. From student rooms to housing for an ageing community, property owners should absorb these changes and plan their investments around the people who actually live in Oxford. Alliance Star Oxford Ltd. offers the market insight and leadership its customers need to stay ahead of the competition as new segments of the market emerge.
All things considered, property management in Oxford is going through fundamental changes driven by technological developments, sustainability initiatives, and changing customer expectations. By keeping up to date with these trends and teaming up with experienced property management firms like Alliance Star Oxford Ltd., property owners and investors can navigate the evolving landscape with confidence and unlock the full potential of their interests in this dynamic city.
0 notes
The Community Impact of Near Node Deployment: Stories of Transformation
In the ever-evolving world of blockchain technology, the deployment of Near Nodes has emerged as a powerful tool for community empowerment and transformation. Near Node, a vital component of the Near Protocol, decentralizes control and brings transparency, security, and efficiency to various community-centric applications. This article delves into the inspiring stories of individuals and communities who have harnessed the power of Near Node, highlighting the profound social and economic impacts they have experienced.
Understanding Near Node Deployment
Before diving into the stories of transformation, it’s essential to understand what Near Node deployment entails. A Near Node acts as a participant in the Near blockchain network, validating transactions, securing the network, and enabling decentralized applications (dApps) to function smoothly. By deploying a Near Node, individuals and communities can contribute to and benefit from the decentralized ecosystem, promoting autonomy and resilience.
Story 1: Transforming Local Governance in a Small Town
In a small town in Eastern Europe, local governance was often marred by bureaucratic inefficiencies and a lack of transparency. The community members felt disconnected from decision-making processes, leading to distrust and apathy. Inspired by the potential of blockchain, a group of tech-savvy residents decided to deploy a Near Node to address these issues.
By implementing a decentralized voting system powered by Near Node, the town transformed its local elections. The new system ensured that every vote was recorded transparently on the blockchain, eliminating the possibility of fraud. Community members could now participate directly in governance through secure, tamper-proof voting.
The results were remarkable. Voter turnout increased significantly, and the community felt a renewed sense of ownership over local decisions. Trust in local government was restored, and the town saw a surge in civic engagement, with residents actively participating in town hall meetings and local initiatives. The deployment of Near Node not only improved governance but also fostered a stronger, more connected community.
Story 2: Financial Inclusion in Rural Africa
Access to financial services is a critical challenge in many rural areas of Africa. Traditional banking infrastructure is often inadequate, leaving many without basic financial services. In a remote village in Kenya, the deployment of a Near Node brought about a significant change.
Local farmers, who previously struggled to secure loans, were introduced to a decentralized finance (DeFi) application built on the Near Protocol. By deploying a Near Node, the community could access microloans and savings accounts without the need for traditional banks. The blockchain-based system ensured transparency and security in all transactions.
One such farmer, James, shared his story. With a microloan obtained through the Near Node-powered platform, he was able to purchase high-quality seeds and modern farming equipment. This investment led to a substantial increase in his crop yield, boosting his income and improving his family’s living standards. The entire village benefited as more farmers accessed similar opportunities, driving economic growth and reducing poverty.
Story 3: Crowdfunding Renewable Energy in Southeast Asia
In Southeast Asia, a small coastal village faced frequent power outages due to its reliance on an unreliable electricity grid. The community sought a sustainable solution and decided to crowdfund a solar energy project using a Near Node-based platform.
The transparent and decentralized nature of the platform attracted contributions from both local residents and external supporters. The funds were used to install solar panels, providing the village with a stable and sustainable energy source. This project not only improved the quality of life for the villagers but also demonstrated the potential of community-driven initiatives powered by blockchain technology.
The success of the solar energy project inspired neighboring communities to explore similar solutions. The village became a model of how Near Node deployment could empower communities to take charge of their energy needs, promoting sustainability and resilience.
Broader Social and Economic Benefits
The deployment of Near Nodes in these communities has led to broader social and economic benefits. By decentralizing control and providing secure, transparent systems, Near Node has:
Enhanced Trust: Transparency in transactions and governance has restored trust in local institutions.
Increased Participation: Direct involvement in decision-making processes has encouraged greater civic engagement.
Economic Development: Access to financial services and crowdfunding platforms has stimulated economic growth and reduced poverty.
Sustainability: Community-driven projects, such as renewable energy initiatives, have promoted sustainable development.
Conclusion
The stories of transformation from Eastern Europe, Africa, and Southeast Asia highlight the profound impact of Near Node deployment. By empowering individuals and communities with decentralized solutions, Near Node is driving social and economic change. As more communities around the world embrace this technology, the potential for transformation and empowerment continues to grow, promising a brighter future for all.
1 note
govindhtech · 4 months
Text
Next-Gen Tensor G5 TSMC Powers Google Pixel 10!
Tumblr media
Tensor G5 TSMC
The processor in the next Google Pixel 10 may break a long-standing tradition. Samsung Foundry makes Google's Tensor chips, which power Pixel devices like the Pixel 6, but Google may choose TSMC to manufacture the Tensor G5 for the Pixel 10. This shift in Google's hardware ambitions could affect the Pixel 10's performance and capabilities.
Google Tensor G5
Google has not released an official statement on the rationale behind the foundry change, but industry rumours suggest a few possibilities:
TSMC’s Leading-Edge Manufacturing
TSMC is the world's leading chip manufacturer, known for its cutting-edge production processes. Using TSMC's 4nm (or possibly a more advanced node) technology for the Tensor G5 could improve performance and power efficiency over Samsung's chips.
Supplier Diversification
With Samsung as its only chip supplier, Google may have been subject to restrictions resulting from Samsung's production capabilities or priorities. Moving to TSMC gives Google more control over the production process and diversifies its supply chain.
Potential Tensions
Though unconfirmed, underlying conflicts or tensions between Google and Samsung Foundry may also have influenced the choice.
Pixel 10
For the Pixel 10, What Does This Mean?
There has been plenty of discussion about how the switch to TSMC could affect the Pixel 10:
Performance Boost
Compared to the 5nm process that is probably utilised in the Tensor chips found in the Pixel 6 and Pixel 7, TSMC’s 4nm process offers efficiency and performance gains. Better graphics performance, quicker processor speeds, and longer battery life for the Pixel 10 could result from this.
Tensor G5’s Capabilities
The move to TSMC could have implications beyond the production process: it may allow Google to test new chip designs or features for the Tensor G5. Thanks to TSMC's experience producing cutting-edge chips, Google may be able to incorporate features that Samsung's limitations previously ruled out.
Difficulties to Expect
Moving manufacturing to a new foundry such as TSMC can bring difficulties. At the beginning, there could be delays, poor yield rates (the percentage of chips that function correctly), or even integration problems.
Improved AI and Machine Learning
Google's Tensor Processing Unit (TPU) designs power the Tensor chip's machine learning capabilities, and TSMC's manufacturing expertise may help Google optimise on-device AI performance further.
Additional Control and Customisation
TSMC might give Google more control over chip design and manufacturing. This could enable greater customisation of the Tensor G5 to meet Google's needs, along with tighter software optimisation for Pixel devices.
A Review of the Data
Although Google hasn't made the foundry change official, there is evidence pointing to TSMC's involvement in the Pixel 10's Tensor G5 processor. As of right now, we know:
Information Leak
Tech publications such as Android Authority have uncovered entries in a publicly accessible trade database suggesting that Google is shipping a new chip, possibly the Tensor G5, from Taiwan, home of TSMC, to an Indian company for testing. TSMC is reportedly identified as the manufacturer in the chip description.
Codenames and Speculations
Leaks have also revealed the Tensor G5's codename, "Laguna Beach," reinforcing its association with the Pixel 10. Rumours that the chip will be paired with 16GB of RAM suggest a considerable improvement over earlier Pixel devices.
The Way Forward
It will take some time before we witness the tangible effects of Google’s decision to move to TSMC, with the Pixel 9 anticipated to ship later in 2024 and the Pixel 10 most likely arriving in late 2025. Watch out for the following:
Announcements from Google
When the Pixel 10 launches, Google will probably confirm the foundry partner and give more information regarding the Tensor G5.
Benchmark Leaks
Shortly before the official debut, performance benchmarks for the Tensor G5 may surface, providing early information about possible performance gains.
Reviews of the Pixel 10 by tech experts
These reviews will offer practical evaluations of the Tensor G5's capabilities and how it compares with the chips in earlier Pixel models.
Google Pixel 10
The Pixel 10's move to TSMC is notable, but it's just one part of the story. Expect improvements in other areas too, such as:
Camera Technology
Pixel phones are known for their cameras. The Pixel 10 may continue this tradition with upgraded sensors, improved computational photography, and new camera technology.
Display Technology
Flagship phones increasingly have high-refresh rate displays. The Pixel 10 may have a smoother, more responsive display to improve user experience.
Google’s Aspirations for Hardware and the TSMC Factor
Google's decision to switch to TSMC reflects the company's expanding ambitions in the smartphone hardware market. By deepening its partnership with a top foundry, Google aims to produce more competitive chips for its Pixel handsets. If the change pays off, Google could become a more significant competitor in the high-end smartphone market currently dominated by Samsung and Apple and their proprietary chipsets.
To sum up
The prospect of the Pixel 10 moving to a Tensor G5 chip made by TSMC has generated real excitement and anticipation. Even so, it's unclear exactly how things will play out.
Read more on govindhtech.com
0 notes
kitwallace · 5 months
Text
Non-Eulerian paths
I've been doing a bit of work on Non-Eulerian paths.  I haven't made any algorithmic progress with the non-spiraling approach Piotr Waśniowski uses for such paths, but I'm keen to continue the development of the approach using spiral paths since I believe that this yields strong structures. 
I'm using the Hierholzer algorithm to find paths in an Eulerian graph, and I've been looking at the changes needed for non-Eulerian graphs, i.e. those where the order of some vertices is odd. For graphs with only 2 odd nodes, a solution is to use pairs of layers which alternate starting nodes. In the general case (the Chinese Postman Problem), duplicate edges are added to convert the graph to Eulerian and then Hierholzer is used to solve the resultant graph. I hadn't actually tried this before, but I've now used this approach on some simple cases.
Tumblr media
(the paths here were constructed via Turtle graphics just to test the printing - in transparent PLA)  
The hard part is to evaluate the alternative ways in which the duplicate edges can be added.  We can minimise the weighted sum of edges but for the rectangle this still leaves several choices and I need to think about how they can be evaluated.  I think immediate retracing of an edge should be avoided so perhaps maximising the distance between an edge and its reverse would be useful.
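One concrete way to score a candidate path on this criterion is to measure, for every edge that is traversed twice, how many steps apart the two traversals are, and then maximise the smallest such gap. The short Python sketch below is purely illustrative (it is not the code behind these prints, and it ignores wrap-around at the ends of a closed path); a gap of 1 is an immediate retrace.

```python
def edge_separation(path):
    # path is a list of node numbers; successive pairs are the edges traversed.
    # For every edge seen more than once, record the gap (in steps) between
    # traversals and return the smallest gap found. A gap of 1 means an edge
    # is immediately retraced, which is exactly the case to avoid.
    seen = {}
    gaps = []
    for i in range(len(path) - 1):
        edge = frozenset((path[i], path[i + 1]))
        if edge in seen:
            gaps.append(i - seen[edge])
        seen[edge] = i
    return min(gaps) if gaps else None

print(edge_separation([8, 9, 3, 9, 10]))   # the [9,3,9] pattern scores 1
```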
The duplicate edges cause a slight thickening and a loss of surface quality (so it is better if they are interior), but I think that's a small cost to retain the spiral path. Path length for the rectangle is 25% higher. I haven't tried them with clay yet.
Modifying Hierholzer
I had originally thought that to formulate such graphs for solution by Hierholzer, each pair of duplicate edges would require an intermediate node to be added to one of the edges to create two new edges. This would be the case if the graph was stored as an NxN matrix, but my algorithm uses a list of adjacent nodes, since this allows weights and other properties to be included. Removing a node from the matrix is much faster (just changing the entry to -1) than removing the node from a list but for my typical applications efficiency is not a big issue. The list implementation requires only a simple modification to remove only the first of identical nodes. This allows duplicate edges to be used with no additional intermediate nodes.
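As an illustration of this list-based approach, a minimal Python sketch might look like the following. This is not the implementation used for the prints shown here, and it omits the weights and other edge properties mentioned above; the function and variable names are my own.

```python
from collections import defaultdict

def hierholzer(edges, start=None):
    # Build an adjacency list from [u, v] edge pairs. Duplicate edges simply
    # appear as repeated entries, so no intermediate nodes are needed.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    if start is None:
        start = edges[0][0]

    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:               # unused edges remain at this node
            w = adj[v][0]        # deterministic choice: take the first neighbour
            adj[v].remove(w)     # remove only the first matching entry...
            adj[w].remove(v)     # ...and its mirror, leaving any duplicate intact
            stack.append(w)
        else:                    # dead end: retreat, recording the circuit
            circuit.append(stack.pop())
    return circuit[::-1]

# The three-hexagon edge list discussed below; the shared edges [2,3], [3,4]
# and [3,9] each appear twice, so every node has even degree and a circuit exists.
hexes = [[0,1],[1,2],[2,3],[3,4],[4,5],[5,0],
         [3,2],[2,6],[6,7],[7,8],[8,9],[9,3],
         [3,9],[9,10],[10,11],[11,12],[12,4],[4,3]]
print(hierholzer(hexes, start=3))
```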
This is a test interface for the Hierholzer algorithm which accepts a list of edges.
Here is an example with three hexagons:
Tumblr media
with graph
Tumblr media
and edge list:
[ [0,1],[1,2],[2,3],[3,4],[4,5],[5,0], [3,2],[2,6],[6,7],[7,8],[8,9],[9,3], [3,9],[9,10],[10,11],[11,12],[12,4],[4,3] ]
Nodes 2,3,4 and 9 are odd. There is only one way to convert to Eulerian. We need to duplicate three edges : [3,4], [3,9],[3,2] so that nodes 4,9, and 2 become order 4 and node 3 becomes order 6. The path used to generate the printed version above was constructed as a Turtle path with only 60 degree turns:
[3, 9, 10, 11, 12, 4, 3, 2, 6, 7, 8, 9, 3, 4, 5, 0, 1, 2]
Hierholzer constructs the following path starting at the same node
[3, 2, 1, 0, 5, 4, 3, 2, 6, 7, 8, 9, 3, 9, 10, 11, 12, 4]
There is a sub-sequence [9,3,9] which indicates an immediate reversal of the path. This creates the possibility of a poor junction at node 3 and is to be avoided.
Furthermore, this path is the same regardless of the starting point. The choice of which edge to take amongst the available edges from a node at each step is deterministic in this algorithm, but it could be made non-deterministic. With this addition, after a few attempts we get:
[0, 1, 2, 3, 9, 10, 11, 12, 4, 3, 2, 6, 7, 8, 9, 3, 4, 5]
with no immediately repeated edges
This provides a useful strategy for a generate-test search: repeatedly generate a random path and evaluate the path for desirable properties, or generate N paths and choose the best.
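A sketch of that generate-test idea, building on the function above (again illustrative rather than the actual code used here):

```python
import random
from collections import defaultdict

def hierholzer_random(edges, start):
    # Same stack-based construction as before, but the next edge out of a node
    # is chosen at random rather than always taking the first neighbour.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            w = random.choice(adj[v])
            adj[v].remove(w)
            adj[w].remove(v)
            stack.append(w)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

def immediate_reversals(path):
    # Count sub-sequences like [9, 3, 9]: an edge followed at once by its reverse.
    return sum(1 for i in range(len(path) - 2) if path[i] == path[i + 2])

def best_of(edges, start, trials=20):
    # Generate `trials` random circuits and keep the one with fewest reversals.
    return min((hierholzer_random(edges, start) for _ in range(trials)),
               key=immediate_reversals)
```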
However, this approach may not be very suitable for graphs where all nodes are odd, such as this (one of many) from Piotr:
Tumblr media
The edge list for this shape is
[0,1],[1,2],[2,0], [0,3],[1,4],[2,5], [3,6],[6,4],[4,7],[7,5],[5,8],[8,3], [9,10],[10,11],[11,9], [6,9],[7,10],[8,11],
duplicate the spokes
[0,3],[1,4],[2,5], [6,9],[7,10],[8,11]
Here every node is odd. The 6 spokes are duplicated. Sadly no path without a reversed edge can be found.
The simpler form with only two triangles and 3 duplicated spokes:
[ [0,1],[1,2],[2,0], [0,3],[1,4],[2,5], [0,3],[1,4],[2,5], [3,4],[4,5],[5,3] ]
does however have a solution with no reversed edges although it takes quite a few trials to find it:
[0,2,5,4,1,2,5,3,0,1,4,3]
Triangles
Tumblr media
Edges can be duplicated in two ways
[[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,0], [2,4],[5,7],[8,1]]
a) duplicating the interior edges min 4
[2,4],[5,7],[8,1]
b) duplicating the exterior edges min 6
[1,2],[4,5],[7,8]
Rectangle
Tumblr media
Edges can be duplicated in three different ways
[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,0], [1,8],[3,8],[5,8],[7,8],
a) [1,2],[2,3],[5,6],[6,7] min 6
b) [1,8],[8,3],[5,6],[6,7] min 4
c) [1,8],[8,3],[5,8],[8,7] min 4
Automating edge duplication
The principle is straightforward: choose an odd node, find its nearest neighbour, and duplicate the connecting edge(s); repeat until all odd nodes are connected. To test various configurations, allow the choice of node and of its nearest neighbour, if there are several, to be randomised, and compute a selection evaluation from the result.
Currently the choice is based on the length of the path from each node to the revisit of that node. Path length of 2 means an immediate return and these should be avoided if possible.
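A possible sketch of that procedure (the pairing and randomisation follow the description above, but the evaluation of the resulting path is left out, and this is not the implementation actually used here):

```python
import random
from collections import defaultdict, deque

def odd_nodes(edges):
    # Nodes whose degree (count of incident edge entries) is odd.
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return [n for n, d in degree.items() if d % 2 == 1]

def shortest_route(edges, src, dst):
    # Breadth-first search returning a shortest route from src to dst
    # as a list of edges. Assumes the graph is connected.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    parent, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            route = []
            while parent[node] is not None:
                route.append([parent[node], node])
                node = parent[node]
            return route[::-1]
        for nxt in adj[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

def make_eulerian(edges):
    # Repeatedly pick an odd node at random, join it to its nearest remaining
    # odd node, and duplicate the edges along that route. Interior nodes on the
    # route keep their parity; the two endpoints become even.
    edges, odd = list(edges), odd_nodes(edges)
    while odd:
        random.shuffle(odd)
        src = odd.pop()
        route = min((shortest_route(edges, src, dst) for dst in odd), key=len)
        edges.extend(route)
        odd.remove(route[-1][1])
    return edges
```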
Testing with clay
Whilst tests with PLA show no significant change in appearance while retaining the benefits of a spiral print path, this approach has yet to be tested with clay.
Postscript
A side-benefit of this work has been that I've finally fixed an edge case in my Hierholzer algorithm which has been bugging me for some years.
0 notes
oaresearchpaper · 5 months
Link
2 notes