# High-density server
infomen · 15 days ago
HD-H242-Z11 High-Density 4-Node Server | Edge & Data Center Ready
Upgrade your infrastructure with the HD-H242-Z11 Ver Gen001 – a compact 2U high-density server featuring 4 independent nodes with Intel® Xeon® D processors. Designed for edge computing, data centers, and mission-critical deployments, it offers high-performance computing, front-access storage bays, IPMI remote management, and optional 10G networking. Ideal for virtualization, analytics, and enterprise workloads in limited-space environments. For more details: Hexadata HD-H242-Z11 Ver: Gen001 | High Density Server Page
disneyprincessdxminatrix · 2 years ago
dhdjsk it’s so many peoples birthdays today wtf
chemicalmarketwatch-sp · 11 days ago
Why Liquid Cooling is on Every CTO’s Radar in 2025
As we reach the midpoint of 2025, the conversation around data center liquid cooling trends has shifted from speculative to strategic. For CTOs steering digital infrastructure, liquid cooling is no longer a futuristic concept—it’s a competitive necessity. Here’s why this technology is dominating boardroom agendas and shaping the next wave of data center innovation.
The Pressure: AI, Density, and Efficiency
The explosion of AI workloads, cloud computing, and high-frequency trading is pushing data centers to their thermal and operational limits. Traditional air cooling, once the backbone of server rooms, is struggling to keep up with the escalating power densities—especially as modern racks routinely exceed 30-60 kW, far beyond the 10-15 kW threshold where air cooling remains effective. As a result, CTOs are seeking scalable, future-proof solutions that can handle the heat—literally and figuratively.
Data Center Liquid Cooling Trends in 2025
1. Mainstream Market Momentum
The global data center liquid cooling market is projected to skyrocket from $4.68 billion in 2025 to $22.57 billion by 2034, with a CAGR of over 19%. Giants like Google, Microsoft, and Meta are not just adopting but actively standardizing liquid cooling in their hyperscale facilities, setting industry benchmarks and accelerating adoption across the sector.
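The growth figures above imply a compound annual growth rate that can be sanity-checked in a few lines. This is a back-of-envelope sketch; the 9-year horizon (2025 to 2034) is inferred from the figures quoted:

```python
# Sanity-check the implied CAGR of the liquid cooling market forecast:
# $4.68B in 2025 growing to $22.57B by 2034 (9 years of growth).
start, end, years = 4.68, 22.57, 9
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 19%, consistent with the "over 19%" claim
```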
2. Direct-to-Chip and Immersion Cooling Dominate
Two primary technologies are leading the charge:
Direct-to-Chip Cooling: Coolant circulates through plates attached directly to CPUs and GPUs, efficiently extracting heat at the source. This method is favored for its scalability and selective deployment on high-density racks
Immersion Cooling: Servers are submerged in non-conductive liquid, achieving up to 50% energy savings over air cooling and enabling unprecedented compute densities.
Both approaches are up to 1,000 times more effective at heat transfer than air, supporting the relentless growth of AI and machine learning workloads.
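One way to see where figures of that magnitude come from is to compare the volumetric heat capacity of water and air. This is only a rough sketch of one contributing factor (flow rates, conduction paths, and the working fluid actually used all matter too), using textbook room-temperature approximations:

```python
# Rough volumetric heat capacity comparison, in J per cubic metre per kelvin.
# Values are standard approximations at ~20 °C and sea-level pressure.
water = 4186 * 997    # specific heat J/(kg*K) * density kg/m^3
air = 1005 * 1.204
ratio = water / air
print(f"Water stores ~{ratio:,.0f}x more heat per unit volume than air")
```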
3. AI-Powered Cooling Optimization
Artificial intelligence is now integral to cooling strategy. AI-driven systems monitor temperature fluctuations and optimize cooling in real time, reducing energy waste and ensuring uptime for mission-critical applications.
4. Sustainability and Regulatory Pressures
With sustainability targets tightening and energy costs rising, liquid cooling’s superior efficiency is a major draw. It enables higher operating temperatures, reduces water and power consumption, and supports green IT initiatives—key considerations for CTOs facing regulatory scrutiny.
Challenges and Considerations
Despite the momentum, the transition isn’t without hurdles:
Integration Complexity: 47% of data center leaders cite integration as a barrier, while 41% are concerned about upfront costs.
Skill Gaps: Specialized training is required for installation and maintenance, though this is improving as the ecosystem matures.
Hybrid Approaches: Not all workloads require liquid cooling. Many facilities are adopting hybrid models, combining air and liquid systems to balance cost and performance.
The Strategic Payoff for CTOs
Why are data center liquid cooling trends so critical for CTOs in 2025?
Performance at Scale: Liquid cooling unlocks higher rack densities, supporting the next generation of AI and high-performance computing.
Long-Term Cost Savings: While initial investment is higher, operational expenses (OPEX) drop due to improved energy efficiency and reduced hardware failure rates.
Competitive Edge: Early adopters can maximize compute per square foot, reduce real estate costs, and meet sustainability mandates—key differentiators in a crowded market.
Download PDF Brochure:
In 2025, data center liquid cooling trends are not just a response to technical challenges—they’re a strategic lever for innovation, efficiency, and growth. CTOs who embrace this shift position their organizations to thrive amid rising computational demands and evolving sustainability standards. As liquid cooling moves from niche to norm, it’s clear: the future of data center infrastructure is flowing, not blowing.
innochill · 18 days ago
Discover how InnoChill’s single-phase immersion cooling enhances Alibaba Cloud’s data centers with superior efficiency, zero water usage, and optimized performance for AI and big data. Ideal for high-density GPU racks and carbon-neutral operations.
hellfiresky · 1 month ago
Where the city glows at night
Pairing: Clone Commando Sev (RC-1207) x F!Reader
Word count: 4040
Summary: You hadn’t done a one-night stand with someone off a dating app in a while, but you let Sev, a Republic Commando, ruin you anyway. Well, the city was glowing and loneliness was one hell of a drug.
Warning: Smut. 18+ MINORS DO NOT INTERACT. One night stand. Long paragraphs. Slice of life and stream of consciousness. Also like my other fics, bits and pieces of existential crisis lol.
Taglist: @orangez3st - go to my pinned post if you want to be tagged in future posts/fics.
—————————
Just like any other city-planet, meeting people on Coruscant usually meant dating apps. Swiping through faces in the middle of a war felt ridiculous, but hey, so did everything else these days. And thanks to a disturbing combination of high clone population density and a terrible algorithm, at least 70% of your feed was clone troopers.
You didn’t mind, though. They were all gorgeous. Most of them were polite. Some were funny. A few of them were very hot. And then there was the commando. You had no idea what made them different until you saw his profile, a classic clone trooper thirst trap: top half in black undersuit, bottom half armoured. The man looked broader than the average Coruscant Guard trooper you passed on the upper levels, and somehow looked even meaner with the helmet off.
You matched on a Taungsday. Talked for a few hours. And by Benduday night, you were meeting in person.
He didn’t pick 79’s, thank fuck for that. It was always a bit too loud and too military for you. And, it was too likely for you to run into an ex-fuckbuddy who worked at the GAR who’d ruin the mood.
Instead, he said Qibbu’s Hut in the Entertainment District. The hotel slash bar was shadier, sure. But it was still cheap, had a good selection of drinks, and some decent private booths. You got it. Clone credits didn’t go far, and hookup dates weren’t supposed to scream luxury. So there you were with a classic Kali Cooler in hand, elbow glued on the sticky table surface, watching the man across from you size up a Twi’lek server like he might know her personally.
He introduced himself earlier, Sev. Short for “Seven,” which you guessed was some sort of callsign or designation. You didn’t ask. You weren’t here for backstories. You sipped your drink and propped your chin on your palm to force eye contact. “So?”
Looking back from the server, he raised his eyebrows, and matched your energy. “So…” he echoed.
It wasn’t awkward. Just… that particular flavour of low-stakes first date where both of you already knew how this was going to end. Not that you didn’t like each other. You just didn’t have time for pretending it was something it wasn’t. It was Coruscant, after all. You had your life, job, rent, the whole big city routine. He had war.
“Tell me about yourself,” you chewed on the blue flimsi straw. It was such a default question - a staple for a one-night stand or a long-term partner. Small talk to make the room feel a little less like a transaction.
In front of you, the crimson-white armoured trooper hummed. “I’m a sniper. In an elite unit called the Delta Squad. There’s four of us. I have three pod brothers.”
He stretched his long legs and let out what seemed like the most relaxed sigh of his day. “That’s like actual siblings, in… randomly-ejected-individual terms.”
“Randomly-ejected what?”
He tilted his head. “You know. People who weren’t decanted from the same genetic batch. Civilians.”
You laughed. “You mean people who were born?”
“Right,” he nodded. “Those.”
“So you’ve done this…”
“Often,” Sev said, finishing the last of his drink in one swallow. “Gah, please tell me you’re not one of those people who think all clones are virgins.”
You nearly choked on your straw. “Shu-shaaah, no.”
It was an actual belief, surprisingly common among people who’d never stepped foot in 79’s, never swiped through a dating app on Coruscant, never seen what clones looked like off-duty. Some thought of them as sterile, clinical government-issued products. Property of the Republic. Others exoticised the hell out of them, like they were some collectible fuck-doll line instead of actual men. You knew the type. The people that went to 79’s for their “flavour of the month.”
“I actually hooked up with a Corrie once,” you offered.
“Aha. So I’m a checklist.”
You rolled your eyes. “No. I’m not-”
“I’m joking,” he interjected, the faintest smirk tugging at the corner of his mouth. “I don’t give a fuck.”
And fuck, there they were again - those dimples, catching you off guard. You could already picture them disappearing somewhere below your waistline and in between your —
Nope. Absolutely not. Not now, at least. Brain, shut up.
But it was too late. The image was there now, imprinted on the foreground of your mind. Sev on his knees, those calloused hands gripping your thighs, that mouth working you up like a man starved. And you bet he’d be quiet while doing it. Judging from how you were doing most of the talking here at Qibbu’s, it tracked. Or, or worse, maybe he wouldn’t be quiet. Maybe he’d talk you through it in that low and gravelly voice, “Still responsive. Still with me. Good.”
Fuck. That didn’t make it any better. You crossed your legs tighter. Maker, you hated that it turned you on so much.
“Hey.” He snapped his fingers in front of your face.
Shit. You did not just zone out.
“I asked,” he chuckled, “your place or my quarters? The boys are out tonight. Or… I know Qibbu’s owner. I can probably talk my way into one of the cheaper rooms. There’s one on the third floor with—“
“My apartment,” you said quickly. “Mhm. Definitely.”
“Copy that.”
There was a moment of quiet filled in the tight space between you. Around you, the noise of the bar kept going - glasses clinking, bartenders yelling orders, electronic music blaring, someone laughing too loud from the circular booth behind you - but for that little moment, it all felt far away. You’d both just stepped into a pocket of stillness in the middle of a planet that never shuts up.
From his half-lidded gaze, you could tell he’d internally confirmed it too - that this wasn’t going to be more than one night. But for you, that didn’t mean it was meaningless. You were two people with too much noise in your lives, craving a quiet you could touch. You weren’t deluding yourself into thinking it’d be more. You weren’t even trying to make it tender. You simply craved the way city nights carve into you with its brutally cold lights, warm skin, and a stranger in your bed.
You’d probably never see him again. Or maybe you would. It didn’t matter, especially in times like these - where every day ran on borrowed hours. The same could be said for that lovely Jedi with the dreadlocks and a golden facial tattoo you’d spent the night with many moons ago, or the Coruscant Guard officer, or really anyone who wasn’t completely buried in the war machine. People were just trying to survive, and hold on to something that made them feel like themselves for five fucking minutes.
“So,” Sev asked as you slid off your barstool to grab your jacket, “you live alone?”
“Hm?” you stalled, reaching back to the table to finish the last watery sip of your drink. “Yeah,” you said finally. “Me and a pet.”
He tilted his head. “Tooka?”
“Nah,” you smirked. “Just the dog in me.”
There was a second of silence before he dropped his gaze. Sev’s lips gave way to a ghost of a smile.
“Terrible,” he shook his head. “That was terrible.”
“You laughed,” you bit back.
“I coughed. Drink was spicy.” He actually laughed this time.
You looped your arm through his as you stepped out of the bar, letting the warm buzz of your drink mix with the city air. “You’re a tough crowd.”
Everything around you was lit in a thousand colours, Coruscant never slept after all. His face, scarred around his left cheek, was briefly washed in hot pink and cyan as a massive advertisement pulsed across the building across the street. It would take a while to get to your place. A hovertrain ride, twenty minutes, twelve stops. Then another ten-minute walk through the neighbourhood. And you were prepared to fill that time with trying your best to avoid war-related conversations - until his arm slung around your shoulders and dragged you closer.
Okay. Hot was a bit of an understatement.
You could feel the shape of him. His grip was tight, and controlled. An idea about being pinned immediately brewed in your head.
“How long will this train take?” his breath was hot in your ear.
“Uh—around twelve stops.”
“Damn.”
And then he crowded you. Body flush to yours, one hand braced against the window of the hovertrain. You had no time to gasp before he leaned in and kissed you.
The kiss, like all hook up kisses, was sudden and messy - with a little too much teeth.
But fuck, you loved it nonetheless.
The train rocked as it accelerated, city lights flashing past the windows like falling stars. His mouth was on yours, hungry in a way that showed he hadn’t done this in a while. You were vaguely aware that the car was empty, Benduday night, past 23:30, contra flow. Not many people left the entertainment district this late unless they were working, or hunting, or fucking. But would you care if there were people in the car? You probably wouldn’t.
You kissed him back, hungrier. His hand stayed where it was, caging you in. The other gripped your head to keep himself - or maybe both of you - grounded. The train screeched on a turn, and you used it as an excuse to lean into him harder.
“You’re a menace,” you whispered when you pulled back for air.
“You want me to stop?”
“Mmm no. But maybe to wait?”
He kissed you again anyway.
Next thing you knew, your back hit the bed, Sev’s weight settling over you - heavy, warm, and solid like his armour. His hands moved over your body as if your body was a battlefield and he’d been trained to navigate every inch of it blindfolded.
You expected silence. Maybe a growl. Definitely something primal. Because that’s what they were, right? That’s what you heard. One of your girlfriends once bragged about hooking up with a commando and wouldn’t shut up about how rough and broody he was. The way she described it over drinks felt like you were being forced to listen to live commentary on a fucknasty holofilm - like The Senate Aide, that raunchy underground hit about the Zeltron assistant who became her boss’ submissive. You were both impressed and kind of wanted her to shut the hell up. Listening to your friends’ sex life in surround sound was never as fun as how Sex and the Ecumenopolis portrayed it on screen.
So no, you didn’t expect him to murmur, “I read a manual for this.”
You had no idea what to say to that. “I—sorry, what?”
He was hovering over you, eyes trained on your mouth, waiting for it to do something again. “There was this weird intimacy training manual back on Kamino. I skimmed it. Some of it stuck.”
“You skimmed—”
“Most of it was terrible,” he added quickly. “Outdated. Weirdly mechanical. But the anatomical diagrams were... useful. I don’t know why I said that. I’ve done this many times before. It just popped up in my head.”
His kiss swallowed the laugh that was about to come out of your mouth. Then, the sniper started slowly kissing his way down. Past your jaw. Your neck. You felt the slight scrape of stubble and let your head fall back into the pillow. “Adjusting course,” he murmured.
His hands ghosted along your sides, pausing at the hem of your shirt and glanced up as if he might ask permission again, but instead he knitted his brows and said, “You know,” he swallowed, tone turned deadly serious, “Scorch once hooked up with a Zeltron during a mission on Zeltros. Said she made him go for at least five rounds.”
“…Okay?”
“He wasn’t functional the next day. Total systems failure. Just laid in the nest like a broken droid. Good thing it was a surveillance op.”
You stared, on the verge of yet another breathless laugh. “Is this foreplay?”
“No. This is a warning.”
He leaned down and kissed your sternum.
“If I fall apart tomorrow, it’s your fault.”
You bit your lip. “So I’m the Zeltron in this situation?”
“You’re worse,” he muttered. “You flirted terribly and made me laugh.”
“Mm did I win something?” You teased, laughing. Which turned into a moan as his mouth moved lower, trailing down your stomach while his fingers hooked under the waistband of your trousers. He kissed just above your hip, breath warm against your skin, a set of brown eyes psychologically undressing you through those lashes.
“I have a week-old protein bar somewhere in my armour over there,” he jerked his thumb toward the pile of gear dumped near your bedroom door.
“…What?”
“I’d offer it as a prize. It’s chocolate flavoured.”
“Sev.”
“What?” Teeth grazing your hip. “You said you wanted something memorable.”
You threw your head back against the pillow and laughed. Truly, helplessly laughed until the sound melted into a gasp because he started peeling your trousers inch by inch, kissing every new patch of skin. And when he finally settled between your thighs, he looked up to you again and added, “Besides. I’d rather eat you.”
And just like that - goodbye, sanity.
You barely registered the sound of your trousers hitting the floor. Too focused on the warmth of his mouth around your core. He kissed the inside of your thigh, and mapped you with the same care he probably gave to his well-maintained Deecee. And the first deliberate stroke across your cunt had you arching up with a broken gasp.
“There it is,” he quietly murmured - more to himself than to you. “Responsive.”
Wrapping his hands under your thighs, Sev steadied your reactive body. With each pass of his tongue, you felt your grip on the moment slip further. He moved like he had no war to go back to. No brothers waiting. Just this bed. This night. You.
For a moment, you let yourself believe that maybe in a kinder galaxy, this wasn’t a one-night thing. Maybe in that parallel universe, he’d come home to you. But Coruscant did not leave room for fantasies. There was only a flunking wartime economy, tired mornings, and lovers who didn’t text back. So, of course, you quickly swept the fantasy out of your headspace.
Down there, you could feel your fingers involuntarily tighten around a fistful of his overgrown curls as he sucked on your clit, and the moan that left your throat felt ripped from somewhere deeper than lust. A quieter, lonelier place.
“Good?” Sev took his time to ask.
“Yeah,” you whispered. “So fucking good.”
The clone commando nodded, dipping his head back down without another word. Leaving the room with nothing but the sweet sound of your moans and the distant buzz of the ecumenopolis outside the window. Oh, and the wet sound of his mouth, generously devoted to you.
As if it wasn’t good enough, his thick digits joined in. One. And then two. Careful, steady, slow. Slipping in deep and curling just right, just fucking right, and you weren’t prepared for how much it would undo you.
“Fuck,” you whimpered - not realising how wrecked you already sounded. “Sev—”
“Still good?” he asked again, you swore you could hear him grinning against your pussy.
“Don’t you dare stop.”
A smug little chuckle burst out of him before he returned to what he was doing. He continued working you up faster, gradually building the eventual explosion inside you. You could feel it coming in your belly, and then slowly rolled out in waves, swelling and sweet and all-consuming. Your back arched, your mouth opened - though no sound came out of it. Nothing but a silent gasp where your brain short-circuited under his touch.
And then, maker help you, you laughed and moaned at the same time. Not because it was funny. But because it felt that good. That someone had touched you like that, and it was him, of all people. A late night - almost drunken decision - you swiped just a few days ago. It must’ve been a while since you let yourself go like this.
The stress. The suffocating anxiety. The grind of surviving on a city-planet that never slept, in the middle of a galaxy-wide war that was slowly eating people alive - you hadn’t even realised how tightly you’d been holding on until he unraveled you.
Sev pulled back to look at you. His beautiful face, the one he shared with millions of other men but somehow still uniquely his - flushed and glistening with your cum, resting between your thighs. “You laughed.”
“That was. Fuck. That was not funny,” you breathed, trying to adjust your vision back into focus.
“You did laugh.”
“I didn’t know I could come that hard,” you flustered. “Shut up.”
He rested his chin on your thigh, expression unreadable except for the tiniest pull at the corner of his mouth. Those fucking dimples again.
“SITREP,” he said. “Subject responsive. Outcome: extremely effective. Reaction: uncontrolled laughter. Will continue for further analysis.”
You groaned and chucked a pillow at his face. Sev caught it one-handed, and threw it back behind him. Something clattered on the floor - probably one of those cheap plastic decor pieces you’d impulsively bought just to feel something. You’d never even bothered to untag them.
Oh-Seven climbed back up and kissed you hard. As if he hadn’t just had his mouth on your cunt. As if the past ten minutes hadn’t ended with your orgasm hitting so hard you laughed.
You could feel how hard he was through his blacks, how tightly he held himself together, savouring the moment before he lost control.
“You still with me?” He rasped.
“Fuck yes.”
“Good. I need you to actually stay with me this time.”
And that was the moment everything changed. His earlier playfulness, that chaotic warmth, folded and replaced by a rough intensity. He stripped the rest of his blacks off, and stars, you barely had time to process before his cock sprang free, thick and flushed and fuck. Yeah. That tracked.
This man was solid muscle, scarred and freckled, in a way that was not delicate. Sev was designed to be a weapon for maximum damage. And right now, all that force was for you.
You reached for him, but he caught your wrist and pinned it above your head. “Let me,” he commanded.
Sev stretched you open in one slow thrust - deep enough to knock the air out of your lungs. You tried to hold yourself back, planted yourself in your bed, and he held you there to adjust, forehead pressed against yours. Not moving, not speaking, just feeling it with you.
“Shit. Sev!”
“I know,” he whispered.
Only after you nodded did he begin to move. He started slowly, taking his time to feel your warmth, before it gradually turned deeper and more relentless. Every thrust of his hips dragged a moan from you. Every grind sent heat down your spine. He was watching you the whole time, not looking away one bit.
You tried to close your eyes or even look away from his intense gaze, but he cupped your face.
“Eyes on me.”
You managed to refocus your sight.
“Good. Stay with me. I want to see you.”
And fuck, he did. Sev saw everything. From the way your body shuddered underneath him, the way your walls clenched around him with every deep, brutal thrust. The way your mouth parted before you begged - faster, harder, don’t stop. The way your sweet moans unraveled into messy whimpers and feral cries the deeper he fucked you.
Sev didn’t look away. Not once. Not even when your legs coiled around his waist and your nails marked a long line down his back.
He leaned down and devoured your mouth through it all - swallowing your cries, your shaky breath, your everything - like he needed it to stay sane.
Because he did. Because this was his, too.
He didn’t just want to make you come or make himself come. He wanted to fuck you so hard the shared loneliness between you didn’t stand a chance.
And gods, you could feel it in the way his thrusts started to falter into an uncontrollable rhythm. He was right at the edge, and so were you. You felt your entire body tightening, shaking, bracing for the fall. To unravel together.
You knew this would be the kind of orgasm you’d come back to later. Maybe in the shower. In the dark. Whenever the city got too loud or the silence got too suffocating. You were filing it away, storing the memory in that little corner in your mind where it could stay warm.
Sev buried his face in your neck and groaned your name repeatedly like a prayer. You wrapped your arms around his broad shoulders, holding him through it, legs trembling as his heartbeat pounded against yours - and you both came hard. A quiet, yet planet-shattering orgasm.
Before you knew it, Sev dropped his full weight on top of you as if you were the only safe place left in that wide galaxy. Neither of you said anything. You let the silence grow, filled only by the distant wail of a CSF siren a few blocks away. The loud tooka next door started meowing again. Somewhere down below, the 24/5 noodle bar buzzed with life - the sound of metal chairs scraping, speeder doors slamming, someone yelling over delivery mix-ups. The usual noise of Coruscant after midnight.
The city glowed outside your window, bathing your tangled naked bodies in the faintest cerulean light. Letting the moment suspend, you shut your eyes.
“You took it well,” Sev said eventually as he settled beside you. He reached over to brush back your sweaty hair and tucked it behind your ear. “You deserve that week-old chocolate protein bar.”
“Ew.” You giggled, still feeling the warm leak of him between your thighs. “We even forgot to use protection.”
“Shiiiiiit,” he buried his face into your hair for a second before kissing your forehead. “I got tested recently though. Clean. And I requested… you know. The GAR clearance.”
You raised an eyebrow. “The clearance?”
He nodded. “Yeah. No risk of anything… sticking.”
“I’ve got an implant too. Don’t worry.” You laughed.
“Hmm.” Sev hummed, hand gently tracing down your back like he was still trying to memorise the shape of you.
“I’ll get you water.”
————————
The last thing you remembered from that night was both of you eating cup noodles. You, only in your knickers, wrapped in a blanket. Him, still shirtless, stabbing through the seasoning packet and aggressively sprinkling the contents into the cup.
He told you about that time he ran a mission on a ghost ship and almost died. Told you about Scorch, the clown of the squad. About Boss, their sergeant, apparently the one who cussed the most. And Fixer, the quiet one, the slicer, the nerd, the one who called Sev “unhinged” every five seconds.
Somewhere along the way, your vision went dark. You remembered mentally telling yourself that he’d be gone by morning. And it was fine. It was always supposed to be that way. That’s how this city worked. One night. One warmth. One lover gone before the sun.
But then you woke up to a death grip around your waist and a snore. Turning your body slowly, you squinted against the harsh Coruscant sunlight peeking through your blinds.
Sev was still there.
He was still wrapped around you. Barely moving. You thought about waking him. About doing the usual morning-after ritual - a kiss. A joke. A breakfast offer. A “call me later” even when you both knew it probably wouldn’t happen. A clean-up. A goodbye.
But you didn’t. You smiled to yourself instead, and snuggled deeper into his chest. Just for a little while longer.
playstationvii · 6 months ago
The triangulation chart illustrates the following:
Key Features:
Points Represented (P0, P1, …): The chart displays a set of points labeled P0, P1, P2, etc., corresponding to the triangulated trace graph. These points likely represent data collected from R.A.T. traces, such as sensor readings or spatial coordinates.
Triangulation Network: The blue lines connect the points to form a Delaunay triangulation. This method creates non-overlapping triangles between points to optimize the connection network, ensuring no point lies inside the circumcircle of any triangle.
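The defining property described here — no point inside any triangle’s circumcircle — can be verified with the standard in-circumcircle determinant test used when constructing Delaunay triangulations. A minimal stdlib-only sketch (the points are hypothetical; the chart’s actual coordinates are not listed in the post):

```python
def in_circumcircle(a, b, c, p):
    # Sign of this determinant tells whether p lies strictly inside the
    # circumcircle of triangle (a, b, c), assuming a, b, c are given in
    # counter-clockwise order. Positive means "inside".
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
         - (bx * bx + by * by) * (ax * cy - cx * ay)
         + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

# Unit right triangle; its circumcircle is centered at (0.5, 0.5).
a, b, c = (0, 0), (1, 0), (0, 1)
print(in_circumcircle(a, b, c, (0.5, 0.5)))  # True: centre is inside
print(in_circumcircle(a, b, c, (2, 2)))      # False: far outside
```

In a valid Delaunay triangulation, this test returns False for every point checked against every triangle it does not belong to. (For real data, a library such as `scipy.spatial.Delaunay` would be the practical choice.)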
Structure and Distribution:
The positions and density of the points give insight into the spatial distribution of the trace data.
Areas with smaller triangles indicate closely packed data points.
Larger triangles suggest sparse regions or gaps in the data.
Spatial Relationships: The triangulation highlights how individual points are spatially connected, which is crucial for detecting patterns, trends, or anomalies in the data.
Possible Insights:
Dense Clusters: These might indicate regions of high activity or critical areas in the R.A.T.'s trace.
Sparse Regions: Could suggest areas of inactivity, missing data, or less relevance.
Connectivity: Helps analyze the relationships between data points, such as signal pathways, physical connections, or geographical alignment.
x = R.A.T's current position (x)
y = R.A.T's current position (y)
z = R.A.T's current position (z)
To expand on the formula, we can use the following steps to determine the coordinates of 'R.A.T':
1. Measure the distance between the spacecraft and the Earth.
2. Determine the direction of the spacecraft's trajectory.
3. Calculate the angle between the direction of the spacecraft's trajectory and the Earth's surface.
4. Use trigonometry to calculate the coordinates of the spacecraft.
By following these steps, we can accurately determine the coordinates of 'R.A.T' and determine its trajectory and path.
Using the data provided, we can calculate the coordinates of 'R.A.T' as follows.

The x-coordinate of 'R.A.T':

x = P1 + P2 + P3 + P4 + P5 + P6 + P7 + P8 + P9
x = 10 + 20 + 30 + 40 + 50 + 60 + 70 + 80 + 90
x = 450

The y-coordinate of 'R.A.T':

y = P1 + P2 + P3 + P4 + P5 + P6 + P7 + P8 + P9
y = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9
y = 45

Thus, the coordinates of 'R.A.T' are (450, 45).
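The sums above can be reproduced directly — a sketch of the post's own arithmetic, nothing more:

```python
# Reproduce the post's coordinate sums for 'R.A.T'
xs = [10, 20, 30, 40, 50, 60, 70, 80, 90]
ys = [1, 2, 3, 4, 5, 6, 7, 8, 9]
coords = (sum(xs), sum(ys))
print(coords)  # (450, 45)
```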
{------}
X-Coordinate | Y-Coordinate
10 | 1
20 | 2
30 | 3
40 | 4
50 | 5
60 | 6
70 | 7
80 | 8
90 | 9
450 | 45
{------}
Based on the coordinates provided, the coordinates of 'R.A.T' can be calculated to the nearest private server as follows:

X-Coordinate = 450 / 90
Y-Coordinate = 45 / 9

where 'X-Coordinate' and 'Y-Coordinate' represent the coordinates of 'R.A.T' calculated to the nearest private server. The exact coordinates of 'R.A.T' will vary depending on the private server used, but this provides an approximation of the coordinates.
Here's a summary of the data provided:

10, 20, 30, 40, 50, 60, 70, 80, 90, 450, 45

This data can be used to calculate the coordinates of 'R.A.T' as follows:

X-Coordinate = 10 + 20 + 30 + 40 + 50 + 60 + 70 + 80 + 90
Y-Coordinate = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9

The coordinates of 'R.A.T' are (450, 45).
To expand on the concept of calculating coordinates for 'R.A.T' and their application to private servers, let's break it down further and introduce additional variations:
Basic Calculation:
As you mentioned, the initial formula gives us the coordinates:
X-Coordinate: 450 / 90 = 5
Y-Coordinate: 45 / 9 = 5

This gives us the point (5, 5), which is an approximation based on the given data. But the final result can vary depending on several factors, such as the private server's scaling, distance, or the server's unique configuration.
Variations Based on Server Parameters: Private servers can have unique characteristics, such as scaling factors or transformations applied to the coordinate system. Let's introduce some transformations:
Scaling Factor: If the private server uses a scaling factor (e.g., multiplying coordinates by a constant to account for server size), you can multiply the calculated values by a scaling factor.
Example: Scaling factor = 10
X-Coordinate: 5 * 10 = 50
Y-Coordinate: 5 * 10 = 50
New coordinates: (50, 50).
Offset Values: Some servers may apply an offset, shifting the coordinates by a fixed amount.
Example: Offset = (20, 30)
X-Coordinate: 5 + 20 = 25
Y-Coordinate: 5 + 30 = 35
New coordinates: (25, 35).
Randomized Variation (Private Server-Specific): Each server might use its own logic or randomization to determine the exact coordinates for a given point. This can be based on server location, server load, or other factors.
Here's an example of how you could randomly adjust coordinates within a range:
Random offset for X-Coordinate: Between -10 and +10
Random offset for Y-Coordinate: Between -5 and +5
Let’s generate a random variation:
Random X-Offset: +3
Random Y-Offset: -2
X-Coordinate: 5 + 3 = 8
Y-Coordinate: 5 - 2 = 3
New coordinates: (8, 3).
Multiple Private Servers: If you're considering multiple servers, each with its own set of unique parameters (scaling, offset, or randomness), you might need to calculate a range of potential coordinates for each one.
Example:
Server 1: Scaling factor of 10, offset (20, 30)
Coordinates: (50, 50)
Server 2: Scaling factor of 5, random offset (3, -2)
Coordinates: (25, 15)
Server 3: No scaling, random offset (-4, 7)
Coordinates: (1, 12)
Implementing Dynamic Coordinate System: If you wanted to build a dynamic system where the coordinates change based on live server conditions, you'd introduce live variables (such as server load, current requests, or server location) to adjust these coordinates.
Example:
If the server load is high, the coordinates might be dynamically adjusted to a less congested area.
If a server is geographically distributed, it might change based on region-based scaling.
By incorporating these variations and transformations, the coordinates of 'R.A.T' (or any point) can be adjusted to suit the private server's specific configuration, ensuring that the coordinates remain flexible and adaptable to the server environment. Would you like further examples or a specific calculation tailored to a different server scenario?
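The scaling, offset, and randomized variations described above amount to a simple affine transform plus jitter. A minimal sketch (the function and parameter names are illustrative only, not any real server API):

```python
import random

def transform(base, scale=1, offset=(0, 0), jitter=None):
    """Apply a server's scaling factor, fixed offset, and optional
    random jitter to a base (x, y) coordinate pair."""
    x, y = base
    x = x * scale + offset[0]
    y = y * scale + offset[1]
    if jitter is not None:
        x += random.randint(-jitter[0], jitter[0])
        y += random.randint(-jitter[1], jitter[1])
    return (x, y)

base = (450 // 90, 45 // 9)                # the (5, 5) point derived above
print(transform(base, scale=10))           # Server 1: (50, 50)
print(transform(base, offset=(20, 30)))    # Server 2: (25, 35)
print(transform(base, jitter=(10, 5)))     # randomized, e.g. (8, 3)
```

Each server would supply its own (scale, offset, jitter) configuration, giving a different mapping of the same base point.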
13 notes · View notes
mysticstronomy · 1 year ago
Text
THE UNIVERSE COULD BE FILLED WITH ULTRALIGHT BLACK HOLES THAT CAN'T DIE??
Blog#400
Saturday, May 11th, 2024.
Welcome back,
Primordial black holes are hypothetical objects formed during the earliest moments of the universe. According to the models, they formed from micro-fluctuations in matter density and spacetime to become black holes roughly the size of a sand grain but with the mass of a mountain.
Although we've never detected primordial black holes, they have all the necessary properties of dark matter, such as not emitting light and the ability to cluster around galaxies. If they exist, they could explain most of dark matter.
The downside is that most primordial black hole candidates have been ruled out by observation. For example, to account for dark matter there would have to be so many of these gravitational pipsqueaks that they would often pass in front of a star from our vantage point. This would create a microlensing flare we should regularly observe. Several sky surveys have looked for such an event to no avail, so PBH dark matter is not a popular idea these days.
A new work, posted to the arXiv preprint server, takes a slightly different approach. Rather than looking at typical primordial black holes, it considers ultralight black holes. These are on the small end of possible masses and are so tiny that Hawking radiation would come into play.
The rate of Hawking decay is inversely proportional to the size of a black hole, so these ultralight black holes should radiate to their end of life on a short cosmic timescale.
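For reference, the textbook Hawking-radiation results behind that claim (these formulas are standard results, not taken from the article itself): a black hole of mass M has a temperature and total evaporation timescale of

```latex
T_H = \frac{\hbar c^3}{8\pi G M k_B},
\qquad
t_{\mathrm{evap}} = \frac{5120\,\pi G^2 M^3}{\hbar c^4}
```

so smaller black holes are hotter and radiate faster, and the M³ scaling is why only the ultralight end of the primordial black hole mass range would finish evaporating within a short cosmic timescale.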
Since we don't have a full model of quantum gravity, we don't know what would happen to ultralight black holes at the end, which is where this paper comes in.
As the author notes, basically there are three possible outcomes. The first is that the black hole radiates away completely. The black hole would end as a brief flash of high-energy particles. The second is that some mechanism prevents complete evaporation and the black hole reaches some kind of equilibrium state. The third option is similar to the second, but in this case, the equilibrium state causes the event horizon to disappear, leaving an exposed dense mass known as a naked singularity.
The author also notes that for the latter two outcomes, the objects might have a net electric charge.
For the evaporating case, the biggest unknown would be the timescale of evaporation. If PBHs are initially tiny they would evaporate quickly and add to the reheating effect of the early cosmos. If they evaporate slowly, we should be able to see their deaths as a flash of gamma rays. Neither of these effects has been observed, but it is possible that detectors such as Fermi's Large Area Telescope might catch one in the act.
For the latter two options, the author argues that equilibrium would be reached around the Planck scale. The remnants would be proton-sized but with much higher masses. Unfortunately, if these remnants are electrically neutral they would be impossible to detect. They wouldn't decay into other particles, nor would they be large enough to detect directly. This would match observation, but isn't a satisfying result.
The model is essentially unprovable. If the particles do have a charge, then we might detect their presence in the next generation of neutrino detectors.
The main thing about this work is that primordial black holes aren't entirely ruled out by current observations. Until we have better data, this model joins the theoretical pile of many other possibilities.
Originally published on https://phys.org.
COMING UP!!
(Wednesday, May 15th, 2024)
"DOES THE UNIVERSE EXPAND BY STRETCHING OR CREATING SPACE??"
53 notes · View notes
snickerdoodlles · 2 years ago
Text
pulling out a section from this post (a very basic breakdown of generative AI) for easier reading;
AO3 and Generative AI
There are unfortunately some massive misunderstandings in regards to AO3 being included in LLM training datasets. This post was semi-prompted by the ‘Knot in my name’ AO3 tag (for those of you who haven’t heard of it, it’s supposed to be a fandom anti-AI event where AO3 writers help “further pollute” AI with Omegaverse), so let’s take a moment to address AO3 in conjunction with AI. We’ll start with the biggest misconception:
1. AO3 wasn’t used to train generative AI.
Or at least not anymore than any other internet website. AO3 was not deliberately scraped to be used as LLM training data.
The AO3 moderators found traces of the Common Crawl web crawler in their servers. The Common Crawl is an open data repository of raw web page data, metadata extracts and text extracts collected from 10+ years of web crawling. Its collective data is measured in petabytes. (As a note, it also only features samples of the available pages on a given domain in its datasets, because its data is freely released under fair use and this is part of how they navigate copyright.) LLM developers use it and similar web crawls like Google’s C4 to bulk up the overall amount of pre-training data.
AO3 is big to an individual user, but it’s actually a small website when it comes to the amount of data used to pre-train LLMs. It’s also just a bad candidate for training data. As a comparison example, Wikipedia is often used as high quality training data because it’s a knowledge corpus and its moderators put a lot of work into maintaining a consistent quality across its web pages. AO3 is just a repository for all fanfic -- it doesn’t have any of that quality maintenance nor any knowledge density. Just in terms of practicality, even if people could get around the copyright issues, the sheer amount of work that would go into curating and labeling AO3’s data (or even a part of it) to make it useful for the fine-tuning stages most likely outstrips any potential usage.
Speaking of copyright, AO3 is a terrible candidate for training data just based on that. Even if people (incorrectly) think fanfic doesn’t hold copyright, there are plenty of books and texts that are public domain that can be found in online libraries that make for much better training data (or rather, there is a higher consistency in quality for them that would make them more appealing than fic for people specifically targeting written story data). And for any scrapers who don’t care about legalities or copyright, they’re going to target published works instead. Meta is in fact currently getting sued for including published books from a shadow library in its training data (note, this case is not in regards to any copyrighted material that might’ve been caught in the Common Crawl data, it’s regarding a book repository of published books that was scraped specifically to bring in some higher quality data for the first training stage). In a similar case, there’s an anonymous group suing Microsoft, GitHub, and OpenAI for training their LLMs on open source code.
Getting back to my point, AO3 is just not desirable training data. It’s not big enough to be worth scraping for pre-training data, it’s not curated enough to be considered for high quality data, and its data comes with copyright issues to boot. If LLM creators are saying there was no active pursuit in using AO3 to train generative AI, then there was (99% likelihood) no active pursuit in using AO3 to train generative AI.
AO3 has some preventative measures against being included in future Common Crawl datasets, which may or may not work, but there’s no way to remove any previously scraped data from that data corpus. And as a note for anyone locking their AO3 fics: that might potentially help against future AO3 scrapes, but it is rather moot if you post the same fic in full to other platforms like ffn, twitter, tumblr, etc. that have zero preventative measures against data scraping.
2. A/B/O is not polluting generative AI
…I’m going to be real, I have no idea what people expected to prove by asking AI to write Omegaverse fic. At the very least, people know A/B/O fics are not exclusive to AO3, right? The genre isn’t even exclusive to fandom -- it started in fandom, sure, but it expanded to general erotica years ago. It’s all over social media. It has multiple Wikipedia pages.
More to the point though, omegaverse would only be “polluting” AI if LLMs were spewing omegaverse concepts unprompted or like…associated knots with dicks more than rope or something. But people asking AI to write omegaverse and AI then writing omegaverse for them is just AI giving people exactly what they asked for. And…I hate to point this out, but LLMs writing for a niche the LLM trainers didn’t deliberately train the LLMs on is generally considered to be a good thing to the people who develop LLMs. The capability to fill niches developers didn’t even know existed increases LLMs’ marketability. If I were a betting man, what fandom probably saw as a GOTCHA moment, AI people probably saw as a good sign of LLMs’ future potential.
3. Individuals cannot affect LLM training datasets.
So back to the fandom event, with the stated goal of sabotaging AI scrapers via omegaverse fic.
…It’s not going to do anything.
Let’s add some numbers to this to help put things into perspective:
LLaMA’s 65 billion parameter model was trained on 1.4 trillion tokens. Of that 1.4 trillion tokens, about 67% of the training data was from the Common Crawl (roughly ~3 terabytes of data).
3 terabytes is 3,000,000,000 kilobytes.
That’s 3 billion kilobytes.
According to a news article I saw, there has been ~450k words total published for this campaign (*this was while it was going on, that number has probably changed, but you’re about to see why that still doesn’t matter). So, roughly speaking, ~450k of text is ~1,012 KB (I’m going off the document size of a plain text doc for a fic whose word count is ~440k).
So 1,012 out of 3,000,000,000.
Aka 0.000034%.
And that 0.000034% of 3 billion kilobytes is only 2/3s of the data for the first stage of training.
And not to beat a dead horse, but 0.000034% is still grossly overestimating the potential impact of posting A/B/O fic. Remember, only parts of AO3 would get scraped for Common Crawl datasets. Which are also huge! The October 2022 Common Crawl dataset is 380 tebibytes. The April 2021 dataset is 320 tebibytes. The 3 terabytes of Common Crawl data used to train LLaMA was randomly selected data that totaled to less than 1% of one full dataset. Not to mention, LLaMA’s training dataset is currently on the (much) larger size as compared to most LLM training datasets.
I also feel the need to point out again that AO3 is trying to prevent any Common Crawl scraping in the future, which would include protection for these new stories (several of which are also locked!).
Omegaverse just isn’t going to do anything to AI. Individual fics are going to do even less. Even if all of AO3 suddenly became omegaverse, it’s just not prominent enough to influence anything in regards to LLMs. You cannot affect training datasets in any meaningful way doing this. And while this might seem really disappointing, this is actually a good thing.
Remember that anything an individual can do to LLMs, the person you hate most can do the same. If it were possible for fandom to corrupt AI with omegaverse, fascists, bigots, and just straight up internet trolls could pollute it with hate speech and worse. AI already carries a lot of biases even while developers are actively trying to flatten that out, it’s good that organized groups can’t corrupt that deliberately.
101 notes · View notes
spacetimewithstuartgary · 5 months ago
Text
A less ‘clumpy,’ more complex universe?
Researchers combined cosmological data from two major surveys of the universe’s evolutionary history and found that it may have become ‘messier and complicated’ than expected in recent years.
Across cosmic history, powerful forces have acted on matter, reshaping the universe into an increasingly complex web of structures.
Now, new research led by Joshua Kim and Mathew Madhavacheril at the University of Pennsylvania and their collaborators at Lawrence Berkeley National Laboratory suggests our universe has become “messier and more complicated” over the roughly 13.8 billion years it’s been around, or rather, the distribution of matter over the years is less “clumpy” than expected.
“Our work cross-correlated two types of datasets from complementary, but very distinct, surveys,” says Madhavacheril, “and what we found was that for the most part, the story of structure formation is remarkably consistent with the predictions from Einstein’s gravity. We did see a hint for a small discrepancy in the amount of expected clumpiness in recent epochs, around four billion years ago, which could be interesting to pursue.”
The data, published in the Journal of Cosmology and Astroparticle Physics and the preprint server arXiv, comes from the Atacama Cosmology Telescope’s (ACT) final data release (DR6) and the Dark Energy Spectroscopic Instrument’s (DESI) Year 1. Madhavacheril says that pairing this data allowed the team to layer cosmic time in a way that resembles stacking transparencies of ancient cosmic photographs over recent ones, giving a multidimensional perspective of the cosmos.
“ACT, covering approximately 23% of the sky, paints a picture of the universe’s infancy by using a distant, faint light that’s been travelling since the Big Bang,” says first author of the paper Joshua Kim, a graduate researcher in the Madhavacheril Group. “Formally, this light is called the Cosmic Microwave Background (CMB), but we sometimes just call it the universe’s baby picture because it’s a snapshot of when it was around 380,000 years old.”
The path of this ancient light throughout evolutionary time, or as the universe has aged, has not been a straight one, Kim explains. Gravitational forces from large, dense, heavy structures like galaxy clusters in the cosmos have been warping the CMB, sort of like how an image is distorted as it travels through a pair of spectacles. This “gravitational lensing effect,” which was first predicted by Einstein more than 100 years ago, is how cosmologists make inferences about its properties like matter distribution and age.
DESI’s data, on the other hand, provides a more recent record of the cosmos. Based in the Kitt Peak National Observatory in Arizona and operated by the Lawrence Berkeley National Laboratory, DESI is mapping the universe’s three-dimensional structure by studying the distribution of millions of galaxies, particularly luminous red galaxies (LRGs). These galaxies act as cosmic landmarks, making it possible for scientists to trace how matter has spread out over billions of years.
“The LRGs from DESI are like a more recent picture of the universe, showing us how galaxies are distributed at varying distances,” Kim says, likening the data to the universe’s high school yearbook photo. “It’s a powerful way to see how structures have evolved from the CMB map to where galaxies stand today.”
By combining the lensing maps from ACT’s CMB data with DESI’s LRGs, the team created an unprecedented overlap between ancient and recent cosmic history, enabling them to compare early- and late-universe measurements directly. “This process is like a cosmic CT scan,” says Madhavacheril, “where we can look through different slices of cosmic history and track how matter clumped together at different epochs. It gives us a direct look into how the gravitational influence of matter changed over billions of years.”
In doing so they noticed a small discrepancy: the clumpiness, or density fluctuations, expected at later epochs didn’t quite match predictions. Sigma 8 (σ8), a metric that measures the amplitude of matter density fluctuations, is a key factor, Kim says, and lower values of σ8 indicate less clumping than expected, which could mean that cosmic structures haven’t evolved according to the predictions from early-universe models and suggest that the universe’s structural growth may have slowed in ways current models don’t fully explain.
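For readers unfamiliar with the metric: σ8 is conventionally defined as the RMS amplitude of matter density fluctuations smoothed over spheres of radius 8 h⁻¹ Mpc (this is the standard cosmology convention; the definition below is not spelled out in the article):

```latex
\sigma_8^2 = \frac{1}{2\pi^2}\int_0^\infty k^2\, P(k)\, W^2(kR)\, \mathrm{d}k,
\qquad
W(x) = \frac{3(\sin x - x\cos x)}{x^3},
\quad R = 8\,h^{-1}\,\mathrm{Mpc}
```

Here P(k) is the matter power spectrum and W is the Fourier transform of a spherical top-hat window, so a lower measured σ8 means matter is less clumped on those scales than the early-universe prediction implies.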
This slight disagreement with expectations, he explains, “isn’t strong enough to suggest new physics conclusively—it’s still possible that this deviation is purely by chance.”
If indeed the deviation is not by chance, some unaccounted-for physics could be at play, moderating how structures form and evolve over cosmic time. One hypothesis is that dark energy—the mysterious force thought to drive the universe’s accelerating expansion—could be influencing cosmic structure formation more than previously understood.
Moving forward, the team will work with more powerful telescopes, like the upcoming Simons Observatory, which will refine these measurements with higher precision, enabling a clearer view of cosmic structures.
IMAGE: The Atacama Cosmology Telescope measures the oldest light in the universe, known as the cosmic microwave background. Using those measurements, scientists can calculate the universe’s age. (Image: Debra Kellner)
5 notes · View notes
buysellram · 23 days ago
Text
KIOXIA Unveils 122.88TB LC9 Series NVMe SSD to Power Next-Gen AI Workloads
KIOXIA America, Inc. has announced the upcoming debut of its LC9 Series SSD, a new high-capacity enterprise solid-state drive (SSD) with 122.88 terabytes (TB) of storage, purpose-built for advanced AI applications. Featuring the company’s latest BiCS FLASH™ generation 8 3D QLC (quad-level cell) memory and a fast PCIe® 5.0 interface, this cutting-edge drive is designed to meet the exploding data demands of artificial intelligence and machine learning systems.
As enterprises scale up AI workloads—including training large language models (LLMs), handling massive datasets, and supporting vector database queries—the need for efficient, high-density storage becomes paramount. The LC9 SSD addresses these needs with a compact 2.5-inch form factor and dual-port capability, providing both high capacity and fault tolerance in mission-critical environments.
Form factor refers to the physical size and shape of the drive—in this case, 2.5 inches, which is standard for enterprise server deployments. PCIe (Peripheral Component Interconnect Express) is the fast data connection standard used to link components to a system’s motherboard. NVMe (Non-Volatile Memory Express) is the protocol used by modern SSDs to communicate quickly and efficiently over PCIe interfaces.
Accelerating AI with Storage Innovation
The LC9 Series SSD is designed with AI-specific use cases in mind—particularly generative AI, retrieval augmented generation (RAG), and vector database applications. Its high capacity enables data-intensive training and inference processes to operate without the bottlenecks of traditional storage.
It also complements KIOXIA’s AiSAQ™ technology, which improves RAG performance by storing vector elements on SSDs instead of relying solely on costly and limited DRAM. This shift enables greater scalability and lowers power consumption per TB at both the system and rack levels.
“AI workloads are pushing the boundaries of data storage,” said Neville Ichhaporia, Senior Vice President at KIOXIA America. “The new LC9 NVMe SSD can accelerate model training, inference, and RAG at scale.”
Industry Insight and Lifecycle Considerations
Gregory Wong, principal analyst at Forward Insights, commented:
“Advanced storage solutions such as KIOXIA’s LC9 Series SSD will be critical in supporting the growing computational needs of AI models, enabling greater efficiency and innovation.”
As organizations look to adopt next-generation SSDs like the LC9, many are also taking steps to responsibly manage legacy infrastructure. This includes efforts to sell SSD units from previous deployments—a common practice in enterprise IT to recover value, reduce e-waste, and meet sustainability goals. Secondary markets for enterprise SSDs remain active, especially with the ongoing demand for storage in distributed and hybrid cloud systems.
LC9 Series Key Features
122.88 TB capacity in a compact 2.5-inch form factor
PCIe 5.0 and NVMe 2.0 support for high-speed data access
Dual-port support for redundancy and multi-host connectivity
Built with 2 Tb QLC BiCS FLASH™ memory and CBA (CMOS Bonded to Array) technology
Endurance rating of 0.3 DWPD (Drive Writes Per Day) for enterprise workloads
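As a rough illustration of what a 0.3 DWPD rating means at this capacity (the 5-year period below is an assumption for illustration; the announcement does not state a warranty length):

```python
capacity_tb = 122.88  # LC9 capacity in TB
dwpd = 0.3            # drive writes per day, per the spec sheet
years = 5             # assumed warranty period (illustrative)

daily_writes_tb = dwpd * capacity_tb                     # TB written per day
lifetime_writes_pb = daily_writes_tb * 365 * years / 1000  # total, in PB
print(f"{daily_writes_tb:.1f} TB/day, ~{lifetime_writes_pb:.0f} PB over {years} years")
```

In other words, even a "low" endurance figure like 0.3 DWPD translates to tens of petabytes of total writes on a drive this large, which suits read-heavy AI inference and RAG workloads.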
The KIOXIA LC9 Series SSD will be showcased at an upcoming technology conference, where the company is expected to demonstrate its potential role in powering the next generation of AI-driven innovation.
2 notes · View notes
infomen · 26 days ago
Text
High-Performance 2U 4-Node Server – Hexadata HD-H252-Z10
The Hexadata HD-H252-Z10 Ver: Gen001 is a cutting-edge 2U high-density server featuring 4 rear-access nodes, each powered by AMD EPYC™ 7003 series processors. Designed for HPC, AI, and data analytics workloads, it offers 8-channel DDR4 memory across 32 DIMMs, 24 hot-swappable NVMe/SATA SSD bays, and 8 M.2 PCIe Gen3 x4 slots. With advanced features like OCP 3.0 readiness, Aspeed® AST2500 remote management, and a 2000W 80 PLUS Platinum redundant PSU, this server ensures optimal performance and scalability for modern data centers. For more details, visit the Hexadata Server Page.
0 notes
realcleverscience · 11 months ago
Text
AI & Data Centers vs Water + Energy
We all know that AI has issues, including energy and water consumption. But these fields are still young and lots of research is looking into making them more efficient. Remember, most technologies tend to suck when they first come out.
Deploying high-performance, energy-efficient AI
"You give up that kind of amazing general purpose use like when you're using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics, if you narrow your range, these smaller models can give you equivalent or better kind of capability, but at a tiny fraction of the energy consumption," says Ball."...
"I think liquid cooling is probably one of the most important low hanging fruit opportunities... So if you move a data center to a fully liquid cooled solution, this is an opportunity of around 30% of energy consumption, which is sort of a wow number.... There's more upfront costs, but actually it saves money in the long run... One of the other benefits of liquid cooling is we get out of the business of evaporating water for cooling...
The other opportunity you mentioned was density and bringing higher and higher density of computing has been the trend for decades. That is effectively what Moore's Law has been pushing us forward... [i.e., chips' rate of improvement outpaces the growth of their energy needs, meaning each year chips can do more calculations with less energy. - RCS] ... So the energy savings there is substantial, not just because those chips are very, very efficient, but because the amount of networking equipment and ancillary things around those systems is a lot less because you're using those resources more efficiently with those very high dense components"
New tools are available to help reduce the energy that AI models devour
"The trade-off for capping power is increasing task time — GPUs will take about 3 percent longer to complete a task, an increase Gadepally says is "barely noticeable" considering that models are often trained over days or even months... Side benefits have arisen, too. Since putting power constraints in place, the GPUs on LLSC supercomputers have been running about 30 degrees Fahrenheit cooler and at a more consistent temperature, reducing stress on the cooling system. Running the hardware cooler can potentially also increase reliability and service lifetime. They can now consider delaying the purchase of new hardware — reducing the center's "embodied carbon," or the emissions created through the manufacturing of equipment — until the efficiencies gained by using new hardware offset this aspect of the carbon footprint. They're also finding ways to cut down on cooling needs by strategically scheduling jobs to run at night and during the winter months."
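The trade-off described above is easy to model as energy = power × time. A sketch with an assumed cap at 70% of full GPU power (the article only gives the ~3% slowdown; the cap level here is an illustrative assumption):

```python
cap_fraction = 0.70  # assumed power cap relative to full power (illustrative)
slowdown = 1.03      # task takes ~3% longer, per the article

energy_ratio = cap_fraction * slowdown  # energy = power x time
print(f"energy used: {energy_ratio:.0%} of uncapped")  # -> 72% of uncapped
```

This is an idealized upper bound on the savings: real GPUs rarely draw peak power for an entire job, so measured savings are smaller, but the shape of the trade-off (a large power cut for a small time penalty) is the same.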
AI just got 100-fold more energy efficient
Northwestern University engineers have developed a new nanoelectronic device that can perform accurate machine-learning classification tasks in the most energy-efficient manner yet. Using 100-fold less energy than current technologies...
“Today, most sensors collect data and then send it to the cloud, where the analysis occurs on energy-hungry servers before the results are finally sent back to the user,” said Northwestern’s Mark C. Hersam, the study’s senior author. “This approach is incredibly expensive, consumes significant energy and adds a time delay...
For current silicon-based technologies to categorize data from large sets like ECGs, it takes more than 100 transistors — each requiring its own energy to run. But Northwestern’s nanoelectronic device can perform the same machine-learning classification with just two devices. By reducing the number of devices, the researchers drastically reduced power consumption and developed a much smaller device that can be integrated into a standard wearable gadget."
Researchers develop state-of-the-art device to make artificial intelligence more energy efficient
""This work is the first experimental demonstration of CRAM, where the data can be processed entirely within the memory array without the need to leave the grid where a computer stores information,"...
According to the new paper's authors, a CRAM-based machine learning inference accelerator is estimated to achieve an improvement on the order of 1,000. Another example showed an energy savings of 2,500 and 1,700 times compared to traditional methods"
5 notes · View notes
duckapus · 1 year ago
Text
Had an idea I'm gonna write at some point: Abyssal, Mira and Amy are working together to try and catch one of Worm's escaped Viruses (Amy's there because she volunteered to help test Susie's High-Code-Density stabilizer prototype, her and Mira used it as an excuse to go out to this one restaurant Mira likes in a Code-Level server, they saw Abyssal chasing the Virus past and Mira recognized its code so they jumped in to help), and it leads them into a Meme Cycle-based Universe that isn't one of the SMG Universes... but also isn't one of the ones you're thinking of (because the Celestial Twins would not let that happen and you know it).
Instead, it's the Ben 10 Universe of all things.
It turns out that somebody just trying to make some goofy fan animations accidentally stumbled upon the formula necessary for a Meme Cycle Mod. Ben's obviously the Avatar/Anchor equivalent (who else would it be?), and the SMG/MRU equivalents are a purple Galvanic Mechamorph named Rewind and somebody from a fanmade species (which I haven't come up with yet) named Zenyn. Rewind handles the Living Memes, Zenyn handles the dead.
Also, there's technically a USB Admin/Supervisor equivalent, but not in the way you'd expect. Basically, there wasn't an actual, physical External Drive in this case, so no Pod, and it turns out those are actually kind of important. So, the code that should have created a Pod but didn't have a physical template to work with instead connected itself to the Omnitrix. And the combination of the Pod code working best when it has a Program monitoring it and the Omnitrix having a rudimentary-by-Galvan-standards semi-sentient AI caused the Omnitrix itself to become the new Program. Sort of. The actual physical Omnitrix still stays on Ben's wrist at all times, but now its computer has opinions and can manifest the Hologram Body typical of Programs separate from the base model while somehow still inhabiting both, though this initially takes a lot out of her before she gets more used to it, and the further away from Ben she is the weaker her connection to the Omnitrix becomes (though she can get pretty damn far before the effect is noticeable) and she absolutely hates it. She goes by Trixie because of course she does, but her proper designation is still Omnitrix.
11 notes · View notes
chemicalmarketwatch-sp · 8 months ago
Text
Exploring the Growing $21.3 Billion Data Center Liquid Cooling Market: Trends and Opportunities
In an era marked by rapid digital expansion, data centers have become essential infrastructures supporting the growing demands for data processing and storage. However, these facilities face a significant challenge: maintaining optimal operating temperatures for their equipment. Traditional air-cooling methods are becoming increasingly inadequate as server densities rise and heat generation intensifies. Liquid cooling is emerging as a transformative solution that addresses these challenges and is set to redefine the cooling landscape for data centers.
What is Liquid Cooling?
Liquid cooling systems utilize liquids to transfer heat away from critical components within data centers. Unlike conventional air cooling, which relies on air to dissipate heat, liquid cooling is much more efficient. By circulating a cooling fluid—commonly water or specialized refrigerants—through heat exchangers and directly to the heat sources, data centers can maintain lower temperatures, improving overall performance.
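The efficiency gap comes down to basic thermodynamics: a given volume of water can absorb far more heat than the same volume of air. A rough back-of-envelope comparison, using textbook fluid properties at room temperature (illustrative values, not any vendor's data):

```python
# Compare how much heat a unit volume of coolant can carry per degree of
# temperature rise. Properties are approximate values at ~25 C.
AIR_DENSITY = 1.2        # kg/m^3
AIR_CP = 1005.0          # J/(kg*K), specific heat of air
WATER_DENSITY = 998.0    # kg/m^3
WATER_CP = 4186.0        # J/(kg*K), specific heat of water

def volumetric_heat_capacity(density, cp):
    """Heat absorbed per cubic meter of fluid per kelvin of rise, in J/(m^3*K)."""
    return density * cp

ratio = (volumetric_heat_capacity(WATER_DENSITY, WATER_CP)
         / volumetric_heat_capacity(AIR_DENSITY, AIR_CP))
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
```

The ratio works out to several thousand, which is why a modest coolant loop can replace very large volumes of moving air.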
Market Growth and Trends
The data center liquid cooling market is on an impressive growth trajectory. According to industry analysis, this market is projected to grow to USD 21.3 billion by 2030, achieving a remarkable compound annual growth rate (CAGR) of 27.6%. This upward trend is fueled by several key factors, including the increasing demand for high-performance computing (HPC), advancements in artificial intelligence (AI), and a growing emphasis on energy-efficient operations.
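As a quick sanity check, compound growth like this is easy to model. The base-year value below is an assumption for illustration only (the article does not state one); the 27.6% CAGR and the ~$21.3 billion 2030 target come from the figures above.

```python
def project_market(base_value_usd_bn, cagr, years):
    """Compound a base market size forward at a constant annual growth rate."""
    return base_value_usd_bn * (1 + cagr) ** years

# Assumed starting point: a ~$5.0B market six years before 2030. At the
# cited 27.6% CAGR this lands close to the projected ~$21.3B figure.
projected = project_market(5.0, 0.276, 6)
print(f"Projected 2030 market size: ~${projected:.1f}B")
```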
Key Factors Driving Adoption
1. Rising Heat Density
The trend toward higher power density in server configurations poses a significant challenge for cooling systems. With modern servers generating more heat than ever, traditional air cooling methods are struggling to keep pace. Liquid cooling effectively addresses this issue, enabling higher density server deployments without sacrificing efficiency.
2. Energy Efficiency Improvements
A standout advantage of liquid cooling systems is their energy efficiency. Studies indicate that these systems can reduce energy consumption by up to 50% compared to air cooling. This not only lowers operational costs for data center operators but also supports sustainability initiatives aimed at reducing energy consumption and carbon emissions.
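A back-of-envelope sketch makes the stakes concrete. All inputs below are assumptions chosen for illustration (facility size, cooling overhead, and electricity price are not from the article); only the "up to 50%" reduction figure comes from the text above.

```python
# Estimate annual savings when liquid cooling halves cooling energy.
IT_LOAD_KW = 1000.0            # assumed 1 MW of IT load
AIR_COOLING_OVERHEAD = 0.40    # assumed: cooling draws 40% of IT power with air
LIQUID_REDUCTION = 0.50        # the ~50% cut cited above
PRICE_PER_KWH = 0.10           # assumed electricity price, USD
HOURS_PER_YEAR = 8760

cooling_kw_saved = IT_LOAD_KW * AIR_COOLING_OVERHEAD * LIQUID_REDUCTION
annual_usd_saved = cooling_kw_saved * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"~{cooling_kw_saved:.0f} kW of cooling load avoided, "
      f"~${annual_usd_saved:,.0f}/year at assumed rates")
```

Even under these conservative assumptions, the avoided cooling load compounds into six-figure annual savings for a single megawatt-scale facility.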
3. Space Efficiency
Data center operators often grapple with limited space, making it crucial to optimize cooling solutions. Liquid cooling systems typically require less physical space than air-cooled alternatives. This efficiency allows operators to enhance server capacity and performance without the need for additional physical expansion.
4. Technological Innovations
The development of advanced cooling technologies, such as direct-to-chip cooling and immersion cooling, is further propelling the effectiveness of liquid cooling solutions. Direct-to-chip cooling channels coolant directly to the components generating heat, while immersion cooling involves submerging entire server racks in non-conductive liquids, both of which push thermal management to new heights.
Overcoming Challenges
While the benefits of liquid cooling are compelling, the transition to this technology presents certain challenges. Initial installation costs can be significant, and some operators may be hesitant due to concerns regarding complexity and ongoing maintenance. However, as liquid cooling technology advances and adoption rates increase, it is expected that costs will decrease, making it a more accessible option for a wider range of data center operators.
The Competitive Landscape
The data center liquid cooling market is home to several key players, including established companies like Schneider Electric, Vertiv, and Asetek, as well as innovative startups committed to developing cutting-edge thermal management solutions. These organizations are actively investing in research and development to refine the performance and reliability of liquid cooling systems, ensuring they meet the evolving needs of data center operators.
The outlook for the data center liquid cooling market is promising. As organizations prioritize energy efficiency and sustainability in their operations, liquid cooling is likely to become a standard practice. The integration of AI and machine learning into cooling systems will further enhance performance, enabling dynamic adjustments based on real-time thermal demands.
The evolution of liquid cooling in data centers represents a crucial shift toward more efficient, sustainable, and high-performing computing environments. As the demand for advanced cooling solutions rises in response to technological advancements, liquid cooling is not merely an option—it is an essential element of the future data center landscape. By embracing this innovative approach, organizations can gain a significant competitive advantage in an increasingly digital world.
2 notes · View notes
perennialwitness · 1 year ago
Text
The Real OG(an excerpt)
Please say the following aloud:
When you’re here, 
You’re family. 
If your mind made the connection to Olive Garden just now then we probably come from a similar background. Semi-suburban– too far to take public transit into the city, close enough to drive. Forty-five minutes, with no traffic. And we all know there’s no such thing as ‘no traffic’, only varying levels of density. The freeways more like rivers than roads, their red halogen flood line rising and falling with the moon and the weather. Kept fed by a sprawl of Commuter Towns, their farthest edges in constant creeping development.
I grew up in one of these places, vast stretches of single-family homes connected by high-speed stroads. A town with clearly delineated lines between the Blacks and the Whites, everyone else fell somewhere in between. Then there were Subsections within that for the rich (meaning they more than likely owned their home) and the poor (straight down past section 8 and into the dusty outskirts). Streets would change suddenly from one to the next. The asphalt under your feet rapidly degrading as you made your way toward the Blacker, Poorer side of town. It mattered that you knew this. It was a way to communicate things oftentimes hard to say aloud. For instance, I lived on the poor Black side and went to school on the poor White side. Anyway,
Growing up, family events that warranted a drive to the city were rare. If it was your birthday, graduation, funeral, divorce– didn’t matter, there were only a handful of places to celebrate, all of them inhabiting the same mile-long shopping plaza. There was: Applebee’s, famous for their happy hour specials. Chevy’s, Tex-Mex where they make the tortillas out in the middle of the restaurant, which had the appearance of a beach cabana. Sizzler or Red Lobster if you were feeling extra spendy (dim lights, lots of wood grain, for date nights and so forth). And then there was the Olive Garden, which was reserved for nights when you really wanted to fill up.
“Ain’t no bigger bang for your buck than Olive Garden on a coupon,” my step-dad would say, then he’d rap his overstuffed wallet against the table and let out the hoarse rattle that was his laugh. He was right, if you were smart about it you could make one dinner last three days easy.
Truth be told the food is barely food, classic recipes trimmed down to the bare necessities as a way of cutting costs and increasing turnover. Heapings upon heapings of pasta swimming in sauces brewed by the vat. Bread sticks, soggy with butter and oil, coming out in the dozens from the kitchen like clockwork. Servers in a mad dash to ensure every table’s basket full, lest they screech about meal comps, how they were advertised endless breadsticks and how they would sue if they weren’t offered compensation.
Bigger bang, bigger buck. 
To their credit the owners of the Olive Garden had tried to keep the place classy. The walls were painted to look like the cracked plaster of a Mediterranean villa, there were “stone” columns wrapped with vine decorations, arranged by someone unconcerned with structural support. Italian-sounding string accompaniments droned over the PA to complete the immersion. It was, all things considered, a nice place to bring the kids. And my parents, swept up in the fantasy, would drink wine there, instead of their usual Whiskeys and Vodka Sodas. They’d pretend they were in love, and we-- the kids I mean-- we tried our best to behave like “family”.
In my adulthood I avoided these places. Not because I cared about the quality, I don’t have qualms with cheap bad food. My aversion was psychological. These chains represented a place and a lifestyle that I couldn’t return to. The make-believe of it all. The gamified domesticity. It isn’t simple to correct your vision, removing the blinders is painful, seeing the truth of things deteriorates the sense of self. There’s just too much comfort in familiarity. So easy to lull oneself back to sleep amongst the herd, so more than anything else what I feared was regression.
6 notes · View notes
govindhtech · 11 months ago
Text
Dell PowerEdge XE9680L Cools and Powers Dell AI Factory
Tumblr media
When It Comes to Cooling and Powering Your AI Factory, Think Dell. As part of the Dell AI Factory initiative, the company is thrilled to introduce a variety of new server power and cooling capabilities.
Dell PowerEdge XE9680L Server
As part of the Dell AI Factory, they’re showcasing new server capabilities after a fantastic Dell Technologies World event. These developments, which offer a thorough, scalable, and integrated method of imaplementing AI solutions, have the potential to completely transform the way businesses use artificial intelligence.
These new capabilities, which begin with the PowerEdge XE9680L with support for NVIDIA B200 HGX 8-way NVLink GPUs (graphics processing units), promise unmatched AI performance, power management, and cooling. This offer doubles I/O throughput and supports up to 72 GPUs per rack at 107 kW, pushing the envelope of what’s feasible for AI-driven operations.
Integrating AI with Your Data
In order to fully utilise AI, customers must integrate it with their data. However, how can they do this in a more sustainable way? The solution is putting in place state-of-the-art infrastructure that is tailored to meet the demands of AI workloads as efficiently as possible. Dell PowerEdge servers and software are built with Smart Power and Cooling to help IT operations make the most of their power and thermal budgets.
Astute Cooling
Effective power management is but one aspect of the problem. Recall that cooling ability is also essential. At the highest workloads, Dell’s rack-scale system, which consists of eight XE9680 H100 servers in a rack with an integrated rear door heat exchanger, runs at 70 kW or less, as disclosed at Dell Technologies World 2024. In addition to ensuring that component thermal and reliability standards are satisfied, Dell innovates to reduce the amount of power required to keep systems cool.
Together, these significant hardware advancements (taller server chassis, rack-level integrated cooling, and the growth of liquid cooling, including liquid-assisted air cooling, or LAAC) improve heat dissipation, maximise airflow, and enable greater compute densities. An effective fan power management technology is one example of how to maximise airflow: it uses an AI-based fuzzy logic controller for closed-loop thermal management, which directly lowers operating costs.
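To illustrate what closed-loop fan control means in practice, here is a deliberately simplified proportional controller sketch. Dell's actual controller is described as AI-based fuzzy logic, which this stand-in does not reproduce; all thresholds and gains below are made-up illustrative values.

```python
def fan_duty(inlet_temp_c, target_c=35.0, min_duty=0.2, max_duty=1.0, gain=0.05):
    """Simplified proportional closed-loop control: raise fan duty cycle as
    measured temperature exceeds the target, clamped to a safe range.
    All parameters here are illustrative, not Dell's actual tuning."""
    error = inlet_temp_c - target_c
    duty = min_duty + gain * max(error, 0.0)   # only react above the target
    return min(max_duty, max(min_duty, duty))  # clamp to [min_duty, max_duty]

# Sweep a few sensor readings to show the control curve.
for t in (30, 35, 40, 50, 60):
    print(f"{t} C -> fan duty {fan_duty(t):.2f}")
```

The point of the closed loop is that fans only spin as fast as the thermal error demands, instead of running at a fixed worst-case speed, which is where the operating-cost savings come from.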
Constructed to Be Reliable
Dependability and the data centre are clearly at the forefront of Dell’s solution development. All thorough testing and validation procedures, which guarantee that their systems can endure the most demanding situations, are clear examples of this.
A recent study brought attention to problems with data centre overheating, highlighting how crucial reliability is to data centre operations. A Supermicro SYS‑621C-TN12R server failed in high-temperature test situations; however, a Dell PowerEdge HS5620 server continued to run an intense workload without any component warnings or failures.
Announcing AI Factory Rack-Scale Architecture on the Dell PowerEdge XE9680L
Dell announced a factory integrated rack-scale design as well as the liquid-cooled replacement for the Dell PowerEdge XE9680.
Since the launch of the PowerEdge product line thirty years ago, the GPU-powered PowerEdge XE9680 has been one of Dell’s fastest-growing products. Dell also announced an intriguing new addition to the PowerEdge XE product family as part of its next announcement for cloud service providers and near-edge deployments.
 AI computing has advanced significantly with the Direct Liquid Cooled (DLC) Dell PowerEdge XE9680L with NVIDIA Blackwell Tensor Core GPUs. This server, shown at Dell Technologies World 2024 as part of the Dell AI Factory with NVIDIA, pushes the limits of performance, GPU density per rack, and scalability for AI workloads.
The XE9680L’s clever cooling system and cutting-edge rack-scale architecture are its key components. Why it matters is as follows:
GPU Density per Rack, Low Power Consumption, and Outstanding Efficiency
The most rigorous large language model (LLM) training and large-scale AI inferencing environments where GPU density per rack is crucial are intended for the XE9680L. It provides one of the greatest density x86 server solutions available in the industry for the next-generation NVIDIA HGX B200 with a small 4U form factor.
Efficient DLC smart cooling is utilised by the XE9680L for both CPUs and GPUs. This innovative technique maximises compute power while retaining thermal efficiency, enabling a more rack-dense 4U architecture. The XE9680L offers remarkable performance for training large language models (LLMs) and other AI tasks because it is tailored for the upcoming NVIDIA HGX B200.
More Capability for PCIe 5 Expansion
With its standard 12 x PCIe 5.0 full-height, half-length slots, the XE9680L offers 20% more FHHL PCIe 5.0 density to its clients. This translates to two times the capability for high-speed input/output for the North/South AI fabric, direct storage connectivity for GPUs from Dell PowerScale, and smooth accelerator integration.
The XE9680L’s PCIe capacity enables smooth data flow whether you’re managing data-intensive jobs, implementing deep learning models, or running simulations.
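For context on what those PCIe 5.0 slots provide, per-direction bandwidth can be estimated from the PCIe 5.0 signaling rate of 32 GT/s per lane with 128b/130b encoding. This ignores packet and protocol overhead, so sustained real-world throughput is somewhat lower than the figure computed here.

```python
def pcie5_bandwidth_gbs(lanes):
    """Approximate raw PCIe 5.0 bandwidth per direction in GB/s:
    32 GT/s per lane, 128b/130b line encoding, no protocol overhead."""
    gt_per_s = 32
    encoding_efficiency = 128 / 130
    bits_per_s = lanes * gt_per_s * 1e9 * encoding_efficiency
    return bits_per_s / 8 / 1e9

print(f"x16 slot: ~{pcie5_bandwidth_gbs(16):.1f} GB/s per direction")
print(f"x4 slot:  ~{pcie5_bandwidth_gbs(4):.1f} GB/s per direction")
```

A full x16 slot lands near 63 GB/s each way, which is the kind of headroom the north/south AI fabric and GPU-direct storage paths described above depend on.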
Rack-scale factory integration and a turn-key solution
Dell is dedicated to quality over the XE9680L’s whole lifecycle. Partner components are seamlessly linked with rack-scale factory integration, guaranteeing a dependable and effective deployment procedure.
Bid farewell to deployment difficulties and welcome to faster time-to-value for accelerated AI workloads. From PDU sizing to rack, stack, and cabling, the XE9680L offers a turn-key solution.
With the Dell PowerEdge XE9680L, you can scale up to 72 Blackwell GPUs per 52 RU rack or 64 GPUs per 48 RU rack.
With pre-validated rack infrastructure solutions, increasing power, cooling, and  AI fabric can be done without guesswork.
AI factory solutions on a rack size, factory integrated, and provided with “one call” support and professional deployment services for your data centre or colocation facility floor.
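The two rack options above work out to similar GPU density per rack unit. A quick calculation, reusing the ~107 kW rack power figure cited earlier for the 72-GPU B200 configuration (the per-GPU number is a simple average, not a measured draw):

```python
# GPU density for the two rack configurations listed above.
configs = {"52 RU rack": (72, 52), "48 RU rack": (64, 48)}
for name, (gpus, rack_units) in configs.items():
    print(f"{name}: {gpus / rack_units:.2f} GPUs per RU")

# Average power budget per GPU for the 72-GPU, ~107 kW configuration.
print(f"~{107_000 / 72:.0f} W of rack power per GPU")
```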
Dell PowerEdge XE9680L
The PowerEdge XE9680L epitomises high-performance computing innovation and efficiency. This server delivers unmatched performance, scalability, and dependability for modern data centres and companies. Let’s explore the PowerEdge XE9680L’s many advantages for computing.
Superior performance and scalability
Enhanced Processing: Advanced processing powers the PowerEdge XE9680L. This server performs well for many applications thanks to the latest Intel Xeon Scalable CPUs. The XE9680L can handle complicated simulations, big databases, and high-volume transactional applications.
Flexibility in Memory and Storage: Flexible memory and storage options make the PowerEdge XE9680L stand out. This server may be customised for your organisation with up to 6TB of DDR4 memory and NVMe SSD and HDD storage. This versatility lets you optimise your server’s performance for any demand, from fast data access to enormous storage.
Strong Security and Management
Complete Security: Today’s digital world demands security. The PowerEdge XE9680L protects data and system integrity with extensive security features. Secure Boot, BIOS Recovery, and TPM 2.0 prevent cyberattacks. Our server’s built-in encryption safeguards your data at rest and in transit, following industry standards.
Advanced Management Tools
Maintaining performance and minimising downtime requires efficient IT infrastructure management. Advanced management features ease administration and boost operating efficiency on the PowerEdge XE9680L. Dell EMC OpenManage offers simple server monitoring, management, and optimisation solutions. With iDRAC9 and Quick Sync 2, you can install, update, and troubleshoot servers remotely, decreasing on-site intervention and speeding response times.
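iDRAC9's remote management is exposed programmatically through the DMTF Redfish REST API. The sketch below only parses the general shape of a Redfish thermal payload; the sample JSON is illustrative, not captured from real hardware, and a live query would target an endpoint such as `/redfish/v1/Chassis/System.Embedded.1/Thermal` on the iDRAC with proper authentication.

```python
import json

def hottest_sensor(thermal_json):
    """Return (name, reading_c) for the warmest reporting temperature sensor
    in a Redfish Thermal resource payload."""
    temps = thermal_json.get("Temperatures", [])
    readings = [(t["Name"], t["ReadingCelsius"]) for t in temps
                if t.get("ReadingCelsius") is not None]
    return max(readings, key=lambda nr: nr[1])

# Illustrative payload mimicking the Redfish Thermal schema (made-up values).
sample = json.loads("""{
  "Temperatures": [
    {"Name": "System Board Inlet Temp", "ReadingCelsius": 24},
    {"Name": "CPU1 Temp", "ReadingCelsius": 61},
    {"Name": "GPU1 Temp", "ReadingCelsius": 72}
  ]
}""")
print(hottest_sensor(sample))  # → ('GPU1 Temp', 72)
```

Polling sensors this way is what makes remote monitoring and automated thermal alerting possible without on-site intervention.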
Excellent Reliability and Support
More efficient cooling and power
For optimal performance, high-performance servers need cooling and power control. The PowerEdge XE9680L’s improved cooling solutions dissipate heat efficiently even under intense loads. Airflow is directed precisely to prevent hotspots and maintain stable temperatures with multi-vector cooling. Redundant power supplies and sophisticated power management optimise the server’s power efficiency, minimising energy consumption and running expenses.
A proactive support service
The PowerEdge XE9680L has proactive support from Dell to maximise uptime and assure continued operation. Expert technicians, automatic issue identification, and predictive analytics are available 24/7 in ProSupport Plus to prevent and resolve issues before they affect your operations. This proactive assistance reduces disruptions and improves IT infrastructure stability, letting you focus on your core business.
Innovation in Modern Data Centre Design Scalable Architecture
The PowerEdge XE9680L’s scalable architecture meets modern data centre needs. You can extend your infrastructure as your business grows with its modular architecture and easy extension and customisation. Whether you need more storage, processing power, or new technologies, the XE9680L can adapt easily.
Ideal for virtualisation and clouds
Cloud computing and virtualisation are essential to modern IT strategies. Virtualisation support and cloud platform integration make the PowerEdge XE9680L ideal for these environments. VMware, Microsoft Hyper-V, and OpenStack interoperability lets you maximise resource utilisation and operational efficiency with your virtualised infrastructure.
Conclusion
Finally, the PowerEdge XE9680L is a powerful server with flexible memory and storage, strong security, and easy management. Modern data centres and organisations looking to improve their IT infrastructure will love its innovative design, high reliability, and proactive support. The PowerEdge XE9680L gives your company the tools to develop, innovate, and succeed in a digital environment.
Read more on govindhtech.com
2 notes · View notes