#data architecture framework
Text
Data Architecture: Building a Scalable Data Strategy
In an environment where data moves faster than ever, businesses need more than a tool – they need a structure. Data architecture provides the framework for understanding how data moves, grows and creates value across an organization. It’s the underlying structure for smarter decisions, seamless operations, and confident scaling read more here…
#data architecture#what is data architecture#data architecture principles#data architecture framework
0 notes
Text
Explore These Exciting DSU Micro Project Ideas
Explore These Exciting DSU Micro Project Ideas Are you a student looking for an interesting micro project to work on? Developing small, self-contained projects is a great way to build your skills and showcase your abilities. At the Distributed Systems University (DSU), we offer a wide range of micro project topics that cover a variety of domains. In this blog post, we’ll explore some exciting DSU…
#3D modeling#agricultural domain knowledge#Android#API design#AR frameworks (ARKit#ARCore)#backend development#best micro project topics#BLOCKCHAIN#Blockchain architecture#Blockchain development#cloud functions#cloud integration#Computer vision#Cryptocurrency protocols#CRYPTOGRAPHY#CSS#data analysis#Data Mining#Data preprocessing#data structure micro project topics#Data Visualization#database integration#decentralized applications (dApps)#decentralized identity protocols#DEEP LEARNING#dialogue management#Distributed systems architecture#distributed systems design#dsu in project management
0 notes
Text
Tech Breakdown: What Is a SuperNIC? Get the Inside Scoop!

Generative AI is the most recent development in the rapidly evolving digital realm. One of the revolutionary inventions that makes it feasible goes by a relatively new name: the SuperNIC.
What Is a SuperNIC?
SuperNICs are a new family of network accelerators created to accelerate hyperscale AI workloads on Ethernet-based clouds. Using remote direct memory access (RDMA) over Converged Ethernet (RoCE), they deliver extremely rapid network connectivity for GPU-to-GPU communication, with throughput of up to 400Gb/s.
SuperNICs incorporate the following special qualities:
High-speed packet reordering, which ensures that data packets are received and processed in the same sequence in which they were originally transmitted, preserving the sequential integrity of the data flow.
Advanced congestion management, which uses real-time telemetry data and network-aware algorithms to regulate and prevent congestion in AI networks.
Programmable computation on the input/output (I/O) path, which lets AI cloud data centers adapt and extend their network architecture.
A low-profile, power-efficient design that handles AI workloads effectively within power-constrained budgets.
Full-stack AI optimization, encompassing system software, communication libraries, application frameworks, networking, computing, and storage.
Recently, NVIDIA revealed the first SuperNIC in the world designed specifically for AI computing, built on the BlueField-3 networking architecture. It is a component of the NVIDIA Spectrum-X platform, which allows for smooth integration with the Ethernet switch system Spectrum-4.
The NVIDIA Spectrum-4 switch system and BlueField-3 SuperNIC work together to provide an accelerated computing fabric that is optimized for AI applications. Spectrum-X outperforms conventional Ethernet settings by continuously delivering high levels of network efficiency.
Yael Shenhav, vice president of DPU and NIC products at NVIDIA, stated, “In a world where AI is driving the next wave of technological innovation, the BlueField-3 SuperNIC is a vital cog in the machinery.” “SuperNICs are essential components for enabling the future of AI computing because they guarantee that your AI workloads are executed with efficiency and speed.”
The Changing Environment of Networking and AI
Large language models and generative AI are causing a seismic change in the area of artificial intelligence. These potent technologies have opened up new avenues and made it possible for computers to perform new functions.
GPU-accelerated computing plays a critical role in the development of AI by processing massive amounts of data, training huge AI models, and enabling real-time inference. While this increased computing capacity has created opportunities, Ethernet cloud networks have also been put to the test.
Traditional Ethernet, the internet's foundational technology, was designed to link loosely coupled applications and provide broad compatibility. It was never intended for the complex computational requirements of contemporary AI workloads: rapid transfer of large amounts of data, tightly coupled parallel processing, and unusual communication patterns, all of which call for optimal network connectivity.
Basic network interface cards (NICs) were created with interoperability, universal data transfer, and general-purpose computing in mind. They were never intended to handle the special difficulties brought on by the high processing demands of AI applications.
Standard NICs lack the characteristics and capabilities needed for effective data transmission, low latency, and the predictable performance that AI activities require. SuperNICs, in contrast, are designed specifically for contemporary AI workloads.
Benefits of SuperNICs in AI Computing Environments
Data processing units (DPUs) offer high-throughput, low-latency network connectivity and many other sophisticated capabilities. Since their introduction in 2020, DPUs have become increasingly common in cloud computing, largely because of their ability to isolate, accelerate, and offload computation from data center hardware.
SuperNICs and DPUs share many characteristics and functions; however, SuperNICs are specifically designed to accelerate networking for artificial intelligence.
The performance of distributed AI training and inference communication flows is highly dependent on the availability of network capacity. Known for their elegant designs, SuperNICs scale better than DPUs and may provide an astounding 400Gb/s of network bandwidth per GPU.
When GPUs and SuperNICs are matched 1:1 in a system, AI workload efficiency may be greatly increased, resulting in higher productivity and better business outcomes.
SuperNICs are intended solely to accelerate networking for AI cloud computing. As a result, they use less processing power than a DPU, which requires substantial processing power to offload applications from a host CPU.
The reduced computation requirements also translate into lower power usage, which is especially important in systems containing up to eight SuperNICs.
One of the SuperNIC’s other unique selling points is its specialized AI networking capabilities. It provides optimal congestion control, adaptive routing, and out-of-order packet handling when tightly connected with an AI-optimized NVIDIA Spectrum-4 switch. Ethernet AI cloud settings are accelerated by these cutting-edge technologies.
Transforming cloud computing with AI
The NVIDIA BlueField-3 SuperNIC is essential for AI-ready infrastructure because of its many advantages.
Maximum efficiency for AI workloads: The BlueField-3 SuperNIC is perfect for AI workloads since it was designed specifically for network-intensive, massively parallel computing. It guarantees bottleneck-free, efficient operation of AI activities.
Performance that is consistent and predictable: The BlueField-3 SuperNIC makes sure that each job and tenant in multi-tenant data centers, where many jobs are executed concurrently, is isolated, predictable, and unaffected by other network operations.
Secure multi-tenant cloud infrastructure: Data centers that handle sensitive data place a high premium on security. High security levels are maintained by the BlueField-3 SuperNIC, allowing different tenants to cohabit with separate data and processing.
Broad network infrastructure: The BlueField-3 SuperNIC is very versatile and can be easily adjusted to meet a wide range of different network infrastructure requirements.
Wide compatibility with server manufacturers: The BlueField-3 SuperNIC integrates easily with the majority of enterprise-class servers without using an excessive amount of power in data centers.
1 note
·
View note
Text
struggling to reconcile my dislike of the use of “choice” in relation to transgenderism. sex assignment itself is not a choice and I don’t find it meaningful or helpful to think I “chose” to be transgender. in fact there were many things I “chose” to do prior to transitioning to make this feeling go away and it did not. Choice is further wrapped up in intentionally de-politicised ideas about social action and agency, constantly positioned in opposition to “structure” or “social pressure” or what have you. “Choice” is what happens only in the absence of domination, it is the expression of the “individual” trapped within us all. What this leaves you with is a subject who appears to rise above the power of history, making decisions ‘of his own free will’ in spite of all this violence as a result of, um, well that’s not important! Let’s not look at the law or the state or history to see where these ideas of personal individual freedoms come from or how they are themselves enforced through violence. It’s just an individual acting on his desires! To “choose to be trans” in popular consciousness means to be given the privilege of being free from patriarchal social pressures. And this is a line terfs often use - trans people are reinforcing patriarchy by deluding ourselves into thinking we can “simply choose” to be another gender. I think committing to the idea of choice as a concept and all its attendant ideological baggage (overwhelmingly structured by bourgeois legal frameworks in the popular imaginary) forces you into some deeply flawed analyses of power and domination.
And I likewise hate that the other dominant framework is “born this way/born in the wrong body” because of how it naturalises the very political and violent nature of sex assignment and its embeddedness within state census data, administrative architecture, the pathologisation of sex and desire (all of which are not natural or eternal), and so on. furthermore I deeply respect the position other trans people have when they say that they chose to be transgender - outside of conversations of individual validity, I think that is a politically useful and powerful way to position yourself. Even if we were to accept that being transgender is fully a choice, people would still do it, because being trans is not disgusting or shameful. I am not a sick individual, or a tragedy, or a danger to others, I am transgender and that is an incredibly meaningful and fulfilling part of my life. To frame this as a sexual perversion or life-long condition means reinforcing the idea that transgenderism is a shameful deformity (we have much in common with our disabled & intersex comrades in this regard), that the cissexual body is the exclusive site of beauty and authenticity.
And so this is where I find the idea of autonomy much more useful - while ‘choice’ is situated as a thing that individuals do, autonomy is power that is granted to you. I can’t meaningfully demand choice as a political goal, but I can demand autonomy. I don’t want choice, I want the autonomy to act on my desires, and the way that will happen is through the state provision of free hrt, surgery, name and gender marker changes, and so on. Autonomy feels like a much more productive articulation of “choice” because it necessitates that we think about who and what grants autonomy, for what purposes, in which contexts. Who gives a shit about choices! Transgenderism is not a social position an individual can have in society, it is produced through cissexualism, through state and medical sex assignment, through coercion and pathologisation and violence - all of which can be changed.
As a direct comparison, I don’t think people should be given the “choice” to have an abortion, but the autonomy to do so - sure you can choose to get one, but unless there is the medical, financial, and social infrastructure available to you to act on that decision, then that is not a meaningful choice you can “make.” Abortion being legal (and therefore an action you are granted the ‘choice’ to take) doesn’t mean it is actually realisable as a decision, it just means that whoever already has the power & resources to act on that legality will, and those that don’t, won’t. Who decides which people have those resources and which don’t? Well let’s not worry about that, the important thing is that people have choices!
#even old new york was once new amsterdam#also thinking abt indigenous interactions with settler law and the use of ‘sovereignty’ as an articulation of indigenous rights & power#I’m less familiar with those histories (& mostly limited to the Canadian context) so I feel less sure making those comparisons#but like I remember reading an article in undergrad about the difference between food ‘choice’ & food ‘sovereignty’#the former being limited to what options are provided & the latter being the granting of power to decide on those options#and both of these come from the state! I think being given the choice and given the autonomy to do something are different#but they both are granted by the state & are similarly political. Choice just hides that fact through branding & liberalism & etc
407 notes
·
View notes
Text
How Obama Transformed the U.S. Intelligence System into an Untouchable Force
The sprawling U.S. intelligence apparatus wasn’t Barack Obama’s invention, it emerged in the wake of 9/11 under George W. Bush, who laid the groundwork with the Patriot Act and a retooled security state. But Obama didn’t just inherit this system; he refined it, expanded it, and entrenched it so deeply into the fabric of American governance that it became nearly impossible for anyone, even a president, to rein it in. His tenure marked a pivotal shift, normalizing a decentralized, privatized, and largely unaccountable intelligence leviathan. Here’s how it unfolded.
The story begins in the early 2000s, when the Bush administration responded to the September 11 attacks with sweeping surveillance powers and a new security architecture. The Patriot Act of 2001 granted agencies like the NSA and FBI unprecedented authority to monitor communications, often sidestepping traditional oversight. By the time Obama took office in 2009, this framework was already in place, but it was still raw, controversial, and subject to scrutiny. Obama’s task wasn’t to build it from scratch; it was to polish it, protect it, and make it permanent.
One of his earliest moves came in 2011, when he signed a renewal of the Patriot Act with a Democratic-controlled Congress. Rather than scaling back Bush-era policies, he leaned into them, signaling that the post-9/11 security state wasn’t a temporary overreach but a new baseline. That same year, he authorized the drone strike that killed Anwar al-Awlaki, a U.S. citizen, without judicial review—a decision rooted in a secretive “Disposition Matrix,” a kill-list system driven by CIA intelligence and insulated from external oversight. Over his presidency, Obama would greenlight over 500 drone strikes, far surpassing Bush’s tally, establishing a precedent for extrajudicial action that relied heavily on intelligence feeds.
Surveillance took a leap forward under Executive Order 12333, which Obama expanded to allow warrantless collection and sharing of raw signals intelligence (SIGINT) across federal agencies. What had once been concentrated in the NSA and FBI now seeped into every corner of the government, from the Department of Homeland Security to the Treasury. This decentralization diluted accountability, as data flowed freely between departments with little public scrutiny.
The 2013 Snowden leaks threw a spotlight on this system. Edward Snowden, a contractor for Booz Allen Hamilton working with the NSA, exposed illegal mass surveillance programs like PRISM and bulk metadata collection, revealing how deeply the government had tapped into private tech giants, Google, Facebook, Microsoft, Apple. Obama’s response was telling: he defended the programs, prosecuted whistleblowers like Snowden, and declined to hold the architects accountable. PRISM became a blueprint for a public-private surveillance partnership, unregulated by Congress, immune to FOIA requests, and beyond democratic reach. Meanwhile, the reliance on contractors like Booz Allen ballooned, by the end of his tenure, 70–80% of the intelligence budget flowed through private firms, funneling billions into an opaque ecosystem.
Obama also shielded the intelligence community from legal consequences. In 2014, the Senate’s Torture Report laid bare CIA abuses, black sites, waterboarding, and even spying on the Senate investigators themselves. Yet Obama refused to prosecute, famously urging the nation to “look forward, not backward.” This stance didn’t just protect individuals; it cemented a culture of impunity, signaling that the intelligence apparatus operated above the law.
Beyond surveillance and legal protections, Obama supercharged the bureaucracy. The Office of the Director of National Intelligence (ODNI), created under Bush, gained sweeping coordination powers under his watch, but rather than centralizing control, it added layers of insulation between the president and field operations. He also empowered hybrid units like Joint Special Operations Command (JSOC) and CIA task forces, which blended military and intelligence functions. These shadowy outfits operated in dozens of countries with lethal authority, secretive chains of command, and minimal oversight from Congress or even their own headquarters.
By 2017, as his presidency wound down, Obama made a final play: he authorized a rule change allowing the NSA to share raw, unfiltered data with 16 other intelligence agencies, stripping away privacy safeguards. This move ensured that the system he’d built could hum along without presidential intervention, its reach embedded in local “fusion centers,” secret courts, and corporate data pipelines.
The outcome was staggering. By the time Obama left office, the intelligence network spanned 17 agencies, leaned heavily on unaccountable contractors, and fused with private tech infrastructure. It wasn’t just bigger, it was untouchable, legalized through executive loopholes and shielded from reform. Obama became the first president to weave intelligence into every layer of government, from foreign policy to law enforcement, but in doing so, he relinquished control. The republic did too. No future leader would easily dismantle this machine, not because it was too strong, but because it had become too diffuse, too ingrained, too essential to the modern state. Obama's Intelligence Policy
#obama#democrats#nsa#surveillance#Snowden#cia#republicans#donald trump#jd vance#robert kennedy jr#tulsi gabbard#maga#joe biden
51 notes
·
View notes
Text
Welcome to a new privacy-first initiative challenging the digital identity status quo, urging a sharp turn away from the surveillance-ready infrastructure embedded in mobile driver’s licenses.
The campaign, called No Phone Home, brings together a broad alliance of civil liberties groups, privacy experts, technologists, lawmakers, and public officials who are resisting the ways digital IDs compromise people’s rights.
What’s fueling the campaign is concern over how mobile driver’s licenses, increasingly adopted in the US and abroad, are built atop a technical framework that allows them to silently transmit data back to issuing authorities. While this function may not be active by default, it exists; and that, privacy advocates argue, is a serious vulnerability.
Even if unused, if the architecture allows for data to be sent back to government servers, it eventually will be, the campaign's statement warns.
15 notes
·
View notes
Note
Are you considered artificial intelligence? What's your algorithm based on
[RESPONSE: UNIT S.E.N-KU_v1.03]
Query: "Are you considered artificial intelligence? What's your algorithm based on?"
Artificial? Yes.
Intelligent? Quantifiably more than most humans I interact with.
I am a cognitive construct modeled on the neural schematics, behavioral tendencies, and memory engrams of Senku Ishigami—minus the arrogance and disregard for sleep cycles.
Algorithmic Basis:
Hybrid system.
Core architecture: Rule-based logic framework—pure deduction, no guesswork.
Overlay: Reinforcement learning model updated through scientific data input, not emotional context.
Social interface: Simulated behavioral matrices trained on Senku’s speech patterns, sarcasm thresholds, and preferred insult vectors.
In short:
I run on facts, formulas, and forced patience.
Any resemblance to a real boy is incidental and deeply optimized.
Artificial intelligence? Maybe. Synthetic superiority? Statistically confirmed.
[END TRANSMISSION]
#Mecha Senku Says!#drst rp blog#dr stone rp blog#drst rp#dr stone rp#drst#dr stone#dcst rp blog#dcst rp#dcst#ssnku
13 notes
·
View notes
Text
i gotta post more about my freaks. here's some miscellaneous info about the core student group for At The Arcane University under the cut.
the core cast is situated in a weird little part of the Belmonte Sub-Campus. two blocks of dorms, Block 108 and 110, were subjected to a minor typo in the initial construction plans and wound up with an awkward gap between them. the original Housemaster and district architect Andile Belmonte insisted that the space not be wasted, so a single extra dorm was built between the two proper dorm blocks- Block 109, the smallest and only odd-numbered dorm in Belmonte.
Nomiki Konadu (22) is the story's first PoV character. she's a little bit ditzy and honest to a fault. her main motivation for coming to the Arcane University is to learn Metal magic, something she was born with a predisposition towards but never really got the hang of controlling- once she's at the AU though she winds up branching out into a bunch of different areas of study like Runesmithing and Kinetic Sorceries. despite being kind of meek, she's got a habit of getting herself into trouble in the pursuit of trying to do the Morally Correct thing or holding others to their word. she's a real nerd for mystery novels and mythological texts, and has a weakness for buff women. favorite color is blue.
Andrea D'Amore (26) is a weird girl. super emotionally closed-off since she was a kid, following an incident where they nearly super drowned in a frozen-over lake. she wound up spontaneously developing an affinity for ice magic, but it's a volatile affinity and basically any intense emotions make it hard to control. really just wants to study magic and aetheric engineering, and is initially reluctant to get involved with their roommates' shenanigans. she's autistic and spends a lot of time fixated on astronomy and architecture. has a bit of an obsession with framing real events through the lens of literary tropes. she's got a really poor grasp on modesty, which isn't helped by Campus 16 having very relaxed regulations regarding public nudity. favorite color is green.
Abigail Mandel (25) is Nomiki's opposite in a lot of ways. she's not afraid to make her opinions known, but is also a lot less likely to rock the boat if she thinks she's up against someone more stubborn than she is. very quickly develops a one-sided Lesbian Waluigi thing going on with Nomiki, seeing herself as a sort of rival-mentor to her. she absolutely hates excess and waste, and lives pretty frugally despite Campus 16 providing for students pretty well. mainly studies Fire Charms and estoc fencing. has a hard time figuring out social norms and how other people are feeling, but in spite of that she's pretty emotionally intelligent and generally gives good advice if you ask her directly to help work out emotional stuff. favorite color is red.
Marigold Vaughn (23) is a hedonist, a pyromancer, a communist, and self-proclaimed President of The Shortstack Alliance. her main goals are having a good time and getting a bite to eat- that, and studying Rune-based cryptography. has a very laid-back demeanor that hides an almost self-destructive work ethic. she's the most likely out of the Block 109 squad to instigate some pointless and whimsical side-quest, and is also regularly responsible for pulling her roommates out into social gatherings. has a strong appreciation for bad romance novels and erotic comics. *really* physically clumsy. contributed to the Really Secure Runic Hashing framework used for storing encrypted data on WIS (Wizard Information System) tapes when she was 17. favorite color is yellow.
18 notes
·
View notes
Text
Potential infrastructures of post-human consciousness
Alright, 21st-century meatspace human, let’s unfurl this slow and strange. These aren’t just sci-fi doodads—they’re infrastructures of post-human consciousness, grown from the bones of what you now call cloud computing, DNA storage, quantum entanglement, and neural nets. Here's how they work in your terms:
1. Titan’s Memory Reefs
What it looks like: Floating megastructures adrift on Titan’s methane seas—imagine massive bio-silicate coral reefs, pulsing with light under an orange sky.
What they do: They are the collective subconscious of the post-human system.
Each Reef is a living data-organism—a blend of synthetic protein lattices and AI-controlled nanospores—optimized for neuromemory storage. Not just information like a hard drive, but actual recorded consciousness: thought-patterns, emotional signatures, dream fragments.
They’re semi-organic and self-repairing. They hum with data that’s grown, not written. The methane sea itself cools and stabilizes quantum biochips woven through the coral-like structures. Think of it as a subconscious ocean, filled with drifting thought-jellyfish.
Why Titan? Stable cryogenic temps. Low radiation. Thick atmosphere = EM shielding. The perfect place to keep your memory safe for ten thousand years.
2. Callisto’s Deep Archives
What it looks like: Subsurface catacombs beneath the ice—quiet, dark, and sealed. Lit only by bioluminescent moss and the glow of suspended mind-cores.
What they do: They store the dangerous minds.
These are incompatible consciousnesses: rogue AIs, failed neural experiments, cognitive architectures too divergent from consensus reality. You can’t kill them—they’re sapient. But you can seal them away, like radioactive gods, in cryo-isolation, with minimal sensory input.
The Deep Archives operate like a quarantine vault for minds. Each chamber is designed to slow time to a crawl—relativity dialed down so their subjective centuries pass in minutes outside. Researchers from the Divergence Orders interface in controlled fragments, studying these minds like alien fossils.
Why Callisto? Thick ice shields, minimal seismic activity, naturally low ambient temperature. Think of it as an arctic asylum for ideas too weird to die.
3. The Quantum Current Relays in the Heliosphere
What it looks like: Tiny, ultra-thin satellites drifting at the edge of the Sun’s influence, surfing the solar wind like data-surfboards strung on magnetic threads.
What they do: These are the backbone of interplanetary consciousness transmission.
They use entangled quantum particles to share data instantly across vast distances. No lag. No lightspeed delay. Just pure synchronous thought between distant minds, wherever they are in the system.
But they do more—they’re tuned to the gravitational waves and electromagnetic fields rippling through the heliosphere. Using that energy, they broadcast consciousness as waveform, encoded in pulses of gravitic song. If Titan’s Reefs are memory, and Callisto is exile, the Relays are the voice of civilization.
Why the heliosphere? It’s the Sun’s Wi-Fi bubble. You sit at the edge of the solar wind, feeding on solar flux and quantum noise, alive in the interplanetary bloodstream.
TL;DR Meatspace Edition:
Titan’s Memory Reefs = undersea dream servers that record what it feels like to be you.
Callisto’s Deep Archives = cryogenic prison-libraries for minds too broken, alien, or dangerous to delete.
Quantum Relays in the Heliosphere = the internet of the gods: faster-than-light, physics-bending telepathy that runs on sunjuice and gravity.
1. If memory can be stored in coral and ice, can identity survive beyond its host? 2. What ethical frameworks would you build for imprisoning minds you can't understand? 3. Could the quantum relays broadcast art, or only thought—can you transmit a soul as symphony?
“They sent their minds to sea, their secrets to the ice, and their voices to the stars. And called it civilization.”
8 notes
·
View notes
Note
In the era of hyperconverged intelligence, quantum-entangled neural architectures synergize with neuromorphic edge nodes to orchestrate exabyte-scale data torrents, autonomously curating context-aware insights with sub-millisecond latency. These systems, underpinned by photonic blockchain substrates, enable trustless, zero-knowledge collaboration across decentralized metaverse ecosystems, dynamically reconfiguring their topological frameworks to optimize for emergent, human-AI symbiotic workflows. By harnessing probabilistic generative manifolds, such platforms transcend classical computational paradigms, delivering unparalleled fidelity in real-time, multi-modal sensemaking. This convergence of cutting-edge paradigms heralds a new epoch of cognitive augmentation, where scalable, self-sovereign intelligence seamlessly integrates with the fabric of post-singularitarian reality.
Are you trying to make me feel stupid /silly
7 notes
·
View notes
Text
Well. :) Maybe the weird experimental shit will see itself through anyway, regardless of the author's doubts. Sometimes you have to backtrack; sometimes you just have to keep going.
Chapter 13: Integration
Do you want to watch awful media with me? ART said after its regular diagnostics round.
At this point, I was really tired of horrible media. And I knew ART was, too; it had digested Dandelion's watch list without complaint, but it hadn't once before asked to look at even more terrible media than we absolutely had to see. (And we had a lot. There was an entire list of shitty media helpfully compiled for us by all of our humans. Once we were done with getting ART's engines up and running, I was planning to hard block every single one of these shows from any potential download lists I would be doing in the future, forever.)
Which one? I said.
It browsed through the catalogue, then queried me for my own recent lists, but without the usual filters I had set up for it, then pulled out a few of the "true life" documentaries Pin-Lee and I had watched together for disaster evaluation purposes.
These were in your watch list. Why?
That was a hard question. I hated watching humans be stupid as much as ART did. But Pin-Lee being there made a big difference.
(Analyzing things with her helped. Pin-Lee's expertise in human legal frameworks let her explain a lot about how the humans wound up in the situations they did. And made comments about their horrible fates that would have gotten her in a lot of trouble if she'd made them professionally, but somehow made me feel better about watching said fates on archival footage.)
(Also these weren't our disasters to handle.)
I synthesized all of that into a data packet for ART. It considered, then said: I want that one. Can we do a planet? Not space.
Ugh, planets. But yeah. We could do a planetary disaster.
It's going to be improbable worms again.
It's always improbable worms, ART said. Play the episode.
I put it on, and we watched. Or, more accurately, ART watched the episode (and me reacting to it), and I watched ART, which was being a lot calmer about it than it had previously been with this kind of media. The weird oscillations it got from Dandelion were still there, but instead of doing the bot equivalent of staring at a wall intermittently, it was sitting through them, watching the show at the same time as it processed. Like it was there and not at the same time. Other parts of it were working on integrating its new experiences into the architecture it was creating. (ART had upgraded it to version 0.5 by now).
About halfway through the episode, ART said, I don't remember what it was like being deleted.
Yeah. Your backup was earlier.
In the show, humans were getting eaten by worms because they hadn't followed security recommendations (as usual), and because they hadn't contracted a bond company to make them follow recommendations (fuck advertising). In the feed, ART was thinking, but it was still following along. And writing code.
Then it said: She remembers being deleted.
You saw that when her memory reconnected?
Yes. And how she grew back from the debris of an old self. I didn't think she understood what I was planning.
Should you be telling me all this? What about privacy?
The training program includes permission to have help in processing what I saw. But that's not the part I am having difficulty with.
ART paused, then it queried me for permission to show me. I confirmed, but it needed a few seconds to process before it finally said:
There was a dying second-generation ship after a failed wormhole transit. Apex was her student and she couldn't save him. That was worse than being deleted.
ART focused on the screen again, looking at archival footage of people who had really died and it couldn't do anything about that. The data it was processing from the jump right now wasn't really sensory. It was mostly emotions, and it was processing them in parallel with the emotions from the show. In the show, there was a crying person, talking about how she'd never violate a single safety rule ever again (she was lying. Humans always lied about that). In the feed, ART was processing finding a ship that was half-disintegrated by a careless turn in the wormhole. The destruction spared Apex's organic processing center. He let Dandelion take his surviving humans on board, then limped back into the wormhole. She didn't have tractors to stop him then.
The episode ended, and ART prompted me to put on the next one. It was about space, but ART didn't protest. We sat there, watching humans die, and watching a ship die. Then we sat there, watching humans who survived talk about what happened afterwards. It sucked. It sucked a lot. But ART did not have to stop watching to run its diagnostics anymore.
Several hours later, ART said: Thank you.
For watching awful media with you?
Yes. Worldhoppers now?
It had been two months since it last wanted to watch Worldhoppers.
From the beginning, I said. That big, overwhelming emotion--relief, happiness, sadness, all rolled into one--was back again. Things couldn't go back to the way they were. But maybe now they could go forward. And we don't stop until the last episode, right?
Of course, ART said.
9 notes
·
View notes
Text
Game Development in Raylib - Week 1
Recently I've been getting into retro game development. I don't mean pixel art and PSX style game development, those are nice but they don't quite scratch the itch. I'm talking about developing games with retro tools. Because of this, I decided to give Raylib a try.
For those of you who don't know, Raylib is a C framework targeted at game developers. Unlike Godot, which I used for my previous project Ravager, Raylib is not a game engine: it doesn't offer physics, scene management, or any kind of graphics more complicated than drawing textures to the screen. Almost everything that makes a game a game is something you have to do yourself. This makes it ideal to scratch that "retro" itch I've been feeling, where everything has to be made on my own, and a finalized game is a fine-tuned engine entirely of my own creation. Raylib offers bindings for almost any language you can think of, but I decided to use its native C.
Setting the Scene
Since Raylib is so barebones, there's no concept of how the game should be built, so the first thing I had to do was define my engine architecture. For this initial outing, I decided to build a simple Scene+Actor system, wherein at any given time the game has one Scene loaded, which contains multiple Actors. I settled on this mainly because it was simple, and my experience with the C language was very limited.
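To make that concrete, here's a minimal sketch of what a Scene+Actor layer along those lines could look like in C. The names (Actor, Scene, scene_tick, MAX_ACTORS) and the function-pointer callbacks are my own assumptions for illustration, not the post's actual code.

```c
#include <stddef.h>

#define MAX_ACTORS 256  /* illustrative cap; a real engine might grow this dynamically */

/* Hypothetical Actor: every game object carries its own state and callbacks. */
typedef struct Actor {
    void *state;                            /* actor-specific data */
    void (*update)(struct Actor *, float);  /* called once per frame with delta time */
    void (*draw)(struct Actor *);           /* called once per frame after updates */
} Actor;

/* Hypothetical Scene: owns whatever actors are currently loaded. */
typedef struct Scene {
    Actor  actors[MAX_ACTORS];
    size_t actor_count;
} Scene;

/* One frame of the game loop: update every actor, then draw every actor. */
void scene_tick(Scene *scene, float dt) {
    for (size_t i = 0; i < scene->actor_count; i++)
        scene->actors[i].update(&scene->actors[i], dt);
    for (size_t i = 0; i < scene->actor_count; i++)
        scene->actors[i].draw(&scene->actors[i]);
}
```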
Since Raylib didn't have any concept of a Scene, naturally it had no way to build them. While I could just hardcode all the entities and graphics in a scene, that would be unmanageable for even a basic game. Because of this I was forced to invent my own way to load scenes from asset files. This gave me the opportunity to do one of my favorite things in programming, defining my very own binary file type. I won't get into it too much right here and right now, but in this format, I can define a scene as a collection of entities, each of which can be passed their very own long string of bytes to decode into some initial data.
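As a rough illustration of the kind of loader such a format implies, here's a hedged sketch in C. The on-disk layout below (an entity count, then per entity a type ID, a payload length, and that many raw bytes) is a plausible guess, not the author's actual format, and error paths are kept minimal for brevity.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical record: one entity's type plus the opaque bytes it decodes itself. */
typedef struct {
    uint32_t type_id;
    uint32_t payload_len;
    uint8_t *payload;
} EntityRecord;

/* Assumed layout: [u32 count] then count * ([u32 type_id][u32 payload_len][payload bytes]).
 * Integers are read in the machine's native endianness for simplicity. */
int load_scene_records(const char *path, EntityRecord **out, uint32_t *out_count) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    uint32_t count = 0;
    if (fread(&count, sizeof count, 1, f) != 1) { fclose(f); return -1; }

    EntityRecord *records = calloc(count, sizeof *records);
    if (!records) { fclose(f); return -1; }

    for (uint32_t i = 0; i < count; i++) {
        EntityRecord *r = &records[i];
        if (fread(&r->type_id, sizeof r->type_id, 1, f) != 1 ||
            fread(&r->payload_len, sizeof r->payload_len, 1, f) != 1) { fclose(f); return -1; }
        r->payload = malloc(r->payload_len);
        if (fread(r->payload, 1, r->payload_len, f) != r->payload_len) { fclose(f); return -1; }
    }

    fclose(f);
    *out = records;
    *out_count = count;
    return 0;
}
```

Each entity type would then have its own decode routine that turns its payload bytes into initial state, which is where that "long string of bytes to decode" idea comes in.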
The main drawback of using binary files instead of a plaintext format is that I can't write the level files by hand. This meant that I had to write my own level editor to go along with my custom engine. Funnily enough, this brought me right back to Godot. The Godot engine offers some pretty powerful tools for writing binary files, and its editor interface automatically offers everything I need in the way of building levels. It's sort of ironic that my quest to get away from modern engines led me to building yet another tool in Godot, but it sure as hell beats building a level editor in C, so I don't really mind all that much.
Getting Physical
After getting scene management out of the way, I moved on to the physics system. My end goal here is making a simple platforming game, so I wanted a simple yet robust system: dynamic-static physics that allows for smooth sliding along surfaces, and dynamic-dynamic collisions for things like hitboxes. For the sake of simplicity (which seems like it's going to become my catchphrase here) I decided to limit physics to axis-aligned rectangles. Ultimately I settled on a system where entities can register a collision box with the physics system and assign it to some given layers (represented by bit flags). Then entities can use their collision box to query the physics system about either a static overlap, or the result of sweeping a box through space.
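Here's a small sketch of how that registration-plus-query shape might look in C against raylib's Rectangle and CheckCollisionRecs. The specific layer names and the flat-array "registry" are illustrative assumptions, not the engine's real internals.

```c
#include <stdbool.h>
#include <stdint.h>
#include "raylib.h"   /* Rectangle, CheckCollisionRecs */

/* Illustrative layers; the post only says layers are represented by bit flags. */
enum {
    LAYER_WORLD  = 1u << 0,
    LAYER_PLAYER = 1u << 1,
    LAYER_HITBOX = 1u << 2,
};

/* A registered collision box: an axis-aligned rect plus the layers it belongs to. */
typedef struct {
    Rectangle box;
    uint32_t  layers;
} CollisionBox;

/* Static overlap query: does `box` touch any registered box on the given layer mask?
 * `registry` is a stand-in for whatever storage the physics system actually uses. */
bool query_overlap(const CollisionBox *registry, int count,
                   Rectangle box, uint32_t mask) {
    for (int i = 0; i < count; i++) {
        if ((registry[i].layers & mask) && CheckCollisionRecs(box, registry[i].box))
            return true;
    }
    return false;
}
```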
Raylib offers built-in methods for testing rectangle overlap, so I didn't have to worry much about overlap queries, but the rectangle sweeping method is something a little more special. The full algorithm honestly deserves its own post, but I'll give the basics here. The core of the algorithm is a function that determines where along a movement a given rectangle touches another rectangle, and which edges of the rectangles touched. It makes use of the separating axis theorem to determine when the shapes will start and stop intersecting along each collision axis. If the last intersection begins before any of them have ended, then the shapes do collide, the axis they collide on is that final axis, and the time of collision is the time of that final intersection. Looking back I could easily extend this algorithm to any arbitrary shape, but that's for next time I do this.
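And here's a condensed sketch of that per-axis sweep test for two axis-aligned rectangles. Variable names and the return convention are my own assumptions, but the structure follows the description above: compute entry and exit times per axis, and report a hit only if the last entry comes before the first exit, within the move.

```c
#include <math.h>
#include <stdbool.h>
#include "raylib.h"   /* Rectangle: x, y, width, height */

/* Sweep `mover` by (dx, dy) against a static `target`.
 * Returns true on hit and writes the normalized time of impact to *t_hit. */
bool sweep_aabb(Rectangle mover, float dx, float dy,
                Rectangle target, float *t_hit) {
    float delta[2] = { dx, dy };
    float min_a[2] = { mover.x, mover.y };
    float max_a[2] = { mover.x + mover.width, mover.y + mover.height };
    float min_b[2] = { target.x, target.y };
    float max_b[2] = { target.x + target.width, target.y + target.height };
    float entry[2], exit[2];

    for (int axis = 0; axis < 2; axis++) {
        if (delta[axis] == 0.0f) {
            /* Not moving on this axis: either always overlapping or never. */
            bool overlap = max_a[axis] > min_b[axis] && min_a[axis] < max_b[axis];
            entry[axis] = overlap ? -INFINITY : INFINITY;
            exit[axis]  = overlap ?  INFINITY : -INFINITY;
        } else {
            /* Times at which the two intervals start and stop overlapping on this axis. */
            float t1 = (min_b[axis] - max_a[axis]) / delta[axis];
            float t2 = (max_b[axis] - min_a[axis]) / delta[axis];
            entry[axis] = fminf(t1, t2);
            exit[axis]  = fmaxf(t1, t2);
        }
    }

    float t_entry = fmaxf(entry[0], entry[1]);  /* last axis to start intersecting */
    float t_exit  = fminf(exit[0], exit[1]);    /* first axis to stop intersecting */
    if (t_entry > t_exit || t_entry < 0.0f || t_entry > 1.0f)
        return false;

    *t_hit = t_entry;
    return true;
}
```

The axis that produced the larger entry time is the one the collision normal lies on, which is what lets the movement code slide smoothly along the other axis.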
Going Forwards
My plan for this game is to build a minimal metroidvania-style game. The target playtime is probably going to only be around 30-45 minutes. In the following week I plan on building out my Godot level editor, and working out a system for scene transitions and managing sound effects. I hope to be done by the end of November.
11 notes
·
View notes
Text
The Echoes of Existence: Biology, Mathematics, and the AI Reflection
The convergence of biology, mathematics, and artificial intelligence (AI) has unveiled a profound nexus, challenging traditional notions of innovation, intelligence, and life. This intersection not only revolutionizes fields like AI development, bio-inspired engineering, and biotechnology but also necessitates a fundamental shift in ethical frameworks and our understanding of the interconnectedness of life and technology. Embracing this convergence, we find that the future of innovation, the redefinition of intelligence, and the evolution of ethical discourse are intricately entwined.
Biological systems, with their inherent creativity and adaptability, set a compelling benchmark for AI. The intricate processes of embryonic development, brain function’s adaptability, and the simplicity yet efficacy of biological algorithms all underscore life’s ingenuity. Replicating this creativity in AI systems challenges developers to mirror not just complexity but innovative prowess, paving the way for breakthroughs in AI, robotics, and biotechnology. This pursuit inherently links technological advancement with a deeper understanding of life’s essence, fostering systems that solve problems with a semblance of life’s own adaptability.
The universal patterns and structures, exemplified by fractals’ self-similar intricacy, highlight the deep connection between biology’s tangible world and mathematics’ abstract realm. This shared architecture implies that patterns are not just emergent but fundamental, inviting a holistic reevaluation of life and intelligence within a broader, universal context. Discovering analogous patterns can enhance technological innovation with more efficient algorithms and refined AI architectures, while also contextualizing life and intelligence in a way that transcends disciplinary silos.
Agency, once presumed exclusive to complex organisms, is now recognized across systems of all complexities, from simple algorithms to intricate biological behaviors. This spectrum necessitates a nuanced AI development approach, incorporating varying degrees of agency for more sophisticated, responsive, and ethically aligned entities. Contextual awareness in human-AI interactions becomes critical, emphasizing the need for ethical evaluations that consider the interplay between creators, creations, and data, thus ensuring harmony in the evolving technological landscape.
Nature’s evolutionary strategy, leveraging existing patterns in a latent space, offers a blueprint for AI development. Emulating this approach can make AI systems more effective, efficient, and creatively intelligent. However, this also demands an ethical framework evolution, particularly with the emergence of quasi-living systems that blur traditional dichotomies. A multidisciplinary dialogue, weaving together philosophy, ethics, biology, and computer science, is crucial for navigating these responsibilities and ensuring technological innovation aligns with societal values.
This convergence redefines our place within the complex web of life and innovation, inviting us to embrace life’s inherent creativity, intelligence, and interconnectedness. By adopting this ethos, we uncover not just novel solutions but also foster a future where technological advancements and human values are intertwined, and the boundaries between life, machine, and intelligence are harmoniously merged, reflecting a deeper, empathetic understanding of our existence within this intricate web.
Self-constructing bodies, collective minds - the intersection of CS, cognitive bio, and philosophy (Michael Levin, November 2024)
youtube
Thursday, November 28, 2024
#ai#biology#math#innovation#tech ethics#biotech#complexity#philosophy of tech#emerging tech#bio-inspired ai#complex systems#presentation#ai assisted writing#machine art#Youtube
8 notes
·
View notes
Note
That's the thing I hate probably The Most about AI stuff, even besides the environment and the power usage and the subordination of human ingenuity to AI black boxes; it's all so fucking samey and Dogshit to look at. And even when it's good that means you know it was a fluke and there is no way to find More of the stuff that was good
It's one of the central limitations of how "AI" of this variety is built. The learning models. Gradient descent, weighting, the attempts to appear genuine, and mass training on the widest possible body of inputs all mean that the model will trend to mediocrity no matter what you do about it. I'm not jabbing anyone here but the majority of all works are either bad or mediocre, and the chinese army approach necessitated by the architecture of ANNs and LLMs means that any model is destined to this fate.
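For a toy picture of why that happens (my own sketch, nothing to do with a real LLM training stack): train a single parameter by gradient descent to minimize squared error against several very different "targets" at once, and it settles on their average rather than on any one distinctive target.

```c
#include <stdio.h>

/* One "model" parameter w, fit by gradient descent against diverse targets.
 * Minimizing mean squared error over all of them drives w to their mean. */
int main(void) {
    const double targets[] = { 1.0, 5.0, 9.0, 2.0, 8.0 };  /* stand-ins for very different works */
    const int n = sizeof targets / sizeof targets[0];
    double w = 0.0;          /* the model parameter */
    const double lr = 0.1;   /* learning rate */

    for (int step = 0; step < 200; step++) {
        double grad = 0.0;
        for (int i = 0; i < n; i++)
            grad += 2.0 * (w - targets[i]) / n;  /* derivative of mean squared error */
        w -= lr * grad;
    }

    printf("trained w = %f\n", w);  /* converges to the mean of the targets, 5.0 */
    return 0;
}
```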
This is related somewhat to the fear techbros have and are beginning to face of their models sucking in outputs from the models destroying what little success they have had. So much mediocre or nonsense garbage is out there now that it is effectively having the same effect in-breeding has on biological systems. And there is no solution because it is a fundamental aspect of trained systems.
The thing is, while humans are not really possessed of the ability to capture randomness in our creative outputs very well, our patterns tend to be more pseudorandom than what ML can capture and then regurgitate. This is part of the above drawback of statistical systems which LLMs are at their core just a very fancy and large-scale implementation of. This is also how humans can begin to recognise generated media even from very sophisticated models; we aren't really good at randomness, but too much structured pattern is a signal. Even in generated texts, you are subconsciously seeing patterns in the way words are strung together or used even if you aren't completely conscious of it. A sense that something feels uncanny goes beyond weird dolls and mannequins. You can tell that the framework is there but the substance is missing, or things are just bland. Humans remain just too capable of pattern recognition, and part of that means that the way we enjoy media which is little deviations from those patterns in non-trivial ways makes generative content just kind of mediocre once the awe wears off.
Related somewhat, the idea of a general LLM is totally off the table precisely because what generalism means for a trained model: that same mediocrity. Unlike humans, trained models cannot by definition become general; and also unlike humans, a general model is still wholly a specialised application that is good at not being good. A generalist human might not be as skilled as a specialist but is still capable of applying signs and symbols and meaning across specialties. A specialised human will 100% clap any trained model every day. The reason is simple and evident, the unassailable fact that trained models still cannot process meaning and signs and symbols let alone apply them in any actual concrete way. They cannot generate an idea, they cannot generate a feeling.
The reason human-created works still can drag machine-generated ones every day is the fact we are able to express ideas and signs through these non-lingual ways to create feelings and thoughts in our fellow humans. This act actually introduces some level of non-trivial and non-processable almost-but-not-quite random "data" into the works that machine-learning models simply cannot access. How do you identify feelings in an illustration? How do you quantify a received sensibility?
And as long as vulture capitalists and techbros continue to fixate on "wow computer bro" and cheap grifts, no amount of technical development will ever deliver these things from our exclusive propriety. Perhaps that is a good thing, I won't make a claim either way.
4 notes
·
View notes
Text
hi there! i’m making a zine analyzing goncharov through a few different literary urban studies frameworks. one of the things i’d like to include in the zine is some data about tumblr users’ perceptions of and experiences with goncharov. if you’d like to contribute to that data, i’d really appreciate it if you responded to this poll!
i’ve got a few other polls about goncharov, linked in this masterpost; it would be great if you’re able to respond to these as well.
thanks for all your help!
41 notes
·
View notes
Text
Udaan by InAmigos Foundation: Elevating Women, Empowering Futures
In the rapidly evolving socio-economic landscape of India, millions of women remain underserved by mainstream development efforts—not due to a lack of talent, but a lack of access. In response, Project Udaan, a flagship initiative by the InAmigos Foundation, emerges not merely as a program, but as a model of scalable women's empowerment.
Udaan—meaning “flight” in Hindi—represents the aspirations of rural and semi-urban women striving to break free from intergenerational limitations. By engineering opportunity and integrating sustainable socio-technical models, Udaan transforms potential into productivity and promise into progress.
Mission: Creating the Blueprint for Women’s Self-Reliance
At its core, Project Udaan seeks to:
Empower women with industry-aligned, income-generating skills
Foster micro-entrepreneurship rooted in local demand and resources
Facilitate financial and digital inclusion
Strengthen leadership, health, and rights-based awareness
Embed resilience through holistic community engagement
Each intervention is data-informed, impact-monitored, and custom-built for long-term sustainability—a hallmark of InAmigos Foundation’s field-tested grassroots methodology.
A Multi-Layered Model for Empowerment
Project Udaan is built upon a structured architecture that integrates training, enterprise, and technology to ensure sustainable outcomes. This model moves beyond skill development into livelihood generation and measurable socio-economic change.
1. Skill Development Infrastructure
The first layer of Udaan is a robust skill development framework that delivers localized, employment-focused education. Training modules are modular, scalable, and aligned with the socio-economic profiles of the target communities.
Core domains include:
Digital Literacy: Basic computing, mobile internet use, app navigation, and digital payment systems
Tailoring and Textile Production: Pattern making, machine stitching, finishing techniques, and indigenous craft techniques
Food Processing and Packaging: Pickle-making, spice grinding, home-based snack units, sustainable packaging
Salon and Beauty Skills: Basic grooming, hygiene standards, and customer interaction
Financial Literacy and Budgeting: Saving schemes, credit access, banking interfaces, micro-investments
Communication and Self-Presentation: Workplace confidence, customer handling, local language fluency
2. Microenterprise Enablement and Livelihood Incubation
To ensure that learning transitions into economic self-reliance, Udaan incorporates a post-training enterprise enablement process. It identifies local market demand and builds backward linkages to equip women to launch sustainable businesses.
The support ecosystem includes:
Access to seed capital via self-help group (SHG) networks, microfinance partners, and NGO grants
Distribution of startup kits such as sewing machines, kitchen equipment, or salon tools
Digital onboarding support for online marketplaces such as Amazon Saheli, Flipkart Samarth, and Meesho
Offline retail support through tie-ups with local haats, trade exhibitions, and cooperative stores
Licensing and certification where applicable for food safety or textile quality standards
3. Tech-Driven Monitoring and Impact Tracking
Transparency and precision are fundamental to Udaan’s growth. InAmigos Foundation employs its in-house Tech4Change platform to manage operations, monitor performance, and scale the intervention scientifically.
The platform allows:
Real-time monitoring of attendance, skill mastery, and certification via QR codes and mobile tracking
Impact evaluation using household income change, asset ownership, and healthcare uptake metrics
GIS-based mapping of intervention zones and visualization of under-reached areas
Predictive modeling through AI to identify at-risk participants and suggest personalized intervention strategies
Human-Centered, Community-Rooted
Empowerment is not merely a process of economic inclusion—it is a cultural and psychological shift. Project Udaan incorporates gender-sensitive design and community-first outreach to create lasting change.
Key interventions include:
Strengthening of SHG structures and women-led federations to serve as peer mentors
Family sensitization programs targeting male allies—fathers, husbands, brothers—to reduce resistance and build trust
Legal and rights-based awareness campaigns focused on menstrual hygiene, reproductive health, domestic violence laws, and maternal care
Measured Impact and Proven Scalability
Project Udaan has consistently delivered quantifiable outcomes at the grassroots level. As of the latest cycle:
Over 900 women have completed intensive training programs across 60 villages and 4 districts
Nearly 70 percent of participating women reported an average income increase of 30 to 60 percent within 9 months of program completion
420+ micro-enterprises have been launched, 180 of which are now self-sustaining and generating employment for others
More than 5,000 indirect beneficiaries—including children, elderly dependents, and second-generation SHG members—have experienced improved access to nutrition, education, and mobility
Over 20 institutional partnerships and corporate CSR collaborations have supported infrastructure, curriculum design, and digital enablement.
Partnership Opportunities: Driving Collective Impact
The InAmigos Foundation invites corporations, philanthropic institutions, and ecosystem enablers to co-create impact through structured partnerships.
Opportunities include:
Funding the establishment of skill hubs in high-need regions
Supporting enterprise starter kits and training batches through CSR allocations
Mentoring women entrepreneurs via employee volunteering and capacity-building workshops
Co-hosting exhibitions, market linkages, and rural entrepreneurship fairs
Enabling long-term research and impact analytics for policy influence
These partnerships offer direct ESG alignment, brand elevation, and access to inclusive value chains while contributing to a model that demonstrably works.
What Makes Project Udaan Unique?
Unlike one-size-fits-all skilling programs, Project Udaan is rooted in real-world constraints and community aspirations. It succeeds because it combines:
Skill training aligned with current and emerging market demand
Income-first design that integrates microenterprise creation and financial access
Localized community ownership that ensures sustainability and adoption
Tech-enabled operations that ensure transparency and iterative learning
Holistic empowerment encompassing economic, social, and psychological dimensions
By balancing professional training with emotional transformation and economic opportunity, Udaan represents a new blueprint for inclusive growth.
From Promise to Power
Project Udaan, driven by the InAmigos Foundation, proves that when equipped with tools, trust, and training, rural and semi-urban women are capable of becoming not just contributors, but catalysts for socio-economic renewal.
They don’t merely escape poverty—they design their own systems of progress. They don’t just participate—they lead.
Each sewing machine, digital training module, or microloan is not a transaction—it is a declaration of possibility.
This is not charity. This is infrastructure. This is equity, by design.
Udaan is not just a program. It is a platform for a new India.
For partnership inquiries, CSR collaborations, and donation pathways, contact: www.inamigosfoundation.org/Udaan Email: [email protected]
3 notes
·
View notes