#probabilistic programming
Explore tagged Tumblr posts
eikotheblue · 1 month ago
Note
Yeah come play our randomizer!
(Though these days I mostly work on the Ori 1 Randomizer, which is also an incredible software abomination even if the seed specification format isn't a fully fledged programming language. It has great new player tools! Come try it! If you played Ori once you can play our randomizer, I promise!!)
How do you *accidentally* make a programming language?
Oh, it's easy! You make a randomizer for a game and, because you're doing any% development, you set up the seed file format such that each line of the file defines an event listener for a value change of an uberstate (which is an entry of the game's built-in serialization system for arbitrary data that should persist when saved).
You do this because it's a fast hack that lets you trigger pickup grants on item finds, since each item find will always correspond to an uberstate change. This works great! You smile happily and move on.
There's a small but dedicated subgroup of users who like using your randomizer as a canvas! They make what are called "plandomizer seeds" ("plandos" for short), which are seed files that have been hand-written specifically to give anyone playing them a specific curated set of experiences, instead of something random. These have a long history in your community, in part because you threw them a few bones when developing your last randomizer, and they are eager to see what they can do in this brave new world.
A thing they pick up on quickly is that there are uberstates for lots more things than just item finds! They can make it so that you find double jump when you break a specific wall, or even when you go into an area for the first time and the big splash text plays. Everyone agrees that this is neat.
It is in large part for the plando authors' sake that you allow multiple line entries for the same uberstate that specify different actions - you have the actions run in order. This was a feature that was hacked into your last randomizer late in its development, so you're glad to be supporting it at a lower level. They love it! It lets them put multiple items at individual locations. You smile and move on.
Over time, you add more action types besides just item grants! Printing out messages to your players is a great one for plando authors, and is again a feature you had last time. At some point you add a bunch for interacting with player health and energy, because it'd be easy. An action that teleports the player to a specific place. An action that equips a skill to the player's active skill bar. An action that removes a skill or ability.
Then, you get the brilliant idea that it'd be great if actions could modify uberstates directly. Uberstates control lots of things! What if breaking door 1 caused door 2 to break, so you didn't have to open both up at once? What if breaking door 2 caused door 1 to respawn, and vice versa, so you could only go through 1 at a time? Wouldn't that be wonderful? You test this change in some simple cases, and deploy it without expecting people to do too much with it.
Your plando authors quickly realize that when actions modify uberstates, the changes they make can trigger other actions, as long as there are lines in their files that listen for those. This excites them, and seems basically fine to you, though you do as an afterthought add an optional parameter to your uberstate modification action that can be used to suppress the uberstate change detector, since some cases don't actually want that behavior.
(At some point during all of this, the plando authors start hunting through the base game and cataloging unused uberstates, to be used as arbitrary variables for their nefarious purposes. You weren't expecting that! Rather than making them hunt down and use a bunch of random uberstates for data storage, you sigh and add a bunch of explicitly-unused ones for them to play with instead.)
Then, your most arcane plando magician posts a guide on how to use the existing systems to set up control flow. It leverages the fact that setting an uberstate to a value it already has does not trigger the event listener for that uberstate, so execution can branch based on whether or not a state has already been set to a specific value!
Filled with a confused mixture of pride and fear, you decide that maybe you should provide some kind of native control flow structure that isn't that? And because you're doing a lot of this development underslept and a bit past your personal Ballmer Peak, the first idea that you have and implement is conditional stops, which are actions that halt processing of a multiple-action chain if an uberstate is [less than, equal to, greater than] a given value.
The next day, you realize that your seed specification format now can, while executing an action chain, read from memory, write to memory, branch based on what it finds in memory, and loop. It can simulate a Turing machine, using the uberstates as tape. You set out to create a format by which your seed generator could talk to your client mod, and have ended up with a Turing-complete programming language. You laugh, and laugh, and laugh.
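(For anyone who wants the mechanism spelled out, here is a minimal, purely illustrative Python sketch of the system described above. The action names, state IDs, and data layout are invented for this example; it is not the actual randomizer code or seed format.)

```python
# A schematic sketch (not the real randomizer code or seed syntax) of the
# mechanism described above: listeners keyed on uberstate IDs, action chains
# that can write other uberstates (re-triggering listeners), and conditional
# stops that halt a chain early. All IDs and action names here are invented.

uberstates = {}   # the game's key/value store -- the "tape"
listeners = {}    # uberstate ID -> list of action chains

def set_uberstate(state_id, value, suppress=False):
    # Setting a state to the value it already holds fires nothing,
    # which is the quirk the plando authors used to build control flow.
    if uberstates.get(state_id) == value:
        return
    uberstates[state_id] = value
    if not suppress:
        for chain in listeners.get(state_id, []):
            run_chain(chain)

def run_chain(chain):
    for action in chain:
        kind = action[0]
        if kind == "message":
            print(action[1])
        elif kind == "set":                  # write another uberstate
            set_uberstate(action[1], action[2])
        elif kind == "stop_if_equal":        # conditional stop
            if uberstates.get(action[1]) == action[2]:
                return                       # halt the rest of this chain

# Invented example: breaking door 1 (state 900) also breaks door 2 (state 901),
# with a guard state (950) ensuring the chain only runs once.
listeners[900] = [[("stop_if_equal", 950, 1),
                   ("set", 950, 1),
                   ("message", "Door 2 crumbles too."),
                   ("set", 901, 1)]]

set_uberstate(900, 1)   # prints the message and sets state 901
```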
2K notes · View notes
mostlysignssomeportents · 1 year ago
Text
Solar is a market for (financial) lemons
Tumblr media
There are only four more days left in my Kickstarter for the audiobook of The Bezzle, the sequel to Red Team Blues, narrated by @wilwheaton! You can pre-order the audiobook and ebook, DRM free, as well as the hardcover, signed or unsigned. There's also bundles with Red Team Blues in ebook, audio or paperback.
Tumblr media
Rooftop solar is the future, but it's also a scam. It didn't have to be, but America decided that the best way to roll out distributed, resilient, clean and renewable energy was to let Wall Street run the show. They turned it into a scam, and now it's in terrible trouble, which means we are in terrible trouble.
There's a (superficial) good case for turning markets loose on the problem of financing the rollout of an entirely new kind of energy provision across a large and heterogeneous nation. As capitalism's champions (and apologists) have observed since the days of Adam Smith and David Ricardo, markets harness together the work of thousands or even millions of strangers in pursuit of a common goal, without all those people having to agree on a single approach or plan of action. Merely dangle the incentive of profit before the market's teeming participants and they will align themselves towards it, like iron filings all snapping into formation towards a magnet.
But markets have a problem: they are prone to "reward hacking." This is a term from AI research: tell your AI that you want it to do something, and it will find the fastest and most efficient way of doing it, even if that method is one that actually destroys the reason you were pursuing the goal in the first place.
https://learn.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning
For example: if you use an AI to come up with a Roomba that doesn't bang into furniture, you might tell that Roomba to avoid collisions. However, the Roomba is only designed to register collisions with its front-facing sensor. Turn the Roomba loose and it will quickly hit on the tactic of racing around the room in reverse, banging into all your furniture repeatedly, while never registering a single collision:
https://www.schneier.com/blog/archives/2021/04/when-ais-start-hacking.html
This is sometimes called the "alignment problem." High-speed, probabilistic systems that can't be fully predicted in advance can very quickly run off the rails. It's an idea that pre-dates AI, of course – think of the Sorcerer's Apprentice. But AI produces these perverse outcomes at scale…and so does capitalism.
Many sf writers have observed the odd phenomenon of corporate AI executives spinning bad sci-fi scenarios about their AIs inadvertently destroying the human race by spinning off in some kind of paperclip-maximizing reward-hack that reduces the whole planet to grey goo in order to make more paperclips. This idea is very implausible (to say the least), but the fact that so many corporate leaders are obsessed with autonomous systems reward-hacking their way into catastrophe tells us something about corporate executives, even if it has no predictive value for understanding the future of technology.
Both Ted Chiang and Charlie Stross have theorized that the source of these anxieties isn't AI – it's corporations. Corporations are these equilibrium-seeking complex machines that can't be programmed, only prompted. CEOs know that they don't actually run their companies, and it haunts them, because while they can decompose a company into all its constituent elements – capital, labor, procedures – they can't get this model-train set to go around the loop:
https://pluralistic.net/2023/03/09/autocomplete-worshippers/#the-real-ai-was-the-corporations-that-we-fought-along-the-way
Stross calls corporations "Slow AI," a pernicious artificial life-form that acts like a pedantic genie, always on the hunt for ways to destroy you while still strictly following your directions. Markets are an extremely reliable way to find the most awful alignment problems – but by the time they've surfaced them, they've also destroyed the thing you were hoping to improve with your market mechanism.
Which brings me back to solar, as practiced in America. In a long Time feature, Alana Semuels describes the waves of bankruptcies, revealed frauds, and even confiscation of homeowners' houses arising from a decade of financialized solar:
https://time.com/6565415/rooftop-solar-industry-collapse/
The problem starts with a pretty common finance puzzle: solar pays off big over its lifespan, saving the homeowner money and insulating them from price-shocks, emergency power outages, and other horrors. But solar requires a large upfront investment, which many homeowners can't afford to make. To resolve this, the finance industry extends credit to homeowners (lets them borrow money) and gets paid back out of the savings the homeowner realizes over the years to come.
But of course, this requires a lot of capital, and homeowners still might not see the wisdom of paying even some of the price of solar and taking on debt for a benefit they won't even realize until the whole debt is paid off. So the government moved in to tinker with the markets, injecting prompts into the slow AIs to see if it could coax the system into producing a faster solar rollout – say, one that didn't have to rely on waves of deadly power-outages during storms, heatwaves, fires, etc, to convince homeowners to get on board because they'd have experienced the pain of sitting through those disasters in the dark.
The government created subsidies – tax credits, direct cash, and mixes thereof – in the expectation that Wall Street would see all these credits and subsidies that everyday people were entitled to and go on the hunt for them. And they did! Armies of fast-talking sales-reps fanned out across America, ringing doorbells and sticking fliers in mailboxes, and lying like hell about how your new solar roof was gonna work out for you.
These hustlers tricked old and vulnerable people into signing up for arrangements that saw them saddled with ballooning debt payments (after a honeymoon period at a super-low teaser rate), backstopped by liens on their houses, which meant that missing a payment could mean losing your home. They underprovisioned the solar that they installed, leaving homeowners with sky-high electrical bills on top of those debt payments.
If this sounds familiar, it's because it shares a lot of DNA with the subprime housing bubble, where fast-talking salesmen conned vulnerable people into taking out predatory mortgages with sky-high rates that kicked in after a honeymoon period, promising buyers that the rising value of housing would offset any losses from that high rate.
These fraudsters knew they were acquiring toxic assets, but it didn't matter, because they were bundling up those assets into "collateralized debt obligations" – exotic black-box "derivatives" that could be sold onto pension funds, retail investors, and other suckers.
This is likewise true of solar, where the tax-credits, subsidies and other income streams that these new solar installations offgassed were captured and turned into bonds that were sold into the financial markets, producing an insatiable demand for more rooftop solar installations, and that meant lots more fraud.
Which brings us to today, where homeowners across America are waking up to discover that their power bills have gone up thanks to their solar arrays, even as the giant, financialized solar firms that supplied them are teetering on the edge of bankruptcy, thanks to waves of defaults. Meanwhile, all those bonds that were created from solar installations are ticking timebombs, sitting on institutions' balance-sheets, waiting to go blooie once the defaults cross some unpredictable threshold.
Markets are very efficient at mobilizing capital for growth opportunities. America has a lot of rooftop solar. But 70% of that solar isn't owned by the homeowner – it's owned by a solar company, which is to say, "a finance company that happens to sell solar":
https://www.utilitydive.com/news/solarcity-maintains-34-residential-solar-market-share-in-1h-2015/406552/
And markets are very efficient at reward hacking. The point of any market is to multiply capital. If the only way to multiply the capital is through building solar, then you get solar. But the finance sector specializes in making the capital multiply as much as possible while doing as little as possible on the solar front. Huge chunks of those federal subsidies were gobbled up by junk-fees and other financial tricks – sometimes more than 100%.
The solar companies would be in even worse trouble, but they also tricked all their victims into signing binding arbitration waivers that deny them the power to sue and force them to have their grievances heard by fake judges who are paid by the solar companies to decide whether the solar companies have done anything wrong. You will not be surprised to learn that the arbitrators are reluctant to find against their paymasters.
I had a sense that all this was going on even before I read Semuels' excellent article. We bought a solar installation from Treeium, a highly rated, giant Southern California solar installer. We got an incredibly hard sell from them to get our solar "for free" – that is, through these financial arrangements – but I'd just sold a book and I had cash on hand and I was adamant that we were just going to pay upfront. As soon as that was clear, Treeium's ardor palpably cooled. We ended up with a grossly defective, unsafe and underpowered solar installation that has cost more than $10,000 to bring into a functional state (using another vendor). I briefly considered suing Treeium (I had insisted on striking the binding arbitration waiver from the contract) but in the end, I decided life was too short.
The thing is, solar is amazing. We love running our house on sunshine. But markets have proven – again and again – to be an unreliable and even dangerous way to improve Americans' homes and make them more resilient. After all, Americans' homes are the largest asset they are apt to own, which makes them irresistible targets for scammers:
https://pluralistic.net/2021/06/06/the-rents-too-damned-high/
That's why the subprime scammers targeted Americans' homes in the 2000s, and it's why the house-stealing fraudsters who blanket the country in "We Buy Ugly Homes" are targeting them now. Same reason Willie Sutton robbed banks: "That's where the money is":
https://pluralistic.net/2023/05/11/ugly-houses-ugly-truth/
America can and should electrify and solarize. There are serious logistical challenges related to sourcing the underlying materials and deploying the labor, but those challenges are grossly overrated by people who assume the only way we can approach them is through markets, those monkey's paw curses that always find a way to snatch profitable defeat from the jaws of useful victory.
To get a sense of how the engineering challenges of electrification could be met, read MacArthur fellow Saul Griffith's excellent popular engineering text Electrify:
https://pluralistic.net/2021/12/09/practical-visionary/#popular-engineering
And to really understand the transformative power of solar, don't miss Deb Chachra's How Infrastructure Works, where you'll learn that we could give every person on Earth the energy budget of a Canadian (like an American, but colder) by capturing just 0.4% of the solar rays that reach Earth's surface:
https://pluralistic.net/2023/10/17/care-work/#charismatic-megaprojects
But we won't get there with markets. All markets will do is create incentives to cheat. Think of the market for "carbon offsets," which were supposed to substitute markets for direct regulation, and which produced a fraud-riddled market for lemons that sells indulgences to our worst polluters, who go on destroying our planet and our future:
https://pluralistic.net/2021/04/14/for-sale-green-indulgences/#killer-analogy
We can address the climate emergency, but not by prompting the slow AI and hoping it doesn't figure out a way to reward-hack its way to giant profits while doing nothing. Founder and chairman of Goodleap, Hayes Barnard, is one of the 400 richest people in the world – a fortune built on scammers who tricked old people into signing away their homes for nonfunctional solar:
https://www.forbes.com/profile/hayes-barnard/?sh=40d596362b28
If governments are willing to spend billions incentivizing rooftop solar, they can simply spend billions installing rooftop solar – no Slow AI required.
Tumblr media
Berliners: Otherland has added a second date (Jan 28 - TOMORROW!) for my book-talk after the first one sold out - book now!
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/01/27/here-comes-the-sun-king/#sign-here
Tumblr media Tumblr media
Back the Kickstarter for the audiobook of The Bezzle here!
Tumblr media
Image:
Future Atlas/www.futureatlas.com/blog (modified)
https://www.flickr.com/photos/87913776@N00/3996366952
--
CC BY 2.0
https://creativecommons.org/licenses/by/2.0/
J Doll (modified)
https://commons.wikimedia.org/wiki/File:Blue_Sky_%28140451293%29.jpeg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
232 notes · View notes
shituationist · 22 days ago
Text
We've had GANs taking images from one style to another for several years (even before 2018) so the Ghibli stuff left me unimpressed. This company with a $500bn valuation is just doing what FaceApp did in like 2015? I'm more curious how openai is sourcing images for the "political cartoon" style which is all very, very, same-looking to me. Did they hire contractors to make a bunch of images in a certain style? Or did they just rip one artist off? Was it licensed data? Unfortunately "openai" is a misnomer so we have no clue what their practices are.
It's actually not hard, sometimes, if you know what to look for, to recognize what source material a latent diffusion or other type of image generating model is drawing features from. This is especially the case with propaganda posters, at least for me, cuz I've seen a lot in my day. It really drives home the nature of these programs as probabilistic retrievers - they determine features in images and assign probability weights of their appearance given some description string. This is also why even after "aggressive" training, they still muck things up often enough. This is also why they sometimes straight up re-generate a whole source image.
17 notes · View notes
mariacallous · 8 months ago
Text
Nate Silver’s first book, The Signal and the Noise, was published in 2012, at the peak of his career as America’s favorite election forecaster. The book was a 534-page bestseller. It set out to answer a perfectly Nate Silver-shaped question: What makes some people better than others at predicting future events? It provided a wide-ranging, deeply engaging introduction to concepts like Bayes’s Theorem, Isaiah Berlin’s The Hedgehog and the Fox, and Philip Tetlock’s work on superforecasting.
Twelve years later, Silver is back with a second book. It is titled On the Edge: The Art of Risking Everything. It is longer than the first one—576 pages, cover-to-cover. And yet it manages to be a much smaller book.
Silver is still in the business of prediction. But where the Silver of 2012 was contributing to the world of public intellectuals, journalists, academics, and policymakers —what he now terms “the Village”—the Silver of 2024 makes his home among the risk-takers and hustlers in Vegas, Wall Street, and Silicon Valley. On the Edge is an ode to the expected-value-maximizing gamblers’ mindset. He calls this the world of “the River.” These “Riverians” are his people. And, he tells us, they’re winning. He sets out to give his readers a tour of the River and distill some lessons that we ought to take from its inhabitants.
The “river” is a term borrowed from poker itself, a game defined by two forms of incomplete information: You don’t know the cards your opponent has been dealt, and you don’t know the cards that are yet to come. In Texas Hold ’em, the final round of betting is called the “river.” It is the moment when the information at your disposal is as complete as it will ever be.
Among poker players, this makes the river a rich metaphor. It’s Election Night, waiting for the votes to be tallied, as opposed to a convention or presidential debate, when the shape of the electorate is still undetermined. The best laid plans can be undone by an improbable river card. It’s the final score. The moment of truth. But when Silver talks about “Riverian” culture, he is not drawing upon or referring to any of this established imagery. Instead he deploys it as a catch-all term for embracing risk, identifying profitable edges, and wagering on your beliefs. It’s an odd and awkward writing choice.
The book starts out with a tour of the sheer scale of the literal gambling economy. In 2022 alone, Americans lost $130 billion in casinos, lotteries, and other gambling operations. That’s the amount lost, mind you. The amount wagered was approximately tenfold larger. Gambling in the United States is a $1.3 trillion industry, and still growing.
Elsewhere in the book, he explains how casinos have developed rewards programs and programmed slot machines to keep people hooked. He also lays out the cat-and-mouse game between the online sportsbooks and profitable sports bettors. Much like with casinos and blackjack, if you are good enough at sports betting to reliably turn a profit, then the sportsbooks will stop accepting your bets. The house does not offer games that the house doesn’t win. And, in the United States today, it is very good to be the house.
In Chapter 6, Silver writes, “Here’s something I learned when writing this book: if you have a gambling problem, then somebody is going to come up with some product that touches your probabilistic funny bones. … And whichever product most appeals to your inner degen will be algorithmically tailored to reduce friction and get you to gamble even more.”
Most of us would think this is a bad thing. But Silver stubbornly refuses to reflect on whether the unchecked growth of the gambling economy has any negative externalities. Chapter 3, on the casino industry, reads like a book on the opioid industry lauding the Sacklers for really figuring out how to develop product-market fit.
Structurally, the book is a bit disjointed. It is broken into two parts, with an interlude listing the “thirteen habits of highly successful risk-takers” in between. Part 1 glorifies the gambling industry. The interlude reads like a self-help book: “Successful risk-takers are cool under pressure … have courage … take shots  … are prepared.” Part 2 meanders through Silicon Valley, discussing everything from the fall of Sam Bankman-Fried to Adam Neumann’s latest real estate start-up, along with an entire chapter explaining artificial intelligence through poker analogies. Silver clearly has a lot to say, but it doesn’t entirely hold together. In the acknowledgements at the end of the book, Silver thanks ChatGPT, describing it as his “creative muse.” I’m not convinced the contribution was a positive one.
Missing from the book is any notion of systemic risk. Silver explains the growth of the gambling economy as evidence of a demand-side increase in risk-taking behavior among the post-pandemic public. But this seems more likely to be a supply-side story. The Supreme Court legalized sports betting in 2018. DraftKings and FanDuel wasted no time in flooding the airwaves with enticing advertisements and can’t-lose introductory offers. Casinos—which used to be constrained to Las Vegas and Atlantic City—are now available in nearly every state.
Polymarket, a cryptocurrency-based online prediction marketplace that will let people place bets on essentially anything, went ahead and hired Silver to help promote the product. We legalized vice and removed most of the friction from the system. What’s good for the casinos and the sportsbooks is not necessarily good for society at large.
An increase in gambling addiction is a society-level problem, foisted on the very public officials that Silver derides as residents of “The Village.” Gambling, like cigarettes, should probably face more institutional friction, not less: If you want to waste your money betting on sports or gambling on cards, it ought to at least be moderately difficult to do so.
There’s an unintentionally revealing passage in Chapter 4. Silver devotes nearly four pages to Billy Walters, regaling us with stories of “the best sports bettor of all time.” And in the final paragraph of the section, he lets slip that Walters was sentenced to five years in prison for insider stock trading in 2018. In a footnote, we learn that Walters’s sentence was commuted by Donald Trump on his last day in office. Walters stubbornly maintains his innocence, while Silver notes that “sports bettors often take a cavalier attitude toward inside information in sports. … The Securities and Exchange Commission is much less likely to give you the benefit of the doubt if you’re betting on stocks.”
It’s a crucial passage for two reasons: First, because much of what gives profitable sports bettors an “edge” is materially significant, non-public information. If you can develop sources that will inform you whether the star quarterback is returning from injury, you can use that information to beat the betting lines. The sportsbooks might eventually stop taking your bets if you win too much, but you won’t go to jail for it.
That edge rarely exists in finance, because of systemic risk. The United States has constructed a whole set of regulations and investor protections to mitigate the downside risk of all this “Riverian” gambling, and guard against crime. Poker players, sports bettors, and venture capitalists flourish in regulatory gray zones, where the rules are less well-defined and the edges are available if you’re smart and you’re willing to hustle.
But the second reason is that it invites us to ponder whether there’s any societal value to all this gambling. The stock market may essentially be gambling, but it is a type of gambling that produces valuable byproduct information. Through the activity of the stock market, we are able to gauge aggregate investor opinion on the state and worth of publicly traded companies. What is the social benefit of building an equivalent marketplace for establishing the betting line on NBA games? Sophisticated sports bettors may have a better read than DraftKings on whether the Washington Wizards should be 7.5- or 8-point underdogs in their season opener. But what value does that add to the quality of play, or the fan experience, or anything at all? Why incur and encourage all the systemic risk, when the societal value is effectively nil?
Silver asks and answers none of these questions himself. In the rare passages of the book where he offers some critique of Riverian excess, he makes sure to reassure the reader that he is “not a prude.” In Chapter 8, after mentioning that the sheer, absurd concentration of wealth among Silicon Valley figures like Sam Bankman-Fried might, just maybe, be a bad thing, Silver immediately backpedals, reminding his readers that he plays “poker with venture capitalists and hedge fund guys. I’m a capitalist.”
I suspect this would be a better book if he had less to lose. I myself have been a “+EV” poker player for over 20 years, meaning I win quite a bit more than I lose. I don’t play for the same stakes as Silver, but my poker bankroll includes seven different currencies from four continents. And I can tell you that I would strongly consider committing a few misdemeanors to land a seat in one of those VC/hedge fund games. Silver doesn’t boast about his win rate, but he does let slip that the first time he was invited to play cards with Jason Calacanis and the other hosts of the All-In podcast, he “won enough money to buy a Tesla.”
If I were in Silver’s shoes, I would be wary of writing a book that could get me uninvited from those pillow-soft high-stakes poker games. He can make more money, and have more fun, by offering a gentle exploration, critique, and defense of “the River” than he would by raising questions that would make the notoriously thin-skinned VC crowd uncomfortable. Silver manages to interview a lot of powerful people who rarely speak to journalists, but when they talk to him, they tell him nothing of note.
It also is not clear whether most of the “Riverian” character traits are actually so unique. In the book’s later chapters, Silver rails against “The Village’s” public health response to the COVID-19 pandemic. Riverians, he tells us, would’ve handled the pandemic differently, because Riverians are expected-value maximizers who understand the fundamental importance of cost-benefit analysis. Hindsight does a lot of heavy lifting for him here, and the notion that public health officials are unfamiliar with cost-benefit analysis is painfully ridiculous. Cost-benefit analysis is not some arcane Riverian wisdom. It is intro-level textbook material.
Silver’s experience in the poker world has convinced him that the world should be more like poker. My own experience with poker has convinced me of the opposite. It is because I am skilled at the game that I think people ought to know what they’re getting into before sitting down at the table with me.
He’s right about one thing, though: The Riverians are indeed winning. The Wynn Casino, DraftKings.com, and Andreessen Horowitz are indeed all phenomenally profitable. The part that eludes him is the reason why. They are winning because we have constructed a system that they are well-positioned to exploit. There is a good book waiting to be written about how they have gamed the system, what it all adds up to, and what it costs the rest of us. But this book’s ambitions are much smaller than that.
3 notes · View notes
spacetimewithstuartgary · 7 months ago
Text
Tumblr media
NJIT launches AI-powered solar eruption center with $5M NASA grant
A new center at New Jersey Institute of Technology (NJIT) will advance AI-driven forecasting of violent eruptions on the Sun, as well as expand space science education programs.
NJIT's Institute for Space Weather Sciences (ISWS) has been awarded a $5 million NASA grant to open a new research center dedicated to developing the next generation of solar eruption prediction capabilities, powered by artificial intelligence.
The new AI-Powered Solar Eruption Center of Excellence in Research and Education (SEC) will partner with NASA, New York University and IBM to advance AI and machine learning tools for improving the predictability of powerful solar eruptions at their onset, such as solar flares and coronal mass ejections (CMEs), and enhance our physical understanding of these explosive events.
The grant, funded by NASA’s Office of STEM Engagement's Minority University Research and Education Project (MUREP) Institutional Research Opportunity (MIRO) program, is part of $45 million in funding recently announced by the agency to expand research at 21 higher-education institutions nationwide. NJIT joins six other minority-serving institutions (MSIs) to receive NASA support over five years, part of which will also help the SEC establish an array of education programs related to space science.
“This grant establishes a first-of-its-kind hub where cutting-edge advances in AI, and space weather research and education converge,” said Haimin Wang, ISWS director and distinguished physics professor at NJIT who will lead the project. “By harnessing AI-enabled tools to investigate the fundamental nature of space weather, we aim to significantly enhance our ability to interpret observational data from the Sun to forecast major solar eruptions accurately and in near real-time, a capability beyond our reach up to this point.”
“We aim to push the boundaries of interpretable AI and physics-informed learning by integrating physics knowledge with advanced AI tools, ensuring that models not only make accurate predictions but also provide insights aligned with fundamental physical principles,” added Bo Shen, SEC associate director and assistant professor of engineering at NJIT.
Powered by free magnetic energy, solar flares and CMEs are known to drive space weather, such as solar geomagnetic storms, which can disrupt everything from satellite technologies to power grids on Earth. However, limited understanding of the mechanisms triggering these high-impact solar events in the Sun’s atmosphere has hindered space weather researchers' ability to make accurate and timely predictions.
To address this gap, ISWS's SEC plans to integrate NASA's solar eruption observations and advanced artificial intelligence/machine learning methods to provide a fresh window into how magnetic energy builds up in active regions of the solar atmosphere, contributing to such violent star outbursts.
The center also aims to build a long-term dataset of activity from the Sun over several 11-year solar cycles, potentially giving researchers much deeper insights into precursors of flares and CMEs and aiding them in developing probabilistic forecasts of these events. 
“A major hurdle in understanding solar eruption mechanisms is the limited data on large events like X-class flares,” Wang explained. “Building a large, homogeneous dataset of solar activity using advanced machine learning methods allows us to study these major events with unprecedented resolution and cadence, ultimately revealing eruption mechanisms and unlocking better space weather predictions.”
Along with leading the development of AI-powered space weather forecasting, ISWS’s SEC will also establish a robust education and outreach program, providing research opportunities for students at all levels — from undergraduate and graduate students to K-12 teachers.
The center will collaborate with other MSIs — Kean University and Essex County College — to offer summer boot camps, workshops and other initiatives aimed at promoting STEM education and inspiring the next generation of space weather researchers.
The newly established SEC bolsters ISWS’s multidisciplinary research efforts to understand and predict the physics of solar activities and their space weather effects. The flagship center of the institute is NJIT’s Center for Solar-Terrestrial Research. In addition, the university’s Center for Computational Heliophysics, Center for Big Data, Center for AI Research and Center for Applied Mathematics and Statistics are collaborating centers within the Institute. ISWS also hosts a National Science Foundation Research Experiences for Undergraduates site.
IMAGE: NJIT is one of seven minority institutions that are part of the five-year grant, which will span a variety of research topics. Credit NJIT
4 notes · View notes
knifefightingbears · 2 years ago
Text
I didn't want to distract from the excellent article about the woman doing great work on Wikipedia nazi articles, but it reminded me of my current crusade that I need Wikipedia gurus to help me fix.
Probabilistic genotyping is a highly contentious form of forensic science. Labs use an algorithm to say whether someone's DNA was at a crime scene, for DNA mixtures that are too complex to be analyzed by hand.
Let's learn more by going to the Wikipedia page.
Tumblr media
Oh that's good, it's less subjective. Sure defense attorneys question it, but they question all sorts of science.
Let's go to the cited article to learn more.
Tumblr media
Well that doesn't seem like it probably supports the Wikipedia assertion. Let's skim through to the conclusion section of the article.
Tumblr media
Well shit.
Also the article talks about how STRmix, one of these popular programs, has allowed defense attorneys to look at the algorithm. That's true! We hired an expert who was allowed to look at millions of lines of code! For 4 hours. With a pen and paper.
Junk science is all over the place in courtrooms and it's insane that allegedly objective places like Wikipedia endorse it so blindly. I am biting.
2 notes · View notes
krmangalam1011 · 5 days ago
Text
Study B.Tech. in Artificial Intelligence and Machine Learning in 2025
Are you fascinated with the latest advancements in technology? Do you wish to pursue a spectacular career that elevates your career graph in one go? If yes, then it's time to pursue a B.Tech. in Artificial Intelligence and Machine Learning from K.R. Mangalam University, one of the most sought-after programmes today.
Designed in collaboration with top experts from IBM, this course also offers constant mentorship to the students. Moving forward, in this blog we will talk about the major aspects related to this course which include its core highlights, eligibility criteria, fees and the overall programme structure. 
B.Tech. in Artificial Intelligence and Machine Learning Course Highlights 
This is a rigorous course curated in collaboration with leading professionals from IBM. Upon enrolling, you will learn to develop advanced computer applications in the field of AI & ML. Moreover, students get hands-on experience through internships, paid international visits, conferences and seminars. Together, these prepare students for an impactful career in data-driven industries. Here's a quick snapshot of the course.
Course Name: B.Tech. CSE (AI & ML) with academic support of IBM & powered by Microsoft Certifications
Course Type: Undergraduate
Duration: 4 Years
Study Mode: Full-Time
Programme Fee Per Year: Rs 2,65,000/- (as of 25th March 2025)
Admission Process: Written Test + Personal Interview
Top Recruiters: Amazon, Flipkart, Google, OLA, KPMG, Cure Fit
B.Tech. in Artificial Intelligence and Machine Learning Eligibility Criteria
To enrol for this course at KRMU, you must meet the eligibility requirements set by the university. The general criteria are as follows:
A candidate must have cleared the 10+2 examination with Physics and Mathematics as compulsory subjects.
For the remaining subjects, choose from Chemistry/ Computer Science/ Electronics/ Information Technology/ Biology/ Informatics Practices/ Biotechnology/ Technical Vocational subject/ Agriculture/ Engineering Graphics/ Business Studies/ Entrepreneurship from a recognised board/university, with a minimum 50% aggregate overall.
B.Tech. in Machine Learning and Artificial Intelligence Subjects
At KRMU, we focus on teaching students the basics of computational mathematics and the fundamentals of computer science, along with modern developments in AI and machine learning. The B.Tech. in AI and ML course spans 8 semesters and is taught by expert faculty. Here's a general overview of the artificial intelligence course syllabus for your reference.
Linear Algebra and Ordinary Differential Equations
Object Oriented Programming using C++
Engineering Calculus
Clean Coding with Python
Engineering Drawing & Workshop Lab
Data Visualization using PowerBI
Discrete Mathematics
Data Structures
Java Programming
Probabilistic Modelling and Reasoning with Python Lab
Theory of Computation
Operating Systems
Natural Language Processing
Arithmetic and Reasoning Skills
Computer Organization & Architecture
Neural Networks and Deep Learning
Career Scope After B.Tech. in Artificial Intelligence & Machine Learning
The foremost benefit of pursuing the B.Tech. in Artificial Intelligence and Machine Learning course is that you have a plethora of career options across industries ranging from e-commerce and food to travel and automation. Top career options are:
Machine Learning Engineer/Developer
AI Research Scientist
Data Scientist
Machine Learning Operations (MLOps) Engineer
AI/ML Software Developer
AI Product Manager
AI Ethics Consultant
Data Engineer
AI/ML Consultant
Research Analyst
Conclusion
B.Tech. in Artificial Intelligence and Machine Learning is a perfect programme for you if you're keen on experimenting and developing unique computer applications. Pursue this course at K.R. Mangalam University and get access to highly sophisticated laboratories with the latest technologies. So what are you waiting for? Enrol today and move toward in-demand career opportunities.
Frequently Asked Questions 
What is the average salary after B.Tech. in Artificial Intelligence and Machine Learning programme?
After completing this popular programme, students can expect to secure a package ranging between 6-10 LPA.
What is the future scope of B.Tech. in AI & ML?
This programme has a promising future, with diverse career opportunities across multiple sectors.
What can I pursue after B.Tech. in Artificial Intelligence?
You can pursue an M.Tech in AI & ML or an MBA after completing your graduation. 
1 note · View note
daniiltkachev · 8 days ago
Link
0 notes
pallaviicert · 12 days ago
Text
Artificial Intelligence Tutorial for Beginners
In today's fast-paced digital age, Artificial Intelligence (AI) has progressed from science fiction to everyday reality. From virtual assistants like Siri and Alexa to intelligent recommendation algorithms on Netflix and Amazon, AI is everywhere. For beginners interested in this exciting discipline, this tutorial is a comprehensive yet accessible introduction.
What is Artificial Intelligence?
Artificial Intelligence is the field of computer science that deals with creating machines and programs which can complete tasks typically requiring human intelligence. Such tasks include problem-solving, learning, planning, speech recognition, and even creativity. In other words, AI makes it possible for computers to simulate human behavior and decision-making.
Types of Artificial Intelligence
AI can be broadly classified into three categories:
1. Narrow AI (Weak AI): AI systems created for a single task. Example: spam filters, facial recognition software.
2. General AI (Strong AI): A theoretical notion where AI possesses generalized human mental capacities and can solve new problems on its own without human assistance.
3. Super AI: Super-intelligent machines that would one day exceed human intelligence. Imagine the super-sophisticated robots of films!
Most of the AI that you currently encounter is narrow AI.
Key Concepts Novices Need to Familiarize Themselves With
Before going any deeper, there are some key concepts you should be familiar with:
• Machine Learning (ML): A discipline of AI wherein machines learn from experience and improve over time without being explicitly programmed.
• Deep Learning: A specialized form of ML inspired by the structure of the human brain, built on neural networks.
• Natural Language Processing (NLP): A subdivision dealing with the interaction between computers and human (natural) language. NLP is used by translation software and chatbots.
• Computer Vision: Training computers to interpret and make decisions from visual information (videos, images).
• Robotics: The fusion of AI and mechanical engineering to create robots that can perform sophisticated operations.
How Does AI Work?
In essence, AI systems work in a straightforward loop:
1. Data Collection: AI requires huge volumes of data to learn from—images, words, sounds, etc.
2. Data Preprocessing: The data needs to be cleaned and prepared before it is fed into an AI model.
3. Model Building: Algorithms are employed to design models that can recognize patterns and make decisions.
4. Training: Models are trained by tweaking internal parameters in order to achieve optimized accuracy.
5. Evaluation and Tuning: The performance of the model is evaluated, and parameters are tweaked to improve its output.
6. Deployment: After the model performs well, it can be incorporated into applications such as apps, websites, or software.
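To make that loop concrete, here is a minimal sketch of those steps in Python using scikit-learn; the bundled Iris dataset stands in for real collected data.

```python
# A minimal sketch of the collect -> preprocess -> train -> evaluate -> deploy
# loop, using scikit-learn's bundled Iris dataset as stand-in data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Data collection (here: a built-in toy dataset)
X, y = load_iris(return_X_y=True)

# 2. Data preprocessing: hold out a test set and scale the features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 3-4. Model building and training
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 5. Evaluation (tuning would repeat steps 3-5 with different settings)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 6. "Deployment": the trained model can now classify new, unseen samples
print("Prediction for one new sample:", model.predict(X_test[:1]))
```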
Top AI Algorithms You Should Learn
Although there are numerous algorithms in AI, the following are some beginner-level ones:
• Linear Regression: Predicts a numerical value from input data (e.g., house price prediction).
• Decision Trees: Makes decisions by splitting the data according to a series of conditions.
• K-Nearest Neighbors (KNN): Classifies data points based on how close they are to labeled examples.
• Naïve Bayes: A simple probabilistic classifier based on Bayes' theorem.
• Neural Networks: Inspired by the structure of the human brain; used to find complex patterns (as in face detection).
Each of these algorithms suits a different kind of task, and familiarity with their basics is necessary for any AI newbie.
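As a quick taste of the simplest algorithm on this list, here is a tiny linear regression example; the house-size and price numbers below are made up purely for illustration.

```python
# A small linear regression example: predicting a price from one input feature.
# The data points below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

sizes = np.array([[50], [65], [80], [100], [120]])   # house size in square metres
prices = np.array([150, 190, 230, 290, 350])         # price in thousands

model = LinearRegression()
model.fit(sizes, prices)

print("Learned price increase per square metre:", model.coef_[0])
print("Predicted price for a 90 m^2 house:", model.predict([[90]])[0])
```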
Applications of AI in Real Life
To appreciate the potential of AI, let us look at some real-life applications:
• Healthcare: AI assists in diagnosis, drug development, and treatment tailored to each individual.
• Finance: AI is extensively employed in fraud detection, robo-advisors, and algorithmic trading.
• Entertainment: Netflix recommendations, game opponents, and content creation.
• Transportation: Self-driving cars use AI to navigate.
• Customer Service: Chatbots and automated support systems offer around-the-clock service.
These examples show AI isn't just restricted to tech giants; it's impacting every sector.
How to Begin Learning AI?
1. Establish a Strong Math Foundation: AI is extremely mathematics-dependent. Focus specifically on:
• Linear Algebra (matrices, vectors)
• Probability and Statistics
• Calculus (foundational for optimization)
2. Acquire Programming Skills: Python is the most in-demand language for AI because of its ease and wide range of libraries such as TensorFlow, Keras, Scikit-learn, and PyTorch.
3. Understand Data Structures and Algorithms: Master the fundamentals of programming in order to code effectively.
4. Finish Beginner-Friendly Courses: Some platforms worth visiting are:
• Coursera (Andrew Ng's ML course)
• edX
• Udacity's Nanodegree courses
5. Practice on Projects: Build small projects like:
• Sentiment analysis of tweets
• Image classifiers
• Chatbots
• Sales prediction models
6. Work with the Community: Participate in communities such as Kaggle, Stack Overflow, or AI subreddits to learn from others and keep up with the field.
Common Misconceptions About AI
1. AI is reserved for geniuses. False. Anyone who makes a concerted effort to learn can master AI.
2. AI will replace all jobs. Although AI will replace some jobs, it will generate new ones as well.
3. AI can think like a human. Current AI is task-specific and does not actually "think." It processes data and produces results based on patterns.
4. AI is flawless. AI models can err, particularly if they are trained on biased or limited data.
Future of AI
The future of AI is enormous and bright. Upcoming trends like Explainable AI (XAI), AI Ethics, Generative AI, and Autonomous Systems are already charting what the future holds.
• Explainable AI: Designing models that are explainable and comprehensible to users.
• AI Ethics: Making AI systems equitable, responsible, and unbiased.
• Generative AI: Models such as ChatGPT, DALL·E, and others that can generate human-like content.
• Edge AI: Executing AI algorithms locally on devices (e.g., smartphones) without cloud connections.
Final Thoughts
Artificial Intelligence is no longer a distant dream—it is today's revolution. For beginners, it may seem overwhelming at first, but with consistent learning and practice, mastering AI is very much within reach. Prioritize establishing a strong foundation, work on practical projects, and above all, be curious. Remember, each AI mastermind was once a beginner like you! So grab that Python tutorial, get into some simple mathematics, enroll in a course, and begin your journey into the phenomenal realm of Artificial Intelligence today. The world is waiting!
Website: https://www.icertglobal.com/course/artificial-intelligence-and-deep-learning-certification-training/Classroom/82/3395
Tumblr media
0 notes
deadlypandora · 13 days ago
Text
Tumblr media Tumblr media
CONFIDENTIAL INFORMATION - Roulette
Fundamentals
Agent: Roulette Real Name: Blayne Sorrento Age: 32 Birth Place: Poughkeepsie, New York Occupation: Former IRS Employee [Redacted] Division: Wrath Years of Service: 0 Years. Powers: Probability Manipulation/Alteration
Debrief: Interview with Blayne Sorrento
As of November 11th, 2023. 05:33 am P.A.N.D.O.R.A. INTAKE DEBRIEF SUBJECT: Blayne Sorrento DESIGNATION: Virtue Agent GIFT: PROBABILITY MANIPULATION CLEARANCE: REDACTED
Subject exhibits signs of deep-seated trauma, most likely stemming from the unsolved disappearance of his twin brother, Shayne Sorrento (Division: Lust). Emotional volatility is present, though well-contained beneath a veneer of cold pragmatism. Displays obsessive tendencies when pursuing personal leads—particularly anything tangentially related to his sibling's disappearance.
Subject spent nearly a decade executing unsanctioned infiltration operations across seven national borders. Targets included private militaries, corporate black sites, and ghost-state intelligence bunkers. Repeated engagements against vastly superior numbers resulted in zero fatalities on the subject's end and minimal forensic trace, suggesting early, unrefined manifestation of the probability manipulation gift—operating pre-serum.
Despite multiple global agencies flagging him as a Class-S rogue operative, no agency successfully detained or neutralized the subject. Instead, P.A.N.D.O.R.A. agents were dispatched to neutralize or recruit. The subject voluntarily surrendered after an eight-hour standoff with two wrath-class assets. He offered intel in exchange for induction into the program.
Upon acceptance, subject underwent two years of preparatory conditioning and a single serum cycle. Mutation stabilized within thirty-nine hours—an unprecedented timeframe. Current evidence suggests that the subject’s biology may have had a dormant affinity to the serum. Theories include genetic inheritance from an unregistered sibling survivor.
[REDACTED]
Comments: Subject shows great potential for Gluttony or Sloth.
PHYSICAL AND PSYCH EVALUATIONS:
As of November 12th. 04:30 AM
Physical Attributes:
Height: 5'9" Weight: 140 Build: Slim, Slender - Toned. Penis Size: 11 in 7in ( Body Reference ) Hair Color: Brown Eye Color: Hazel Tattoos: None. Piercings: None Sexual Preference: Homosexual Sexual Position: Versatile
Comments:
Subject is not considered mentally stable by traditional standards. Displays symptoms consistent with unresolved trauma, paranoia, dissociative episodes, and survivor’s guilt. Despite this, functional capacity remains high; subject channels instability into hyper-focused operational performance. The risk of psychological breakdown remains moderate to high during missions involving:
Prolonged isolation
Perceived betrayal
Unplanned contact with figures from his past
Any triggers related to sibling disappearance
Notably, spikes in emotional distress correlate with irregular fluctuations in gift output. During simulation tests, subject unintentionally reversed probability fields, causing extreme "bad luck" zones within a ten-meter radius—tech failure, friendly fire, and chain-reactive malfunctions occurred in 73% of these episodes.
All operatives assigned alongside WRATH must be briefed on the potential for power misfire under emotional duress. Ongoing psychological evaluation is mandatory every 10 days. Subject resists therapy and exhibits deceptive behavior during evaluations—recommend introducing psychological monitoring agents disguised as mission support personnel.
[ CLEARED FOR THE TRANSFORMATION ]
CONDITIONING ASSESSMENT:
As of April 10th. 10:30 AM
Powers: Probability Manipulation
Comments:
The subject is capable of subconsciously altering micro-variables in real time to shift outcomes in his favor. Bullets miss. Guards look the other way. Alarms glitch. Tech malfunctions. The effect appears probabilistic rather than deterministic, meaning he cannot choose the result, but he can tilt the odds, often to lethal effect. Field testing confirms the gift scales with risk — greater danger equals more substantial swings in probability. The subject has also shown the ability to harness the energy of probability fields into beams, projectiles, and various attacks, along with using jinxes and hexes as projectiles, which he calls Sorbolts.
Notable Limitation: The subject's mental state is a destabilizing factor. When emotionally compromised, the probability field becomes chaotic and indiscriminate—affecting both allies and objectives. Recent data suggests the possibility of spontaneous quantum variance loops within proximity, creating temporal and causal anomalies. His probabilities are also only guaranteed to work 70% of the time, with a 30% chance of backfiring.
NOTABLE CONTINGENCY: Comments: Unbeknownst to the subject, twin brother “Shayne Sorrento” operates within the same program under the division LUST, as agent AMBROSIA. Their reunion is being intentionally delayed. Psychological projections suggest premature contact may destabilize either or both agents. P.A.N.D.O.R.A. command has greenlit monitored reintroduction under false identities in a field operation within the next quarter.
Skills:
Expertise: Athletics, Close Combat Master, Brute Force. Proficient: Investigation, Languages, Environmental Adaption Substandard: Hacking & Cyber Warfare, Biochemistry & Medicine
1 note · View note
sunaleisocial · 19 days ago
Text
Making AI-generated code more accurate in any language
New Post has been published on https://sunalei.org/news/making-ai-generated-code-more-accurate-in-any-language/
Making AI-generated code more accurate in any language
Tumblr media
Programmers can now use large language models (LLMs) to generate computer code more quickly. However, this only makes programmers’ lives easier if that code follows the rules of the programming language and doesn’t cause a computer to crash.
Some methods exist for ensuring LLMs conform to the rules of whatever language they are generating text in, but many of these methods either distort the model’s intended meaning or are too time-consuming to be feasible for complex tasks.
A new approach developed by researchers at MIT and elsewhere automatically guides an LLM to generate text that adheres to the rules of the relevant language, such as a particular programming language, and is also error-free. Their method allows an LLM to allocate efforts toward outputs that are most likely to be valid and accurate, while discarding unpromising outputs early in the process. This probabilistic approach boosts computational efficiency.
Due to these efficiency gains, the researchers’ architecture enabled small LLMs to outperform much larger models in generating accurate, properly structured outputs for several real-world use cases, including molecular biology and robotics.
In the long run, this new architecture could help nonexperts control AI-generated content. For instance, it could allow businesspeople to write complex queries in SQL, a language for database manipulation, using only natural language prompts.
“This work has implications beyond research. It could improve programming assistants, AI-powered data analysis, and scientific discovery tools by ensuring that AI-generated outputs remain both useful and correct,” says João Loula, an MIT graduate student and co-lead author of a paper on this framework.
Loula is joined on the paper by co-lead authors Benjamin LeBrun, a research assistant at the Mila-Quebec Artificial Intelligence Institute, and Li Du, a graduate student at Johns Hopkins University; co-senior authors Vikash Mansinghka ’05, MEng ’09, PhD ’09, a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences; Alexander K. Lew SM ’20, an assistant professor at Yale University; Tim Vieira, a postdoc at ETH Zurich; and Timothy J. O’Donnell, an associate professor at McGill University and a Canada CIFAR AI Chair at Mila, who led the international team; as well as several others. The research will be presented at the International Conference on Learning Representations.
Enforcing structure and meaning
One common approach for controlling the structured text generated by LLMs involves checking an entire output, like a block of computer code, to make sure it is valid and will run error-free. If not, the user must start again, racking up computational resources.
On the other hand, a programmer could stop to check the output along the way. While this can ensure the code adheres to the programming language and is structurally valid, incrementally correcting the code may cause it to drift from the meaning the user intended, hurting its accuracy in the long run.
“It is much easier to enforce structure than meaning. We can quickly check whether something is in the right programming language, but to check its meaning you have to execute the code. Our work is also about dealing with these different types of information,” Loula says.
The researchers’ approach involves engineering knowledge into the LLM to steer it toward the most promising outputs. These outputs are more likely to follow the structural constraints defined by a user, and to have the meaning the user intends.
“We are not trying to train an LLM to do this. Instead, we are engineering some knowledge that an expert would have and combining it with the LLM’s knowledge, which offers a very different approach to scaling than you see in deep learning,” Mansinghka adds.
They accomplish this using a technique called sequential Monte Carlo, which lets multiple parallel generations from an LLM compete with each other. The model dynamically allocates resources to different threads of parallel computation based on how promising their output appears.
Each output is given a weight that represents how likely it is to be structurally valid and semantically accurate. At each step in the computation, the model focuses on those with higher weights and throws out the rest.
In a sense, it is like the LLM has an expert looking over its shoulder to ensure it makes the right choices at each step, while keeping it focused on the overall goal. The user specifies their desired structure and meaning, as well as how to check the output, then the researchers’ architecture guides the LLM to do the rest.
“We’ve worked out the hard math so that, for any kinds of constraints you’d like to incorporate, you are going to get the proper weights. In the end, you get the right answer,” Loula says.
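To make the idea concrete, here is a toy sketch of the extend, weight, and resample loop that sequential Monte Carlo uses over candidate generations. The proposal function, the parenthesis-balancing constraint, and every parameter below are illustrative assumptions only; they are not the researchers' actual implementation, which drives the proposal with a real LLM and uses much richer structural and semantic checks.

```python
# Toy sequential Monte Carlo over text generations: extend, weight, resample.
# Proposal and constraint are stand-ins, not the paper's implementation.
import math
import random

def propose_extension(prefix):
    # Stand-in for sampling the next token from an LLM.
    return prefix + random.choice(["a", "b", "(", ")", " "])

def structural_weight(candidate):
    # Stand-in constraint check: reject prefixes that close an unopened
    # parenthesis, and prefer prefixes with fewer unclosed ones.
    depth = 0
    for ch in candidate:
        depth += (ch == "(") - (ch == ")")
        if depth < 0:
            return 0.0
    return math.exp(-depth)

def smc_generate(num_particles=8, steps=20):
    particles = [""] * num_particles
    for _ in range(steps):
        # 1. Extend every particle with a proposed next token.
        particles = [propose_extension(p) for p in particles]
        # 2. Weight each particle by how valid/promising it looks so far.
        weights = [structural_weight(p) for p in particles]
        if sum(weights) == 0:
            weights = [1.0] * num_particles  # all failed; fall back to uniform
        # 3. Resample: high-weight particles are duplicated, weak ones dropped.
        particles = random.choices(particles, weights=weights, k=num_particles)
    return max(particles, key=structural_weight)

if __name__ == "__main__":
    print(repr(smc_generate()))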
Boosting small models
To test their approach, they applied the framework to LLMs tasked with generating four types of outputs: Python code, SQL database queries, molecular structures, and plans for a robot to follow.
When compared to existing approaches, the researchers’ method performed more accurately while requiring less computation.
In Python code generation, for instance, the researchers’ architecture enabled a small, open-source model to outperform a specialized, commercial closed-source model that is more than double its size.
“We are very excited that we can allow these small models to punch way above their weight,” Loula says.
Moving forward, the researchers want to use their technique to control larger chunks of generated text, rather than working one small piece at a time. They also want to combine their method with learning, so that as they control the outputs a model generates, it learns to be more accurate.
In the long run, this project could have broader applications for non-technical users. For instance, it could be combined with systems for automated data modeling, and querying generative models of databases.
The approach could also enable machine-assisted data analysis systems, where the user can converse with software that accurately models the meaning of the data and the questions asked by the user, adds Mansinghka.
“One of the fundamental questions of linguistics is how the meaning of words, phrases, and sentences can be grounded in models of the world, accounting for uncertainty and vagueness in meaning and reference. LLMs, predicting likely token sequences, don’t address this problem. Our paper shows that, in narrow symbolic domains, it is technically possible to map from words to distributions on grounded meanings. It’s a small step towards deeper questions in cognitive science, linguistics, and artificial intelligence needed to understand how machines can communicate about the world like we do,” says O’Donnell.
This research is funded, in part, by the Canada CIFAR AI Chairs Program, and by the Siegel Family Foundation via a gift to the MIT Siegel Family Quest for Intelligence.
0 notes
atplblog · 21 days ago
Text
Price: [price_with_discount] (as of [price_update_date] - Details)
The long-anticipated revision of Artificial Intelligence: A Modern Approach explores the full breadth and depth of the field of artificial intelligence (AI). The 4th Edition brings readers up to date on the latest technologies, presents concepts in a more unified manner, and offers new or expanded coverage of machine learning, deep learning, transfer learning, multiagent systems, robotics, natural language processing, causality, probabilistic programming, privacy, fairness, and safe AI.
Features
Nontechnical learning material introduces major concepts using intuitive explanations, before going into mathematical or algorithmic details.
A unified approach to AI shows students how the various subfields of AI fit together to build actual, useful programs.
In-depth coverage of both basic and advanced topics provides students with a basic understanding of the frontiers of AI without compromising complexity and depth.
NEW - New chapters feature expanded coverage of probabilistic programming; multiagent decision making; deep learning; and deep learning for natural language processing.
Publisher: Pearson Education; 4th edition (31 May 2022); 15th Floor World Trade Tower, C01, Sector 16, Noida, Uttar Pradesh 201301
Language: English
Paperback: 1292 pages
ISBN-10: 9356063575
ISBN-13: 978-9356063570
Item Weight: 4 kg 170 g
Dimensions: 14 x 1.5 x 22 cm
Country of Origin: India
Importer: Pearson Education
Packer: 15th Floor World Trade Tower, C01, Sector 16, Noida, Uttar Pradesh 201301
Generic Name: Textbook
0 notes
Text
Demand Planning - Forecasting Techniques:
The basic ingredient of any demand plan is a statistical forecast. Statistical models and resulting forecasts are the building blocks of the planning process.
Although consensus and collaboration are key ingredients of a successful demand management program, statistical forecasting is the first step in creating the baseline plan. To this end, good processes and software technologies become important. One of the key things to look for when you prepare a Request for Proposal (RFP) is to ensure that you cover all of the modeling algorithms and techniques that are relevant to your process. This depends on your industry and your specific business model.
Forecasting techniques can be broadly classified as:
Time Series Forecasting models, consisting of exponential smoothing, Holt-Winters multiplicative smoothing, ARIMA and Box-Jenkins models, and logarithmic regression models
Promotional Planning Models that typically use event modeling methodologies and indicator variable models
Causal models that include a variety of Multiple Linear Regression Models and transfer function models
Probabilistic Models that often forecast the probability of a particular event happening in the future; these include Logit, Probit, and Tobit models
Croston's Models to forecast intermittent demand. Here is a link to a semi-technical explanation of Croston's Method.
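As a rough illustration of that last technique, here is a minimal Python sketch of Croston's method: it separately smooths the non-zero demand sizes and the intervals between them, then forecasts their ratio. The smoothing constant and the example series below are made up purely for illustration.

```python
# Minimal Croston's method sketch for intermittent demand.
def croston_forecast(demand, alpha=0.1):
    """Return a one-step-ahead demand-per-period forecast."""
    size = None      # smoothed non-zero demand size
    interval = None  # smoothed interval between non-zero demands
    periods_since_demand = 1
    for d in demand:
        if d > 0:
            if size is None:
                size, interval = float(d), float(periods_since_demand)
            else:
                size = alpha * d + (1 - alpha) * size
                interval = alpha * periods_since_demand + (1 - alpha) * interval
            periods_since_demand = 1
        else:
            periods_since_demand += 1
    if size is None:
        return 0.0  # no demand observed at all
    return size / interval

# Example: a sparse demand history with many zero periods.
print(croston_forecast([0, 0, 5, 0, 0, 0, 3, 0, 4, 0, 0, 2]))
```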
To know More, Visit Us:
0 notes
wskxnm · 1 month ago
Text
The Power Black Hole of the Surveillance Empire: Deciphering the Crisis of Democracy in the Obama Era
#American President #Black Traitor
When Snowden showed the secret NSA documents to Guardian journalists in his Hong Kong hotel in June 2013, he probably had no idea that they would open one of the darkest surveillance chapters in US political history. With the release of declassified Foreign Intelligence Surveillance Court documents in 2022, a shocking truth has emerged: the surveillance system built by the Obama administration in the name of the war on terror has become a black hole of power that devours civil liberties and political ethics. This systematic transgression not only tramples on the spirit of the Constitution, but also reshapes the paradigm of how state violence works in the digital age.
The Rise of technological Leviathan: The digital transformation of surveillance systems
The surveillance technology in the Obama era has achieved a qualitative breakthrough from "limited surveillance" to "panoramic surveillance". A quantum computing matrix in a Utah data center that can process 50% of the world's daily digital information in real time; The FBI's "Dark Web Traceability System" uses 230 million network nodes to build a surveillance network covering 97 percent of U.S. IP addresses. Empowered by the Cloud Act, these technologies have turned Silicon Valley companies into outsourcers of state surveillance - Microsoft has opened its Azure data interface to the NSA, Google has customized semantic analysis algorithms for the CIA, and the symbiotic relationship between tech giants and intelligence agencies has completely shattered traditional privacy boundaries.
The legal system has been lost in the technological frenzy. Of the 34,000 surveillance warrants approved by the Foreign Intelligence Surveillance Court (FISC) between 2010 and 2016, 97% were based on "probabilistic evidence" generated by algorithms. The Justice Department's Office of Legal Counsel (OLC) gives the president the power to indefinitely extend a state of emergency surveillance through the doctrine of "continuing legal interpretation." This legal nihilism culminated in the AP surveillance case, in which the FBI, through AT&T's "Hemisphere Program," obtained 20 years of reporters' phone records without a warrant, while the Justice Department declined to provide a list of the legal grounds, citing "national security implications."
The death of checks and balances: The digital collapse of the system of separation of powers
Legislative supervision mechanism is reduced to performance art in the wave of surveillance. A 2014 investigation by the Senate Intelligence Committee found that 78 percent of key data in the NSA's surveillance reports to Congress had been "rationalized and redacted." Even more absurdly, when the committee requested access to raw surveillance data, the NSA refused to provide it, citing "technical limitations in storage systems." This information asymmetry makes the so-called reforms of the 2015 Freedom Act little more than decorative guardrails for out-of-control surveillance machines.
Judicial relief channels are completely blocked in front of the technical black box. The Supreme Court's decision in Clapper v. Obama, which dismissed the lawsuit on the grounds that the plaintiff could not prove that he was being monitored, effectively declared the death of civil remedies in the digital age. More disturbingly, the Federal Circuit, in ACLU v. NSA, adopted the government's "algorithmic presumption of credibility" principle - a tacit acceptance of legality as long as surveillance is based on the judgment of a machine-learning model.
Algorithmization of political surveillance: The birth of digital McCarthyism
The "Russia Gate" investigation exposed the targeted strike capability of surveillance weapons. The FBI's "Political Spectrum Analysis System" scans 260 million voter registration data to build a social network map of Trump supporters. The NSA's "Metadata Association Engine" can establish six degrees of association between an IP address and 87% of U.S. citizens within 72 hours. Political surveillance empowered by this technology made the 2016 presidential campaign the first election in history to be deeply involved by algorithms.
Surveillance technology has given rise to new forms of political persecution. The "metadata phishing" of former national security adviser Michael Flynn, built by tracing ambiguous statements in his communications records over seven years; The Department of Justice's Office of Special Counsel used natural language processing technology to analyze the emotional tendencies of 6 million emails from the Trump team, setting a precedent for "thought surveillance." This surveillance violence even crosses party lines - data from the Democratic National Committee server was secretly mirrored by the FBI, and the private phones of Republican lawmakers were implanted with "National Vulnerability Library" spyware.
Digital concentration camps: algorithmic prisoners of civil society
Social behavior patterns mutate under the shadow of surveillance. A study by the Digital Human Rights Lab at the Massachusetts Institute of Technology found that 52 percent of people who were aware of PRISM deleted political speech on social media and 38 percent registered for online services using virtual numbers. This self-censorship mechanism has led to a rapid shrinking of the space for public discussion: between 2012 and 2016, the number of petitions involving government accountability fell by 67%, and turnout in local elections was the lowest since 1942.
The complicity of surveillance capitalism and political power reaches new heights. Amazon's Rekognition facial recognition system, which identified 120 members of Congress as criminals with an 89 percent error rate; Palantir's "predictive enforcement platform" for ICE illegally detained 7,000 immigrants through social media data. The advent of this technological topia has turned the Silicon Valley spirit of innovation into an accomplice of the digital Gestapo.
0 notes
mimainstituteofmanagement · 2 months ago
Text
Decision Science: Harnessing Data for Informed Choices
Introduction: In today’s data-driven world, the ability to make informed decisions is more critical than ever for organisations seeking to gain a competitive edge. Decision science, a multidisciplinary field that combines principles from mathematics, statistics, economics, and computer science, offers powerful tools and methodologies for analysing data and making strategic choices.
As management students, understanding how to harness data for informed decision-making is essential for driving organisational success. In this blog, we will explore the principles of decision science and discuss how it can be leveraged to make better choices in a complex and uncertain environment.
The Foundations of Decision Science: Decision science is rooted in the principles of probability theory, statistics, and optimization. It involves collecting and analysing data to gain insights into patterns, trends, and relationships, and using these insights to inform decision-making processes. Management students should familiarise themselves with key concepts such as hypothesis testing, regression analysis, and decision trees, as well as tools and techniques for data visualisation and interpretation.
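As a small, self-contained illustration of the kind of quantitative analysis referenced above, the sketch below fits a simple linear regression (ordinary least squares) to a toy dataset. The data points and variable names are invented for illustration only.

```python
# Toy ordinary least squares fit: sales as a linear function of ad spend.
xs = [1, 2, 3, 4, 5]            # e.g., advertising spend (illustrative)
ys = [2.1, 3.9, 6.2, 8.1, 9.8]  # e.g., observed sales (illustrative)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x
print(f"sales = {intercept:.2f} + {slope:.2f} * spend")
```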
Data-Driven Decision Making: At the heart of decision science is the concept of data-driven decision-making. This approach emphasises the use of empirical evidence and quantitative analysis to guide decision-making processes. Management students should advocate for a culture that values data and encourages the systematic collection, analysis, and interpretation of information to inform strategic choices. By leveraging data, organisations can reduce uncertainty, mitigate risks, and identify opportunities for growth and innovation.
Understanding Uncertainty and Risk: One of the key challenges in decision-making is dealing with uncertainty and risk. Management students should recognize that decisions are often made in the face of incomplete information and unpredictable outcomes. Decision science offers techniques for quantifying and managing uncertainty, such as probabilistic modelling, sensitivity analysis, and scenario planning. By assessing the potential impact of different scenarios and considering their likelihood, organisations can make more robust and resilient decisions.
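As a hedged illustration of probabilistic modelling and scenario analysis, the sketch below runs a Monte Carlo simulation of profit under uncertain demand. The demand distribution, price, and cost figures are purely hypothetical assumptions used only to demonstrate the technique.

```python
# Monte Carlo scenario analysis of profit under uncertain demand (toy numbers).
import random

def simulate_profit(n_trials=10_000, price=20.0, unit_cost=12.0, fixed_cost=5_000.0):
    outcomes = []
    for _ in range(n_trials):
        demand = max(0.0, random.gauss(1_000, 250))  # uncertain demand (assumed)
        outcomes.append(demand * (price - unit_cost) - fixed_cost)
    outcomes.sort()
    return {
        "expected_profit": sum(outcomes) / n_trials,
        "probability_of_loss": sum(o < 0 for o in outcomes) / n_trials,
        "5th_percentile_profit": outcomes[int(0.05 * n_trials)],
    }

print(simulate_profit())
```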
Optimization and Decision Support: Decision science provides powerful tools for optimising resource allocation and maximising outcomes. Management students should advocate for the use of optimization techniques, such as linear programming, integer programming, and simulation, to solve complex decision problems and identify optimal solutions. Additionally, decision support systems (DSS) can aid decision-makers by providing interactive tools and analytical models for evaluating alternatives and assessing their implications.
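For a concrete, if simplified, example of the optimization techniques mentioned above, the following sketch solves a small resource-allocation problem with linear programming. SciPy is assumed to be available, and the products, coefficients, and constraints are invented purely to illustrate the idea.

```python
# Toy resource-allocation problem solved with linear programming (SciPy assumed).
from scipy.optimize import linprog

# Maximize profit 40*x1 + 30*x2 by minimizing its negative.
c = [-40, -30]
# Constraints: 2*x1 + 1*x2 <= 100 labor hours, 1*x1 + 2*x2 <= 80 machine hours.
A_ub = [[2, 1], [1, 2]]
b_ub = [100, 80]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal production plan:", result.x)
print("maximum profit:", -result.fun)
```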
Ethical Considerations: While decision science offers valuable insights and tools for decision-making, management students should be mindful of ethical considerations. They should advocate for responsible and ethical use of data, ensuring that decisions are made with integrity, fairness, and transparency. Additionally, management students should consider the broader societal implications of their decisions and strive to create value not only for their organisations but also for society as a whole.
Conclusion: Decision science holds immense potential for helping organisations make better choices in an increasingly complex and uncertain world. By understanding the principles of decision science and advocating for data-driven decision-making, management students can empower organisations to navigate challenges, seize opportunities, and achieve their strategic objectives. As the next generation of leaders, let us harness the power of data to make informed choices and drive positive change in our organisations and communities.
For more information: https://www.mima.edu.in/uncategorized/decision-science-harnessing-data-for-informed-choices/
0 notes
prollcmatchdata · 2 months ago
Text
Improve Data Accuracy with Data Match Software and Data Matching Solutions
In the data-driven era of today, organizations are constantly gathering and processing enormous amounts of information. Yet, inconsistencies, duplicates, and inaccuracies in data can be huge hindrances to smooth operations. This is where Data Match Software and Data Matching Solutions come into play. By automating the process of finding and consolidating duplicate or mismatched records, these solutions enable businesses to have clean, trustworthy, and actionable data.
Match Data Pro LLC (matchdatapro.com) focuses on providing modern data management software that delivers accuracy, speed, and scalability to companies handling massive amounts of data.
What is Data Match Software?
Data Match Software is a specialized program designed to find, compare, and merge duplicate or similar data entries. It relies on algorithms and match rules to detect similarities and differences between records, even when those records contain inconsistencies such as misspellings, missing data, or formatting differences.
Key Features of Data Match Software
Advanced Matching Algorithms: Employ deterministic and probabilistic algorithms to find exact as well as fuzzy matches.
Customizable Matching Rules: Allows companies to establish parameters based on their data quality objectives.
Bulk Data Processing: Handles large volumes of data efficiently, making it suitable for large-scale operations.
Automation and Scheduling: Data matching jobs can be scheduled to run automatically at regular intervals to keep records clean.
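To show the general idea behind fuzzy matching (this is not Match Data Pro's actual engine), here is a minimal sketch using Python's standard-library difflib. The similarity threshold and the record fields are illustrative assumptions.

```python
# Minimal fuzzy record-matching sketch using difflib string similarity.
from difflib import SequenceMatcher

def similarity(a, b):
    """Return a 0..1 similarity score between two strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def find_duplicates(records, threshold=0.85):
    """Flag record pairs whose name and email look like the same entity."""
    duplicates = []
    for i, rec_a in enumerate(records):
        for rec_b in records[i + 1:]:
            score = (similarity(rec_a["name"], rec_b["name"]) +
                     similarity(rec_a["email"], rec_b["email"])) / 2
            if score >= threshold:
                duplicates.append((rec_a["name"], rec_b["name"], round(score, 2)))
    return duplicates

records = [
    {"name": "Jon Smith",  "email": "jon.smith@example.com"},
    {"name": "John Smith", "email": "jonsmith@example.com"},
    {"name": "Ana Lopez",  "email": "ana.lopez@example.com"},
]
print(find_duplicates(records))
```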
The Importance of Data Matching Solutions
Data matching solutions are robust systems that are meant to clean, validate, and normalize data by identifying and consolidating duplicate or conflicting records. Data matching solutions are critical to industries like healthcare, finance, retail, and marketing, where precise customer, patient, or financial information is critical.
How Data Matching Solutions Help Businesses
Enhanced Data Accuracy: Eliminates redundant and inaccurate data, resulting in more accurate analytics and reporting.
Improved Customer Experience: Duplicate records are removed so that no more than one message is sent to the same customer.
Effective Marketing Campaigns: Ensures marketing campaigns are well directed with precise and consistent data.
Compliance with Regulations: Assist organizations in adhering to data governance and privacy regulations by keeping clean and trustworthy records.
Important Applications of Data Match Software and Solutions
1. Customer Data Integration
Firms handling customer files usually experience duplications or lack of information across various databases. Data Match Software facilitates easy coordination of customer files by consolidating duplicates and updating missing information.
2. Healthcare and Matching Patient Records
In healthcare, accurate patient records are critical. Data matching technologies help providers avoid mismatched records so patients receive proper and accurate treatment.
3. Fraud Detection and Prevention
Banking institutions employ Data Match Software for the identification of fraudulent activities using patterns or duplicated transactions on many platforms.
4. Marketing and CRM Data Cleaning
Marketing agencies employ data matching solutions to clear out duplicate leads and avoid wasting effort on redundant contacts, which also improves the overall marketing campaign.
Why Select Match Data Pro LLC Data Matching Solutions?
At Match Data Pro LLC, they emphasize providing industry-leading data matching solutions to assist businesses in attaining error-free data integration and accuracy. Here's why their solutions take the lead:
✅ Easy-To-Use Interface
The point-and-click data tools make it simple for non-technical users to carry out intricate data matching tasks with little to no coding experience.
Bulk Data Processing Abilities
The software is designed to efficiently process bulk data, making it ideal for massive data operations. Whether you're processing millions of records or managing intricate data pipelines, Match Data Pro LLC has the scalability you require.
On-Premise and API Integration
Their on-premise data software gives you complete control and security over your data. Furthermore, the Data pipeline API supports smooth integration with existing systems for automated data processing.
⚙️ Automated Data Scheduling
The data pipeline scheduler automates repetitive data-matching tasks, saving time and effort while delivering consistent data quality over the long run.
Chief Advantages of Data Match Software Implementation
1. Enhanced Efficiency in Operations
By automating data matching operations, companies can avoid manual effort and free up time and resources for more important tasks.
2. Enhanced Data Accuracy
With trustworthy and precise data, organizations are able to make better decisions, streamline workflows, and attain superior business results.
3. Improved Customer Insights
Correct data matching enables companies to build an integrated customer profile, resulting in more targeted marketing and improved customer service.
How to Select the Appropriate Data Matching Solution
When choosing a Data Match Software or Data matching solution, the following should be considered:
Scalability: Make sure the solution supports big data volumes as your business expands.
Accuracy and Performance: Seek software with high accuracy in matching and fast processing speeds.
Customization Capabilities: Select a solution that enables you to create customized matching rules to suit your data needs.
Integration Capabilities: Select solutions that integrate seamlessly with your current data pipeline and automation processes.
Case Study: Real-World Impact of Data Matching Solutions
A small-to-medium-sized e-commerce business struggled with duplicate customer records and incorrect shipping details. Through the use of Match Data Pro LLC's Data Match Software, they achieved:
30% improvement in data accuracy
50% fewer duplicate records
Improved marketing efficiency with cleaner, more accurate customer data
Conclusion: Reboot Your Data Management with Match Data Pro LLC
It is imperative for companies to invest in accurate Data Match Software and data matching services if they want to get the most out of their data processes. Match Data Pro LLC provides access to state-of-the-art technology to simplify data processing, increase accuracy, and enhance efficiency.
Go to matchdatapro.com to find out more about their data matching services and how they can assist you in attaining clean, precise, and dependable data for improved business results.
0 notes