#big data fundamentals
Text
Big Data and Hadoop Online Training
In the era of digital transformation, the synergy between big data analytics and Hadoop technology has become the cornerstone of innovation. To master this dynamic landscape, professionals are turning to Big Data and Hadoop Online Training, a transformative journey that seamlessly blends certification, hands-on learning, and placement support.
Unveiling the Layers of Big Data and Hadoop Online Training:
Revolutionizing Learning with Online Training: Online training has revolutionized education, and our Big Data and Hadoop Online Training capitalizes on this shift. It offers professionals the flexibility to learn at their own pace, breaking down geographical barriers and providing access to high-quality content from anywhere in the world.
The Power of Certification: Central to our program is the Big Data Hadoop Certification Training Course. Certification is not merely a badge; it's a validation of skills. It not only adds credibility to your profile but also opens doors to diverse career opportunities in the competitive job market.
Hands-On Learning Experience: Theoretical knowledge finds practical application in our hands-on learning approach. Participants engage in real-world projects, navigating the complexities of Hadoop technologies. This immersive experience not only solidifies understanding but also fosters confidence in dealing with diverse data scenarios.
Comprehensive Curriculum: Our program covers the entire spectrum of Big Data and Hadoop, from fundamental concepts to advanced tools like Apache Hive, Apache Pig, and Apache HBase. This comprehensive curriculum ensures participants gain a nuanced understanding of the Hadoop ecosystem, preparing them for real-world challenges.
Online Training and Placement Course: Bridging the gap between education and employment, our online training and placement course offers holistic career development. Participants receive support in resume building, interview preparation, and connections to potential employers, ensuring a seamless transition into the workforce.
Advantages of Big Data and Hadoop Online Training:
Flexibility and Accessibility: Online training provides unparalleled flexibility, allowing professionals to learn at their own pace. Accessible from anywhere in the world, it eliminates geographical constraints, making high-quality training available to a diverse global audience.
Global Instructors and Industry Insights: Learning from industry experts enriches the training experience. Global instructors bring real-world insights, experiences, and global perspectives to the program, ensuring participants are well-prepared for the dynamic nature of Big Data projects.
Practical Application for Real-World Challenges: Our emphasis on hands-on learning ensures participants gain practical experience in dealing with real-world Big Data challenges. This practical exposure not only solidifies their understanding of Hadoop but also instills confidence in their ability to tackle complex data scenarios.
Certification for Career Advancement: A certification in Big Data and Hadoop is a valuable credential in the competitive job market. It serves as a testament to an individual's skills and opens doors to a wide range of career opportunities in the expansive domain of Big Data analytics.
Placement Support for Career Transition: The online training and placement course offers valuable support for individuals transitioning into Big Data roles. Assistance with resume building, interview preparation, and introductions to potential employers creates a seamless pathway for participants to embark on a successful career journey.
Conclusion: Empowering Careers in the Data-Driven Future
Enrolling in our Big Data and Hadoop Online Training is not just a learning endeavor; it's a strategic investment in professional growth and career advancement. As the volume of data continues to surge, skilled professionals who can navigate the Big Data landscape are in high demand. Our well-structured online training program, blending certification, hands-on learning, and placement support, prepares individuals to excel in the dynamic world of Big Data. Embrace the transformative power of Big Data and Hadoop, and position yourself for success in the evolving landscape of analytics. Master the data odyssey with confidence, armed with skills and certification that set you apart in the competitive realm of Big Data analytics.
#h2kinfosys#big data#big data hadoop certification#big data fundamentals#big data hadoop#Big Data training#BigDataHadoopCertification#HadoopTraining#CertificationTutorial
0 notes
Text
i know i've been on my anti-modern AU propaganda lately and it's just because i've been delving deeper into the sally face ao3 tags and i just keep finding them over and over. it's frustrating because there are a lot of really interesting concepts out there that would fit really well and make for a genuinely really interesting story in the 90s, but they get thrown off because the author doesn't know enough about the 90s to write for that time period, so they make it into a modern au instead. there's nothing inherently wrong with that, and i think there's room for modern aus to be done well here, i even have my own half-serious modern au, but i do think you often lose part of what makes sally face special when you turn the story into any other kind of contemporary love story/horror story/etc where all the characters just have ~iphones~ and use ~snapchat~ and all these things.
like, the 90s was not some kind of alien planet, and a vast number of the problems that you're solving with smartphones can be worked around very easily with just a bit of research or thought. long distance walkie-talkies, pagers, and PDAs were all (though sometimes expensive) perfectly capable contemporary technologies for talking to people when you are not physically with them. in fact, a lot of the abbreviations and slang we use over text right now were developed by young people in the 90s using pagers to talk to their friends. PDAs were a bit more out there in the 90s than they were in the aughts, but it's still completely plausible for henry in particular to have one, considering he already owns a home computer, which was not at all ubiquitous in the 90s. considering the apparent financial limitations that he and sal live under (it's never stated explicitly, but i mean, they live at addison's, they can't be in a great financial situation) and how insanely expensive computers were back then, it's more than likely that henry's job requires a home computer of some sort, meaning that a PDA would probably be incredibly useful to him if he were away from it, because there's no way in hell he's getting himself a luggable or another kind of early laptop to bring with him, that would've been too expensive.
and that's ignoring the fact that so many situations where two characters are apart from each other but need to communicate could just be fixed by rewriting the plot so that they can meet in person. i know that's not what people wanna hear because rewriting sucks, but you can find a lot of reasons for characters to meet each other randomly or to have reasons to meet up later if you give it a bit of problem-solving. part of what makes the pre-smartphone era interesting to write for and so optimized for horror, and probably a big reason that gabry chose this time period for the story in the first place, is the level of disconnection between each character in the story BECAUSE they don't have things like smartphones. having to work around this technological limitation is part of the fun, because you get a very enjoyable push and pull of closeness vs. disconnection between each character.
this is great for alienating ash, the only one who doesn't live in the apartments (except for neil), and causing her internal conflict about her relationships with the rest of her friends, especially as the story progresses and they start discovering more shit about the cult, and her instincts are to call the cops because she's a lot more normal than her friends are. or, it's good for alienating travis, who also doesn't live there and is far more isolated than everyone else (more on that next), or for creating an unhealthy and codependent relationship between sal and larry, who, with the walkies, are the only two in the friend group who DO have semi-instant access to each other all the time--all of which are plot points i put into my writing.
and if that's not enough, think about the implications for travis's character in particular. his father is a preacher, and a huge talking point of christian extremists in the 90s was that things like television were evil and demonic in some way. they campaigned against these things heavily. with the kind of person that we know kenneth phelps to be and the way many technologies we take for granted today, including TVs, were still being adopted by older generations, it's not out of the question at all that travis doesn't own something like a TV or a VCR, putting him even more out of the loop with what other people his age are doing than he already would be, having approximately 0 friends. he doesn't know what DND is, and he doesn't know how to look it up because he's not familiar with computers or the internet, he just knows his dad thinks it's demonic, so he steers clear of it.
the intention of cult leaders like kenneth is to keep their victims as isolated as possible, and not owning a TV, VCR, home computer, etc, is a great way to keep travis and his sisters isolated and disconnected from their peers, and therefore more connected to the cult, and it's a lot easier to justify not owning these things in the 90s, where the story already takes place, than it is if you're writing a modern au. a modern au for this situation would require all kinds of technological workarounds to make sure that travis owned a phone but couldn't do anything his father didn't want him doing on it. he's the kind of father who would go through and monitor his kid's texts, he wouldn't just let travis have snapchat or whatever, but i digress.
i know i'm just doing my petty bitching and people can do whatever they want however they want to, but i really do feel like there's a huge piece of the story that is lost in turning the sally face story as it is into some kind of modern au, and it's pretty unfortunate to me that people seem to think that the 90s was such a primitive alien world of incomprehensible technology that they don't want to write for that time period at all. it's really not as terrifying as it seems, genuinely. a surface level understanding of the era's technologies would be straightforward enough for anyone who wasn't there to write something perfectly coherent, if lacking in specific cultural/technological details that nobody but me cares about because i have autism.
if you're a sally face fan reading this and you struggle with writing for the american 90s because you weren't there, go look up pagers (also called beepers) and PDAs (which are basically early pocket computers) and how they work. ask older family members if or how they used them. go look at the different kinds of home computers of the era from companies like packard-bell and IBM. learn what a pentium III is/was, or what it means to be X86 compatible. look at the history of the CD-ROM, and how when it was invented, it could contain so much data that consumers had absolutely no idea what to do with them until people started putting video games on them. go watch cathode ray dude, LGR or techmoan on youtube.
go learn things about this era, it's good for you and you will have a lot of fun, even if you're not like me, i promise, and your fanfiction will be better for it. please learn about this era. take my hand. we can go to beautiful places together.
#txt#sally face#unwarranted infodump tag#anti modern au propaganda#i don't want to be mean i really don't#i want to encourage people to learn about this era#because the 80s 90s and 2000s were just#full of these huge technological booms#things that you just don't get nowadays#because most of consumer technology is a solved problem#and because capitalism is causing companies to eat each other and themselves#in a place where there can fundamentally be no competition anymore#it's genuinely amazing to see the technological advancements#and the cultural impacts that things like the walkman made#the fights between betamax and VHS#the death throes of the floppy as CDs came into the mix#the concept of computer tape as a whole#would throw so many young people nowadays for a loop#but computers used to have tape decks in them#because you stored the data for certain programs on tape#in an audio format#you can still find a lot of these programs on youtube#and if you were to play them in front of a computer#that read computer tape#then it would start the program that the data was for#it's awesome and it sucked big nasty hairy fucking balls#be glad you have the gift of hindsight here#so that you can learn about how interesting that technology was#instead of having to use it#like c'mon i want you to learn
8 notes
Text
I've seen a number of people worried and concerned about this language on Ao3s current "agree to these terms of service" page. The short version is:
Don't worry. This isn't anything bad. Checking that box just means you forgive them for being US American.
Long version: This text makes perfect sense if you're familiar with the issues around GDPR and in particular the uncertainty about Privacy Shield and SCCs after Schrems II. But I suspect most people aren't, so let's get into it, with the caveat that this is a Eurocentric (and in particular EU centric) view of this.
The basic outline is that Europeans in the EU have a right to privacy under the EU's General Data Protection Regulation (GDPR), an EU regulation (let's simplify things and call it an EU law) that regulates how various entities, including companies and the government, may acquire, store and process data about you.
The list of what counts as data about you is enormous. It includes things like your name and birthday, but also your email address, your computer's IP address, user names, whatever. If an advertiser could want it, it's on the list.
The general rule is that they can't collect or process it, unless you give explicit permission, or it's for one of a number of enumerated reasons (not all of which are as clear as would be desirable, but that's another topic). You have a right to request a copy of the data, you have a right to force them to delete the data they hold about you, and so on. It's not quite on the level of constitutional rights, but it is a pretty big deal.
In contrast, the US, home of most of the world's internet companies, has no such right at a federal level. If someone has your data, it is fundamentally theirs. American police, FBI, CIA and so on also have far more rights to request your data than the ones in Europe.
So how can an American website provide services to persons in the EU? Well… Honestly, there's an argument to be made that they can't.
US websites can promise in their terms and conditions that they will keep your data as safe as a European site would. In fact, they have to, unless they start specifically excluding Europeans. The EU even provides Standard Contract Clauses (SCCs) that they can use for this.
However, e.g. Facebook's T&Cs can't bind the US government. Facebook can't promise that it'll keep your data as secure as it is in the EU even if they wanted to (which they absolutely don't), because the US government can get to it easily, and EU citizens can't even sue the US government over it.
Despite the importance that US companies have in Europe, this is not a theoretical concern at all. There have been two successive international agreements between the US and the EU about this, and both were struck down by the EU court as being in violation of EU law, in the Schrems I and Schrems II decisions (named after Max Schrems, an Austrian privacy activist who sued in both cases).
A third international agreement is currently being prepared, and in the meantime, with the previous agreement (known as "Privacy Shield") struck down, transfers limp along on those Standard Contract Clauses. The problem is that the US government does not want to offer EU citizens protection equivalent to what they have under EU law; they don't even want to offer US citizens these protections. They just love spying on foreigners too much. The previous agreements tried to hide that under flowery language, but couldn't actually solve it. It's unclear, and in my opinion unlikely, that they'll manage to get a version that survives judicial review this time. Max Schrems is waiting.
So what is a site like Ao3 to do? They're arguably not part of the problem, Max Schrems keeps suing Meta, not the OTW, but they are subject to the rules because they process stuff like your email address.
Their solution is this checkbox. You agree that they can process your data even though they're in the US, and they can't guarantee you that the US government won't spy on you in ways that would be illegal for the government of e.g. Belgium. Is that legal under EU law? …probably as legal as fan fiction in general, I suppose, which is to say let's hope nobody sues to try and find out.
But what's important is that nothing changed, just the language. Ao3 has always stored your user name and email address on servers in the US, subject to whatever the FBI, CIA, NSA and FRA may want to do with it. They're just making it more clear now.
10K notes
Text
Machine Learning Fundamentals for Data Analysis
An Overview of Machine Learning and Its Application to Data Analytics
Machine learning (ML) has developed as a key component of data analytics, providing powerful tools and approaches for extracting meaningful patterns and insights from massive volumes of data. At its core, machine learning is a subset of artificial intelligence (AI) that focuses on creating algorithms that can learn from and predict data. This capacity is becoming increasingly important as organizations from diverse industries strive to use data-driven decision-making processes to improve efficiency, optimize operations, and gain a competitive advantage.
Machine learning's importance to data analytics stems from its capacity to automate and increase the accuracy of data analysis processes. Traditional statistical methods, while effective, frequently require pre-existing models and assumptions about the data. In contrast, machine learning algorithms can adapt and evolve as they are exposed to new data, revealing previously unknown patterns and relationships. This versatility makes machine learning especially useful for dealing with complicated and high-dimensional datasets that are common in modern data analytics.
Introduction to Supervised and Unsupervised Learning
Machine learning spans a wide range of learning paradigms, with supervised and unsupervised learning being two of the most fundamental.
Supervised learning
Supervised learning entails training a model using a labeled dataset, where each training example is associated with an output label. The goal of supervised learning is to create a mapping from inputs to outputs that can accurately predict labels for new, previously unseen data. This paradigm is analogous to learning with a teacher, in which the model is given the correct responses during training.
Typical supervised learning activities include:
Classification: Assigning inputs to predefined categories. For example, determining whether or not an email is spam.
Regression: Predicting continuous values. For instance, estimating home prices based on attributes like size and location.
In supervised learning, popular methods include neural networks, support vector machines (SVM), decision trees, logistic regression, and linear regression. From illness diagnosis in healthcare to fraud detection in finance, these algorithms have many uses.
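To make the supervised setup concrete, here is a minimal pure-Python sketch of one-variable linear regression fit by least squares: labeled training pairs go in, a learned mapping comes out, and the mapping is then used to predict a value for an unseen input. The size/price numbers are invented for illustration; a real analysis would use a library implementation, but the mechanics are the same.

```python
def fit_linear(xs, ys):
    """Fit y = slope * x + intercept by minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least squares: covariance over variance.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Labeled training examples: house size (sq m) -> price (thousands).
sizes = [50, 70, 100, 120]
prices = [150, 200, 290, 350]

slope, intercept = fit_linear(sizes, prices)

# Predict the price of an unseen 85 sq m home with the learned mapping.
predicted = slope * 85 + intercept
```

The "learning with a teacher" framing shows up directly: the fit uses the correct answers (`prices`) during training, and quality is judged by how well the learned line predicts labels for inputs it never saw.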
Unsupervised Learning
On the other hand, unsupervised learning works with unlabeled information. The objective is to deduce the inherent organization found in a collection of data points. The model analyzes the inherent qualities of the data in order to find patterns and relationships without the need for predefined labels.
The key tasks in unsupervised learning include:
Clustering: Grouping related data points together. In marketing, for example, market segmentation groups customers with similar purchasing behaviors.
Dimensionality Reduction: Reducing the number of variables under consideration. Principal Component Analysis (PCA) is a common technique for visualizing high-dimensional data and improving computational performance.
Unsupervised learning algorithms include k-means clustering, hierarchical clustering, and Gaussian Mixture Models. These techniques are critical in exploratory data analysis, allowing analysts to find trends and patterns without prior understanding of the data structure.
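As a sketch of how unsupervised learning works without labels, here is a minimal pure-Python k-means: points are repeatedly assigned to their nearest center, and each center then moves to the mean of its assigned points. The 2D points and the choice of k = 2 are invented for illustration; production work would use a library implementation with better initialization.

```python
import math

def kmeans(points, k, iters=20):
    """Cluster 2D points into k groups by the assign-then-update loop."""
    centers = points[:k]  # naive initialization: first k points as centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center.
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two visually obvious groups of unlabeled points.
points = [(1, 1), (1.5, 2), (1, 1.5), (8, 8), (9, 9), (8.5, 9.5)]
centers, clusters = kmeans(points, k=2)
```

No labels are ever supplied; the structure (one cluster near the origin, one near (8.5, 9)) is inferred from the data's own geometry, which is exactly the "deduce the inherent organization" goal described above.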
Conclusion
Understanding the fundamentals of machine learning is critical for anyone working in data analytics. Supervised and unsupervised learning are effective frameworks for modeling and analyzing complicated information, with each having distinct capabilities adapted to specific sorts of challenges. As the volume and complexity of data increases, analysts will need to understand these machine learning approaches in order to glean useful insights and make informed judgements. Data professionals can improve their ability to harness the full potential of their data by incorporating machine learning into their analytical toolkit, resulting in increased creativity and efficiency throughout their organizations.
Are you ready to improve your data analytics skills using machine learning? CACMS Institute in Amritsar offers complete data analytics training. We provide hands-on practical training, flexible scheduling, and an industry-specific curriculum to guarantee that you obtain the information and expertise required to flourish in your career.
Enroll in one of our next batches today to begin your journey to understanding data analytics and machine learning. Contact us at +91 8288040281 or visit CACMS Institute for more information. Don't pass up this opportunity to boost your career with the greatest data analytics training in Amritsar!
#cacms institute#techeducation#machine learning#machine learning course in Amritsar#Machine Learning Training#learn programming#machine learning institute in amritsar#big data analytics#data analysis#data analytics course in Amritsarmachine#machine learning fundamentals#machine learning algorithms
0 notes
Text
Bird NOPE, no thank you. Part 12
masterpost
“So, what’s the verdict, doc?” Danny asked. He was trying really hard to keep his tone light and not fidget. Mostly because when he fidgeted the wings moved and then he remembered that he had wings.
He really, really wanted an answer to the wings thing.
“Well, Phantom,” Frostbite said as he continued to look at the data, “your status as a halfa continues to bring about most interesting developments at the most interesting pacing!”
Danny groaned. He didn’t want to be interesting. There had been enough of being interesting in his lifetime already. Couldn’t he just have a calm rest of his life? Couldn’t this all of these ‘interesting developments’ wait until he was properly dead?
Danny took a deep breath so that he didn’t end up snapping at Frostbite. “Okay, right. What sort of developments are we talking about here? Because wings seem pretty unusual to me, even among ghosts.”
“Oh, yes, certainly. Fundamentally such a change, if one is to change, shouldn’t come so early and certainly not before other more common physical developments,” Frostbite said, rubbing at his chin with his icy claws. “At least not based on what we know of human ghosts.”
Danny rubbed at his face. The wings shifted. “Frostbite, I get that this is all very interesting to you, but I need you to explain things, please.”
Frostbite gave a little huff of air. “If you had attended the lectures as I recommended—”
“I can do that when I’m dead.” It was an old discussion between them at this point.
“Phantom,” Frostbite said kindly, “you are already dead.”
“And I am still alive!” Danny snapped, his patience frayed. The wings flared out; the tips brushed the edges of the walls. “I am still alive! I have eternity to learn about being dead but I only have one life. I only have one life, Frostbite, and I’m already spending half of it dead. Just… just let me try and live it as much as I can, please?”
“… of course, Phantom. I am sorry, friend. I forget what it’s like to have things be… fleeting.”
“I know, Frostbite,” Danny said, deflating as his anger extinguished. The wings folded tight against his back, a heavy weight pulling his shoulders down. “I know. Just, break it down for me, okay? I’ll sit in on all the lectures you want when I’m fully dead, I promise. Just for right now, explain to me what you can? I need to know why I have these things on my back.”
Frostbite gave a solemn nod and pulled up a stool to sit down on. “Human ghosts especially are very mutable. This is little surprise, really, with how mutable living humans are. Even though as dead we are largely stagnant, humans still often find their way to change. Personally I suspect that even as ghost, humans need the change to avoid Fading. You’ve seen these features in many of your friends and rivals: colored skin, fiery hair, exaggerated features. These are all things that you halfas seem to lack. My assumption has always been that it is your living half that keeps your features grounded in, while not reality, a more fixed visage.”
“Plasmius’ hair smolders some these days,” Danny pointed out.
“It does. The hair is often one of the first changes and Plasmius is both an older ghost than you, but also a much older human.” Frostbite paused before adding with a wry smile. “He is also much more fiery in nature than you are.”
That made Danny give a soft snort of amusement. “Okay so changes are expected, got it. I guess some go further? Like Skulker?”
“He is certainly an example of that. Spectra another. By all reason these changes can range from wish fulfillment to the effects of one’s insecurities. The longer one has been dead and the larger part those feelings play in someone’s making, the more likely changes are,” Frostbite explained. “Though there has yet to be any clear rhyme or reason to much of it. I personally believe the less fulfilled a ghost is, the more that they will change in an attempt to bring that part of themselves to peace.”
“Skulker needing to kill big game to soothe over feeling little and insignificant made him actually tiny and at the same time into a literal killing machine, right, got it,” Danny said. “And I guess that’s why Plasmius still looks like he’s just brushing forty. He was always vain. But Frostbite, I don’t want wings.”
“No, but you have always been… exceptional, Danny Phantom,” Frostbite said somberly. “Other ghosts master one or two skills, you master any you are exposed to. Other ghosts grow slowly, you grow by leaps and bounds. At first I thought this might be part of being a halfa, but we do not see the same growth in Plasmius and Dani. Plasmius is changing at a relatively normal rate and Dani, while advanced at first due to her creation, has stagnated quickly.”
Danny kept his eyes on his hands. He felt like he was fourteen again, scared and uncertain. “Why am I different?”
“I do not have the why, but I believe that the because is that you are destined, in time, to become an Ancient, or at least something akin to one.”
It was good that Danny didn’t need to breathe right then, as he was very sure he couldn’t if he tried.
“…an Ancient?”
Frostbite nodded. “Or something akin to one.”
Danny bowed over and buried his face in his hands. The wings responded and came up to curl around him as if trying to shield him from the world behind the oil slick feathers.
It made Danny want to rip them off.
“If nothing else, Ghosts are beholden to symbolism,” Frostbite said, his words a grounding rumble. “Ancients more so than the rest. The wings mean something, Phantom, even if you are unsure what. Answers will come.”
“I hate waiting,” Danny said, mostly just to be pedantic. He was allowed. He’d grown new limbs for fuck’s sake.
Frostbite rested a gentle hand on Danny’s back, right between the wings.
---
AN: Danny is having a hard time of it this post! Things will get better though. I am also having a bit of a hard time of it, so I'm sure there are many mistakes, but that's okay.
Stay delightful, darlings!
2K notes
Text
I am BARELY resisting going full red-strings-corkboard on this season. And by barely resisting I mean not resisting at all here is an extremely long list of the events those pins would be marking out.
BigB getting a Task that was a different color than everyone else's. It's not just a randomly assigned Hard Task, bc Scar rerolled for a Hard Task and his was also just a white envelope. It's fundamentally different.
That task taking BigB away from socialization, and seemingly being an incredibly time-consuming and dull request. Of profound disinterest to any watchers.
The phrasing of his Task!!
Dig a big hole. All the way down. At least 3x3. Make it your base if you want.
Everyone else's are direct and formal - the only one with more than one sentence was Skizz's, with the rule clarification of "One attempt only." Bigb's Task is four short abrupt sentences. It is also the only Task to contain extraneous information, 'Make it your base if you want.' The requirements (at least 3x3) feel like an afterthought to mimic the numerical/specific demands of the other tasks.
Evo symbol on the face of the Secret Keeper statue.
The fact that there's a statue at all; the fact that there is a physical representation of what is assigning tasks that everyone must complete, when previously everything was always handled via commands and unseen RNG.
Grian talking to the statue, and (bc of his Actual Role as game organizer) acting as a mediator for the impartial decisions handed down, speaking for it.
Grian making one last bad joke and saying he doesn't know if it counted or not- depends on whether we the audience laughed.
Grian asking for task recommendations from the audience. The watchers are making the tasks. The Watchers are making the tasks.
Again I could be off-base, and I'm not usually even that smitten with bringing in Evo lore. I don't want a Big Bad really...but. It feels like something very unusual and intentional and cool is happening in this series. And I'd guess we'll know if there's something going on once we have more than one data point.
My largely unfounded suspicion is that there is another being (maybe Listeners, maybe something else) trying to reach out to the Players via decoy Tasks, and BigB was the first recipient. Get them alone, make them of disinterest to the watchers, and tell them something we don't get to know.
Because that's the really, really fucking cool part (if my wacky theory is remotely right): We're the bad guys. We're the ones giving out tasks - hell, we're the ones actively brainstorming harder and crueller tasks in Grian's comments!
If they actually made a story where the Players have to keep secrets from us I will be delighted. Bc that is the same genius bullshit that made Evo Watcher lore so fun
#secret life#slsmp#life series#grian#secret life smp#bigb#i think im starting to get the shape of the conceit#this could all be nonsense of course. i may be completely off base and nothing will happen and it's just a normal life series#but it feels like there's something Larger happening here#anyways. will keep thinking and mulling this over and collecting scraps of evidence#secret life spoilers#slsmp spoilers#spoilers#salem meta#salem tag#im so enriched. i love being wrong about stories
6K notes
Text
In 2019, I gave a talk at TED that created waves: first at the conference, then on the internet and then, convulsively, in my own life. TED is Silicon Valley’s sacred ground. It’s the most consequential tech conference in the world and, in 2019, my talk entitled “Facebook’s role in Brexit - and the threat to democracy” was a break with normal service. It was the first time a speaker had implicated Silicon Valley directly in the political tumult of 2016. It ricocheted out of the conference and across the internet, where it’s now been seen five million times. And, most cataclysmically of all, it precipitated a lawsuit that devoured my time, energy and health.
This week I returned.
It was a big deal on any number of levels. For me, personally, for TED, and, I believe, or at least hope, for Silicon Valley. I got to send a message to the leaders of these companies from a platform that is inside the temple. I’ve lost my voice and I feel like I’ve lived through a tornado… but with the knowledge that it’s one I’ve chosen to unleash.
TED has just released it as the first talk from the conference. I got to name what is happening for what it is: a coup. I call the Silicon Valley companies who attend this conference and even sponsor it, collaborators who are complicit in a regime of fear and cruelty. And I accuse Sam Altman, the CEO of OpenAI, who is talking here on Friday not just of data theft but data rape.
youtube
There’s so much to say and I will write more soon but for now I’d be so grateful if you watch it and share it with your families and friends. In spite of everything, I’m grateful to have been given this platform and to be able to communicate what I believe are vital truths but I have paid a price for doing this work and the last week has been a rollercoaster of emotions: doubt, self-questioning, denial, overwhelm, fear.
And in the middle of it, the night before I flew to TED, I went to the Observer’s farewell party. This Sunday marks the end of the newspaper as we know it. Six years ago, I got to write about the experience of giving my TED talk in the Guardian/Observer. Paul Webster, the editor, put it on the front page.
This time around, that’s not possible. TED gave me editorial freedom to say what I wanted. The Guardian/Observer won’t even allow me to write about it, in any form. I pitched a piece for this Sunday about the experience. It would be my last article for the paper, which transfers to Tortoise next week (they have declined to renew my contract); an epitaph to my 20-year career there and an end point to an investigation that brought the Guardian and Observer extraordinary kudos and the most money it has ever raised from any story. It was turned down. That is an extraordinary indictment.
Here, instead, is a still from the talk. I believe that existing movements - the labour movement, the civil rights movement - are fundamental to asserting our rights against Silicon Valley, to rebuilding the internet from the ground up, and to rejecting the autocratic takeover of not just the US but our reality: we all live on these platforms.
I’m six years older than when I gave that first talk though I feel 106 years older. Part of my reason for going through with it - and it was touch and go whether I would - was because, as I say at the end, I’m reclaiming my story. I’ve been trapped in someone else’s narrative. And I also really want to use it as a personal moment of change. In 2016, I threw myself over what felt like an about-to-explode bomb. I ended up absorbing the shock blast from something that was much bigger than me: the waves of destruction that the technological and political changes of 2016 sent through the system. I need to mark this chapter as now over and put back together some of the bits that shattered through this process.
But mostly, the talk is a huge thank you to the people who supported me through my legal trials. The 30,000+ people who contributed to my crowdfunder and held me up. You are the model for what is needed in the next days and years.
This is what we’re up against. This was Palmer Luckey, on stage the day after me. That’s an autonomous missile next to him. He’s a US defence contractor and Trump cheerleader. He got a standing ovation.
In my talk, I could feel waves of hostility coming from some people in the room. TED is ground zero of the AI gold rush. But there was also cheerleading, and I’ve been overwhelmed by huge love and support from others who see exactly what is happening. It’s the weirdest time to be here. And it was the weirdest energy from an audience of any talk I’ve ever given. But then, it was intended to make them uncomfortable. Politics is technology now. Silicon Valley is desperate to deny that, but it can’t and nor can we.
Text
ROBERT REICH
FEB 14
Friends,
I want to talk today about the media’s coverage of the Trump-Vance-Musk coup.
I’m not referring to coverage by the bonkers right-wing media of Rupert Murdoch’s Fox News and its imitators.
I’m referring to the U.S. mainstream media — The New York Times, The Washington Post, the Los Angeles Times, The Atlantic, The New Yorker, National Public Radio — and the mainstream media abroad, such as the BBC and The Guardian.
By not calling it a coup, the mainstream media is failing to communicate the gravity of what is occurring.
Yesterday’s opinion by The New York Times’ editorial board offers a pathetic example. It concedes that Trump and his top associates “are stress-testing the Constitution, and the nation, to a degree not seen since the Civil War” but then asks: “Are we in a constitutional crisis yet?” and answers that what Trump is doing “should be taken as a flashing warning sign.”
Warning sign?
Elon Musk’s meddling into the machinery of government is a part of the coup. Musk and his muskrats have no legal right to break into the federal payments system or any of the other sensitive data systems they’re invading, from which they continue to gather computer code.
This data is the lifeblood of our government. It is used to pay Social Security and Medicare. It measures inflation and jobs. Americans have entrusted our private information to professional civil servants who are bound by law to use it only for the purposes for which it is intended. In the wrong hands, without legal authority, it could be used to control or mislead Americans.
By failing to use the term “coup,” the media have also underplayed the Trump-Vance-Musk regime’s freeze on practically all federal funding — suggesting this is a normal part of the pull-and-tug of politics. It is not. Congress has the sole authority to appropriate money. The freeze is illegal and unconstitutional.
By not calling it a coup, the media have also permitted Americans to view the regime’s refusal to follow the orders of the federal courts as a political response, albeit an extreme one, to judicial rulings that are at odds with what a president wants.
There is nothing about the regime’s refusal to be bound by the courts that places it within the boundaries of acceptable politics. Our system of government gives the federal judiciary final say about whether actions of the executive are legal and constitutional. Refusal to be bound by federal court rulings shows how rogue this regime truly is.
Earlier this week, a federal judge excoriated the regime for failing to comply with “the plain text” of an edict the judge issued last month to release billions of dollars in federal grants. Vice President JD Vance, presumably in response, declared that “judges aren’t allowed to control the executive’s legitimate power.”
Vance graduated from the same law school I did. He knows he’s speaking out of his derriere.
In sum, the regime’s disregard for laws and constitutional provisions surrounding access to private data, impoundment of funds appropriated by Congress, and refusal to be bound by judicial orders amount to a takeover of our democracy by a handful of men who have no legal authority to do so.
If this is not a coup d’etat, I don’t know what is.
The mainstream media must call this what it is. In doing so, they would not be “taking sides” in a political dispute. They would be accurately describing the dire emergency America now faces.
Unless Americans see it and understand the whole of it for what it is rather than piecemeal stories that “flood the zone,” Americans cannot possibly respond to the whole of it. The regime is undertaking so many outrageous initiatives that the big picture cannot be seen without it being described clearly and simply.
Unless Americans understand that this is indeed a coup that’s wildly illegal and fundamentally unconstitutional — not just because that happens to be the opinion of constitutional scholars or professors of law, or the views of Trump’s political opponents, but because it is objectively and in reality a coup — Americans cannot rise up as the clear majority we are, and demand that democracy be restored.
Text
Big Data Hadoop Certification Training Course
In the era of digital transformation, where data has become a cornerstone of strategic decision-making, professionals equipped with expertise in Big Data and Hadoop are in high demand. The key to unlocking the vast potential of these technologies lies in enrolling in a Big Data Hadoop certification training course. This comprehensive guide explores the significance of Big Data Hadoop certification training, outlines key components of an effective program, and highlights the myriad benefits of earning a coveted certification in this dynamic field.
Why Big Data Hadoop Certification Training Matters:
Expertise in Handling Large Datasets:
A Big Data Hadoop certification training course equips professionals with the skills needed to process and analyze large datasets efficiently. This expertise is crucial in a landscape where organizations grapple with ever-growing volumes of data and need professionals who can harness its potential.
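To make "processing large datasets" concrete, the canonical first Hadoop exercise is a word count expressed as a map step and a reduce step. The sketch below is a local, illustrative simulation of that pattern in Python (in the style of Hadoop Streaming mapper/reducer scripts); it is not course material, and a real job would run over HDFS via the Hadoop Streaming jar rather than in-process.

```python
# Classic MapReduce word count, written as Hadoop Streaming-style
# mapper/reducer functions and driven locally for illustration.
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Emit (word, 1) for every word, like a streaming mapper."""
    for line in lines:
        for word in line.strip().lower().split():
            yield word, 1

def reducer(pairs):
    """Sum counts per word; assumes input sorted by key, which the
    Hadoop shuffle/sort phase guarantees between map and reduce."""
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield word, sum(count for _, count in group)

def run_job(lines):
    # Locally simulate map -> shuffle/sort -> reduce.
    mapped = sorted(mapper(lines))  # stand-in for the shuffle phase
    return dict(reducer(mapped))

print(run_job(["big data big insight", "data pipelines"]))
# {'big': 2, 'data': 2, 'insight': 1, 'pipelines': 1}
```

The value of the framework is that the same two small functions scale from this toy input to terabytes, because Hadoop handles partitioning, sorting, and fault tolerance across the cluster.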
Validation of Proficiency:
Certification serves as a tangible validation of a professional's proficiency in Big Data and Hadoop. It signifies that an individual has not only acquired theoretical knowledge but has also demonstrated practical skills in utilizing Hadoop and related tools effectively.
Competitive Edge in the Job Market:
In a competitive job market, possessing a Big Data Hadoop certification sets individuals apart. Employers actively seek professionals with specialized skills, and certification provides a clear signal of a candidate's commitment to staying current and relevant in the field.
Benefits of Pursuing Big Data Hadoop Certification:
Enhanced Employability:
A Big Data Hadoop certification enhances employability by showcasing a professional's specialized skills. Whether seeking new opportunities or aiming for advancement within an organization, certification opens doors to a myriad of possibilities.
Global Recognition:
Certifications obtained through reputable Big Data Hadoop certification training providers carry global recognition. This global acknowledgment adds a universal credential to a professional's profile, making them sought after in a variety of job markets.
Networking Opportunities:
Certification programs often foster a community of learners. This provides participants with networking opportunities, allowing them to connect with industry professionals, share insights, and potentially open doors to mentorship or job referrals within the Big Data community.
Continuous Learning Support:
Reputable certification providers offer continuous learning resources, including access to updated materials, webinars, and forums. This ensures that certified professionals stay informed about the latest advancements in the Big Data and Hadoop ecosystem.
Conclusion:
In conclusion, enrolling in a Big Data Hadoop certification training course is a strategic investment in one's professional development. It not only imparts the skills needed to navigate the complexities of large-scale data analytics but also validates those skills through a recognized certification. With the ever-increasing reliance on data-driven decision-making, a Big Data Hadoop certification is the key to unlocking new career opportunities, standing out in a competitive job market, and contributing to the transformative power of data in the digital age. Elevate your expertise, gain a competitive edge, and become a certified Big Data and Hadoop professional ready to meet the challenges of the data-driven future.
#h2kinfosys#bigdata#big data hadoop online training#hadoop#bigdatahadooptraining#big data#big data fundamentals#big data hadoop certification#big data hadoop
Text
Too big to care
I'm on tour with my new, nationally bestselling novel The Bezzle! Catch me in BOSTON with Randall "XKCD" Munroe (Apr 11), then PROVIDENCE (Apr 12), and beyond!
Remember the first time you used Google search? It was like magic. After years of progressively worsening search quality from Altavista and Yahoo, Google was literally stunning, a gateway to the very best things on the internet.
Today, Google has a 90% search market-share. They got it the hard way: they cheated. Google spends tens of billions of dollars on payola in order to ensure that they are the default search engine behind every search box you encounter on every device, every service and every website:
https://pluralistic.net/2023/10/03/not-feeling-lucky/#fundamental-laws-of-economics
Not coincidentally, Google's search is getting progressively, monotonically worse. It is a cesspool of botshit, spam, scams, and nonsense. Important resources that I never bothered to bookmark because I could find them with a quick Google search no longer show up in the first ten screens of results:
https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task
Even after all that payola, Google is still absurdly profitable. They have so much money, they were able to do a $80 billion stock buyback. Just a few months later, Google fired 12,000 skilled technical workers. Essentially, Google is saying that they don't need to spend money on quality, because we're all locked into using Google search. It's cheaper to buy the default search box everywhere in the world than it is to make a product that is so good that even if we tried another search engine, we'd still prefer Google.
This is enshittification. Google is shifting value away from end users (searchers) and business customers (advertisers, publishers and merchants) to itself:
https://pluralistic.net/2024/03/05/the-map-is-not-the-territory/#apor-locksmith
And here's the thing: there are search engines out there that are so good that if you just try them, you'll get that same feeling you got the first time you tried Google.
When I was in Tucson last month on my book-tour for my new novel The Bezzle, I crashed with my pals Patrick and Teresa Nielsen Hayden. I've known them since I was a teenager (Patrick is my editor).
We were sitting in his living room on our laptops – just like old times! – and Patrick asked me if I'd tried Kagi, a new search-engine.
Teresa chimed in, extolling the advanced search features, the "lenses" that surfaced specific kinds of resources on the web.
I hadn't even heard of Kagi, but the Nielsen Haydens are among the most effective researchers I know – both in their professional editorial lives and in their many obsessive hobbies. If it was good enough for them…
I tried it. It was magic.
No, seriously. All those things Google couldn't find anymore? Top of the search pile. Queries that generated pages of spam in Google results? Fucking pristine on Kagi – the right answers, over and over again.
That was before I started playing with Kagi's lenses and other bells and whistles, which elevated the search experience from "magic" to sorcerous.
The catch is that Kagi costs money – after 100 queries, they want you to cough up $10/month ($14 for a couple or $20 for a family with up to six accounts, and some kid-specific features):
https://kagi.com/settings?p=billing_plan&plan=family
I immediately bought a family plan. I've been using it for a month. I've basically stopped using Google search altogether.
Kagi just let me get a lot more done, and I assumed that they were some kind of wildly capitalized startup that was running their own crawl and their own data-centers. But this morning, I read Jason Koebler's 404 Media report on his own experiences using it:
https://www.404media.co/friendship-ended-with-google-now-kagi-is-my-best-friend/
Koebler's piece contained a key detail that I'd somehow missed:
When you search on Kagi, the service makes a series of “anonymized API calls to traditional search indexes like Google, Yandex, Mojeek, and Brave,” as well as a handful of other specialized search engines, Wikimedia Commons, Flickr, etc. Kagi then combines this with its own web index and news index (for news searches) to build the results pages that you see. So, essentially, you are getting some mix of Google search results combined with results from other indexes.
In other words: Kagi is a heavily customized, anonymized front-end to Google.
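A metasearch front-end like the one Koebler describes has to merge several ranked lists into one. One standard way to do that is reciprocal rank fusion (RRF), sketched below; to be clear, Kagi's actual blending and ranking are proprietary, and the index names and inputs here are purely illustrative.

```python
# Hedged sketch of blending ranked results from multiple upstream
# indexes with reciprocal rank fusion (RRF). Illustrative only --
# not Kagi's real algorithm.

def fuse(result_lists, k=60):
    """Each URL scores sum(1 / (k + rank)) across every list it
    appears in, so agreement between indexes pushes a result up."""
    scores = {}
    for results in result_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

index_a = ["a.com", "b.com", "c.com"]  # e.g. one upstream index
index_b = ["b.com", "d.com", "a.com"]  # e.g. another
print(fuse([index_a, index_b]))
```

Here "b.com" wins because it ranks highly in both lists, which is the intuition behind combining indexes at all: consensus across sources is a cheap, spam-resistant quality signal.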
The implications of this are stunning. It means that Google's enshittified search-results are a choice. Those ad-strewn, sub-Altavista, spam-drowned search pages are a feature, not a bug. Google prefers those results to Kagi, because Google makes more money out of shit than they would out of delivering a good product:
https://www.theverge.com/2024/4/2/24117976/best-printer-2024-home-use-office-use-labels-school-homework
No wonder Google spends a whole-ass Twitter every year to make sure you never try a rival search engine. Bottom line: they ran the numbers and figured out their most profitable course of action is to enshittify their flagship product and bribe their "competitors" like Apple and Samsung so that you never try another search engine and have another one of those magic moments that sent all those Jeeves-askin' Yahooers to Google a quarter-century ago.
One of my favorite TV comedy bits is Lily Tomlin as Ernestine the AT&T operator; Tomlin would do these pitches for the Bell System and end every ad with "We don't care. We don't have to. We're the phone company":
https://snltranscripts.jt.org/76/76aphonecompany.phtml
Speaking of TV comedy: this week saw FTC chair Lina Khan appear on The Daily Show with Jon Stewart. It was amazing:
https://www.youtube.com/watch?v=oaDTiWaYfcM
The coverage of Khan's appearance has focused on Stewart's revelation that when he was doing a show on Apple TV, the company prohibited him from interviewing her (presumably because of her hostility to tech monopolies):
https://www.thebignewsletter.com/p/apple-got-caught-censoring-its-own
But for me, the big moment came when Khan described tech monopolists as "too big to care."
What a phrase!
Since the subprime crisis, we're all familiar with businesses being "too big to fail" and "too big to jail." But "too big to care?" Oof, that got me right in the feels.
Because that's what it feels like to use enshittified Google. That's what it feels like to discover that Kagi – the good search engine – is mostly Google with the weights adjusted to serve users, not shareholders.
Google used to care. They cared because they were worried about competitors and regulators. They cared because their workers made them care:
https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board
Google doesn't care anymore. They don't have to. They're the search company.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi
#pluralistic#john stewart#the daily show#apple#monopoly#lina khan#ftc#too big to fail#too big to jail#monopolism#trustbusting#antitrust#search#enshittification#kagi#google
Note
How do they define which country a thing got made in for tariff purposes? Can you distribute to the US via a 10% country? If not, how much value-add has to be done before it is a new thing? And who audits that (considering that exporter govt's may wish to turn a blind eye to noncompliance)?
Oh this is why Trade Lawyers get paid the big bucks, the level of "It Depends" is very extreme here. Typically this is the kind of stuff trade agreements define, very heavily, with a lot of text and addendums - for each good it is gonna have its own rules in practice. A normal approach is something like a value-added threshold, where you have to increase the cost-of-production by X% in a country for it to count as being made in that country.
Bypassing tariffs via shipping it to a middleman country is a tactic as old as trade, and a constant source of political discourse. We have had a decade+ of people arguing a lot of production in Vietnam & Mexico is just China using middlemen, though I found the typical case there to be weak.
Everyone is involved in this - typically the tariff-imposer has the "fundamental" obligation to make that determination, but as part of that they compel companies to provide data on trade flows and product costs, and the other involved nations will have trade agreement obligations to facilitate the accurate collection of that data, police their borders for illicit imports, etc.
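The value-added threshold described above is easy to state as arithmetic. The toy sketch below shows the basic regional value-content test; the 40% threshold and cost figures are made up for illustration, since real rules of origin are negotiated per-good in each agreement (and often use tariff-shift tests instead of, or alongside, value content).

```python
# Toy illustration of a value-added origin test. The threshold and
# numbers are invented for the example, not taken from any agreement.

def qualifies_as_origin(local_cost, total_cost, threshold=0.40):
    """True if the share of production cost added locally meets the
    agreement's regional value-content threshold."""
    return (local_cost / total_cost) >= threshold

# Imported components worth 70, local assembly adds 30: only 30% of
# the value is added locally, below a 40% threshold, so no new
# origin is conferred and the good keeps its upstream origin.
print(qualifies_as_origin(local_cost=30, total_cost=100))  # False
print(qualifies_as_origin(local_cost=45, total_cost=100))  # True
```

This is also why the auditing question matters: the inputs to that one division (what counts as local cost, how imported components are valued) are exactly where the lawyering and the blind eyes happen.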
Note
Agree with your latest big AI post, and I'm grateful to the ones you made in the past (they brought me around!), but I think it may also be useful to revisit your talking points about how little each individual artwork is reflected on the scale of a dataset. I think at least among artists, a decent amount of this is personal discomfort over their work being altered and reused without their permission. The fact that this can happen anyway—and does—as well as the insignificance of any one data point in the output (I seem to remember something about it proportionally being less than a pixel in a finished piece of output?) might help to allay a bit of that?
As always, thank you for the work you do to explain your positions so thoroughly!
i just guess that over time i have become less and less sympathetic to those concerns and have chosen to 'address' them circumspectly through art projects like slopjank prographilose and permission & compensation.
while of course you are correct that there is of course no appreciable effect upon anybody's life for an image they made being in an AI training dataset, i have just grown more and more sick of and scornful towards the underlying ideology behind that personal discomfort--i find it to be in opposition to human creativity, to existing within a culture, to art being a living thing in conversation with the world rather than a static quarantined IP preserved in amber.
especially as i see more and more people take this line to its logical conclusions re: copyright law, i no longer consider people for whom this is a primary concern misguided political allies who should be won over (the way i see people who are primarily concerned with what are fundamentally labour issues, even myopically, or people who are concerned with environmental impacts) but as political enemies.
Text
Ofer Ronen, Co-Founder and CEO of Tomato.ai – Interview Series
New Post has been published on https://thedigitalinsider.com/ofer-ronen-co-founder-and-ceo-of-tomato-ai-interview-series/
Ofer Ronen, Co-Founder and CEO of Tomato.ai – Interview Series
Ofer Ronen is the Co-Founder and CEO of Tomato.ai, a platform that offers an AI powered voice filter to soften accents for offshore agent voices as they speak, resulting in improved CSAT and sales metrics.
Ofer previously sold three tech startups, two to Google, and one to IAC. He spent the past five years at Google building contact center AI solutions within the Area 120 incubator. He closed over $500M in deals for these new solutions. He holds an MS in Computer Engineering with a focus on AI from the University of Michigan, and an MBA from Cornell.
What initially attracted you to machine learning and AI?
AI has had a long history of starts and stops. Periods when there was a lot of hope for the technology to transform industries, followed by periods of disillusionment because it didn’t quite live up to the hype.
When I was doing a Masters in AI a couple of decades ago, at the University of Michigan, it was a period of disillusionment, when AI was not quite making an impact. I was intrigued by the idea that computers could be taught to perform tasks through examples vs the traditional heuristics, which requires thinking about what explicit instructions to provide. At the time I was working at an AI research lab on virtual agents which help teachers find resources online for their classes. Back then we didn’t have the big data, the powerful compute resources, or the advanced neural networks that we have today, so the capabilities we built were limited.
From 2016 to 2019 you worked at Google’s Area 120 incubator to design highly robust virtual agents for the largest contact centers. What was this solution precisely?
More recently I worked at Google’s Area 120 incubator on some of the largest voice virtual agents deployments, including a couple of projects for Fortune 50 companies with over one hundred million support calls a year.
In order to build more robust voice virtual agents that can handle complex conversations, we took millions of historical conversations between humans and used those conversations to detect the type of follow-up questions customers have beyond their initial stated issue. By mining follow-up questions and by mining different ways customers phrase each question, we were able to build flexible virtual agents that can have meandering conversations. This mirrored better the kind of conversations customers have with human agents. The end result was a material increase in the total calls fully handled by the virtual agents.
In 2021 and 2022, you built a second startup at Area 120. Could you share what this company was and what you learned from the experience?
My second startup within Area 120 was again focused on call centers. Our solution focused on reducing customer churn by proactively reaching out to customers right after a failed support call where the customer expressed their issue but did not get to a resolution. The outreach would be done by virtual agents trained to address those open issues. What I learned from that experience is that churn is a difficult metric to measure in a timely manner. It can take 6 months to get statistically significant results for changes in churn. That makes it hard to optimize an experience fast enough and to convince customers a solution is working.
Could you share the genesis story behind your third contact center AI startup, Tomato.ai, and why you chose to do it yourself versus working within Google?
The idea for Tomato.ai, my third contact center startup, came from James Fan, my co-founder and CTO. James thought it would be more effective to sell wine using a French accent, and so what if anyone could be made to sound French?
This was the seed of the idea, and from there our thinking evolved. As we investigated it more we found a more acute pain point felt by customers when speaking with accented offshore agents. Customers had problems with comprehension and trust. This represented a larger market opportunity. Given our backgrounds, we realized the sizable impact it would have on call centers, helping them improve their sales and support metrics. We now refer to this type of solution as Accent Softening.
James and I previously led and sold startups, including each of us selling a startup to Google.
We decided to leave Google to start Tomato.ai because, after many years at Google, we were itching to get back to starting and leading our own company.
Tomato.ai solves an important pain point with call centers, which is softening accents for agents. Could you discuss why voice filters are a preferred solution to agent training?
At Tomato.ai, we understand the importance of clear communication in call centers, where accents can sometimes create barriers. Instead of relying solely on traditional agent training, we’ve developed voice filters, or what we call “accent softening.” These filters help agents maintain their unique voice, while reducing their accents, improving clarity for callers. By using voice filters, we ensure better communication and build trust between agents and callers, making every interaction more effective and satisfying to the customer. So, compared to extensive training programs, voice filters offer a simpler and more immediate solution to address accent-related challenges in call centers.
As existing agents leverage these tools to enhance their performance, they will be empowered to command higher rates, reflecting their increased value in delivering exceptional customer experiences. Simultaneously, the democratizing effect of generative AI will bring new entry-level agents into the fold, expanding the talent pool and driving down hourly rates. This dichotomy signifies a fundamental transformation in the dynamics of call center services, where technology and human expertise reshape the landscape of the industry, paving the way for a more inclusive and competitive future.
What are some of the different machine learning and AI technologies that are used to enable voice filtering?
This type of real-time voice filtering solution would not have been possible just a couple of years ago. Advancements in speech research combined with newer architectures like the transformer model and Deep Neural Networks, and more powerful AI hardware (like TPUs from Google, and GPUs from NVidia) make it more possible to build such solutions today. It is still a very difficult problem that requires our team to invent new techniques for training speech-to-speech models that are low latency, and high quality.
What type of feedback has been received from call centers, and how has it impacted employee churn rates?
We have strong demand from large and small offshore call centers to try out our accent softening solution. Those call centers recognize that Tomato.ai can help with their top two problems (1) offshore agents’ performance metrics are not up to par vs onshore agents (2) it is difficult to find enough qualified agents to hire in offshore markets like India and The Philippines.
We expect in the coming weeks to have case studies that highlight the type of impact call centers experience using Accent Softening. We expect sales calls to see an immediate lift in key metrics like revenue, close rates, and lead qualification rates. At the same time, we expect support calls to see shorter handle times, fewer callbacks, and improved CSAT.
As mentioned above churn rates take longer to validate, and so case studies with those improvements will come at a later date.
Tomato.ai recently raised a $10 million funding round, what does this mean for the future of the company?
As Tomato.ai gears up for its inaugural product launch, the team remains steadfast in its commitment to reshaping the landscape of global communication and the future of work, one conversation at a time.
Thank you for the great interview, readers who wish to learn more should visit Tomato.ai.
#2022#ai#Big Data#Building#call center#call Centers#CEO#classes#command#communication#Companies#comprehension#computer#computers#CTO#data#deals#Design#do it yourself#dynamics#engineering#filter#Filters#Fundamental#Funding#Future#generative#generative ai#Global#Google
Text
Was talking with wife recently about AI and the ways it's incredibly stupid and I am reminded of the time a few years ago the Execs at the place I worked previously wanted to incorporate AI into our workflow in order to help materials development. They wanted to make sure that the company was "utilizing the latest technology to make us more productive" so they partnered with a company that uses AI/ML to predict chemical structures in order to enhance performance based on our desired properties. My boss and I kinda thought this was stupid when it was first announced, but we were still unprepared for how bad it was really going to be.
The problem of course here is that what a computer thinks is good and will perform well does not often make sense according to the laws of physics. So more often than not the computer would spit out extremely specific and nonsensical structures that it believed would increase performance. These structures could range from completely impractical to sometimes downright impossible to actually make, so for every set of predictions we got back we had to first filter all the nonsense and then select a set from the ones that could be made and tested in a reasonable amount of time. In addition, they emphasized that the more data that they have the better the predictions would be, so the pressure was on to synthesize and validate as many molecules as possible as quickly as possible. This was a huge drain on time and energy because again some of these structures were nontrivial to make. Not that the computer people would be able to tell the difference. But still the executives were excited about it so we gave it a try anyway. The idea was that we would start by making a bunch of different materials and test the results and then feed those results back into the machine to predict better structures based on the ever growing data pool.
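The workflow described here, predict structures, filter out the unmakeable ones, synthesize and test a small subset, then feed the measurements back into the model, is essentially an active-learning loop. A minimal sketch of that shape (every function and name below is hypothetical, standing in for the vendor's system and the actual lab work):

```python
import random

def predict_candidates(model, n=50):
    # Stand-in for the vendor's prediction step: propose candidate
    # "structures," each with a model-assigned score.
    return [{"structure": f"cand_{i}", "predicted": random.random()}
            for i in range(n)]

def is_makeable(candidate):
    # The human filtering step: discard impractical or impossible structures.
    return candidate["predicted"] > 0.3  # toy criterion

def measure(candidate):
    # Real synthesis and testing -- the slow, expensive part of the loop.
    return random.random()

def retrain(model, results):
    # Feed measured results back into the model; an average as a stand-in.
    model["bias"] = sum(r["measured"] for r in results) / len(results)
    return model

model = {"bias": 0.0}
for _ in range(3):  # the "3-4 iteration cycles" from the story
    candidates = predict_candidates(model)
    feasible = [c for c in candidates if is_makeable(c)][:5]  # testable subset
    results = [{**c, "measured": measure(c)} for c in feasible]
    if results:
        model = retrain(model, results)
```

The loop only helps if the model's scores correlate with reality; when they don't, as in the story, each retraining cycle just reinforces noise.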
The funny part of the story, of course, is that with every iteration, the performance got worse. This was not surprising to me. The mechanisms that dictate performance in this field are not fully understood even now, and there are still many papers coming out every year adding more knowledge to the field. Additionally, the predictions weren't being made using some fundamental understanding of the mechanisms at play, but by training an algorithm on a pool of existing literature. You're just not going to get good results by "midjourneying" chemistry. We did around 3-4 iteration cycles with them over that one-year contract, and every time the performance of the structures it had predicted was worse than the last set's, sometimes dramatically so. And they would tell us "no no, the data set isn't really big enough to give good results yet" and "once the model has tested enough structures it'll get better," but it didn't in that period. And it's possible that on a long enough timescale it might be possible? But the reality was that we had a whole year of time and resources essentially wasted because our CEO thought that some tech guys in SV could use AI to do chemistry and didn't believe us when we said it was stupid.
And you know what? We figured out something that worked really well less than six months after dumping them and getting to do it our way again.
98 notes
·
View notes
Text
Ryan Burge at Graphs About Religion:
What in the world happened in the 2024 presidential election? It’s a question I’ve been asked by dozens of media outlets over the last six months. But I had a big problem: no reliable data that would aid me in answering such a question. The exit polls, no matter what anyone tells you, should not be considered gospel. There are a number of fundamental flaws in their design that make it impossible to rely on them to construct an accurate portrayal of what actually happened on election day. Their real purpose? To fill air time on election night while the major networks wait for the results to pile in across the United States.

But all that’s changed now, and my goal over the next couple of months is to tell the story of the campaign between Donald Trump and Kamala Harris using data from the newly released Cooperative Election Study. This survey indicates that 22% of all American adults align with an evangelical denomination. Seventeen percent of the sample are white evangelicals and just over 5% are non-white evangelicals. Among those non-white evangelicals, 38% were Black and 28% were Hispanic.

It should come as no surprise that evangelicals overwhelmingly supported Donald Trump in 2024, because they gave him a tremendous amount of support in both 2016 and 2020. But, it’s noteworthy that Trump continued to make inroads among evangelicals - his share of the vote went from 70% to 75% over the last three elections.

The Democrats have not done well at all with evangelicals. Their best effort was in 2012 when Obama got 30% of their votes. But Harris did slightly worse than Biden - 23% vs 25%. And it’s notable that Biden got the same share of the evangelical vote as Hillary Clinton in 2016.

Of course, Trump’s real base of support is specifically among white evangelicals. In 2016, Trump’s vote share was no different than McCain’s in 2008 or Romney’s in 2012 - about 77%. But in 2020, Trump ran up the score just a bit - garnering 81% of the white evangelical vote. The data from 2024 says he continued to win over the white evangelical vote at 83% - the highest on record.

However, the breakdown of the non-white evangelical vote may tell the story of the 2024 election when it comes to religion. Republicans have historically struggled with this group of voters. In 2008, Obama enjoyed an 18 point advantage, and that expanded dramatically in the next couple of election cycles. In 2012, the non-white evangelical vote was D+30, and it was D+25 in 2016. But then in 2020, Trump managed to make some inroads - getting back to 40% and narrowing the gap to 18 points. But look at 2024 - a huge shift. The non-white evangelical vote was essentially split in 2024 - Harris at 49% and Trump at 48%. Harris lost at least ten points with this constituency - a huge blow. [...] There’s a lot going on in this graph, but I think that the big narrative is how Trump just continues to make gains among evangelical voters.
Between 2016 and 2024 he gained five points among yearly attending evangelicals, eight points among monthly attending evangelicals, seven points among weekly attendees and eight points among those who attended multiple times per week. However, Trump didn’t actually lose ground with those who attend less than once a year. What about those non-white evangelicals? I would direct your attention to the bottom right of the graph. Donald Trump made really sizable gains with the high attenders. Between 2016 and 2024, Trump’s share went from 33% to 47% among non-white evangelicals who attend church every week. He did thirteen points better among those who attend religious services multiple times per week. But there are also increases among yearly attenders and monthly attenders, too.
Ryan Burge writes in Graphs About Religion with a 2024 election post-mortem on the evangelical vote. While White evangelicals lopsidedly backed Trump, non-White evangelicals were nearly split [49% Harris to 48% Trump].
In previous elections, non-White evangelicals voted Democratic by a decent margin, but the margins were nearly wiped out, and that was driven mainly by Hispanic evangelicals swinging hard to the GOP.
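The margin trajectory Burge cites for non-white evangelicals can be tabulated directly from the figures quoted above (positive swing = movement toward the GOP):

```python
# Democratic margin among non-white evangelicals, by cycle,
# using the figures quoted in the post (D+18, D+30, D+25, D+18, D+1).
dem_margin = {2008: 18, 2012: 30, 2016: 25, 2020: 18, 2024: 1}

# Cycle-over-cycle swing toward the GOP (positive = GOP gain).
years = sorted(dem_margin)
gop_swing = {y2: dem_margin[y1] - dem_margin[y2]
             for y1, y2 in zip(years, years[1:])}

print(gop_swing)  # {2012: -12, 2016: 5, 2020: 7, 2024: 17}
```

The 17-point swing between 2020 and 2024 is the jump Burge flags as the story of the cycle.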
#Evangelicals#2024 Presidential Election#2024 Elections#Evangelical Christianity#Kamala Harris#Donald Trump#White Evangelicals#Hispanic Evangelicals
56 notes
·
View notes
Text
This is not about Star Trek
Ok so there's a TNG episode I can't remember the name of. It's got one of those boring one-word titles like "evolution" or "disaster" and fundamentally it's about the cool scifi idea of "what if enterprise get computer virus?"
Which was a cool futuristic idea in the 80s when it was made, because computer viruses were a new and exciting idea then. Anyway. Towards the end of the episode Picard, Data, and Worf are down on the planet what gave the enterprise the virus. There's a stargate thing going on in the room, where it cycles through a bunch of different destinations, like the enterprise, the hostile romulan warbird, and distant planets.
Data gets zapped. He tells Picard how to set the self destruct, and Picard tells Worf to carry Data through the stargate when it shows the Enterprise, so Data can get fixed.
Now Picard is alone, with the self destruct computer. He sets it up to explode so it'll stop trying to virus the enterprise, but time is ticking down. The place is gonna blow, and the stargate hasn't cycled around to the enterprise yet. He jumps through it anyway, figuring that the place he's currently in is about to become a smoking crater, and basically anywhere else is preferable!
That choice he makes, the "fuck it, I don't care where I end up, so long as it's not fucking here"?
That's exactly how I feel about humanity.
I'm not hugely picky, to be honest. That's a big part of why I don't have a fursona, despite being a big ol' furry: I can't decide on wanting to be a specific kind of animal as honestly I'd take anything. I could spend years getting art commissioned of a cool deergal or sheepboy or cowthing, and if someone was like "hey I got some experimental new Wolf HRT" I'll punch them in the face and down the whole bottle.
And it's not just about animals either. Give me robot bodies and pure software uploads and ascension to pure energy and plants and fungi and plasma-based stellar parasites and things that can't even exist in this reality because our physical laws are incompatible.
I don't have a destination in mind because I'm not planning a vacation. I'm planning a jail break. I don't need to arrive somewhere, I need to escape here.
Please someone smuggle me a file in a cake. I need to get through these bars.
245 notes
·
View notes