#big data and artificial intelligence sources
Explore tagged Tumblr posts
workforcesolution · 1 year ago
Text
SAS IS GAINING PROMINENCE IN HEALTHCARE -1
Tumblr media
SAS is revolutionizing healthcare analytics, offering benefits from cost management to patient care. With SAS skills in high demand, it's a pivotal time to explore its potential in the healthcare sector. Read more to discover the impact of SAS in healthcare analytics. https://www.rangtech.com/blog/data-science/sas-is-gaining-prominence-in-healthcare-1
0 notes
glasshomewrecker · 1 year ago
Text
the new bill to regulate AI in the EU will ban the use of AI for "social scoring" and classifying human beings by your characteristics. which is a great step forward--- however!
it carves out a massive exception for "law enforement" which is extremely worrying and I'm surprised more people aren't talking about this!!!
0 notes
rjzimmerman · 3 months ago
Text
Excerpt from this New York Times story:
The U.S. power grid added more capacity from solar energy in 2024 than from any other source in a single year in more than two decades, according to a new industry report released on Tuesday.
The data was released a day after the new U.S. energy secretary, Chris Wright, strongly criticized solar and wind energy on two fronts. He said on Monday at the start of CERAWeek by S&P Global, an annual energy conference in Houston, that they couldn’t meet the growing electricity needs of the world and that their use was driving up energy costs.
The report, produced by the Solar Energy Industries Association and Wood Mackenzie, a research firm, said about 50 gigawatts of new solar generation capacity was added last year, far more than any other source of electricity.
Mr. Wright and President Trump have been strongly critical of renewable energy, which former President Joseph R. Biden Jr. championed as a way to address climate change. The energy secretary, Mr. Trump and Republicans in Congress have pledged to undo many of Mr. Biden’s climate and energy policies.
“Beyond the obvious scale and cost problems, there is simply no physical way wind, solar and batteries could replace the myriad uses of natural gas,” said Mr. Wright, who was previously chief executive of an oil and gas production company.
Yet solar energy and battery storage systems appear to have significant momentum and may not be easily thwarted. The U.S. Energy Information Administration, which is part of Mr. Wright’s department, said last month that it expected solar and batteries to continue leading new capacity installations on U.S. electric grids this year.
Proponents of clean energy celebrated the milestone for solar power as the world moves to increase electricity production to meet the needs of energy-hungry data centers to support the growth of artificial intelligence.
61 notes · View notes
mariacallous · 3 months ago
Text
The Trump administration’s Federal Trade Commission has removed four years’ worth of business guidance blogs as of Tuesday morning, including important consumer protection information related to artificial intelligence and the agency’s landmark privacy lawsuits under former chair Lina Khan against companies like Amazon and Microsoft. More than 300 blogs were removed.
On the FTC’s website, the page hosting all of the agency’s business-related blogs and guidance no longer includes any information published during former president Joe Biden’s administration, current and former FTC employees, who spoke under anonymity for fear of retaliation, tell WIRED. These blogs contained advice from the FTC on how big tech companies could avoid violating consumer protection laws.
One now deleted blog, titled “Hey, Alexa! What are you doing with my data?” explains how, according to two FTC complaints, Amazon and its Ring security camera products allegedly leveraged sensitive consumer data to train the ecommerce giant’s algorithms. (Amazon disagreed with the FTC’s claims.) It also provided guidance for companies operating similar products and services. Another post titled “$20 million FTC settlement addresses Microsoft Xbox illegal collection of kids’ data: A game changer for COPPA compliance” instructs tech companies on how to abide by the Children’s Online Privacy Protection Act by using the 2023 Microsoft settlement as an example. The settlement followed allegations by the FTC that Microsoft obtained data from children using Xbox systems without the consent of their parents or guardians.
“In terms of the message to industry on what our compliance expectations were, which is in some ways the most important part of enforcement action, they are trying to just erase those from history,” a source familiar tells WIRED.
Another removed FTC blog titled “The Luring Test: AI and the engineering of consumer trust” outlines how businesses could avoid creating chatbots that violate the FTC Act’s rules against unfair or deceptive products. This blog won an award in 2023 for “excellent descriptions of artificial intelligence.”
The Trump administration has received broad support from the tech industry. Big tech companies like Amazon and Meta, as well as tech entrepreneurs like OpenAI CEO Sam Altman, all donated to Trump’s inauguration fund. Other Silicon Valley leaders, like Elon Musk and David Sacks, are officially advising the administration. Musk’s so-called Department of Government Efficiency (DOGE) employs technologists sourced from Musk’s tech companies. And already, federal agencies like the General Services Administration have started to roll out AI products like GSAi, a general-purpose government chatbot.
The FTC did not immediately respond to a request for comment from WIRED.
Removing blogs raises serious compliance concerns under the Federal Records Act and the Open Government Data Act, one former FTC official tells WIRED. During the Biden administration, FTC leadership would place “warning” labels above previous administrations’ public decisions it no longer agreed with, the source said, fearing that removal would violate the law.
Since President Donald Trump designated Andrew Ferguson to replace Khan as FTC chair in January, the Republican regulator has vowed to leverage his authority to go after big tech companies. Unlike Khan, however, Ferguson’s criticisms center around the Republican party’s long-standing allegations that social media platforms, like Facebook and Instagram, censor conservative speech online. Before being selected as chair, Ferguson told Trump that his vision for the agency also included rolling back Biden-era regulations on artificial intelligence and tougher merger standards, The New York Times reported in December.
In an interview with CNBC last week, Ferguson argued that content moderation could equate to an antitrust violation. “If companies are degrading their product quality by kicking people off because they hold particular views, that could be an indication that there's a competition problem,” he said.
Sources speaking with WIRED on Tuesday claimed that tech companies are the only groups who benefit from the removal of these blogs.
“They are talking a big game on censorship. But at the end of the day, the thing that really hits these companies’ bottom line is what data they can collect, how they can use that data, whether they can train their AI models on that data, and if this administration is planning to take the foot off the gas there while stepping up its work on censorship,” the source familiar alleges. “I think that's a change big tech would be very happy with.”
77 notes · View notes
aliteralsemicolon · 4 months ago
Note
honestly, the whole ai fight or disagreement thing is kinda insane. we’re seeing the same pattern that happened when the first advanced computers and laptops came out. people went on the theory that they’d replace humans, but in the end, they just became tools. the same thing happened in the arts. writing, whether through books or handwritten texts, has survived countless technological revolutions from ancient civilizations to our modern world.
you’re writing and sharing your work through a phone, so being against ai sounds a little hypocritical. you might as well quit technology altogether and go 100 percent analog. it’s a never ending cycle. every time there’s a new tech revolution, people act like we’re living in the terminator movies even though we don’t even have flying cars yet. ai is just ai and it’s crappy. people assume the worst but like everything before it it will probably just end up being another tool because people is now going to believe anything, nowadays.
Okay so...no. It's never that black and white. Otherwise I could argue that you might as well go 100% technological and never touch grass again. Which sounds just as silly. There are many problems with AI and it's more than just 'robots taking over'. It's actually a deeper conversation about equity, ethics, environmentalism, corruption and capitalism. That's an essay I'm not sure a lot of people are willing to read, otherwise they would be doing their own research on this. I'll sum it up the best I can.
DISCLAIMER As usual I am not responsible for my grammar errors, this was written and posted in one go and I did not look back even once. I'm not a professional source. I just want to explain this and put this discussion to rest on my blog. Please do your own research as well.
There's helpful advancement tools and there's harmful advancement tools. I would argue that AI falls into the latter for a few of reasons.
It's not 'just AI', it's a tool weaponised for more harm than good: Obvious examples include deep fakes and scamming, but here's more incase you're interested.
A more common nuisance is that humans now have to prove that they are not AI. More specifically, writers and students are at risk of being accused of using AI when their work reads more advance that basic writing criteria. I dealt with this just last year actually. I had to prove that the essay I dedicated weeks of my time researching, writing and gathering citations for was actually mine.
I have mutuals that have been accused of using AI because their writing seems 'too advanced' or whatever bs. Personally, I feel that an AI accusation is more valid when the words are more hollow and lack feeling (as AI ≠ emotional intelligence), not when a writer 'sounds too smart'.
"You're being biased."
Okay, here is an unbiased article for you. Please don't forget to take note of the fact that the negative is all stuff that can genuinely ruin lives and the positive is stuff that makes tasks more convenient. This is the trend in every article I've read.
Equity, ethics, corruption, environmentalism and capitalism:
Maybe there could be a world where AI is able to improve and truly help humans, but in this capitalistic world I don't see it being a reality. AI is not the actual problem in my eyes, this is. Resources are finite and lacking amongst humans. The wealthy hoard them for personal comfort and selfish innovations leading to more financial gain, instead of sharing them according to need. Capitalism is another topic of its own and I want to keep my focus on AI specifically so here are some sources on this topic. I highly recommend skimming through them at least.
> Artificial Intelligence and the Black Hole of Capitalism: A More-than-Human Political Ethology > Exploiting the margin: How capitalism fuels AI at the expense of minoritized groups > Rethinking of Marxist perspectives on big data, artificial intelligence (AI) and capitalist economic development
I want to circle back to your first paragraph and just dissect it really quick.
"we’re seeing the same pattern that happened when the first advanced computers and laptops came out. people went on the theory that they’d replace humans, but in the end, they just became tools."
One quick google search gives you many articles explaining that and deeming this statement irrelevant to this discussion. I think this was more a case of inexperience with the internet and online data. The generations since are more experienced/familiar with this sort of technology. You may have heard of 'once it's out there it can never be deleted' pertaining to how nothing can be deleted off the internet. I do not think you're stupid anon, I think you understand this and how dangerous it truly is. Especially with the rise in weaponisation of AI. I'm going to link some quora and reddit posts (horrible journalism ik but luckily I'm not a journalist), because taking personal opinions from people who experienced that era feels important.
> Quora | When the internet came out, were people afraid of it to a similar degree that people are afraid of AI? > Reddit | Were people as scared of computers when they were a new thing, as they are about AI now? > Reddit | Was there hysteria surrounding the introduction of computers and potential job losses?
"the same thing happened in the arts. writing, whether through books or handwritten texts, has survived countless technological revolutions from ancient civilizations to our modern world."
I think this is a logical guess based on pattern recognition. I cannot find any sources to back this up. Either that or you mean to say that artists and writers are not being harmed by AI. Which would be a really ignorant statement.
We know about stolen content from creatives (writers, artists, musicians, etc) to train AI. Everybody knows exactly why this is wrong even if they're not willing to admit it to themselves.
Let's use writers for example. The work writers put out there is used without their consent to train AI for improvement. This is stealing. Remember the very recent issue of writer having to state that they do not consent to their work being uploaded or shared anywhere else because of those apps stealing it and putting it behind a paywall?
I shouldn't have to expand further on why this is a problem. Everybody knows exactly why this is wrong even if they're not willing to admit it to themselves. If you're still wanting to argue it's not going to be with me, here are some sources to help you out.
> AI, Inspiration, and Content Stealing > ‘Biggest act of copyright theft in history’: thousands of Australian books allegedly used to train AI model > AI Detectors Get It Wrong. Writers Are Being Fired Anyway
"you’re writing and sharing your work through a phone, so being against ai sounds a little hypocritical. you might as well quit technology altogether and go 100 percent analog."
...
"it’s a never ending cycle. every time there’s a new tech revolution, people act like we’re living in the terminator movies even though we don’t even have flying cars yet."
Yes there is usually a general fear of the unknown. Take covid for example and how people were mass buying toilet paper. The reason this statement cannot be applied here is due to evidence of it being an actual issue. You can see AI's effects every single day. Think about AI generated videos on facebook (from harmless hope core videos to proaganda) that older generations easily fall for. With recent developments, it's actually becoming harder for experienced technology users to differentiate between the real and fake content too. Do I really need to explain why this is a major, major problem?
> AI-generated images already fool people. Why experts say they'll only get harder to detect. > Q&A: The increasing difficulty of detecting AI- versus human-generated text > New results in AI research: Humans barely able to recognize AI-generated media
"ai is just ai and it’s crappy. people assume the worst but like everything before it it will probably just end up being another tool because people is now going to believe anything, nowadays."
AI is man-made. It only knows what it has been fed from us. Its intelligence is currently limited to what humans know. And it's definitely not as intelligent as humans because of the lack of emotional intelligence (which is a lot harder to program because it's more than math, repetition and coding). At this stage, I don't think AI is going to replace humans. Truthfully I don't know if it ever can. What I do know is that even if you don’t agree with everything else, you can’t disagree with the environmental factor. We can't really have AI without the resources to help run it.
Which leads us back to: finite number of resources. I'm not sure if you're aware of how much water and energy go into running even generative AI, but I can tell you that it's not sustainable. This is important because we're already in an irrevocable stage of the climate crisis and scientists are unsure if Earth as we know it can last another decade, let alone century. AI does not help in the slightest. It actually adds to the crisis, we're just uncertain to what degree at this point. It's not looking good though.
I am not against AI being used as a tool if it was sustainable. You can refute all my other arguments, but you can't refute this. It's a fact and your denial or lack of care won't change the outcome.
My final and probably the most insignificant reason on this list but it matters to me: It’s contributing to humans becoming dumber and lazier.
It's no secret that humans are declining in intelligence. What makes AI so attractive is its ability to provide quick solutions. It gathers the information we're looking for at record speed and saves us the time of having to do the work ourselves.
And I suppose that is the point of invention, to make human life easier. I am of the belief that too much is of anything is every good, though. Too much hardship is not good but neither is everything being too easy. Problem solving pushes intellectual growth, but it can't happen if we never solver our own problems.
Allowing humans to believe that they can stop learning to do even basic tasks (such as writing an email, learning to cite sources, etc) because 'AI can do it for you' is not helping us. This is really just more of a personal grievance and therefore does not matter. I just wanted to say it.
"What about an argument for instances where AI is more helpful than harmful?"
I would love for you to write about it and show me because unfortunately in all my research on this topic, the statistics do not lean in favour of that question. Of course there's always pros and cons to everything. Including phones, computers, the internet, etc. There are definitely instances of AI being helpful. Just not to the scale or same level of impact of all the negatives. And when the bad outweighs the good it's not something worst keeping around in my opinion.
In a perfect world, AI would take over the boring corporate tasks and stuff so that humans can enjoy life– recreation, art and music– as we were meant to. However in this capitalist world, that is not a possiblility and AI is killing joy and abolish AI and AI users DNI and I will probably not be talking about this anymore and if you want to send hate to my inbox on this don't bother because I'll block your anon and you won't get a response to feed your eristicism and you can never send anything anonymous again💙
66 notes · View notes
makethatelevenrings · 5 months ago
Note
Hi im very sorry for what is happening in LA and im not trying to be rude
I didn’t know AI used water can you please explain how?
“According to a report by Goldman Sachs, a ChatGPT query needs nearly 10 times as much electricity as a Google search query.”
“Google said its greenhouse gas emissions rose last year by 48% since 2019.”
“AI requires computer power from thousands of servers that are housed in data centers; and those data centers need massive amounts of electricity to meet that demand.”
“AI server cooling consumes significant water, with data centers using cooling towers and air mechanisms to dissipate heat, causing up to 9 liters of water to evaporate per kWh of energy used.”
“Already AI's projected water usage could hit 6.6 billion m³ by 2027, signaling a need to tackle its water footprint.”
“The huge computer clusters powering ChatGPT need four times as much water to deliver answers than previously thought, it has been claimed. Using the chatbot for between ten to 50 queries consumes about two litres of water, according to experts from the University of California, Riverside.”
Sources:
59 notes · View notes
probablyasocialecologist · 10 months ago
Text
The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying. Some have claimed that large language models are not laundering the texts they’re trained on but, rather, learning from them, in the same way that human writers learn from the books they’ve read. But a large language model is not a writer; it’s not even a user of language. Language is, by definition, a system of communication, and it requires an intention to communicate. Your phone’s auto-complete may offer good suggestions or bad ones, but in neither case is it trying to say anything to you or the person you’re texting. The fact that ChatGPT can generate coherent sentences invites us to imagine that it understands language in a way that your phone’s auto-complete does not, but it has no more intention to communicate. It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes the words “I’m happy to see you” a linguistic utterance is not that the sequence of text tokens that it is made up of are well formed; what makes it a linguistic utterance is the intention to communicate something. Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling. We’re tempted to project those experiences onto a large language model when it emits coherent sentences, but to do so is to fall prey to mimicry; it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes. There is a context in which the dark spots are sufficient; birds are less likely to eat a butterfly that has them, and the butterfly doesn’t really care why it’s not being eaten, as long as it gets to live. But there is a big difference between a butterfly and a predator that poses a threat to a bird. A person using generative A.I. to help them write might claim that they are drawing inspiration from the texts the model was trained on, but I would again argue that this differs from what we usually mean when we say one writer draws inspiration from another. Consider a college student who turns in a paper that consists solely of a five-page quotation from a book, stating that this quotation conveys exactly what she wanted to say, better than she could say it herself. Even if the student is completely candid with the instructor about what she’s done, it’s not accurate to say that she is drawing inspiration from the book she’s citing. The fact that a large language model can reword the quotation enough that the source is unidentifiable doesn’t change the fundamental nature of what’s going on. As the linguist Emily M. Bender has noted, teachers don’t ask students to write essays because the world needs more student essays. The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.
31 August 2024
106 notes · View notes
thepastisalreadywritten · 2 months ago
Text
AI Tool Reproduces Ancient Cuneiform Characters with High Accuracy
Tumblr media
ProtoSnap, developed by Cornell and Tel Aviv universities, aligns prototype signs to photographed clay tablets to decode thousands of years of Mesopotamian writing.
Cornell University researchers report that scholars can now use artificial intelligence to “identify and copy over cuneiform characters from photos of tablets,” greatly easing the reading of these intricate scripts​.
The new method, called ProtoSnap, effectively “snaps” a skeletal template of a cuneiform sign onto the image of a tablet, aligning the prototype to the strokes actually impressed in the clay​.
By fitting each character’s prototype to its real-world variation, the system can produce an accurate copy of any sign and even reproduce entire tablets​.
"Cuneiform, like Egyptian hieroglyphs, is one of the oldest known writing systems and contains over 1,000 unique symbols​.
Its characters change shape dramatically across different eras, cultures and even individual scribes so that even the same character… looks different across time,” Cornell computer scientist Hadar Averbuch-Elor explains​.
This extreme variability has long made automated reading of cuneiform a very challenging problem.
The ProtoSnap technique addresses this by using a generative AI model known as a diffusion model.
It compares each pixel of a photographed tablet character to a reference prototype sign, calculating deep-feature similarities.
Once the correspondences are found, the AI aligns the prototype skeleton to the tablet’s marking and “snaps” it into place so that the template matches the actual strokes​.
In effect, the system corrects for differences in writing style or tablet wear by deforming the ideal prototype to fit the real inscription.
Crucially, the corrected (or “snapped”) character images can then train other AI tools.
The researchers used these aligned signs to train optical-character-recognition models that turn tablet photos into machine-readable text​.
They found the models trained on ProtoSnap data performed much better than previous approaches at recognizing cuneiform signs, especially the rare ones or those with highly varied forms.
In practical terms, this means the AI can read and copy symbols that earlier methods often missed.
This advance could save scholars enormous amounts of time.
Traditionally, experts painstakingly hand-copy each cuneiform sign on a tablet.
The AI method can automate that process, freeing specialists to focus on interpretation.
It also enables large-scale comparisons of handwriting across time and place, something too laborious to do by hand.
As Tel Aviv University archaeologist Yoram Cohen says, the goal is to “increase the ancient sources available to us by tenfold,” allowing big-data analysis of how ancient societies lived – from their religion and economy to their laws and social life​.
The research was led by Hadar Averbuch-Elor of Cornell Tech and carried out jointly with colleagues at Tel Aviv University.
Graduate student Rachel Mikulinsky, a co-first author, will present the work – titled “ProtoSnap: Prototype Alignment for Cuneiform Signs” – at the International Conference on Learning Representations (ICLR) in April.
In all, roughly 500,000 cuneiform tablets are stored in museums worldwide, but only a small fraction have ever been translated and published​.
By giving AI a way to automatically interpret the vast trove of tablet images, the ProtoSnap method could unlock centuries of untapped knowledge about the ancient world.
5 notes · View notes
posttexasstressdisorder · 3 months ago
Text
CNN 3/29/2025
Elon Musk says he sold X to his AI company
By Lisa Eadicicco, CNN
Updated: 6:36 PM EDT, Fri March 28, 2025
Source: CNN
Elon Musk on Friday evening announced he has sold his social media company, X, to xAI, his artificial intelligence company. xAI will pay $45 billion for X, slightly more than Musk paid for it in 2022, but the new deal includes $12 billion of debt.
Musk wrote on his X account that the deal gives X a valuation of $33 billion.
“xAI and X’s futures are intertwined,” Musk said in a post on X. “Today, we officially take the step to combine the data, models, compute, distribution and talent. This combination will unlock immense potential by blending xAI’s advanced AI capability and expertise with X’s massive reach.”
Musk didn’t announce any immediate changes to X, although xAI’s Grok chatbot is already integrated into the social media platform. Musk said that the combined platform will “deliver smarter, more meaningful experiences.” He said the value of the combined company was $80 billion.
Musk has made a slew of changes to the platform once known as Twitter since he purchased it in 2022, prompting some major advertisers to flee. He laid off 80% of the company’s staff, upended the platform’s verification system and reinstated suspended accounts of White supremacists within months of the acquisition.
While X’s valuation is lower than what Musk paid for the social outlet, it’s still a reversal of fortunes for the company. Investment firm Fidelity estimated in October that X was worth nearly 80% less than when Musk bought it. By December, X had recovered somewhat but was still worth only around 30% of what Musk paid, according to Fidelity, whose Blue Chip fund holds a stake in X.
The news also comes as Musk has been in the spotlight for his role at the Department of Government Efficiency in the Trump administration, which has raised questions about how much attention he’s paying to his companies, particularly Tesla. Combining X and xAI could allow Musk to streamline his efforts.
Musk has also been working to establish himself as a leader in the AI space, a big focus for both the Trump administration and the tech industry. Earlier this year, he led a group of investors attempting to purchase ChatGPT maker OpenAI for nearly $100 billion, another escalation in the longtime rivalry between Musk and OpenAI CEO Sam Altman.
It’s unclear precisely how the acquisition will benefit Musk’s AI ambitions. But the tighter integration with X could allow xAI to push its latest AI models and features to a broad audience more quickly.
A significant reversal of X’s fortunes
Big advertisers, who had largely abandoned X after hate speech surged on the platform and ads were seen running alongside pro-Nazi content, have begun to return. (X made several pro-Nazi accounts ineligible for ads following advertiser departures.) Amazon and Apple are both reportedly reinvesting in X campaigns again, a remarkable endorsement from two brands with mass appeal.
The brand’s stabilization helped a group of bondholders, who had been deep underwater in their investments, sell billions of dollars in their X debt holdings at 97 cents on the dollar earlier this month — albeit with exceedingly high interest rates — according to several recent reports.
Bloomberg in February reported that X was in talks to raise money that would value the company at $44 billion. It’s not clear what came of those talks and why xAI is valuing X at less than it could reportedly fetch from investors. X needs to pay down its massive debt load, which Musk on Friday said totals $12 billion.
A big part of why X’s valuation has rebounded in recent months is xAI, which X reportedly held a stake in. Last month, xAI was seeking a $75 billion valuation in a funding round, according to Bloomberg.
But the biggest factor in X’s stunning bounce-back is almost certainly Musk himself: Musk’s elevation to a special government employee under President Donald Trump has empowered the world’s richest person with large sway over the operations of the federal government, which he has rapidly sought to reshape.
Investors betting on X are probably making a gamble on its leader, not its business. Last year, Musk turned X into a pro-Trump machine, using the platform to boost the president’s campaign. In posts to his 200 million followers, he pushed racist conspiracy theories about the Biden administration’s immigration policies and obsessed over the “woke mind virus,” a term used by some conservatives to describe progressive causes.
And now, with Trump back in office and Musk working in the executive branch, X has once again become the most important social media platform for following and interacting with the Trump administration. Musk has also used X to broadcast some of his changes with his Department of Government Efficiency.
This story has been updated with additional context and developments.
See Full Web Article
Go to the full CNN experience
© 2025 Cable News Network. A Warner Bros. Discovery Company. All Rights Reserved.
Terms of Use | Privacy Policy | Ad Choices | Do Not Sell or Share My Personal Information
Tumblr media
6 notes · View notes
jcmarchi · 3 months ago
Text
How debugging and data lineage techniques can protect Gen AI investments - AI News
New Post has been published on https://thedigitalinsider.com/how-debugging-and-data-lineage-techniques-can-protect-gen-ai-investments-ai-news/
How debugging and data lineage techniques can protect Gen AI investments - AI News
Tumblr media Tumblr media
As the adoption of AI accelerates, organisations may overlook the importance of securing their Gen AI products. Companies must validate and secure the underlying large language models (LLMs) to prevent malicious actors from exploiting these technologies. Furthermore, AI itself should be able to recognise when it is being used for criminal purposes.
Enhanced observability and monitoring of model behaviours, along with a focus on data lineage can help identify when LLMs have been compromised. These techniques are crucial in strengthening the security of an organisation’s Gen AI products. Additionally, new debugging techniques can ensure optimal performance for those products.
It’s important, then, that given the rapid pace of adoption, organisations should take a more cautious approach when developing or implementing LLMs to safeguard their investments in AI.
Establishing guardrails
The implementation of new Gen AI products significantly increases the volume of data flowing through businesses today. Organisations must be aware of the type of data they provide to the LLMs that power their AI products and, importantly, how this data will be interpreted and communicated back to customers.
Due to their non-deterministic nature, LLM applications can unpredictably “hallucinate”, generating inaccurate, irrelevant, or potentially harmful responses. To mitigate this risk, organisations should establish guardrails to prevent LLMs from absorbing and relaying illegal or dangerous information.
Monitoring for malicious intent
It’s also crucial for AI systems to recognise when they are being exploited for malicious purposes. User-facing LLMs, such as chatbots, are particularly vulnerable to attacks like jailbreaking, where an attacker issues a malicious prompt that tricks the LLM into bypassing the moderation guardrails set by its application team. This poses a significant risk of exposing sensitive information.
Monitoring model behaviours for potential security vulnerabilities or malicious attacks is essential. LLM observability plays a critical role in enhancing the security of LLM applications. By tracking access patterns, input data, and model outputs, observability tools can detect anomalies that may indicate data leaks or adversarial attacks. This allows data scientists and security teams proactively identify and mitigate security threats, protecting sensitive data, and ensuring the integrity of LLM applications.
Validation through data lineage
The nature of threats to an organisation’s security – and that of its data – continues to evolve. As a result, LLMs are at risk of being hacked and being fed false data, which can distort their responses. While it’s necessary to implement measures to prevent LLMs from being breached, it is equally important to closely monitor data sources to ensure they remain uncorrupted.
In this context, data lineage will play a vital role in tracking the origins and movement of data throughout its lifecycle. By questioning the security and authenticity of the data, as well as the validity of the data libraries and dependencies that support the LLM, teams can critically assess the LLM data and accurately determine its source. Consequently, data lineage processes and investigations will enable teams to validate all new LLM data before integrating it into their Gen AI products.
A clustering approach to debugging
Ensuring the security of AI products is a key consideration, but organisations must also maintain ongoing performance to maximise their return on investment. DevOps can use techniques such as clustering, which allows them to group events to identify trends, aiding in the debugging of AI products and services.
For instance, when analysing a chatbot’s performance to pinpoint inaccurate responses, clustering can be used to group the most commonly asked questions. This approach helps determine which questions are receiving incorrect answers. By identifying trends among sets of questions that are otherwise different and unrelated, teams can better understand the issue at hand.
A streamlined and centralised method of collecting and analysing clusters of data, the technique helps save time and resources, enabling DevOps to drill down to the root of a problem and address it effectively. As a result, this ability to fix bugs both in the lab and in real-world scenarios improves the overall performance of a company’s AI products.
Since the release of LLMs like GPT, LaMDA, LLaMA, and several others, Gen AI has quickly become more integral to aspects of business, finance, security, and research than ever before. In their rush to implement the latest Gen AI products, however, organisations must remain mindful of security and performance. A compromised or bug-ridden product could be, at best, an expensive liability and, at worst, illegal and potentially dangerous. Data lineage, observability, and debugging are vital to the successful performance of any Gen AI investment.  
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
0 notes
workforcesolution · 1 year ago
Text
USE OF BASE SAS CERTIFICATION
Tumblr media
Discover the impact of SAS certification on your career! Gain proof of expertise, enhance learning, and boost credibility. Explore global opportunities and free prep materials. Read more! https://www.rangtech.com/blog/data-science/use-of-base-sas-certification
0 notes
sab-cat · 3 months ago
Text
Mar 18, 2025
The Trump administration’s Federal Trade Commission has removed four years’ worth of business guidance blogs as of Tuesday morning, including important consumer protection information related to artificial intelligence and the agency’s landmark privacy lawsuits under former chair Lina Khan against companies like Amazon and Microsoft. More than 300 blogs were removed.
On the FTC’s website, the page hosting all of the agency’s business-related blogs and guidance no longer includes any information published during former president Joe Biden’s administration, current and former FTC employees, who spoke under anonymity for fear of retaliation, tell WIRED. These blogs contained advice from the FTC on how big tech companies could avoid violating consumer protection laws....
Removing blogs raises serious compliance concerns under the Federal Records Act and the Open Government Data Act, one former FTC official tells WIRED. During the Biden administration, FTC leadership would place “warning” labels above previous administrations’ public decisions it no longer agreed with, the source said, fearing that removal would violate the law.
4 notes · View notes
darkmaga-returns · 15 days ago
Text
“The emails showed the world’s leading climatologists busily working to organize a research cartel. Peer review was a legitimate source of authority when the process supported their positions. It was compromised, if not malicious, when it offered critics of the orthodoxy a platform. The wish to crush dissenting views, in their minds, had become indistinguishable from the pursuit of truth.”  
– Martin Gurri
Over the last two decades, exafloods of Internet content have educated and entertained beyond imagination. Exponentially-growing communications bandwidth and data transparency empowered regular people, elevated previously unknown geniuses, and helped expose deep dysfunction among many existing “experts.” A tsunami of social media also generated psychedelic confusion, not least among the experts themselves, leading to, in Martin Gurri’s words, a “crisis of authority.”
Now, artificial intelligence is about to amplify this infowarp a million-fold, for good and ill, producing both unprecedented knowledge and wealth and new epistemic challenges. 
If you thought the battles over social media “misinformation” were intense, just wait for the A.I. era. 
Lots of failed experts are engaged in a tactical retreat, regrouping for the coming battles. They passively admit “mistakes were made” but dodge specific accountability and refuse to acknowledge those who got the big questions right. 
At the same time, they are busy establishing new gatekeepers, taboos, and approved voices. The very people who got so many giant questions so very wrong over the last two decades are attempting to build a new information fortress for the next 20 years.
2 notes · View notes
rjzimmerman · 17 days ago
Text
Excerpt from this New York Times story:
The cost of electricity is rising across the country, forcing Americans to pay more on their monthly bills and squeezing manufacturers and small businesses that rely on cheap power.
And some of President Trump’s policies risk making things worse, despite his promises to slash energy prices, companies and researchers say.
This week, the Senate is taking up Mr. Trump’s sweeping domestic policy bill, which has already passed the House. In its current form, that bill would abruptly end most of the Biden-era federal tax credits for low-carbon sources of electricity like wind, solar, batteries and geothermal power.
Repealing those credits could increase the average family’s energy bill by as much as $400 per year within a decade, according to several studies published this year.
The studies rely on similar reasoning: Electricity demand is surging for the first time in decades, partly because of data centers needed for artificial intelligence, and power companies are already struggling to keep up. Ending tax breaks for solar panels, wind turbines and batteries would make them more expensive and less plentiful, increasing demand for energy from power plants that burn natural gas.
That could push up the price of gas, which currently generates 43 percent of America’s electricity.
On top of that, the Trump administration’s efforts to sell more gas overseas could further hike prices, while Mr. Trump’s new tariffs on steel, aluminum and other materials would raise the cost of transmission lines and other electrical equipment.
These cascading events could lead to further painful increases in electric bills.
“There’s a lot of concern about some pretty big price spikes,” said Rich Powell, chief executive of the Clean Energy Buyers Association, which represents companies that have committed to buying renewable energy, including General Motors, Honda, Intel and Microsoft.
A study commissioned by the association found that repealing the clean electricity credits could cause power prices to surge more than 13 percent in states like Arizona, Kansas, New Jersey and North Carolina and lead to thousands of job losses nationwide by 2032.
33 notes · View notes
mariacallous · 2 months ago
Text
The damage the Trump administration has done to science in a few short months is both well documented and incalculable, but in recent days that assault has taken an alarming twist. Their latest project is not firing researchers or pulling funds—although there’s still plenty of that going on. It’s the inversion of science itself.
Here’s how it works. Three “dire wolves” are born in an undisclosed location in the continental United States, and the media goes wild. This is big news for Game of Thrones fans and anyone interested in “de-extinction,” the promise of bringing back long-vanished species.
There’s a lot to unpack here: Are these dire wolves really dire wolves? (They’re technically grey wolves with edited genes, so not everyone’s convinced.) Is this a publicity stunt or a watershed moment of discovery? If we’re staying in the Song of Ice and Fire universe, can we do ice dragons next?
All more or less reasonable reactions. And then there’s secretary of the interior Doug Burgum, a former software executive and investor now charged with managing public lands in the US. “The marvel of ‘de-extinction’ technology can help forge a future where populations are never at risk,” Burgum wrote in a post on X this week. “The revival of the Dire Wolf heralds the advent of a thrilling new era of scientific wonder, showcasing how the concept of ‘de-extinction’ can serve as a bedrock for modern species conservation.”
What Burgum is suggesting here is that the answer to 18,000 threatened species—as classified and tallied by the nonprofit International Union for Conservation of Nature—is that scientists can simply slice and dice their genes back together. It’s like playing Contra with the infinite lives code, but for the global ecosystem.
This logic is wrong, the argument is bad. More to the point, though, it’s the kind of upside-down takeaway that will be used not to advance conservation efforts but to repeal them. Oh, fracking may kill off the California condor? Here’s a mutant vulture as a make-good.
“Developing genetic technology cannot be viewed as the solution to human-caused extinction, especially not when this administration is seeking to actively destroy the habitats and legal protections imperiled species need,” said Mike Senatore, senior vice president of conservation programs at the nonprofit Defenders of Wildlife, in a statement. “What we are seeing is anti-wildlife, pro-business politicians vilify the Endangered Species Act and claim we can Frankenstein our way to the future.”
On Tuesday, Donald Trump put on a show of signing an executive order that promotes coal production in the United States. The EO explicitly cites the need to power data centers for artificial intelligence. Yes, AI is energy-intensive. They’ve got that right. Appropriate responses to that fact might include “can we make AI more energy-efficient?” or “Can we push AI companies to draw on renewable resources.” Instead, the Trump administration has decided that the linchpin technology of the future should be driven by the energy source of the past. You might as well push UPS to deliver exclusively by Clydesdale. Everything is twisted and nothing makes sense.
The nonsense jujitsu is absurd, but is it sincere? In some cases, it’s hard to say. In others it seems more likely that scientific illiteracy serves a cover for retribution. This week, the Commerce Department canceled federal support for three Princeton University initiatives focused on climate research. The stated reason, for one of those programs: “This cooperative agreement promotes exaggerated and implausible climate threats, contributing to a phenomenon known as ‘climate anxiety,’ which has increased significantly among America’s youth.”
Commerce Department, you’re so close! Climate anxiety among young people is definitely something to look out for. Telling them to close their eyes and stick their fingers in their ears while the world burns is probably not the best way to address it. If you think their climate stress is bad now, just wait until half of Miami is underwater.
There are two important pieces of broader context here. First is that Donald Trump does not believe in climate change, and therefore his administration proceeds as though it does not exist. Second is that Princeton University president Christopher Eisengruber had the audacity to suggest that the federal government not routinely shake down academic institutions under the guise of stopping antisemitism. Two weeks later, the Trump administration suspended dozens of research grants to Princeton totaling hundreds of millions of dollars. And now, “climate anxiety.”
This is all against the backdrop of a government whose leading health officials are Robert F. Kennedy Jr. and Mehmet Oz, two men who, to varying degrees, have built their careers peddling unscientific malarky. The Trump administration has made clear that it will not stop at the destruction and degradation of scientific research in the United States. It will also misrepresent, misinterpret, and bastardize it to achieve distinctly unscientific ends.
Those dire wolves aren’t going to solve anything; they’re not going to be reintroduced to the wild, they’re not going to help thin out deer and elk populations.
But buried in the announcement was something that could make a difference. It turns out Colossal also cloned a number of red wolves—a species that is critically endangered but very much not extinct—with the goal of increasing genetic diversity among the population. It doesn’t resurrect a species that humanity has wiped out. It helps one survive.
25 notes · View notes
joemardesichcms · 4 months ago
Text
The Future of Commercial Loan Brokering: Trends to Watch!
The commercial loan brokering industry is evolving rapidly, driven by technological advancements, changing market dynamics, and shifting borrower expectations. As businesses continue to seek financing solutions, brokers must stay ahead of emerging trends to remain competitive. Here are some key developments shaping the future of commercial loan brokering:
1. Rise of AI and Automation
Artificial intelligence (AI) and automation are revolutionizing loan processing. From AI-driven underwriting to automated document verification, these technologies are streamlining workflows, reducing manual effort, and speeding up loan approvals. Brokers who leverage AI-powered tools can offer faster and more efficient services.
2. Alternative Lending is Gaining Momentum
Traditional banks are no longer the only players in commercial lending. Alternative lenders, including fintech platforms and private lenders, are expanding options for businesses that may not qualify for conventional loans. As a result, brokers must build relationships with non-bank lenders to provide flexible financing solutions.
3. Data-Driven Decision Making
Big data and analytics are transforming how loans are assessed and approved. Lenders are increasingly using alternative data sources, such as cash flow analysis and digital transaction history, to evaluate creditworthiness. Brokers who understand and utilize data-driven insights can better match clients with the right lenders.
4. Regulatory Changes and Compliance Requirements
The commercial lending landscape is subject to evolving regulations. Compliance with federal and state laws is becoming more complex, requiring brokers to stay updated on industry guidelines. Implementing compliance-friendly processes will be essential for long-term success.
5. Digital Marketplaces and Online Lending Platforms
Online lending marketplaces are making it easier for businesses to compare loan offers from multiple lenders. These platforms provide transparency, efficiency, and better loan matching. Brokers who integrate digital platforms into their services can enhance customer experience and expand their reach.
6. Relationship-Based Lending Still Matters
Despite digital advancements, relationship-based lending remains crucial. Many businesses still prefer working with brokers who offer personalized service, industry expertise, and lender connections. Building trust and maintaining strong relationships with both clients and lenders will continue to be a key differentiator.
7. Increased Focus on ESG (Environmental, Social, and Governance) Lending
Sustainability-focused lending is gaining traction, with more lenders prioritizing ESG factors in their financing decisions. Brokers who understand green financing and social impact lending can tap into a growing market of businesses seeking sustainable funding options.
Final Thoughts
The commercial loan brokering industry is undergoing a transformation, with technology, alternative lending, and regulatory changes shaping the future. Brokers who embrace innovation, stay informed on market trends, and continue building strong relationships will thrive in this evolving landscape.
Are you a commercial loan broker? What trends are you seeing in the industry? Share your thoughts in the comments below!
Tumblr media
3 notes · View notes