#automated research algorithms
ailifehacks · 9 days ago
Text
🌐 AlphaEvolve AI Discovery: Automated Scientific Breakthroughs in Research Algorithms
AlphaEvolve AI discovery revolutionizes research with automated scientific insights. Explore how this breakthrough tool reshapes algorithms, experimentation, data analysis, and lab efficiency today. When researchers seek cutting-edge innovation, AlphaEvolve AI discovery accelerates science by automating hypothesis generation, data interpretation, and algorithmic optimization through advanced

0 notes
jcmarchi · 12 days ago
Text
Sam Altman, OpenAI: The superintelligence era has begun
New Post has been published on https://thedigitalinsider.com/sam-altman-openai-the-superintelligence-era-has-begun/
Sam Altman, OpenAI: The superintelligence era has begun
OpenAI chief Sam Altman has declared that humanity has crossed into the era of artificial superintelligence—and there’s no turning back.
“We are past the event horizon; the takeoff has started,” Altman states. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
The lack of visible signs – robots aren’t yet wandering our high streets, disease remains unconquered – masks what Altman characterises as a profound transformation already underway. Behind closed doors at tech firms like his own, systems are emerging that can outmatch general human intellect.
“In some big sense, ChatGPT is already more powerful than any human who has ever lived,” Altman claims, noting that “hundreds of millions of people rely on it every day and for increasingly important tasks.”
This casual observation hints at a troubling reality: such systems already wield enormous influence, with even minor flaws potentially causing widespread harm when multiplied across their vast user base.
The road to superintelligence
Altman outlines a timeline towards superintelligence that might leave many readers checking their calendars.
By next year, he expects “the arrival of agents that can do real cognitive work,” fundamentally transforming software development. The following year could bring “systems that can figure out novel insights”—meaning AI that generates original discoveries rather than merely processing existing knowledge. By 2027, we might see “robots that can do tasks in the real world.”
Each prediction seems to leap beyond the previous one in capability, drawing a line that points unmistakably toward superintelligence—systems whose intellectual capacity vastly outstrips human potential across most domains.
“We do not know how far beyond human-level intelligence we can go, but we are about to find out,” Altman states.
This progression has sparked fierce debate among experts, with some arguing these capabilities remain decades away. Yet Altman’s timeline suggests OpenAI has internal evidence for this accelerated path that isn’t yet public knowledge.
A feedback loop that changes everything
What makes current AI development uniquely concerning is what Altman calls a “larval version of recursive self-improvement”—the ability of today’s AI to help researchers build tomorrow’s more capable systems.
“Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research,” he explains. “If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different.”
This acceleration compounds as multiple feedback loops intersect. Economic value drives infrastructure development, which enables more powerful systems, which generate more economic value. Meanwhile, the creation of physical robots capable of manufacturing more robots could create another explosive cycle of growth.
“The rate of new wonders being achieved will be immense,” Altman predicts. “It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonisation the next year.”
Such statements would sound like hyperbole from almost anyone else. Coming from the man overseeing some of the most advanced AI systems on the planet, they demand at least some consideration.
Living alongside superintelligence
Despite the potential impact, Altman believes many aspects of human life will retain their familiar contours. People will still form meaningful relationships, create art, and enjoy simple pleasures.
But beneath these constants, society faces profound disruption. “Whole classes of jobs” will disappear—potentially at a pace that outstrips our ability to create new roles or retrain workers. The silver lining, according to Altman, is that “the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.”
For those struggling to imagine this future, Altman offers a thought experiment: “A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries.”
Our descendants may view our most prestigious professions with similar bemusement.
The alignment problem
Amid these predictions, Altman identifies a challenge that keeps AI safety researchers awake at night: ensuring superintelligent systems remain aligned with human values and intentions.
Altman states the need to solve “the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term”. He contrasts this with social media algorithms that maximise engagement by exploiting psychological vulnerabilities.
This isn’t merely a technical issue but an existential one. If superintelligence emerges without robust alignment, the consequences could be devastating. Yet defining “what we collectively really want” will be almost impossible in a diverse global society with competing values and interests.
“The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better,” Altman urges.
OpenAI is building a global brain
Altman has repeatedly characterised what OpenAI is building as “a brain for the world.”
This isn’t meant metaphorically. OpenAI and its competitors are creating cognitive systems intended to integrate into every aspect of human civilisation—systems that, by Altman’s own admission, will exceed human capabilities across domains.
“Intelligence too cheap to meter is well within grasp,” Altman states, suggesting that superintelligent capabilities will eventually become as ubiquitous and affordable as electricity.
For those dismissing such claims as science fiction, Altman offers a reminder that merely a few years ago, today’s AI capabilities seemed equally implausible: “If we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.”
As the AI industry continues its march toward superintelligence, Altman’s closing wish – “May we scale smoothly, exponentially, and uneventfully through superintelligence” – sounds less like a prediction and more like a prayer.
While timelines may (and will) be disputed, the OpenAI chief makes clear the race toward superintelligence isn’t coming—it’s already here. Humanity must grapple with what that means.
See also: Magistral: Mistral AI challenges big tech with reasoning model
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
0 notes
troylambert · 1 year ago
Text
Using AI to Do Keyword Research for Authors
Introduction SEO for authors isn’t just a fancy buzzword; it’s the secret sauce to getting your books noticed online. Imagine your book as a needle in a haystack. SEO—or Search Engine Optimization—helps readers find that needle with ease. It’s all about making sure your content appears at the top of search engine results. Keyword research is the cornerstone of effective SEO. By understanding what

0 notes
reasonsforhope · 7 days ago
Text
"Canadian scientists have developed a blood test and portable device that can determine the onset of sepsis faster and more accurately than existing methods.
Published today [May 27, 2025] in Nature Communications, the test is more than 90 per cent accurate at identifying those at high risk of developing sepsis and represents a major milestone in the way doctors will evaluate and treat sepsis.
“Sepsis accounts for roughly 20 per cent of all global deaths,” said lead author Dr. Claudia dos Santos, a critical care physician and scientist at St. Michael’s Hospital. “Our test could be a powerful game changer, allowing physicians to quickly identify and treat patients before they begin to rapidly deteriorate.”
Sepsis is the body’s extreme reaction to an infection, causing the immune system to start attacking one’s own organs and tissues. It can lead to organ failure and death if not treated quickly. Predicting sepsis is difficult: early symptoms are non-specific, and current tests can take up to 18 hours and require specialized labs. This delay before treatment increases the chance of death by nearly eight per cent per hour.
[Note: The up to 18 hour testing window for sepsis is a huge cause of sepsis-related mortality, because septic shock can kill in as little as 12 hours, long before the tests are even done.]
[Analytical] AI helps predict sepsis
Examining blood samples from more than 3,000 hospital patients with suspected sepsis, researchers from UBC and Sepset, a UBC spin-off biotechnology company, used machine learning to identify a six-gene expression signature, “Sepset,” that predicted sepsis nine times out of 10, and well before a formal diagnosis. In 248 additional blood samples tested using RT-PCR (reverse transcription polymerase chain reaction), a common hospital laboratory technique, the test was 94 per cent accurate in detecting early-stage sepsis in patients whose condition was about to worsen.
“This demonstrates the immense value of AI in analyzing extremely complex data to identify the important genes for predicting sepsis and writing an algorithm that predicts sepsis risk with high accuracy,” said co-author Dr. Bob Hancock, UBC professor of microbiology and immunology and CEO of Sepset.
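[For the technically curious: a minimal sketch of this kind of gene-expression classifier, in Python. The gene names and data below are synthetic placeholders; the actual Sepset signature, model, and training pipeline are not public in this excerpt.]

```python
# Illustrative sketch only: a six-feature gene-expression classifier for
# sepsis risk, in the spirit of the signature described above. The gene
# names and data are synthetic placeholders, not the real Sepset genes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E", "GENE_F"]

# Synthetic RT-PCR-style expression values for 3,000 patients.
X = rng.normal(size=(3000, len(genes)))
# Synthetic labels: risk loosely tied to a weighted sum of the six genes.
y = (X @ rng.normal(size=len(genes)) + rng.normal(scale=2.0, size=3000)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```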
Bringing the test to point of care
To bring the test closer to the bedside, the National Research Council of Canada (NRC) developed a portable device they called PowerBlade that uses a drop of blood and an automated sequence of steps to efficiently detect sepsis. Tested with 30 patients, the device was 92 per cent accurate in identifying patients at high risk of sepsis and 89 per cent accurate in ruling out those not at risk.
“PowerBlade delivered results in under three hours. Such a device can make treatment possible wherever a patient may be, including in the emergency room or remote health care units,” said Dr. Hancock.
“By combining cutting-edge microfluidic research with interdisciplinary collaboration across engineering, biology, and medicine, the Centre for Research and Applications in Fluidic Technologies (CRAFT) enables rapid, portable, and accessible testing solutions,” said co-author Dr. Teodor Veres, of the NRC’s Medical Devices Research Centre and CRAFT co-director. CRAFT, a joint venture between the University of Toronto, Unity Health Toronto and the NRC, accelerates the development of innovative devices that can bring high-quality diagnostics to the point of care.
Dr. Hancock’s team, including UBC research associate and co-author Dr. Evan Haney, has also started commercial development of the Sepset signature. “These tests detect the early warnings of sepsis, allowing physicians to act quickly to treat the patient, rather than waiting until the damage is done,” said Dr. Haney."
-via University of British Columbia, May 27, 2025
821 notes · View notes
ladyaldhelm · 4 months ago
Text
This post is a very long rant about Generative AI. If you are not in the headspace to read such content right now, please continue scrolling.
....
....
It has come to my attention that a person who I deeply admire is Pro-AI. Not just Pro-AI, but they have become a shill for a multi-billion dollar corporation, promoting its destructive generative AI tools voluntarily and willingly. This person is a creative professional and should know better; this decision shows a lack of integrity and empathy for their fellow creatives. They have sold out, contributing not just to their own destruction but to that of everyone around them, without any concern. It thoroughly disgusts and disappoints me.
Listen, I am not against technological advancements. While I am never the first to adopt a new technology, I have marveled at the leaps and bounds that have been made within my own lifetime, and I welcome progress. Artificial Intelligence and Machine Learning models certainly have their place in this world. Right now, scientific researchers are using advanced AI modeling to discover new protein configurations with a program called AlphaFold, and the millions of newly predicted proteins have gone on to support the development of life-saving cancer treatments, vaccine development, and new ways to battle drug-resistant bacterial infections. Machine learning models are being developed to track and predict climate change with terrifying accuracy, discover new species, research new ways of dealing with plastic waste and CO2/methane, and develop highly accurate tools for early detection of cancers. These are all amazing advancements that have only been made possible by AI and will save countless millions of lives. THIS is what AI should be used for.
Generative AI, however, is a different beast entirely. It is problematic in many ways and destructive by its very nature. All the current models were trained on BILLIONS of copyrighted materials (images, music, text) without the creators' consent or knowledge. That in and of itself is highly unethical. In addition, the computers that run these GenAI programs use an insane amount of resources, making them a major contributor to climate change right now, even worse than the NFT and blockchain stuff a few years ago.
GenAI literally takes someone's hard work, puts it into an algorithm that chews it up and spits out some kind of abomination, all with no effort on the part of the user. And then these "creations" are being sold by the boatload, crowding out legitimate artists and professional creatives. Artists like myself and thousands of others who rely on income from art. Musicians, filmmakers, novelists, and writers are losing as well. It is an uphill battle. The market is flooded right now with so much AI-generated art and so many AI-generated books that actual artists and writers are being buried. To make matters worse, these generated works often contain inaccuracies, spread misinformation, and can lead to injury or even death. There are so many AI-generated books, for example, about pet care or foraging for plants that are littered with inaccurate and downright dangerous information: telling people that certain toxic plants are safe to eat, or giving pet care advice that will lead to the animal suffering and dying. People are already being affected by this. It is bad enough when actual authors spread misinformation, but when someone can generate an entire book in a few seconds, the problem gets multiplied by several orders of magnitude. It makes finding legitimate information difficult or even downright impossible.
GenAI seeks to turn the arts into a commodity, a get-rich-quick money making scheme, which is not the point of art. Automating art should never be the goal of humanity. Automating dangerous and tedious tasks is important for progress, but automating art is taking away our humanity. Art is all about the human experience and human expression, something a machine cannot ever replicate and it SHOULDN'T. Art should come from the heart and soul, not some crap that is mass produced to make a quick buck. Also developing your skills as an artist, whether that is through drawing, painting, sculpture, composing music, songwriting, poetry, creative writing, animation, photography, or making films, are not just about human expression but develop your brain and make you a more well rounded person, with a rich and deep experience and emotional connection to others. Shitting out crappy art and writing just to make a quick dollar defeats the entire purpose of all of that.
In addition, over-reliance on automated and AI tools is already leading to cognitive decline and the deterioration of critical thinking skills. When it is so easy to click a button and generate a research paper, why bother putting the work in? Students are already doing this, taking the easy way out to get a grade, but they are only hurting themselves. When machines do your thinking for you, what is there left to do? People will lose the ability to develop even basic skills.
/rant
By the way if any tech bros come at me you will be blocked without warning. This is not up for debate or discussion.
110 notes · View notes
kenyatta · 2 months ago
Text
AkiraBot is a program that fills website comments sections and customer service chat bots with AI-generated spam messages. Its goal is simple: it wants you to sign up for an SEO scheme that costs about $30 a month. For that low price it swears it can enchant Google’s algorithms to get you on the frontpage. But it’s a scam. A new report from researchers at cybersecurity firm SentinelOne documented how scammers deployed AkiraBot, the tool’s use of OpenAI generated messages, and how it avoided multiple CAPTCHA systems and network detection techniques. According to the report, the bot targeted 420,000 unique domains and successfully spammed 80,000.
Whoever runs AkiraBot operates their SEO company under a bunch of different names, but they all tend to use the words “Akira” or “ServiceWrap.” SentinelOne says the tool finds websites built with third-party software like Wix or Squarespace and spams comments sections and automated chatbots with a promise to get the site on the frontpage of various search engines. If you have a small business that exists on the web or have run a WordPress-based website in the last 15 years, you’ve likely seen messages like those AkiraBot crafts.
28 notes · View notes
orriculum · 2 years ago
Text
Working in marketing in this day and age is wild, because I get to see up close the way AI is being pushed by our bosses as a tool to churn out content. A lot of companies still have this 2010s mindset that they need to be populating the internet regularly with useless articles on how to do things just to stay relevant on search engines. But this takes so much time and effort on the part of humans, because the articles have to make at least a little sense and require research on an industry you may know nothing about but that marketing firms expect you to write a nonsense essay about for the SEO. Of course, if it's just automated, we don't need to worry about any of that; this isn't a serious writing endeavor intent on helping end users, it's about pushing the product at the end of another 300-word article, living and dying by the whims of yet another algorithm
89 notes · View notes
jcmarchi · 20 days ago
Text
The Future of Investment Research with Autonomous AI Agents
New Post has been published on https://thedigitalinsider.com/the-future-of-investment-research-with-autonomous-ai-agents/
The Future of Investment Research with Autonomous AI Agents
The finance industry has always valued speed and precision. Historically, these characteristics depended wholly on human foresight and spreadsheet sorcery. The emergence of autonomous AI agents is poised to fundamentally transform this landscape.
AI agents are already widely employed across industries: to automate customer service, write code, and screen interview candidates. But Wall Street? That’s always been a tougher nut to crack, for multiple reasons. The stakes are high, the accuracy bar is high, the data is messy, and the pressure is unrelenting.
As nobody wants to ride a fax machine to work and miss out on all the AI hype, fintech’s already showing us just how game-changing this wave is. Automation, for instance, is eliminating inefficiencies for investment research and due diligence. The rise of financial-grade autonomous agents feels less like a trend and more like a turning point.
Autonomous AI agents for investment research: what are they?
Let’s start with the basics. What are autonomous AI agents? In essence, they’re specialized software equipped with large language models, memory, and agent orchestration to perform highly cognitive tasks that typically require humans. Autonomous AI agents digest enormous datasets, spot patterns, and return insights that used to take weeks to uncover. This isn’t some middle-of-the-road automation. AI agents have the potential to cut through information noise, accurately track market signals, and generate research that meets the bar of serious institutional rigor.
Picture AI agents as always-on digital analysts tapping into everything from SEC filings and earnings calls to patent databases, user reviews, and news feeds. Unlike legacy tools that just organize data into neat folders, these agents can mirror actual “thinking.” They frame context, connect dots, and produce insights worthy of strategic briefings. They can even format it all into investor-ready slide decks. In an industry where every minute matters, that kind of intelligence isn’t just helpful — it can be decisive.
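As a rough illustration of this pattern (and not any vendor's actual pipeline), the core loop of such an agent can be sketched in a few lines of Python. The `llm_complete` function below is a placeholder for whatever LLM API a firm would plug in, and the source names are assumptions for the example:

```python
# Illustrative agent loop: ingest sources, summarize each into memory,
# then synthesize a briefing. `llm_complete` is a stand-in for any
# chat/completions API; prompts and sources are placeholders.
from dataclasses import dataclass, field

def llm_complete(prompt: str) -> str:
    # Stub: replace with a real LLM API call. Returns a canned string so
    # the sketch runs end-to-end without credentials.
    return f"(model output for: {prompt[:40]}...)"

@dataclass
class ResearchAgent:
    memory: list[str] = field(default_factory=list)  # running notes

    def ingest(self, source_name: str, text: str) -> None:
        summary = llm_complete(
            f"Summarize the investment-relevant facts in {source_name}:\n{text}"
        )
        self.memory.append(f"[{source_name}] {summary}")

    def brief(self, question: str) -> str:
        notes = "\n".join(self.memory)
        return llm_complete(
            f"Using only these notes:\n{notes}\n\nAnswer: {question}"
        )

agent = ResearchAgent()
agent.ingest("10-K filing", "...")      # e.g. SEC filings
agent.ingest("earnings call", "...")    # e.g. call transcripts
print(agent.brief("What are the key risks to next quarter's margins?"))
```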
Tools like those created by Wokelo AI are a clear signal of where things are going. As the first AI agent custom-built for institutional finance, it’s already picking up steam across firms like KPMG, Berkshire Partners, EY, Google, and Guggenheim. By scanning over 100,000 live sources and producing high-quality research in minutes, autonomous AI agents are turning what used to be a bottleneck into a superpower. Take the example of M&A: AI-powered research tools can dig into product offerings and synergy potential, enabling investors or consultants to discover unexpected investment opportunities in a fraction of the time. Real-time data analytics and on-demand deep dives allow us to catch early market signals when they give investors the most competitive edge.
None of this happened in a vacuum. The industry has quietly evolved: where early tools were rigid and reactive, today’s AI agents are agile, contextual, and constantly learning. The new financial intelligence is built to save us time, money, and human mistakes.
The power of pattern recognition at scale
And it’s not just speed that makes AI agents a good fit for investment research. If anything, it’s scale. Human researchers hit cognitive limits, bring unconscious bias to the table, and can’t always perform at the top of their ability. Well, AI doesn’t flinch. It ingests everything: deal data, news sentiment, customer reviews, social signals — you name it. It can flag anomalies across quarterly reports, spot sector momentum before it trends, and tie disparate data points together to reveal shifts no human could track in real time.
For instance, AI tools for financial research can surface early indicators of biotech breakthroughs or trace the downstream effects of a major M&A move across global supply chains. All without the marathon hours analysts are used to. Is this a way to get more tasks done? Yes. But it also unlocks a literally superhuman level of pattern recognition.
Besides, the accuracy is unprecedented. Unlike humans, AI doesn’t know burnout, and it doesn’t miss signals buried in noise. That alone upgrades the quality of insight firms are working with. In terms of overall productivity, it means, for instance, a 50-70% reduction in research hours per prospective deal and a 40% reduction in FTE research effort required for diligence reports. But the real unlock? Letting analysts spend less time on dry research tasks and more time on higher order tasks, like judgment calls, narratives, client relationships, and high-leverage decisions. AI handles the heavy data lifting, answering what, why, how; humans focus on what next. That’s not just cost-efficiency but a smarter division of labor.
Challenges? Yes, those are being worked on
Let’s get one thing straight: AI agents aren’t magic. They’re only as sharp as the data they’re trained on. Feed them noise, and you’ll get noise back, just faster—that’s the good old “garbage in, garbage out” problem. Data quality is still the Achilles’ heel of autonomous agents. Incomplete datasets, stale intel, or baked-in bias can throw even the most advanced models off course. Companies pioneering AI for financial research are actively mitigating this challenge by pulling from a vetted, ever-expanding set of high-integrity sources.
The next big issue is the regulatory maze. Financial markets are a compliance battlefield, and any autonomous AI agent employed there must align with evolving legal and policy standards. For companies delivering these tools to the market, this means constant calibration, legal oversight baked into development cycles, and deep collaboration between data science and compliance teams. Some tools already feature SOC 2-compliant, zero-trust architecture to ensure data privacy, and more are being developed to fit highly regulated industries like finance.
When algorithms drive decisions at any level, accountability for when things go sideways is paramount. The logic behind an AI’s call needs to be transparent at all times, which remains an active challenge for anyone employing AI in high-stakes environments like financial research. While AI can crunch numbers, surface signals at superhuman speed, and even pass the Turing test, it still lacks the human capacity for contextual judgment. When markets get unpredictable, this can become a serious problem. That’s why the future isn’t AI versus human analysts. It’s AI with analysts, where AI takes care of the legwork so human experts can focus on what they do best: spotting what machines might miss.
Rethinking the analyst’s role in the age of AI
Here’s the mind-bender: the financial analyst of the near future will go beyond just using AI. As autonomous AI agents for research become more widespread and better embedded in workflows, the human job is very likely to morph into that of a curator, trainer, and strategic partner to the machine. That means a skill-set shift: from finance alone to interdisciplinary fluency, where understanding machine learning, prompting at a professional level, spotting gaps in logic, and interpreting black-box outputs become essential skills.
And we shouldn’t view it as a threat, because it’s more of an upgrade. The analysts who thrive will be those who can steer AI, question it, and push it to its limits. The upside: analysts get to spend less time proving things and more time asking better questions. AI tools aren’t eliminating analysts — they’re unburdening them. In doing so, the entire practice of investment research is elevated. Less stress, more insight. Less noise, more signal. And it’s already happening.
What to expect next
So the hybrid future of investment research looks very much powered by AI and steered by humans. That would mean deeper integrations where autonomous agents learn from analyst feedback, constantly refining their output based on machine-human interaction.
It isn’t a stretch to think that, before long, multimodal agents will be able to analyze not just text: charts, audio, and video are up next. Agents like that won’t just anticipate market moves; they’ll be able to predict investor behavior. Now, picture real-time collaboration where AI delivers top-notch research and actively participates in the strategic process. Will this disrupt the old guard? Without a doubt. The legacy research model — slow, expensive, labor-heavy — is out of step with today’s velocity. For traditional firms unwilling to adapt, the options are stark: evolve, consolidate, or get left behind.
VCs and private equity teams are early movers. Many of them already use AI to expand deal pipelines and sharpen due diligence. Hedge funds and asset managers aren’t far behind, especially as returns get squeezed and edge becomes harder to find. Eventually, we’ll see this trickle down: retail investors tapping “lite” versions of autonomous agents, putting elite-level insight into the hands of the many.
Rewriting the research playbook
Clinging to traditional research models in finance no longer seems a smart choice. Embracing a new paradigm powered by autonomous AI agents will make those who act early the biggest winners. The future is all about human analysts working together with the machine. In investment research, that might just be the ultimate edge.
0 notes
mariacallous · 2 years ago
Text
It’s now well understood that generative AI will increase the spread of disinformation on the internet. From deepfakes to fake news articles to bots, AI will generate not only more disinformation, but more convincing disinformation. But what people are only starting to understand is how disinformation will become more targeted and better able to engage with people and sway their opinions.
When Russia tried to influence the 2016 US presidential election via the now disbanded Internet Research Agency, the operation was run by humans who often had little cultural fluency or even fluency in the English language and so were not always able to relate to the groups they were targeting. With generative AI tools, those waging disinformation campaigns will be able to finely tune their approach by profiling individuals and groups. These operatives can produce content that seems legitimate and relatable to the people on the other end and even target individuals with personalized disinformation based on data they’ve collected. Generative AI will also make it much easier to produce disinformation and will thus increase the amount of disinformation that’s freely flowing on the internet, experts say.
“Generative AI lowers the financial barrier for creating content that’s tailored to certain audiences,” says Kate Starbird, an associate professor in the Department of Human Centered Design & Engineering at the University of Washington. “You can tailor it to audiences and make sure the narrative hits on the values and beliefs of those audiences, as well as the strategic part of the narrative.”
Rather than producing just a handful of articles a day, Starbird adds, “You can actually write one article and tailor it to 12 different audiences. It takes five minutes for each one of them.”
Considering how much content people post to social media and other platforms, it’s very easy to collect data to build a disinformation campaign. Once operatives are able to profile different groups of people throughout a country, they can teach the generative AI system they’re using to create content that manipulates those targets in highly sophisticated ways.
“You’re going to see that capacity to fine-tune. You’re going to see that precision increase. You’re going to see the relevancy increase,” says Renée DiResta, the technical research manager at Stanford Internet Observatory.
Hany Farid, a professor of computer science at the University of California, Berkeley, says this kind of customized disinformation is going to be “everywhere.” Though bad actors will probably target people by groups when waging a large-scale disinformation campaign, they could also use generative AI to target individuals.
“You could say something like, ‘Here’s a bunch of tweets from this user. Please write me something that will be engaging to them.’ That’ll get automated. I think that’s probably coming,” Farid says.
Purveyors of disinformation will try all sorts of tactics until they find what works best, Farid says, and much of what’s happening with these disinformation campaigns likely won’t be fully understood until after they’ve been in operation for some time. Plus, they only need to be somewhat effective to achieve their aims.
“If I want to launch a disinformation campaign, I can fail 99 percent of the time. You fail all the time, but it doesn’t matter,” Farid says. “Every once in a while, the QAnon gets through. Most of your campaigns can fail, but the ones that don’t can wreak havoc.”
Farid says we saw during the 2016 election cycle how the recommendation algorithms on platforms like Facebook radicalized people and helped spread disinformation and conspiracy theories. In the lead-up to the 2024 US election, Facebook’s algorithm—itself a form of AI—will likely be recommending some AI-generated posts instead of only pushing content created entirely by human actors. We’ve reached the point where AI will be used to create disinformation that another AI then recommends to you.
“We’ve been pretty well tricked by very low-quality content. We are entering a period where we’re going to get higher-quality disinformation and propaganda,” Starbird says. “It’s going to be much easier to produce content that’s tailored for specific audiences than it ever was before. I think we’re just going to have to be aware that that’s here now.”
What can be done about this problem? Unfortunately, only so much. Diresta says people need to be made aware of these potential threats and be more careful about what content they engage with. She says you’ll want to check whether your source is a website or social media profile that was created very recently, for example. Farid says AI companies also need to be pressured to implement safeguards so there’s less disinformation being created overall.
The Biden administration recently struck a deal with some of the largest AI companies—ChatGPT maker OpenAI, Google, Amazon, Microsoft, and Meta—that encourages them to create specific guardrails for their AI tools, including external testing of AI tools and watermarking of content created by AI. These AI companies have also created a group focused on developing safety standards for AI tools, and Congress is debating how to regulate AI.
Despite such efforts, AI is accelerating faster than it’s being reined in, and Silicon Valley often fails to keep promises to only release safe, tested products. And even if some companies behave responsibly, that doesn’t mean all of the players in this space will act accordingly.
“This is the classic story of the last 20 years: Unleash technology, invade everybody’s privacy, wreak havoc, become trillion-dollar-valuation companies, and then say, ‘Well, yeah, some bad stuff happened,’” Farid says. “We’re sort of repeating the same mistakes, but now it’s supercharged because we’re releasing this stuff on the back of mobile devices, social media, and a mess that already exists.”
109 notes · View notes
reasonsforhope · 11 months ago
Text
"When bloodstream infections set in, fast treatment is crucial — but it can take several days to identify the bacteria responsible. A new, rapid-diagnosis sepsis test could cut down on the wait, reducing testing time from as much as a few days to about 13 hours by cutting out a lengthy blood culturing step, researchers report July 24 [2024] in Nature.
“They are pushing the limits of rapid diagnostics for bloodstream infections,” says Pak Kin Wong, a biomedical engineer at Penn State who was not involved in the research. “They are driving toward a direction that will dramatically improve the clinical management of bloodstream infections and sepsis.”
Sepsis — an immune system overreaction to an infection — is a life-threatening condition that strikes nearly 2 million people per year in the United States, killing more than 250,000 (SN: 5/18/08). The condition can also progress to septic shock, a steep drop in blood pressure that damages the kidneys, lungs, liver and other organs. It can be caused by a broad range of different bacteria, making species identification key for personalized treatment of each patient.
In conventional sepsis testing, the blood collected from the patient must first go through a daylong blood culturing step to grow more bacteria for detection. The sample then goes through a second culture for purification before undergoing testing to find the best treatment. During the two to three days required for testing, patients are placed on broad-spectrum antibiotics — a blunt tool designed to stave off a mystery infection that’s better treated by targeted antibiotics after figuring out the specific bacteria causing the infection.
Nanoengineer Tae Hyun Kim and colleagues found a way around the initial 24-hour blood culture.
The workaround starts by injecting a blood sample with nanoparticles decorated with a peptide designed to bind to a wide range of blood-borne pathogens. Magnets then pull out the nanoparticles, and the bound pathogens come with them. Those bacteria are sent directly to the pure culture. Thanks to this binding and sorting process, the bacteria can grow faster without extraneous components in the sample, like blood cells and the previously given broad-spectrum antibiotics, says Kim, of Seoul National University in South Korea.
Cutting out the initial blood culturing step also relies on a new imaging algorithm, Kim says. To test bacteria’s susceptibility to antibiotics, both are placed in the same environment, and scientists observe if and how the antibiotics stunt the bacteria’s growth or kill them. The team’s image detection algorithm can detect subtler changes than the human eye can. So it can identify the species and antibiotic susceptibility with far fewer bacteria cells than the conventional method, thereby reducing the need for long culture times to produce larger colonies.
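[For the technically curious: a toy version of growth detection from time-lapse images, using simple thresholding. The team's actual image-detection algorithm is more sensitive than this sketch, which is illustrative only and uses synthetic frames.]

```python
# Illustrative sketch: quantify bacterial growth between two time-lapse
# frames by thresholding and comparing foreground area. A toy stand-in
# for the image-detection algorithm described above, on synthetic data.
import numpy as np

def colony_area(frame: np.ndarray, threshold: float = 0.5) -> int:
    """Count pixels above an intensity threshold as 'bacteria'."""
    return int((frame > threshold).sum())

def growth_ratio(frame_t0: np.ndarray, frame_t1: np.ndarray) -> float:
    """>1 suggests growth despite the antibiotic; <=1 suggests susceptibility."""
    a0 = colony_area(frame_t0)
    return colony_area(frame_t1) / a0 if a0 else float("inf")

rng = np.random.default_rng(1)
t0 = rng.random((64, 64))                             # synthetic frame at time 0
t1 = np.clip(t0 + 0.1 * rng.random((64, 64)), 0, 1)   # slightly 'grown' frame
print(f"growth ratio: {growth_ratio(t0, t1):.2f}")
```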
Though the new method shows promise, Wong says, any new test carries a risk of false negatives, missing bacteria that are actually present in the bloodstream. That in turn can lead to not treating an active infection, and “undertreatment of bloodstream infection can be fatal,” he says. “While the classical blood culture technique is extremely slow, it is very effective in avoiding false negatives.”
Following their laboratory-based experiments, Kim and colleagues tested their new method clinically, running it in parallel with conventional sepsis testing on 190 hospital patients with suspected infections. The testing obtained a 100 percent match on correct bacterial species identification, the team reports. Though more clinical tests are needed, these accuracy results are encouraging so far, Kim says.
The team is continuing to refine their design in hopes of developing a fully automated sepsis blood test that can quickly produce results, even when hospital laboratories are closed overnight. “We really wanted to commercialize this and really make it happen so that we could make impacts to the patients,” Kim says."
-via Science News, July 24, 2024
2K notes · View notes
zerofuckingwaste · 5 months ago
Text
I see a lot of people- and I mean a LOT of people- criticizing AI and its use in creative fields. Which, yes. More of that please.
But I'm unnerved that I see even more people criticizing its use in labor that is, to put it simply, unskilled drudgery. Things that can indeed be simplified to a limited algorithm, making human jobs pointless. The argument is that AI is taking jobs from people, taking their livelihoods...
Three things:
We should be able to automate the drudgery, so that we may pursue artistic and other more fun endeavors instead. Moving closer to a society that doesn't require people to work those jobs means that we as a society can begin to dedicate our time to things we actually want to do.
As time moves forward, certain jobs are made irrelevant, and that's often for the better. Yes, there are jobs that are a lot rarer now because of the advancement in technology that we look wistfully on as awesome relics- blacksmith, groom, and the like- but I'm talking about the jobs like, phone operators, human computers, lamplighter. Jobs that don't NEED to exist, because technology can do it better, faster, cheaper. Are those folks screwed? No- they've developed skills that can be applied to numerous other jobs. Let jobs that don't need to exist be automated, so the people in those roles can pursue better things- to make everyone's lives better, and easier.
That's the same argument bigots use when talking about illegal immigrants- that they're taking our jobs. Do you really want to continue that rhetoric?
The only legitimate argument I've heard is that AI uses up insane resources, which, oh my GOD, it's horrible. But computers used to be the size of my house, and now I carry one in my pocket. It takes time, but not long from now, I'm sure it will be made more efficient. That takes research, support, and MONEY.
If the people in charge of these AI things don't think that they can essentially revamp labor with their work in a way that is profitable, they have no incentive to make it more efficient. Make it clear that you WANT AI to be used for purposes that better humanity, but that it needs to be more efficient to be justified in doing so. Make it clear that you want to be able to pursue your passions in a way that is profitable to you, and AI that supports that endeavor will only lead to good things.
Tl;Dr: AI is not morally aligned, positively or negatively. It's not ethical, it's not unethical. It's about how it's used, who's using it, and how its use impacts the world. Get your head on straight, and advocate for its use in a way that helps you.
6 notes · View notes
shituationist · 1 year ago
Text
assuaging my anxieties about machine learning over the last week, I learn that despite there being about ten years of doom-saying about the full automation of radiomics, there's actually a shortage of radiologists now (and, also, the machine learning algorithms that are supposed to be able to detect cancers better than human doctors are very often giving overconfident predictions). truck driving was supposed to be completely automated by now, but my grampa is still truckin' and will probably get to retire as a trucker. companies like GM are now throwing decreasing amounts of money at autonomous vehicle research after throwing billions at cars that can just barely ferry people around san francisco (and sometimes still fails), the most mapped and trained upon set of roads in the world. (imagine the cost to train these things for a city with dilapidated infrastructure, where the lines in the road have faded away, like, say, Shreveport, LA).
we now have transformer-based models that are able to provide contextually relevant responses, but the responses are often wrong, and often in subtle ways that require expertise to needle out. the possibility of giving a wrong response is always there - it's a stochastic next-word prediction algorithm based on statistical inferences gleaned from the training data, with no innate understanding of the symbols its producing. image generators are questionably legal (at least the way they were trained and how that effects the output of essentially copyrighted material). graphic designers, rather than being replaced by them, are already using them as a tool, and I've already seen local designers do this (which I find cheap and ugly - one taco place hired a local designer to make a graphic for them - the tacos looked like taco bell's, not the actual restaurant's, and you could see artefacts from the generation process everywhere). for the most part, what they produce is visually ugly and requires extensive touchups - if the model even gives you an output you can edit. the role of the designer as designer is still there - they are still the arbiter of good taste, and the value of a graphic designer is still based on whether or not they have a well developed aesthetic taste themself.
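(a toy illustration, for the curious, of what "stochastic next-word prediction" means at its crudest: count bigrams in a corpus, then sample the next word from their frequencies. real transformers learn vastly richer statistics over tokens, but the basic gist - predict the next symbol from training-data statistics, with no innate understanding - is the same.)

```python
# toy bigram sampler: the crudest form of stochastic next-word prediction.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1  # count how often w2 follows w1

def next_word(word: str) -> str:
    counts = bigrams[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]  # sample, don't argmax

word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # fluent-ish, statistically plausible, meaning-free
```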
for the most part, everything is in tech demo phase, and this is after getting trained on nearly the sum total of available human produced data, which is already a problem for generalized performance. while a lot of these systems perform well on older, flawed, benchmarks, newer benchmarks show that these systems (including GPT-4 with plugins) consistently fail to compete with humans equipped with everyday knowledge.
there is also a huge problem with the benchmarks typically used to measure progress in machine learning that impact their real world use (and tell us we should probably be more cautious because the human use of these tools is bound to be reckless given the hype they've received). back to radiomics, some machine learning models barely generalize at all, and only perform slightly better than chance at identifying pneumonia in pediatric cases when it's exposed to external datasets (external to the hospital where the data it was trained on came from). other issues, like data leakage, make popular benchmarks often an overoptimistic measure of success.
very few researchers in machine learning are recognizing these limits. that probably has to do with the academic and commercial incentives towards publishing overconfident results. many papers are not even in principle reproducible, because the code, training data, etc., is simply not provided. "publish or perish", the bias journals have towards positive results, and the desire of tech companies to get continued funding while "AI" is the hot buzzword, all combined this year for the perfect storm of techno-hype.
which is not to say that machine learning is useless. their use as glorified statistical methods has been a boon for scientists, when those scientists understand what's going on under the hood. in a medical context, tempered use of machine learning has definitely saved lives already. some programmers swear that copilot has made them marginally more productive, by autocompleting sometimes tedious boilerplate code (although, hey, we've had code generators doing this for several decades). it's probably marginally faster to ask a service "how do I reverse a string" than to look through the docs (although, if you had read the docs to begin with would you even need to take the risk of the service getting it wrong?) people have a lot of fun with the image generators, because one-off memes don't require high quality aesthetics to get a chuckle before the user scrolls away (only psychopaths like me look at these images for artefacts). doctors will continue to use statistical tools in the wider machine learning tool set to augment their provision of care, if these were designed and implemented carefully, with a mind to their limitations.
anyway, i hope posting this will assuage my anxieties for another quarter at least.
35 notes · View notes
weaveology · 8 months ago
Text
Hello world weaveology!
weaveology - an inFORMed Weaved Matter research project
by Maude Guirault - Computational and Textile designer & Andrea Graziano - Computational designer - Co-de-iT
"The Analytical Engine weaves algebraic patterns, just as the Jacquard loom weaves flowers and leaves." Ada Lovelace
The deep connection between the automation of weaving looms and the mechanical processing of information is well known; it lies at the root of modern information technology. Weaving looms are driven by ‘weaving patterns’: a binary code of 0s and 1s, visually described as black and white pixels, that controls the behaviour of the threads to weave complex patterns.
But if 0 and 1 are the basic units of information, the transition between them defines the weaving operation; the number and positioning of those transitions define not only the aesthetic result but also the tactile and physical properties of the fabric. These are precisely the premises of our research.
It is relatively easy nowadays to write algorithms that explore the billions of possible permutations of textile patterns, but how can we organize all this complexity? Beyond the mere aesthetic result, what are the differences in physical properties across this incredible abundance of variation? Is it possible to recognize and establish the correspondence between the 0s and 1s and the resulting physical behavior of the fabric?
A computational design strategy to detect physical properties as textile features, to cluster the incredible number of possible generated patterns, and to articulate and organize that variety into a weaving process could eventually enable a novel approach to textile design & weaving.
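As a toy illustration of these premises, a weave pattern can be encoded as a binary matrix and its 0/1 transitions counted as a crude proxy for interlacement. The sketch below (assuming a standard 2/2 twill repeat) is illustrative only, not the project's actual analysis pipeline:

```python
# Sketch: a weave pattern as a 0/1 matrix (warp lifted or not), with the
# number of transitions per thread as a crude proxy for interlacement -
# one of the physical features discussed above. Illustrative only.
import numpy as np

# A 2/2 twill repeat: each weft row shifts the lift pattern by one.
twill = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
])

def transitions(pattern: np.ndarray) -> tuple[int, int]:
    """Count 0<->1 changes along wefts (rows) and warps (columns)."""
    weft = int(np.abs(np.diff(pattern, axis=1)).sum())
    warp = int(np.abs(np.diff(pattern, axis=0)).sum())
    return weft, warp

print("weft, warp transitions:", transitions(twill))
```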
Is there still room for novelty in one of the most explored human-machine interactions?
Tumblr media
Weaveology starts as open research, as it could lead in very different directions depending on the specific aspects investigated, the chosen materials, the machines and facilities used, and the research opportunities. As an independent research group, we are open to collaborations and looking for research funding opportunities. We are also available for mentoring workshops and seminars in both academic and non-academic environments.
For info:
instagram - @maude_guirault mail - [email protected]
instagram @arch.a.graziano mail - [email protected]
5 notes · View notes
wickedsnack-art · 11 months ago
Note
Can I ask about your online selling experience? I know you're moving away from Etsy (and I don't blame you!) but just curious about what's worked/what hasn't. Trying to help my mom sell her art but like. She's older and I have never sold art online and can't give her any advice.
Please feel free to tag other artists selling their work as well! Thank you 🙏
Oh I have to be honest I am probably not the best person to ask about this as it's something I've definitely struggled with myself! I will say for what it's worth that I've been seeing a lot of artists struggling lately; no matter how "popular" an artist is, many seem to be reporting sales that are lower than expected even if market research/interest checks/preorders were done.
Everyone is struggling financially right now---it's been this way for YEARS---and art is one of the first luxuries to go.
That said, Etsy worked well for me for a long time. If your mom isn't doing fanart, she should be fine, because my biggest issue with them arose about the fact that I was selling fanart. Some tips I have;
USE EVERY TAG. Etsy has, if I recall correctly, 13 tags you can put on an individual item. Use all 13. Use them for alternative words for what you're selling (if it's a sticker, tag it as a decal, if it's a print, tag it as a poster). Think about who you think would buy this and what weird search terms they may use to try to find it, and think about alternative uses for the item (a trinket dish might also be an ashtray).
HAVE DESCRIPTIVE TITLES. Put the most important words first so people can see what they're getting right away, but don't be afraid of slightly longer titles. Honestly my titles could've and should've been longer, like Sailor Moon Art Nouveau Digital Art Poster Print Multiple Sizes or something.
RUN SALES OFTEN. Even if it's just 10-15% off people will buy something they've been eying for a while when there's a sale, or they'll feel more eager to buy something they've just found if it's on sale.
USE ETSY'S AUTOMATED DISCOUNT OFFERS. Etsy can automatically send a discount code to people who have interacted with your shop, use it. I made more sales from the automated 10% off code sent to people who favorited items than my monthly Patreon discount.
USE ETSY FREE SHIPPING. Shipping via Etsy is pretty cheap, and activating the "free shipping on orders over $35" will boost your spot in the algorithm, will boost the likelihood that people will order from you, and will boost the average cart size of people that order from you.
I RECOMMEND PRINTFUL. I used Printful for selling my larger prints, but they also offer other items if you want to branch out. If you don't want to get in trouble with Etsy, make sure you register it as a manufacturing partner and assign every item that Printful makes for you. Dropshipping on Etsy is a problem, but the problem is people who steal art or use AI to generate images to sell. I don't personally see a problem with someone who makes their art themselves going through a print shop to sell products they don't have the means to create at home. If you don't want to do that, you can check out inprnt. I haven't used them, but many other people have and seem to like it well enough.
SHIP THROUGH ETSY. It doesn't take very long to set up a shipping profile for your items, and it makes shipping easier and cheaper. As long as you get your items out on time, you'll get their shipping star or whatever very quickly and easily and maintain it without problem. It also has the benefit that if a buyer ever has an issue with the shipping, Etsy is more likely to have your back. If for any reason you can't ship through Etsy, I recommend pirateship. Also!! Be more careful about international shipping than you think you should be. There are a lot of confusing international laws regarding sending items as a corporation to those countries that you may not expect, so before you agree to selling something to a foreign country, make sure you check their laws.
I have also tried having an Instagram shop and I'll be fully honest I don't do what I should do with my Instagram. Maybe other people have more successful Instagram shops, but the process it took for me to get it started compared to how many sales I've made as a result of it (literally ZERO), I would not recommend it.
Shopify is good if you have a following somewhere, because you have to bring all the traffic there yourself. That's the benefit of Etsy and Instagram; they are able to make traffic for you. I've never had a following large enough anywhere to feel like I could run a Shopify of my art. Maybe one day.
I don't personally know a lot of artists who sell online successfully, so if you see this and you fall under that category PLEASE SHARE TIPS!!
16 notes · View notes
mordellestories · 2 months ago
Text
Want to actually help steer us away from the dystopian iceberg?
Whether you use AI or not isn’t the point anymore—it’s already here. The conversation now needs to be about regulation and responsibility, not purity tests.
Real activism? It looks like research. It looks like advocacy. It looks like asking: What’s being done to keep us all safe—and how can I help?
Follow and amplify voices calling for safe AI:
Timnit Gebru
Joy Buolamwini
Eliezer Yudkowsky
Dr. Stephen Hassan
Push for regulation: Advocate for laws that demand transparency in AI development.
Here are some resources:
3 notes · View notes
wolfliving · 3 months ago
Text
A scholarly bibliography of Design Fiction, AI and the news, 2025
ACKNOWLEDGMENTS
The work was funded by the Helsingin Sanomat Foundation and the Kone Foundation. We thank all the ideation workshop participants.
REFERENCES
[1] Naseem Ahmadpour, Sonja Pedell, Angeline Mayasari, and Jeanie Beh. 2019. Co-creating and Assessing Future Wellbeing Technology Using Design
Fiction. She Ji 5, 3 (2019), 209 230. DOI:https://doi.org/10.1016/j.sheji.2019.08.003
[2] ArtefactGroup. The Tarot Cards of Tech. Retrieved August 10, 2024 from https://tarotcardsoftech.artefactgroup.com
[3] Reuben Binns. 2018. Algorithmic Accountability and Public Reason. Philos. Technol. 31, 4 (December 2018), 543 556.
DOI:https://doi.org/10.1007/S13347-017-0263-5/METRICS
[4] Julian Bleecker. 2009. Design Fiction: A Short Essay on Design, Science, Fact and Fiction. Retrieved January 9, 2020 from
http://drbfw5wfjlxon.cloudfront.net/writing/DesignFiction_WebEdition.pdf
[5] [6] Julian Bleecker, Nick Foster, Fabien Girardin, and Nicolas Nova. 2022. The Manual of Design Fiction.
Mark Blythe. 2014. Research through design fiction: Narrative in real and imaginary abstracts. Conf. Hum. Factors Comput. Syst. - Proc. (2014), 703
712. DOI:https://doi.org/10.1145/2556288.2557098
[7] Mark Blythe and Enrique Encinas. 2016. The co-ordinates of design fiction: Extrapolation, irony, ambiguity and magic. Proc. Int. ACM SIGGROUP Conf. Supporting Group Work (November 2016), 345–354. DOI:https://doi.org/10.1145/2957276.2957299
[8] J. Broekens, M. Heerink, and H. Rosendal. 2009. Assistive social robots in elderly care: a review. Gerontechnology 8, 2 (2009). DOI:https://doi.org/10.4017/gt.2009.08.02.002.00
[9] Kevin Matthe Caramancion. 2023. News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking. Proc. 2023 IEEE Future Networks World Forum (FNWF 2023). DOI:https://doi.org/10.1109/FNWF58287.2023.10520446
[10] Mia Carbone, Stuart Soroka, and Johanna Dunaway. 2024. The Psychophysiology of News Avoidance: Does Negative Affect Drive Both Attention and Inattention to News? Journal. Stud. (September 2024). DOI:https://doi.org/10.1080/1461670X.2024.2310672
[11] John M. Carroll. 1997. Human-computer interaction: psychology as a science of design. Int. J. Hum. Comput. Stud. 46, 4 (April 1997), 501–522. DOI:https://doi.org/10.1006/ijhc.1996.0101
[12] Mark Chignell, Lu Wang, Atefeh Zare, and Jamy Li. 2023. The Evolution of HCI and Human Factors: Integrating Human and Artificial Intelligence. ACM Trans. Comput. Interact. 30, 2 (March 2023). DOI:https://doi.org/10.1145/3557891
[13] Justin Clark, Robert Faris, Urs Gasser, Adam Holland, Hilary Ross, and Casey Tilton. 2019. Content and Conduct: How English Wikipedia Moderates Harmful Speech. Retrieved September 11, 2024 from https://papers.ssrn.com/abstract=3489176
[14] Marios Constantinides, John Dowell, David Johnson, and Sylvain Malacria. 2015. Exploring mobile news reading interactions for news app personalisation. MobileHCI 2015 - Proc. 17th Int. Conf. Human-Computer Interact. with Mob. Devices Serv. (August 2015), 457–462. DOI:https://doi.org/10.1145/2785830.2785860
[15] Henry Kudzanai Dambanemuya and Nicholas Diakopoulos. 2021. Auditing the Information Quality of News-Related Queries on the Alexa Voice Assistant. Proc. ACM Human-Computer Interact. 5, CSCW1 (April 2021). DOI:https://doi.org/10.1145/3449157
[16] Nicholas Diakopoulos. 2019. Automating the news: how algorithms are rewriting the media. (2019), 326.
[17] Carl DiSalvo. 2012. Adversarial design as inquiry and practice. MIT Press.
[18] Abraham Doris-Down, Husayn Versee, and Eric Gilbert. 2013. Political blend: An application designed to bring people together based on political differences. ACM Int. Conf. Proceeding Ser. (2013), 120–130. DOI:https://doi.org/10.1145/2482991.2483002
[19] Konstantin Nicholas Dörr. 2016. Mapping the field of Algorithmic Journalism. Digit. Journal. 4, 6 (2016), 700–722. DOI:https://doi.org/10.1080/21670811.2015.1096748
[20] Konstantin Nicholas Dörr and Katharina Hollnbuchner. 2017. Ethical Challenges of Algorithmic Journalism. Digit. Journal. 5, 4 (April 2017), 404–419. DOI:https://doi.org/10.1080/21670811.2016.1167612
[21] Tomislav Duricic, Dominik Kowald, Emanuel Lacic, and Elisabeth Lex. 2023. Beyond-accuracy: a review on diversity, serendipity, and fairness in recommender systems based on graph neural networks. Front. Big Data 6 (December 2023), 1251072. DOI:https://doi.org/10.3389/fdata.2023.1251072
[22] Seth Flaxman, Sharad Goel, and Justin M. Rao. 2016. Filter Bubbles, Echo Chambers, and Online News Consumption. Public Opin. Q. 80, S1 (January 2016), 298–320. DOI:https://doi.org/10.1093/poq/nfw006
[23] Richard Fletcher and R. Nielsen. 2024. What does the public in six countries think of generative AI in news?
[24] Terry Flew, Christina Spurgeon, Anna Daniel, and Adam Swift. 2012. The Promise of Computational Journalism. Journal. Pract. 6, 2 (2012), 157–171. DOI:https://doi.org/10.1080/17512786.2011.616655
[25] Julian De Freitas, Stuti Agarwal, Bernd Schmitt, and Nick Haslam. 2023. Psychological factors underlying attitudes toward AI tools. Nat. Hum. Behav. 7, 11 (November 2023), 1845–1854. DOI:https://doi.org/10.1038/s41562-023-01734-2
[26] Fiona Fui-Hoon Nah, Ruilin Zheng, Jingyuan Cai, Keng Siau, and Langtao Chen. 2023. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. J. Inf. Technol. Case Appl. Res. 25, 3 (July 2023), 277–304. DOI:https://doi.org/10.1080/15228053.2023.2233814
[27] Fuse. 2024. Fuse - Personalized News. Retrieved August 10, 2024 from https://pageone.livesemantics.com/
[28] 
 Zachary Kenton, Mikel Rodriguez, Seliem El-Sayed, Sasha Brown, Canfer Akbulut, Andrew Trask, Edward Hughes, et al. 2024. The Ethics of Advanced AI Assistants. (April 2024). Retrieved September 9, 2024 from https://arxiv.org/abs/2404.16244v2
[29] Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, and Xiaojun Wan. 2024. LLM-based NLG Evaluation: Current Status and Challenges. (February 2024). Retrieved September 8, 2024 from https://arxiv.org/abs/2402.01383v2
[30] William W Gaver, Peter Gall Krogh, and Andy Boucher. 2022. Emergence as a Feature of Practice-based Design Research. In Designing Interactive Systems Conference, 517–526.
[31] Sabine Geers. 2020. News Consumption across Media Platforms and Content: A Typology of Young News Users. Public Opin. Q. 84, S1 (August 2020), 332–354. DOI:https://doi.org/10.1093/poq/nfaa010
[32] Nicole Gillespie, Steven Lockey, Caitlin Curtis, Javad Pool, and Ali Akbari. 2023. Trust in Artificial Intelligence: Meta-Analytic Findings. Univ. Queensl. KPMG Aust. 10 (2023). DOI:https://doi.org/10.14264/00d3c94
[33] GroundNews. 2024. Ground News. Retrieved August 3, 2024 from https://ground.news/about
[34] Michael M. Grynbaum and Ryan Mac. 2023. The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work. The New York Times. Retrieved January 15, 2024 from https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit
[35] Derek Hales. 2013. Design fictions: an introduction and provisional taxonomy. Digit. Creat. 24, 1 (March 2013), 1–10. DOI:https://doi.org/10.1080/14626268.2013.769453
[36] Vikas Hassija, Vinay Chamola, Atmesh Mahapatra, Abhinandan Singal, Divyansh Goel, Kaizhu Huang, Simone Scardapane, Indro Spinelli, Mufti Mahmud, and Amir Hussain. 2024. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognit. Comput. 16, 1 (January 2024), 45–74. DOI:https://doi.org/10.1007/s12559-023-10179-8
[37] Michael Townsen Hicks, James Humphries, and Joe Slater. 2024. ChatGPT is bullshit. Ethics Inf. Technol. 26, 2 (June 2024), 1–10. DOI:https://doi.org/10.1007/s10676-024-09775-5
[38] Lennart Hofeditz, Milad Mirbabaie, Jasmin Holstein, and Stefan Stieglitz. 2021. Do You Trust an AI-Journalist? A Credibility Analysis of News Content with AI-Authorship. ECIS (2021), 6–14. Retrieved from https://aisel.aisnet.org/ecis2021_rp/50
[39] Naja Holten MĂžller, Trine Rask Nielsen, and Christopher Le Dantec. 2021. Work of the Unemployed. DIS 2021 - Proc. 2021 ACM Des. Interact. Syst. Conf. (June 2021), 438–448. DOI:https://doi.org/10.1145/3461778.3462003
[40] Avery E. Holton and Hsiang Iris Chyi. 2012. News and the Overloaded Consumer: Factors Influencing Information Overload Among News Consumers. Cyberpsychol. Behav. Soc. Netw. 15, 11 (November 2012), 619–624. DOI:https://doi.org/10.1089/cyber.2011.0610
[41] Chenyan Jia, Martin J. Riedl, and Samuel Woolley. 2024. Promises and Perils of Automated Journalism: Algorithms, Experimentation
 Journal. Stud. 25, 1 (January 2024), 38–57. DOI:https://doi.org/10.1080/1461670X.2023.2289881
[42] Sangyeon Kim, Insil Huh, and Sangwon Lee. 2022. No Movie to Watch: A Design Strategy for Enhancing Content Diversity through Social Recommendation in the Subscription-Video-On-Demand Service. Appl. Sci. 13, 1 (December 2022), 279. DOI:https://doi.org/10.3390/app13010279
[43] Joel Kiskola, Thomas Olsson, Heli VÀÀtÀjÀ, Aleksi H. SyrjÀmÀki, Anna Rantasila, Poika Isokoski, Mirja Ilves, and Veikko Surakka. 2021. Applying critical voice in design of user interfaces for supporting self-reflection and emotion regulation in online news commenting. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery. DOI:https://doi.org/10.1145/3411764.3445783
[44] Shamika Klassen and Casey Fiesler. 2022. "Run Wild a Little With Your Imagination": Ethical Speculation in Computing Education with Black Mirror Writers Room. SIGCSE 2022 - Proc. 53rd ACM Tech. Symp. Comput. Sci. Educ. 1 (February 2022), 836–842. DOI:https://doi.org/10.1145/3478431.3499308
[45] Tomoko Komatsu, Marisela Gutierrez Lopez, Stephann Makri, Colin Porlezza, Glenda Cooper, Andrew MacFarlane, and Sondess Missaoui. 2020. AI should embody our values: Investigating journalistic values to inform AI technology design. ACM Int. Conf. Proceeding Ser. (October 2020). DOI:https://doi.org/10.1145/3419249.3420105
[46] Peter Gall Krogh, Thomas Markussen, and Anne Louise Bang. 2015. Ways of drifting: Five methods of experimentation in research through design. Smart Innov. Syst. Technol. 34 (2015), 39–50. DOI:https://doi.org/10.1007/978-81-322-2232-3_4
[47] Shaun Lawson, Ben Kirman, Conor Linehan, Tom Feltwell, and Lisa Hopkins. 2015. Problematising Upstream Technology through Speculative Design: The Case of Quantified Cats and Dogs. DOI:https://doi.org/10.1145/2702123.2702260
[48] Hao Ping Lee, Yu Ju Yang, Thomas Serban von Davier, Jodi Forlizzi, and Sauvik Das. 2024. Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks. Conf. Hum. Factors Comput. Syst. - Proc. (May 2024). DOI:https://doi.org/10.1145/3613904.3642116
[49] Sunok Lee, Minha Lee, and Sangsu Lee. 2023. What If Artificial Intelligence Become Completely Ambient in Our Daily Lives? Exploring Future Human-AI Interaction through High Fidelity Illustrations. Int. J. Hum. Comput. Interact. 39, 7 (2023), 1371–1389. DOI:https://doi.org/10.1080/10447318.2022.2080155
[50] 
 Perceptions of Generative Artificial Intelligence. Conf. Hum. Factors Comput. Syst. - Proc. (May 2024). DOI:https://doi.org/10.1145/3613904.3642114
[51] Sixian Li, Alessandro M. Peluso, and Jinyun Duan. 2023. Why do we prefer humans to artificial intelligence in telemarketing? A mind perception explanation. J. Retail. Consum. Serv. 70 (January 2023), 103139. DOI:https://doi.org/10.1016/j.jretconser.2022.103139
[52] Joseph Lindley and Paul Coulton. 2015. Back to the future: 10 years of design fiction. ACM Int. Conf. Proceeding Ser. (2015), 210–211. DOI:https://doi.org/10.1145/2783446.2783592
[53] 
 Comput. Hum. Behav. Artif. Humans 2, 1 (January 2024), 100054. DOI:https://doi.org/10.1016/j.chbah.2024.100054
[54] Listen2.AI. 2024. Listen2.AI. Retrieved August 7, 2024 from https://listen2.ai/
[55] Andrés Lucero and Juha Arrasvuori. 2010. PLEX Cards: A source of inspiration when designing for playfulness. ACM Int. Conf. Proceeding Ser. (2010), 28–37. DOI:https://doi.org/10.1145/1823818.1823821
[56] Thomas Markussen and Eva Knutz. 2013. The poetics of design fiction. Proc. 6th Int. Conf. Des. Pleasurable Prod. Interfaces, DPPI 2013 (2013), 231–240. DOI:https://doi.org/10.1145/2513506.2513531
[57] Suvodeep Misra, Debayan Dhar, and Sukumar Nandi. 2023. Design Fiction: A Way to Foresee the Future of Human-Computer Interaction Design Challenges. Smart Innov. Syst. Technol. 343 (2023), 809–822. DOI:https://doi.org/10.1007/978-981-99-0293-4_65
[58] Rachel E. Moran and Sonia Jawaid Shaikh. 2022. Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism. Digit. Journal. 10, 10 (November 2022), 1756–1774. DOI:https://doi.org/10.1080/21670811.2022.2085129
[59] Victoria Moreno-Gil, Xavier Ramon-Vegas, Ruth Rodríguez-Martínez, and Marcel Mauri-Ríos. 2023. Explanatory Journalism within European Fact-Checking Platforms: An Ally against Disinformation in the Post-COVID-19 Era. Societies 13, 11 (November 2023), 237. DOI:https://doi.org/10.3390/soc13110237
[60] Sean A. Munson, Stephanie Y. Lee, and Paul Resnick. 2013. Encouraging Reading of Diverse Political Viewpoints with a Browser Widget. Proc. Int. AAAI Conf. Web Soc. Media 7, 1 (2013), 419–428. DOI:https://doi.org/10.1609/icwsm.v7i1.14429
[61] Kevin P. Murphy. 2023. Probabilistic machine learning: Advanced topics. MIT Press.
[62] Nic Newman, Richard Fletcher, Craig T. Robertson, A. Ross Arguedas, and Rasmus Kleis Nielsen. 2024. Reuters Institute digital news report 2024.
[63] Safiya Umoja Noble. 2020. Algorithms of Oppression. (December 2020). DOI:https://doi.org/10.18574/nyu/9781479833641.001.0001
[64] Donald Norman. 2024. Design for a Better World: Meaningful, Sustainable, Humanity Centered. MIT Press.
[65] Derek O'Callaghan, Derek Greene, Maura Conway, Joe Carthy, and PĂĄdraig Cunningham. 2015. Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems. Soc. Sci. Comput. Rev. 33, 4 (August 2015), 459–478. DOI:https://doi.org/10.1177/0894439314555329
[66] Andreas L. Opdahl, BjĂžrnar Tessem, Duc Tien Dang-Nguyen, Enrico Motta, Vinay Setty, Eivind Throndsen, Are Tverberg, and Christoph Trattner. 2023. Trustworthy journalism through AI. Data Knowl. Eng. 146 (April 2023), 102182. DOI:https://doi.org/10.1016/j.datak.2023.102182
[67] Sharon Oviatt. 2006. Human-centered design meets cognitive load theory: Designing interfaces that help people think. Proc. 14th Annu. ACM Int. Conf. Multimedia, MM 2006 (2006), 871–880. DOI:https://doi.org/10.1145/1180639.1180831
[68] Ozlem Ozmen Garibay, Brent Winslow, Salvatore Andolina, Margherita Antona, Anja Bodenschatz, Constantinos Coursaris, Gregory Falco, Stephen M. Fiore, Ivan Garibay, Keri Grieman, John C. Havens, Marina Jirotka, Hernisa Kacorri, Waldemar Karwowski, Joe Kider, Joseph Konstan, Sean Koon, Monica Lopez-Gonzalez, Iliana Maifeld-Carucci, Sean McGregor, Gavriel Salvendy, Ben Shneiderman, Constantine Stephanidis, Christina Strobel, Carolyn Ten Holter, and Wei Xu. 2023. Six Human-Centered Artificial Intelligence Grand Challenges. Int. J. Human-Computer Interact. 39, 3 (2023), 391–437. DOI:https://doi.org/10.1080/10447318.2022.2153320
[69] Sumit Pahwa and Nusrat Khan. 2022. Factors Affecting Emotional Resilience in Adults. Manag. Labour Stud. 47, 2 (May 2022), 216–232. DOI:https://doi.org/10.1177/0258042X211072935
[70] Rock Yuren Pang, Sebastin Santy, René Just, and Katharina Reinecke. 2024. BLIP: Facilitating the Exploration of Undesirable Consequences of Digital Technologies. Conf. Hum. Factors Comput. Syst. - Proc. (May 2024). DOI:https://doi.org/10.1145/3613904.3642054
[71]
[72] Jonathan Perry. 2021. Trust in Public Institutions: Trends and Implications for Economic Security. United Nations Department of Economic and Social Affairs (July 2021). DOI:https://doi.org/10.18356/27081990-108
[73] James Pierce. 2021. In tension with progression: Grasping the frictional tendencies of speculative, critical, and other alternative designs. In Conference on Human Factors in Computing Systems - Proceedings, Association for Computing Machinery. DOI:https://doi.org/10.1145/3411764.3445406
[74] Amanda RamsÀlv, Mats Ekström, and Oscar Westlund. 2023. The epistemologies of data journalism. New Media Soc. (January 2023). DOI:https://doi.org/10.1177/14614448221150439
[75] Jeba Rezwana and Mary Lou Maher. 2023. User Perspectives on Ethical Challenges in Human-AI Co-Creativity: A Design Fiction Study. ACM Int. Conf. Proceeding Ser. (2023), 62–74. DOI:https://doi.org/10.1145/3591196.3593364
[76] Francesco Ricci, Lior Rokach, and Bracha Shapira. 2022. Recommender Systems Handbook: Third Edition. (January 2022), 1–1060. DOI:https://doi.org/10.1007/978-1-0716-2197-4
[77] Ronda Ringfort-Felner, Robin Neuhaus, Judith DörrenbĂ€cher, Sabrina Großkopp, Dimitra Theofanou-Fuelbier, and Marc Hassenzahl. 2023. Design Fiction in a Corporate Setting: A Case Study. In Proceedings of the 2023 ACM Designing Interactive Systems Conference, 2093–2108. DOI:https://doi.org/10.1145/3563657.3596126
[78] Francisco Javier Rodrigo-Ginés, Jorge Carrillo-de-Albornoz, and Laura Plaza. 2024. A systematic review on media bias detection: What is media bias, how it is expressed, and how to detect it. Expert Syst. Appl. 237 (March 2024), 121641. DOI:https://doi.org/10.1016/j.eswa.2023.121641
[79] LambÚr Royakkers, Jelte Timmer, Linda Kool, and Rinie van Est. 2018. Societal and ethical issues of digitization. Ethics Inf. Technol. 20, 2 (June 2018), 127–142. DOI:https://doi.org/10.1007/s10676-018-9452-x
[80] Alan M. Rubin, Elizabeth M. Perse, and Robert A. Powell. 1985. Loneliness, Parasocial Interaction, And Local Television News Viewing. Hum. Commun. Res. 12, 2 (December 1985), 155–180. DOI:https://doi.org/10.1111/j.1468-2958.1985.tb00071.x
[81] Henrik Rydenfelt. 2022. Transforming media agency? Approaches to automation in Finnish legacy media. New Media Soc. 24, 12 (March 2022), 2598–2613. DOI:https://doi.org/10.1177/1461444821998705
[82] Henrik Rydenfelt, Lauri Haapanen, Jesse Haapoja, and Tuukka Lehtiniemi. 2024. Personalisation in Journalism: Ethical insights and blindspots in Finnish legacy media. Journalism 25, 2 (November 2024), 313–333. DOI:https://doi.org/10.1177/14648849221138424
[83] Henrik Rydenfelt, Tuukka Lehtiniemi, Jesse Haapoja, and Lauri Haapanen. 2025. Autonomy and Algorithms: Tracing the Significance of Content Personalization. Int. J. Commun. 19 (January 2025), 20. Retrieved January 27, 2025 from https://ijoc.org/index.php/ijoc/article/view/23474
[84] Aljosha Karim Schapals, Colin Porlezza, and Rodrigo Zamith. 2020. Assistance or Resistance? Evaluating the Intersection of Automated Journalism and Journalistic Role Conceptions. Media Commun. 8, 3 (July 2020), 16–26. DOI:https://doi.org/10.17645/mac.v8i3.3054
[85] Jordan Richard Schoenherr, Roba Abbas, Katina Michael, Pablo Rivas, and Theresa Dirndorfer Anderson. 2023. Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness. IEEE Trans. Technol. Soc. 4, 1 (March 2023), 9–23. DOI:https://doi.org/10.1109/TTS.2023.3257627
[86] Rifat Ara Shams, Didar Zowghi, and Muneera Bano. 2023. AI and the quest for diversity and inclusion: a systematic literature review. AI Ethics (November 2023), 1–28. DOI:https://doi.org/10.1007/s43681-023-00362-w
[87] Donghee Shin and Shuhua Zhou. 2024. A Value and Diversity-Aware News Recommendation Systems: Can Algorithmic Gatekeeping Nudge Readers to View Diverse News? Journal. Mass Commun. Q. (June 2024). DOI:https://doi.org/10.1177/10776990241246680
[88] Felix M. Simon. 2024. Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena. Columbia Journalism Review. Retrieved August 29, 2024 from https://www.cjr.org/tow_center_reports/artificial-intelligence-in-the-news.php
[89] Marie Louise Juul SĂžndergaard and Lone Koefoed Hansen. 2018. Intimate futures: Staying with the trouble of digital personal assistants through design fiction. DIS 2018 - Proc. 2018 Des. Interact. Syst. Conf. (June 2018), 869–880. DOI:https://doi.org/10.1145/3196709.3196766
[90] Catherine Sotirakou and Constantinos Mourlas. 2016. A Gamified News Application for Mobile Devices: An Approach that Turns Digital News Readers into Players of a Social Network. Lect. Notes Comput. Sci. 9599 (2016), 480–493. DOI:https://doi.org/10.1007/978-3-319-40216-1_53
[91] Bruce Sterling. 2005. Shaping Things. MIT Press.
[92] Miriam Sturdee, Paul Coulton, Joseph G. Lindley, Mike Stead, Haider Ali Akmal, and Andy Hudson-Smith. 2016. Design fiction: How to build a voight-kampff machine. Conf. Hum. Factors Comput. Syst. - Proc. (May 2016), 375–385. DOI:https://doi.org/10.1145/2851581.2892574
[93] Edson C. Tandoc and Soo Kwang Oh. 2017. Small Departures, Big Continuities? Journal. Stud. 18, 8 (August 2017), 997–1015. DOI:https://doi.org/10.1080/1461670X.2015.1104260
[94] Neil Thurman, Seth C. Lewis, and Jessica Kunert. 2019. Algorithms, Automation, and News. Digit. Journal. 7, 8 (2019), 980–992. DOI:https://doi.org/10.1080/21670811.2019.1685395
[95] Tamås Tóth, Manuel Goyanes, Mårton Demeter, and Francisco Campos-Freire. 2022. Social Implications of Paywalls in a Polarized Society: 
 Stud. Big Data 97 (2022), 169–179. DOI:https://doi.org/10.1007/978-3-030-88028-6_13
[96] Tommaso Turchi, Alessio Malizia, and Simone Borsci. 2024. Reflecting on Algorithmic Bias With Design Fiction: The MiniCoDe Workshops. IEEE Intell. Syst. 39, 2 (March 2024), 40–50. DOI:https://doi.org/10.1109/MIS.2024.3352977
[97] Siva Vaidhyanathan. 2018. Antisocial Media. Retrieved September 11, 2024 from https://books.google.com/books/about/Antisocial_Media.html?id=h05WDwAAQBAJ
[98] Stephen J. Ward. 2019. Journalism ethics. In The handbook of journalism studies. Taylor & Francis, 307–323.
[99] Stephen John Anthony Ward. 2015. The invention of journalism ethics: The path to objectivity and beyond. McGill-Queen's University Press (MQUP).
[100] 
 automated journalism. Journalism 22, 1 (January 2021), 86–103. DOI:https://doi.org/10.1177/1464884918757072
[101] Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce, and John Chuang. 2017. Eliciting values reflections by engaging privacy futures using design workbooks. Proc. ACM Human-Computer Interact. 1, CSCW (2017). DOI:https://doi.org/10.1145/3134746
[102] Richmond Y Wong and Vera Khovanskaya. 2018. Speculative Design in HCI: From Corporate Imaginations to Critical Orientations. Comput. Interact. 2 (2018). DOI:https://doi.org/10.1007/978-3-319-73374-6_10
[103] Nan Yu and Jun Kong. 2016. User experience with web browsing on small screens: Experimental investigations of mobile-page interface design and homepage design for news websites. Inf. Sci. 330 (February 2016), 427–443. DOI:https://doi.org/10.1016/j.ins.2015.06.004
[104] Mi Zhou, Vibhanshu Abhishek, Timothy Derdenger, Jaymo Kim, and Kannan Srinivasan. 2024. Bias in Generative AI. (March 2024). Retrieved January 23, 2025 from https://arxiv.org/abs/2403.02726v1
[105] John Zimmerman and Jodi Forlizzi. 2014. Research through design in HCI. In Ways of Knowing in HCI, Judith S. Olson and Wendy A. Kellogg (eds.). Springer New York, New York, 167–189. DOI:https://doi.org/10.1007/978-1-4939-0378-8_8
3 notes · View notes