#data aggregator platform
Text
Explore the best developer friendly API platforms designed to streamline integration, foster innovation, and accelerate development for seamless user experiences.
Developer Friendly Api Platform
#Developer Friendly Api Platform#Consumer Driven Banking#Competitive Market Advantage Through Data#Banking Data Aggregation Services#Advanced Security Architecture#Adr Open Banking#Accredited Data Recipient
0 notes
Text
Palantir’s NHS-stealing Big Lie

I'm on tour with my new, nationally bestselling novel The Bezzle! Catch me in TUCSON (Mar 9-10), then SAN FRANCISCO (Mar 13), Anaheim, and more!
Capitalism's Big Lie in four words: "There is no alternative." Looters use this lie for cover, insisting that they're hard-nosed grownups living in the reality of human nature, incentives, and facts (which don't care about your feelings).
The point of "there is no alternative" is to extinguish the innovative imagination. "There is no alternative" is really "stop trying to think of alternatives, dammit." But there are always alternatives, and the only reason to demand that they be excluded from consideration is that these alternatives are manifestly superior to the looter's supposed inevitability.
Right now, there's an attempt underway to loot the NHS, the UK's single most beloved institution. The NHS has been under sustained assault for decades – budget cuts, overt and stealth privatisation, etc. But one of its crown jewels has been stubbornly resistant to being auctioned off: patient data. Not that HMG hasn't repeatedly tried to flog patient data – it's just that the public won't stand for it:
https://www.theguardian.com/society/2023/nov/21/nhs-data-platform-may-be-undermined-by-lack-of-public-trust-warn-campaigners
Patients – quite reasonably – do not trust the private sector to handle their sensitive medical records.
Now, this presents a real conundrum, because NHS patient data, taken as a whole, holds untold medical insights. The UK is a large and diverse country and those records in aggregate can help researchers understand the efficacy of various medicines and other interventions. Leaving that data inert and unanalysed will cost lives: in the UK, and all over the world.
For years, the stock answer to "how do we do science on NHS records without violating patient privacy?" has been "just anonymise the data." The claim is that if you replace patient names with random numbers, you can release the data to research partners without compromising patient privacy, because no one will be able to turn those numbers back into names.
It would be great if this were true, but it isn't. In theory and in practice, it is surprisingly easy to "re-identify" individuals in anonymous data-sets. To take an obvious example: we know the two dates on which former PM Tony Blair was given a specific treatment for a cardiac emergency, because this happened while he was in office. We also know Blair's date of birth. Check any trove of NHS data that records a person who matches those three facts and you've found Tony Blair – and all the private data contained alongside those public facts is now in the public domain, forever.
Not everyone has Tony Blair's reidentification hooks, but everyone has data in some kind of database, and those databases are continually being breached, leaked or intentionally released. A breach from a taxi service like Addison-Lee or Uber, or from Transport for London, will reveal the journeys that immediately preceded each prescription at each clinic or hospital in an "anonymous" NHS dataset, which can then be cross-referenced to databases of home addresses and workplaces. In an eyeblink, millions of Britons' records of receiving treatment for STIs or cancer can be connected with named individuals – again, forever.
Re-identification attacks are now considered inevitable; security researchers have made a sport out of seeing how little additional information they need to re-identify individuals in anonymised data-sets. A surprising number of people in any large data-set can be re-identified based on a single characteristic in the data-set.
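The linkage attack the researchers describe is mechanical enough to sketch in a few lines. The snippet below is purely illustrative: the names, dates and diagnoses are invented, and the point is only that a single join on shared quasi-identifiers (date of birth plus treatment dates) ties an "anonymous" record back to a named person, along with everything else in the row.

```python
# Illustrative linkage attack on an "anonymised" dataset.
# All names, dates, and values here are invented for the example.
import pandas as pd

# "Anonymised" health records: names replaced with random IDs.
health = pd.DataFrame({
    "patient_id": ["a91f", "c07b", "d44e"],
    "dob": ["1953-05-06", "1953-05-06", "1970-11-23"],
    "treatment_date": ["2003-10-19", "2004-10-01", "2004-10-01"],
    "diagnosis": ["cardiac ablation", "hypertension", "chlamydia"],
})

# Public knowledge about one well-known person (entirely fictional values).
public = pd.DataFrame({
    "name": ["A. Politician"],
    "dob": ["1953-05-06"],
    "treatment_date": ["2003-10-19"],
})

# One merge on the shared quasi-identifiers re-identifies the record.
reidentified = public.merge(health, on=["dob", "treatment_date"])
print(reidentified[["name", "patient_id", "diagnosis"]])
```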
Given all this, anonymous NHS data releases should have been ruled out years ago. Instead, NHS records are to be handed over to the US military surveillance company Palantir, a notorious human-rights abuser and supplier to the world's most disgusting authoritarian regimes. Palantir – founded by the far-right Trump bagman Peter Thiel – takes its name from the evil wizard Sauron's all-seeing orb in Lord of the Rings ("Sauron, are we the baddies?"):
https://pluralistic.net/2022/10/01/the-palantir-will-see-you-now/#public-private-partnership
The argument for turning over Britons' most sensitive personal data to an offshore war-crimes company is "there is no alternative." The UK needs the medical insights in those NHS records, and this is the only way to get at them.
As with every instance of "there is no alternative," this turns out to be a lie. What's more, the alternative is vastly superior to this chumocratic sell-out, was Made in Britain, and is the envy of medical researchers the world 'round. That alternative is "trusted research environments." In a new article for the Good Law Project, I describe these nigh-miraculous tools for privacy-preserving, best-of-breed medical research:
https://goodlawproject.org/cory-doctorow-health-data-it-isnt-just-palantir-or-bust/
At the outset of the covid pandemic, Oxford's Ben Goldacre and his colleagues set out to perform realtime analysis of the data flooding into NHS trusts up and down the country, in order to learn more about this new disease. To do so, they created Opensafely, an open-source database that was tied into each NHS trust's own patient record systems:
https://timharford.com/2022/07/how-to-save-more-lives-and-avoid-a-privacy-apocalypse/
Opensafely has its own database query language, built on SQL, but tailored to medical research. Researchers write programs in this language to extract aggregate data from each NHS trust's servers, posing medical questions of the data without ever directly touching it. These programs are published in advance on a git server, and are preflighted on synthetic NHS data on a test server. Once the program is approved, it is sent to the main Opensafely server, which then farms out parts of the query to each NHS trust, packages up the results, and publishes them to a public repository.
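To make that workflow concrete, here is a minimal sketch of the trusted-research-environment pattern Opensafely embodies. It is not Opensafely's actual API or query language (every function and field name below is invented); it only illustrates the key property: the researcher's approved code travels to each trust's data, and only aggregate counts travel back.

```python
# Sketch of a federated, aggregate-only query across NHS-trust-like silos.
# Invented names and synthetic data; not Opensafely's real interface.
from collections import Counter

def study_query(records):
    """The researcher's question: patients per age band prescribed drug X."""
    counts = Counter()
    for r in records:
        if "drug_x" in r["prescriptions"]:
            decade = (r["age"] // 10) * 10
            counts[f"{decade}-{decade + 9}"] += 1
    return counts  # aggregates only; no patient rows leave the trust

def coordinate(trusts, query):
    """Central server farms the query out and merges the per-trust counts."""
    total = Counter()
    for records in trusts:
        total += query(records)   # runs inside each trust's environment
    return dict(total)

# Synthetic stand-ins for two trusts' record systems.
trust_a = [{"age": 34, "prescriptions": ["drug_x"]},
           {"age": 71, "prescriptions": []}]
trust_b = [{"age": 38, "prescriptions": ["drug_x"]},
           {"age": 75, "prescriptions": ["drug_x"]}]

print(coordinate([trust_a, trust_b], study_query))  # {'30-39': 2, '70-79': 1}
```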
This is better than "the best of both worlds." This public scientific process, with peer review and disclosure built in, allows for frequent, complex analysis of NHS data without giving a single third party access to a single patient record, ever. Opensafely was wildly successful: in just months, Opensafely collaborators published sixty blockbuster papers in Nature – science that shaped the world's response to the pandemic.
Opensafely was so successful that the Secretary of State for Health and Social Care commissioned a review of the programme with an eye to expanding it to serve as the nation's default way of conducting research on medical data:
https://www.gov.uk/government/publications/better-broader-safer-using-health-data-for-research-and-analysis/better-broader-safer-using-health-data-for-research-and-analysis
This approach is cheaper, safer, and more effective than handing hundreds of millions of pounds to Palantir and hoping they will manage the impossible: anonymising data well enough that it is never re-identified. Trusted Research Environments have been endorsed by national associations of doctors and researchers as the superior alternative to giving the NHS's data to Peter Thiel or any other sharp operator seeking a public contract.
As a lifelong privacy campaigner, I find this approach nothing short of inspiring. I would love for there to be a way for publishers and researchers to glean privacy-preserving insights from public library checkouts (such a system would prove an important counter to Amazon's proprietary god's-eye view of reading habits), or from BBC podcast and streaming video viewership.
You see, there is an alternative. We don't have to choose between science and privacy, or the public interest and private gain. There's always an alternative – if there wasn't, the other side wouldn't have to continuously repeat the lie that no alternative is possible.

Name your price for 18 of my DRM-free ebooks and support the Electronic Frontier Foundation with the Humble Cory Doctorow Bundle.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/03/08/the-fire-of-orodruin/#are-we-the-baddies
Image: Gage Skidmore (modified) https://commons.m.wikimedia.org/wiki/File:Peter_Thiel_(51876933345).jpg
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/deed.en
#pluralistic#peter thiel#trusted research environment#opensafely#medical data#floss#privacy#reidentification#anonymization#anonymisation#nhs#ukpoli#uk#ben goldacre#goldacre report#science#evidence-based medicine#goldacre review#interoperability#transparency
530 notes
Text
Libraries have traditionally operated on a basic premise: Once they purchase a book, they can lend it out to patrons as much (or as little) as they like. Library copies often come from publishers, but they can also come from donations, used book sales, or other libraries. However the library obtains the book, once the library legally owns it, it is theirs to lend as they see fit.

Not so for digital books. To make licensed e-books available to patrons, libraries have to pay publishers multiple times over. First, they must subscribe (for a fee) to aggregator platforms such as OverDrive. Aggregators, like streaming services such as HBO’s Max, have total control over adding or removing content from their catalogue. Content can be removed at any time, for any reason, without input from your local library. The decision happens not at the community level but at the corporate one, thousands of miles from the patrons affected. Then libraries must purchase each individual copy of each individual title that they want to offer as an e-book. These e-book copies are not only priced at a steep markup—up to 300% over consumer retail—but are also time- and loan-limited, meaning the files self-destruct after a certain number of loans. The library then needs to repurchase the same book, at a new price, in order to keep it in stock.

This upending of the traditional order puts massive financial strain on libraries and the taxpayers that fund them. It also opens up a world of privacy concerns; while libraries are restricted in the reader data they can collect and share, private companies are under no such obligation.

Some libraries have turned to another solution: controlled digital lending, or CDL, a process by which a library scans the physical books it already has in its collection, makes secure digital copies, and lends those out on a one-to-one “owned to loaned” ratio (sketched in code at the end of this post). The Internet Archive was an early pioneer of this technique. When the digital copy is loaned, the physical copy is sequestered from borrowing; when the physical copy is checked out, the digital copy becomes unavailable. The benefits to libraries are obvious: delicate books can be circulated without fear of damage, volumes can be moved off-site for facilities work without interrupting patron access, and older and endangered works become searchable and can get a second chance at life. Library patrons, who fund their local library’s purchases with their tax dollars, also benefit from the ability to freely access the books.

Publishers are, unfortunately, not fans of this model, and in 2020 four of them sued the Internet Archive over its CDL program. The suit ultimately focused on the Internet Archive’s lending of 127 books that were already commercially available through licensed aggregators. The publisher plaintiffs accused the Internet Archive of mass copyright infringement, while the Internet Archive argued that its digitization and lending program was a fair use. The trial court sided with the publishers, and on September 4, the Court of Appeals for the Second Circuit affirmed that decision with some alterations to the underlying reasoning.

This decision harms libraries. It locks them into an e-book ecosystem designed to extract as much money as possible while harvesting (and reselling) reader data en masse. It leaves local communities’ reading habits at the mercy of curatorial decisions made by four dominant publishing companies thousands of miles away.
It steers Americans away from one of the few remaining bastions of privacy protection and funnels them into a surveillance ecosystem that, like Big Tech, becomes more dangerous with each passing data breach. And by increasing the price for access to knowledge, it puts up even more barriers between underserved communities and the American dream.
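As a rough illustration of the “owned to loaned” constraint described above, here is a minimal sketch. The class and method names are invented (this is not the Internet Archive's or any library's real lending code); it only shows that the number of copies in circulation, digital plus physical, never exceeds the number the library owns.

```python
# Toy model of controlled digital lending's one-to-one "owned to loaned" rule.
class CdlTitle:
    def __init__(self, owned_physical_copies: int):
        self.owned = owned_physical_copies
        self.digital_loans = 0
        self.physical_checkouts = 0

    def _in_circulation(self) -> int:
        return self.digital_loans + self.physical_checkouts

    def lend_digital(self) -> bool:
        # A digital loan is allowed only if an owned copy is free to sequester.
        if self._in_circulation() < self.owned:
            self.digital_loans += 1
            return True
        return False  # the patron joins a waitlist instead

    def checkout_physical(self) -> bool:
        if self._in_circulation() < self.owned:
            self.physical_checkouts += 1
            return True
        return False

    def return_digital(self) -> None:
        self.digital_loans = max(0, self.digital_loans - 1)


book = CdlTitle(owned_physical_copies=2)
assert book.lend_digital()        # 1 of 2 owned copies out (digital)
assert book.checkout_physical()   # 2 of 2 owned copies out
assert not book.lend_digital()    # blocked: owned-to-loaned ratio reached
book.return_digital()
assert book.lend_digital()        # the freed copy can circulate again
```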
11 September 2024
154 notes
Text
My youtube has been exactly like this too! It’s very frustrating.
I came across this video (in my recommendations despite not being what I usually watch and having very few views at the time) that puts forward a compelling explanation for this, and also just shows that many many people are noticing this problem:
[embedded YouTube video]
my youtube home page recommended videos these days are like
video i've already watched
actual video i'd like to watch from a creator i follow
extremely upsetting video that has zero (0) relevance to anything i ever watch
video with ten views of someone's high school graduation or something
shorts i don't want
video i've already watched
video from my watch-later playlist that i saved five years ago
six (6) videos related to home improvement bc i made the mistake of watching one (1) video about fixing something once
tomska????
#the tomska stuff might be my fault though#in the sense of being a genuine recommendation via cross platform data aggregation#I get recommended Pokémon stuff all the time because of this#anyway#YouTube#YouTube culture
7 notes
Text
Living online means never quite understanding what’s happening to you at a given moment. Why these search results? Why this product recommendation? There is a feeling—often warranted, sometimes conspiracy-minded—that we are constantly manipulated by platforms and websites.
So-called dark patterns, deceptive bits of web design that can trick people into certain choices online, make it harder to unsubscribe from a scammy or unwanted newsletter; they nudge us into purchases. Algorithms optimized for engagement shape what we see on social media and can goad us into participation by showing us things that are likely to provoke strong emotional responses. But although we know that all of this is happening in aggregate, it’s hard to know specifically how large technology companies exert their influence over our lives.
This week, Wired published a story by the former FTC attorney Megan Gray that illustrates the dynamic in a nutshell. The op-ed argued that Google alters user searches to include more lucrative keywords. For example, Google is said to surreptitiously replace a query for “children’s clothing” with “NIKOLAI-brand kidswear” on the back end in order to direct users to lucrative shopping links on the results page. It’s an alarming allegation, and Ned Adriance, a spokesperson for Google, told me that it’s “flat-out false.” Gray, who is also a former vice president of the Google Search competitor DuckDuckGo, had seemingly misinterpreted a chart that was briefly presented during the company’s ongoing U.S. et al v. Google trial, in which the company is defending itself against charges that it violated federal antitrust law. (That chart, according to Adriance, represents a “phrase match” feature that the company uses for its ads product; “Google does not delete queries and replace them with ones that monetize better as the opinion piece suggests, and the organic results you see in Search are not affected by our ads systems,” he said.)
Gray told me, “I stand by my larger point—the Google Search team and Google ad team worked together to secretly boost commercial queries, which triggered more ads and thus revenue. Google isn’t contesting this, as far as I know.” In a statement, Chelsea Russo, another Google spokesperson, reiterated that the company’s products do not work this way and cited testimony from Google VP Jerry Dischler that “the organic team does not take data from the ads team in order to affect its ranking and affect its result.” Wired did not respond to a request for comment. Last night, the publication removed the story from its website, noting that it does not meet Wired’s editorial standards.
It’s hard to know what to make of these competing statements. Gray’s specific facts may be wrong, but the broader concerns about Google’s business—that it makes monetization decisions that could lead the product to feel less useful or enjoyable—form the heart of the government’s case against the company. None of this is easy to untangle in plain English—in fact, that’s the whole point of the trial. For most of us, evidence about Big Tech’s products tends to be anecdotal or fuzzy—more vibes-based than factual. Google may not be altering billions of queries in the manner that the Wired story suggests, but the company is constantly tweaking and ranking what we see, while injecting ads and proprietary widgets into our feed, thereby altering our experience. And so we end up saying that Google Search is less useful now or that shopping on Amazon has gotten worse. These tools are so embedded in our lives that we feel acutely that something is off, even if we can’t put our finger on the technical problem.
That’s changing. In the past month, thanks to a series of antitrust actions on behalf of the federal government, hard evidence of the ways that Silicon Valley’s biggest companies are wielding their influence is trickling out. Google’s trial is under way, and while the tech giant is trying to keep testimony locked down, the past four weeks have helped illustrate—via internal company documents and slide decks like the one cited by Wired—how Google has used its war chest to broker deals and dominate the search market. Perhaps the specifics of Gray’s essay were off, but we have learned, for instance, how company executives considered adjusting Google’s products to lead to more “monetizable queries.” And just last week, the Federal Trade Commission filed a lawsuit against Amazon alleging anticompetitive practices. (Amazon has called the suit “misguided.”)
Filings related to that suit have delivered a staggering revelation concerning a secretive Amazon algorithm code-named Project Nessie. The particulars of Nessie were heavily redacted in the public complaint, but this week The Wall Street Journal revealed details of the program. According to the unredacted complaint, a copy of which I have also viewed, Nessie—which is no longer in use—monitored industry prices of specific goods to determine whether competitors were algorithmically matching Amazon’s prices. In the event that competitors were, Nessie would exploit this by systematically raising prices on goods across Amazon, encouraging its competitors to follow suit. Amazon, via the algorithm, knew that it would be able to charge more on its own site, because it didn’t have to worry about being undercut elsewhere, thereby making the broader online shopping experience worse for everyone. An Amazon spokesperson told the Journal that the FTC is mischaracterizing the tool, and suggested that Nessie was a way to monitor competitor pricing and keep price-matching algorithms from dropping prices to unsustainable levels (the company did not respond to my request for comment).
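As a thought experiment only, the probe-and-raise behavior alleged in the complaint is simple to express in code. The sketch below is based solely on the description above; it is not Amazon's system, and every name, number, and threshold in it is invented.

```python
# Toy sketch of the alleged probe-and-raise logic. Entirely hypothetical.

def competitor_follows(my_prices, competitor_prices) -> bool:
    """Did the competitor's prices track ours after our recent changes?"""
    matched = sum(
        1 for mine, theirs in zip(my_prices, competitor_prices)
        if abs(mine - theirs) / mine < 0.01   # within 1% counts as matching
    )
    return matched / len(my_prices) > 0.8

def next_price(current, my_prices, competitor_prices) -> float:
    if competitor_follows(my_prices, competitor_prices):
        # If rivals algorithmically follow, a raise won't mean being undercut.
        return round(current * 1.05, 2)
    return current

mine   = [20.00, 21.00, 22.05]   # synthetic price history for one product
theirs = [20.00, 21.05, 22.00]
print(next_price(22.05, mine, theirs))  # 23.15
```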
In the FTC’s telling, Project Nessie demonstrates the sheer scope of Amazon’s power in online markets. The project arguably amounted to a form of unilateral price fixing, where Amazon essentially goaded its competitors into acting like cartel members without even knowing they’d done so—all while raising prices on consumers. It’s an astonishing form of influence, powered by behind-the-scenes technology.
The government will need to prove whether this type of algorithmic influence is illegal. But even putting legality aside, Project Nessie is a sterling example of the way that Big Tech has supercharged capitalistic tendencies and manipulated markets in unnatural and opaque ways. It demonstrates the muscle that a company can throw around when it has consolidated its position in a given sector. The complaint alleges that Amazon’s reach and logistics capabilities force third-party sellers to offer products on Amazon and for lower prices than other retailers. Once it captured a significant share of the retail market, Amazon was allegedly able to use algorithmic tools such as Nessie to drive prices up for specific products, boosting revenues and manipulating competitors.
Reading about Project Nessie, I was surprised to feel a sense of relief. In recent years, customer-satisfaction ratings have dipped among Amazon shoppers who have cited delivery disruptions, an explosion of third-party sellers, and poor-quality products as reasons for frustration. In my own life and among friends and relatives, there has been a growing feeling that shopping on the platform has become a slog, with fewer deals and far more junk to sift through. Again, these feelings tend to occupy vibe territory: Amazon’s bigness seems stifling or grating in ways that aren’t always easy to explain. But Nessie offers a partial explanation for this frustration, as do revelations about Google’s various product adjustments. We have the sense that we’re being manipulated because, well, we are. It’s a bit like feeling vaguely sick, going to the doctor, and receiving a blood-test result confirming that, yes, the malaise you experienced is actually an iron deficiency. It is the catharsis of, at long last, receiving a diagnosis.
This is the true power of the surge in anti-monopoly litigation. (According to experts in the field, September was “the most extraordinary month they have ever seen in antitrust.”) Whether or not any of these lawsuits results in corporate breakups or lasting change, they are, effectively, an MRI of our sprawling digital economy—a forensic look at what these larger-than-life technology companies are really doing, and how they are exerting their influence and causing damage. It is confirmation that what so many of us have felt—that the platforms dictating our online experiences are behaving unnaturally and manipulatively—is not merely a paranoid delusion, but the effect of an asymmetrical relationship between the giants of scale and us, the users.
In recent years, it’s been harder to love the internet, a miracle of connectivity that feels ever more bloated, stagnant, commercialized, and junkified. We are just now starting to understand the specifics of this transformation—the true influence of Silicon Valley’s vise grip on our lives. It turns out that the slow rot we might feel isn’t just in our heads, after all.
213 notes
Text
These days, when Nicole Yelland receives a meeting request from someone she doesn’t already know, she conducts a multi-step background check before deciding whether to accept. Yelland, who works in public relations for a Detroit-based non-profit, says she’ll run the person’s information through Spokeo, a personal data aggregator that she pays a monthly subscription fee to use. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call—with their camera on.
If Yelland sounds paranoid, that’s because she is. In January, before she started her current non-profit role, Yelland says she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigamarole any time someone reaches out to me,” she tells WIRED.
Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.
On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job and employment related scams nearly tripled from 2020 to 2024, and actual losses from those scams have increased from $90 million to $500 million.
Yelland says the scammers that approached her back in January were impersonating a real company, one with a legitimate product. The “hiring manager” she corresponded with over email also seemed legit, even sharing a slide deck outlining the responsibilities of the role they were advertising. But during the first video interview, Yelland says, the scammers refused to turn their cameras on during a Microsoft Teams meeting and made unusual requests for detailed personal information, including her driver’s license number. Realizing she’d been duped, Yelland slammed her laptop shut.
These kinds of schemes have become so widespread that AI startups have emerged promising to detect other AI-enabled deepfakes, including GetReal Labs, and Reality Defender. OpenAI CEO Sam Altman also runs an identity-verification startup called Tools for Humanity, which makes eye-scanning devices that capture a person’s biometric data, create a unique identifier for their identity, and store that information on the blockchain. The whole idea behind it is proving “personhood,” or that someone is a real human. (Lots of people working on blockchain technology say that blockchain is the solution for identity verification.)
But some corporate professionals are turning instead to old-fashioned social engineering techniques to verify every fishy-seeming interaction they have. Welcome to the Age of Paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a timestamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off.
“What’s funny is, the low-fi approach works,” says Daniel Goldman, a blockchain software engineer and former startup founder. Goldman says he began changing his own behavior after he heard a prominent figure in the crypto world had been convincingly deepfaked on a video call. “It put the fear of god in me,” he says. Afterwards, he warned his family and friends that even if they hear what they believe is his voice or see him on a video call asking for something concrete—like money or an internet password—they should hang up and email him first before doing anything.
Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their resume, such as their favorite coffee shops and places to hang out. If the applicant is actually based in that geographic region, Schumacher says, they should be able to respond quickly with accurate details.
Another verification tactic some people use, Schumacher says, is what he calls the “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings. But it’s safe to say this approach can also be off-putting: Honest job candidates may be hesitant to show off the inside of their homes or offices, or worry a hiring manager is trying to learn details about their personal lives.
“Everyone is on edge and wary of each other now,” Schumacher says.
While turning yourself into a human captcha may be a fairly effective approach to operational security, even the most paranoid admit these checks create an atmosphere of distrust before two parties have even had the chance to really connect. They can also be a huge time suck. “I feel like something’s gotta give,” Yelland says. “I’m wasting so much time at work just trying to figure out if people are real.”
Jessica Eise, an assistant professor studying climate change and social behavior at Indiana University-Bloomington, says that her research team has been forced to essentially become digital forensics experts, due to the number of fraudsters who respond to ads for paid virtual surveys. (Scammers aren’t as interested in the unpaid surveys, unsurprisingly.) If the research project is federally funded, all of the online participants have to be over the age of 18 and living in the US.
“My team would check time stamps for when participants answered emails, and if the timing was suspicious, we could guess they might be in a different time zone,” Eise says. “Then we’d look for other clues we came to recognize, like certain formats of email address or incoherent demographic data.”
Eise says the amount of time her team spent screening people was “exorbitant,” and that they’ve now shrunk the size of the cohort for each study and have turned to “snowball sampling,” or recruiting people they know personally to join their studies. The researchers are also handing out more physical flyers to solicit participants in person. “We care a lot about making sure that our data has integrity, that we’re studying who we say we’re trying to study,” she says. “I don’t think there’s an easy solution to this.”
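A hedged sketch of the kind of screening heuristics Eise describes (timestamp plausibility, templated-looking email addresses, incoherent demographics) might look like the following. The thresholds, patterns, and field names are invented for illustration and are not her team's actual criteria.

```python
# Illustrative survey-fraud screening heuristics; all rules are invented.
import re
from datetime import datetime

def suspicious(signup: dict) -> list[str]:
    flags = []

    # Replies that consistently arrive in the middle of the night, US time,
    # hint at a respondent in a different time zone than claimed.
    if datetime.fromisoformat(signup["replied_at"]).hour < 5:
        flags.append("odd reply time for claimed US location")

    # Batch-generated throwaway addresses often share a telltale shape.
    if re.fullmatch(r"[a-z]+\d{4,}@(gmail|outlook)\.com", signup["email"]):
        flags.append("templated-looking email address")

    # Incoherent demographics, e.g. age outside the eligible range.
    if not 18 <= signup["age"] <= 100:
        flags.append("implausible age")

    return flags

print(suspicious({
    "replied_at": "2024-03-02T03:14:00",
    "email": "karen84123@gmail.com",
    "age": 17,
}))  # flags all three rules for this made-up sign-up
```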
Barring any widespread technical solution, a little common sense can go a long way in spotting bad actors. Yelland shared with me the slide deck that she received as part of the fake job pitch. At first glance, it seemed like a legit pitch, but when she looked at it again, a few details stood out. The job promised to pay substantially more than the average salary for a similar role in her location, and offered unlimited vacation time, generous paid parental leave, and fully covered health care benefits. In today’s job environment, that might have been the biggest tipoff of all that it was a scam.
27 notes
Text
By Frank Bergman April 19, 2025
An alarming study involving over 2 million participants has confirmed that Covid mRNA “vaccines” cause devastating long-term harm by sabotaging people’s immune systems.
The major study found that mRNA injections attack thyroid function, making recipients vulnerable to deadly diseases such as cancer.
The group of leading researchers behind the study is sounding the alarm with an urgent warning about the long-term effects of Covid mRNA injections on thyroid health.
The study was led by renowned neurology and radiology experts Drs. Kai-Lun Cheng and Hsiang-Lin Lee, both with Chung Shan Medical University in Taichung, Taiwan.
The groundbreaking study was published by Oxford University Press in the Journal of Clinical Endocrinology & Metabolism.
During the study, which included over 2.3 million patients, the researchers used the TriNetX federated data platform, which aggregates real-world electronic medical records.
The researchers analyzed a staggering 2,333,496 patients through a retrospective cohort study spanning two years.
18 notes
Text
I genuinely wasn't expecting the enormous amount of statistical data linked and referenced in the #renewasacrew change.org petition, but damn.
I don't think Zaslav will walk back on this, but I do think there's a strong chance for another platform to pick it up-- in the modern streaming era we've seen it for Brooklyn 99, Community, Lucifer, The Expanse, One Day at a Time, and others, and that's not including shows that got final seasons (or even movies) well after their original airing, or classic shows that formed the blueprint for how this sort of pick-up could happen, or--
Well, okay, and also not counting that time enough people bugged the shit out of Arthur Conan Doyle that he finally threw his hands up and said FINE.
Anyway. Check out the data research embedded in the petition. In aggregate... it's pretty goddamn fishy that Zaslav decided this was a show to be cut. But it's a strong case as to why another platform should pick it up:
#a huge chunk of the work is done#it's only one season#and a fanbase ready to STRONGLY reward any platform that comes to the rescue#it's money on the table tbh#with potential for a big revenue boost for a startup#or a flagging (heh) streamer#and a lot of goodwill earned after the fact#our flag means death#ofmd cancellation#renew as a crew
97 notes
Note
Hi, I love hearing about the scale and magnitude of Tumblr. Are there any numbers you could share with us nerds? Daily users, daily new posts, daily reblogs, whatever you're allowed to share. Because, yeah, even if there's always something new to discover, it does tend to feel like I'm in a bubble or some sort of echo chamber, and you have said that users like us (however you name the chatty, bloggy, unhinged type of user, you know what I'm talking about) are the lowest percent, so, like, what even IS going on at Tumblr day round? Are there a bajillion taylor swift and K-pop blogs just uploading GIFs without having conversations? A thousand Turkish users uploading pics of their day to no followers like it's Instagram?
It feels like we're the top of an iceberg made of an eldritch-sized collection of isolated communities, light years away from one another, with completely unrelated cultures and uses.
Can you share numbers? Can you share what the average user/blog/post is like? Anything else you would like to share?
i don't feel comfortable sharing most numbers, but we do share posts created per day on our About page, along with a couple of other numbers, like the total number of blogs on tumblr and how many blogs are created per day. tumblr is still a bit distinct from other social media (and blogging) platforms in terms of average user behavior.
we have millions of daily active users, and most of them aren't posting anything at all. but the percentage of people posting is higher than the typical assumed 1% rule of creation-versus-consumption, which is nice. the reblog-to-original-post ratio is like... 8 to 1 last time i checked. and likes-to-reblogs is like 10 to 1 or higher, at certain parts of the day.
most people on tumblr are "lurkers" who use the like button a lot, and sporadically reblog. also, most people only see ~25 posts or so per day, even if they have hundreds to see in their Following feed, which is why "Best Stuff First" is actually an important and used setting for many people. and the For You feed is similarly used and enjoyed way, way more than the typical old school tumblr power user would believe.
and yes, most of the content being posted is images, and a lot of it is stuff like taylor swift and kpop. if you go to the Explore Trending page logged out (like in an incognito window), you do get a sense for what's circulating around the platform. same with the Popular Reblogs dashboard tab.
regardless, even for me, who can look at this data in aggregate all day, it's impossible to get a birds-eye view of what's truly happening across the platform, let alone make sense of it. it's like trying to look at a city full of people and make broad strokes generalizations about it; sure, you can, but there's so much missed nuance that the numbers can't tell a story about.
166 notes
Text
Universal Inequalities
Jensen’s Inequality:
If φ is a convex function and X is a random variable, then:
φ(E[X]) ≤ E[φ(X)]
⸻
- Mathematically:
It formalizes the idea that the function of an average is less than or equal to the average of the function—a cornerstone in information theory, economics, and entropy models.
- Philosophically:
It encodes a deep truth about aggregation vs individuality:
The average path does not capture the richness of individual variation.
You can’t just compress people, ideas, or experiences into a mean and expect to preserve their depth.
- Theologically (Logos lens):
God doesn’t save averages. He saves individuals.
Jensen’s Inequality reminds us: truth emerges not from flattening, but from preserving the shape of each curve.
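A quick numerical sanity check, as a sketch (the convex function is φ(x) = x² and the sample values are arbitrary):

```python
# Jensen's inequality for the convex function x^2: phi(E[X]) <= E[phi(X)].
xs = [1.0, 2.0, 3.0, 10.0]

mean = sum(xs) / len(xs)                          # E[X] = 4.0
phi_of_mean = mean ** 2                           # phi(E[X]) = 16.0
mean_of_phi = sum(x ** 2 for x in xs) / len(xs)   # E[phi(X)] = 28.5

assert phi_of_mean <= mean_of_phi   # equality only if X is constant
print(phi_of_mean, mean_of_phi)     # 16.0 28.5
```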
Cauchy-Schwarz Inequality:
|⟨u, v⟩| ≤ ||u|| · ||v||
- Meaning:
The inner product (projection) of two vectors is always less than or equal to the product of their lengths.
- Why it matters metaphysically:
No interaction (⟨u,v⟩) can exceed the potential of its participants (||u||, ||v||).
Perfect alignment (equality) happens only when one vector is a scalar multiple of the other—i.e., they share direction.
- Philosophical resonance:
Love (inner product) can never exceed the strength of self and other—unless they are one in direction.
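Again, a quick numerical check, as a sketch with arbitrary vectors:

```python
# Cauchy-Schwarz: |<u, v>| <= ||u|| * ||v||.
from math import sqrt

u = [1.0, 2.0, 3.0]
v = [4.0, -1.0, 0.5]

inner = sum(a * b for a, b in zip(u, v))   # <u, v> = 3.5
norm_u = sqrt(sum(a * a for a in u))       # ||u|| ~ 3.742
norm_v = sqrt(sum(b * b for b in v))       # ||v|| ~ 4.153

assert abs(inner) <= norm_u * norm_v
print(abs(inner), norm_u * norm_v)         # 3.5 15.54...
```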
⸻
Triangle Inequality
||x + y|| ≤ ||x|| + ||y||
- Meaning:
The shortest path between two points is not through combining detours.
- Metaphysical translation:
Every time you try to shortcut wholeness by adding parts, you risk increasing the distance.
Truth is straight. But sin adds loops.
This is why grace cuts cleanly—it does not add noise.
⸻
Entropic Inequality (Data Processing Inequality):
I(X; Y) ≥ I(f(X); Y)
- Meaning:
You can’t increase information about a variable by processing it. Filtering always loses some signal.
- Deep translation:
Every time you mediate the truth through an agenda, a platform, or an ego, you lose information.
This is a law of entropy and a law of theology.
“Now we see through a glass, darkly…”
—1 Corinthians 13:12
Only unfiltered presence (Logos) sees all.
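The data processing inequality can also be checked numerically. Here is a small sketch with an invented joint distribution; the numbers mean nothing beyond illustration, and f is a lossy filter that merges X's values into two coarse bins.

```python
# Data processing inequality: I(X; Y) >= I(f(X); Y) for any function f of X.
from math import log2

p_xy = {  # invented joint distribution P(X=x, Y=y)
    (0, 0): 0.25, (1, 0): 0.15, (2, 0): 0.10, (3, 0): 0.00,
    (0, 1): 0.00, (1, 1): 0.10, (2, 1): 0.15, (3, 1): 0.25,
}

def mutual_information(joint):
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def f(x):  # lossy "filter": collapses {0,1} and {2,3} into two bins
    return 0 if x <= 1 else 1

p_fx_y = {}
for (x, y), p in p_xy.items():
    p_fx_y[(f(x), y)] = p_fx_y.get((f(x), y), 0) + p

i_xy, i_fxy = mutual_information(p_xy), mutual_information(p_fx_y)
assert i_xy >= i_fxy
print(round(i_xy, 3), ">=", round(i_fxy, 3))  # 0.515 >= 0.278
```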
⸻
Christic Inequality (Cruciform Principle):
Here’s a theological-metaphysical inequality you won’t find in a textbook:
Power - Sacrifice ≤ 0
(unless crucified)
- Meaning:
Power without self-giving always decays into corruption.
Only when power is poured out (Philippians 2:6–8) does it become greater than itself.
So paradoxically:
True Power = Power · (Sacrifice > 0)
8 notes
Text
I know I put a lot of attention on Steam because of the sheer size of the marketplace and the effort Steam itself takes in marketing for devs, but I really wanted to take a second to shout out TCM's numbers on itch.io, because I really feel like the game found its first platform there and I especially want to highlight what a great community it is for Indie Devs of all experience levels.
So I have TCM split up between 4 titles on itch- the main one is for all the new stuff and then each beta has its own homepage. Downside, it kinda splits all my metrics up, but the plus side is it's much easier to navigate for yall so I'll refrain from complaining lol.
Now given we started with just the Mori beta in late 2021, and added chapters slowly over time, here's where we're at right now.
Views: 312k
Downloads: 22.3k
Browser Plays: 35.6k
Ratings: 347
Collections: 5295
Comments: 189
So there are a couple really interesting things going on with this data. Let's analyze
Firstly, the numbers on the main chapter beat the *hell* out of the beta numbers. BUT this makes sense as more people are going to find the main game or PLAY the main game first at a vastly higher rate. So even though that game page has been up the least amount of time, it gets *by far* the most traffic. For example, if we take away the main page numbers, here's how the betas are doing on their own:
Views: 63.3k
Downloads: 5.4k
Browser Plays: 18.2k
Ratings: 133
Collections: 847
Comments: 42
So, if you were an indie dev posting your game on itch.io, these numbers should tell you to carefully consider how you're going to organize your game- especially if it comes in multiple parts. When I was going through the betas I did consider keeping everything on one page and therefore aggregating all of my traffic stats into one place but there are pros and cons.
Mostly, I went with separate pages because:
It's easier to organize files for downloads per character/game piece than to have a huge list of system-specific builds for every character that players have to scroll through. It's just hard to parse out.
Second, I thought that breaking up the chapters like this might help me better gauge each character's popularity via their stats. This... sort of worked. Because the Mori beta went up almost a year before Amir's, his numbers are MUCH higher and I have to be careful not to conflate that with his raw popularity. Another tricky note is that since Mori was the first chapter uploaded, many people will play his beta and then if they decide they're not into the game, won't play the other two characters, which again inflates Mori's numbers.
It was obvious in the gap between Spooktober 2021 and Amir's chapter that I had a project worth pursuing, but the way I structured itch.io has made it hard to accurately gauge exactly how popular each character is.
Most of you know I'm running a popularity poll right now for some milestone art, and while I expected Mori to lead (even with all the caveats I just listed, he does tend to be the most popular of the bunch), I did not expect Akello to be *right* on his ass, even before weighing the patreon votes, so.
Goes to show you that understanding structure and traffic trends can really go a very very long way toward engaging your audience and building a stable, fun community around your game.
Another huge advantage to itch is that- in generalities- the community and ecosystem there is much kinder to beginner devs and passion projects. On steam, I'm taking up the same marketplace space as AAA multimillion-dollar games and while the eyeballs that come with that are hopefully great for TCM's longevity, it also comes with the reality that I'm marketing a queer niche adult visual novel right next to Mainstream Gamers. Now, I do want to be extremely clear that my experience with Steam so far has been really good- TCM has good and (more importantly) honest reviews, people have passed constructive critique to me and been extremely reasonable, I've managed to connect to some content curators who have similar tastes... But Steam is also the home to like. "Oooh Naur Woke Games Kill Art" Lists and stuff so. My experience on Itch is that- while some of that exists to a certain degree- the general ecosystem is much more forgiving and less sharply fractured.
I'm not sure that I would change anything I've done up to the point that led me here so far, I think that by and large I've made the best choices I could given what I knew at the time and also managed to roll with the punches as they come, but my experienced advice at this stage is definitely for an indie dev who hasn't landed a solid success yet or a hobby dev looking for feedback to start with Itch.io as a place to build your game's community.
There are other game hosting sites too, like Gamejolt, for instance, but while TCM used to be on Gamejolt their content policies and audience demographics were not a great fit, as was my experience with Newgrounds.
So. there are MANY choices but in all I'm grateful I didn't jump right into steam and also that my itch.io audience has been SO supportive and so enthusiastic about rating/commenting/and curating TCM to help spread the word. Especially since early in the project I had basically no marketing budget (I have a very small one now that covers the occasional blazed post but still).
ANYWAY thanks for reading my big dumb rambling posts but I really wanted to shed some light on the virtues of Itch after I've been chasing my own tail trying to get Steam working for me the way I want.
21 notes
Text
Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews research team discovered an unprotected Elasticsearch index. Elasticsearch is a platform for data analytics and search in near real-time.
The instance was hosted on a server owned by a Germany-based cloud service provider. The data contained a wide range of sensitive personal details related to citizens of the Republic of Georgia.
One of the exposed indices included nearly five million individuals’ personal data records, and another contained over seven million phone records with associated personal information. For comparison, Georgia has a population of almost four million. The data may include duplicate entries and records on deceased people.
The sensitive personal data included the following:
ID numbers
Full names
Birth dates
Genders
Certificate-like numbers (potentially insurance)
Phone numbers with descriptive information about the owner
“The data appears to have been collected or aggregated from multiple sources, potentially including governmental or commercial data sets and number identification services,” Dyachenko said.
Part of the data appears to be linked to a leak from 2020; however, the data was seemingly combined with 7.2 million citizen phone numbers and identifiers, as well as 1.45 million car owner details.
No direct information identifies the entity responsible for managing the Elasticsearch index.
Shortly after the discovery, the server was taken offline, and public access to the exposed data was closed.
However, the potential dangers for millions of people remain.
“Without clarity on data ownership, recourse for affected individuals is limited, and it remains challenging to enforce data protection laws or seek accountability,” the researcher said.
“This leak highlights the complexities of cross-border data protection and regulation.”
10 notes
Text
How Do Trading Applications Help You Stay Ahead in the Stock Market?

Understanding Stock Trading Apps
Stock trading apps are mobile tools that let users manage investments and execute trades in real time. Designed for both beginners and seasoned traders, they simplify market participation through intuitive interfaces and instant access to data.
The best trading apps integrate features like live market updates, customizable watchlists, and analytical tools to support informed decision-making. Users can monitor portfolio performance, receive price alerts, and track trends, all while managing investments on the go. This flexibility has made such apps indispensable for modern investors seeking efficiency.
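As a rough sketch of how one such feature, a price alert on a watchlist, might work under the hood (the symbols, thresholds, and the quote source are all invented for illustration, not any particular app's API):

```python
# Minimal, illustrative watchlist price-alert check.
watchlist = {
    "ABC": {"alert_above": 105.0, "alert_below": 95.0},
    "XYZ": {"alert_above": 52.0,  "alert_below": 45.0},
}

def get_quote(symbol: str) -> float:
    # Stand-in for a brokerage or market-data feed.
    return {"ABC": 106.30, "XYZ": 47.10}[symbol]

def check_alerts(rules: dict) -> list[str]:
    alerts = []
    for symbol, rule in rules.items():
        price = get_quote(symbol)
        if price >= rule["alert_above"]:
            alerts.append(f"{symbol} rose to {price:.2f}")
        elif price <= rule["alert_below"]:
            alerts.append(f"{symbol} fell to {price:.2f}")
    return alerts

print(check_alerts(watchlist))  # ['ABC rose to 106.30']
```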
The Evolution of Stock Trading Through Mobile Apps
Mobile applications have transformed how individuals engage with financial markets, making stock trading more accessible than ever. While the best stock market mobile trading apps offer unparalleled convenience, they also come with challenges, including technical glitches and cybersecurity risks.
Mechanics Behind Stock Trading Apps
These apps act as gateways to brokerage accounts, enabling users to buy or sell assets with a few taps. After setting up an account—which involves identity verification and funding—traders can execute orders, analyze charts, and access news seamlessly.
The best trading platform review resources provide detailed reviews and comparisons, helping users evaluate app reliability, data accuracy, and service quality. By aggregating expert insights, these resources simplify the process of identifying apps that best meet individual needs.
Advantages of Stock Trading Apps
Instant Market Access: Real-time data and lightning-fast trade execution allow users to capitalize on market shifts immediately.
Portability: Manage investments anytime, anywhere, eliminating the need for desktop-bound trading.
Educational Resources: Many apps offer tutorials, webinars, and research tools to help users refine their strategies.
Potential Drawbacks
Technical Vulnerabilities: Server outages or lagging data can delay trades, potentially leading to missed opportunities.
Security Risks: Despite encryption protocols, the risk of cyberattacks remains a concern, necessitating strong passwords and vigilance.
Limited Guidance: Automated platforms lack personalized advice, which may leave novice investors unsure about complex decisions.
Choosing the Right App
When selecting a stock trading app, prioritize factors like security measures, fee structures, and user experience. Cross-referencing reviews on comparison sites can highlight strengths and weaknesses. Additionally, assess whether the app’s features—such as advanced charting or educational content—align with your skill level and goals. Many of these apps also simplify the demat account opening process, allowing users to quickly start trading with seamless onboarding and verification.
Final Thoughts
Stock trading apps democratize market participation but require careful consideration of their pros and cons. By balancing convenience with due diligence, investors can leverage these tools to build and manage portfolios effectively. Always supplement app-based trading with independent research or professional advice to navigate the markets confidently.
For more information, visit https://www.indiratrade.com/
#stock trading apps#best trading apps#online trading#stock market apps#trading platforms#investing apps#demat account opening#mobile trading#best trading platform#stock market investing#portfolio management apps
5 notes
Text
i wonder if it'd ever be possible to have a more democratic web 2.0 like social media platform. but like. selling and aggregating data will probably continue to be huge as long as everything that enables it continues to exist. i still have a kind of optimistic love for what the internet enables but like. idk everything is dying a bit
7 notes
Text
RealPage says it isn’t doing anything wrong by suggesting to landlords how much rent they could charge. In a move to reclaim its own narrative, the property management software company published a microsite and a digital booklet it’s calling “The Real Story,” as it faces multiple lawsuits and a reported federal criminal probe related to allegations of rental price fixing.
RealPage’s six-page digital booklet, published on the site in mid-June, addresses what it calls “false and misleading claims about its software”—the myriad of allegations it faces involving price-fixing and rising rents—and contends that the software benefits renters and landlords and increases competition. It also said landlords accept RealPage’s price recommendations for new leases less than 50 percent of the time and that the software recommends competitive prices to help fill units.
“‘The heart of this case’ never had a heartbeat—the data clearly shows that RealPage does not set customers’ prices and customers do what they believe is best for their respective properties to vigorously compete against each other in the market,” the digital booklet says.
But landlords are left without concrete answers, as questions around the legality of this software are ongoing as they continue renting properties. “I don’t think we’re seeing this as a RealPage issue but rather as a revenue management software issue,” says Alexandra Alvarado, the director of marketing and education at the American Apartment Owners Association, the largest association of landlords in the US.
Alvarado says some landlords are taking pause and asking questions before using the tech. Software like RealPage “has made it much easier to understand what is happening in the market,” Alvarado says. “Technology has helped us in so many ways to make all these processes more efficient. In this case, it’s now borderline too efficient.” And members of the AAOA are asking questions about the legality of revenue management, she says. “The first thing landlords typically think is, what is the legal repercussion? Am I going to be in trouble for using this software? If the answer is maybe, it’s usually off the table.”
Dana Jones, president and CEO of RealPage, said in a statement released alongside the booklet that “the time is now to address a number of false claims about RealPage’s revenue management software, and how rental housing providers operate when setting rent prices.” RealPage did not respond to WIRED’s queries asking what prompted the lengthy statement in June. Officials appear to be narrowing in on RealPage, as the Justice Department is allegedly planning to sue the company, according to a report from Politico last week. The company declined a request to comment on the latest in the ongoing Department of Justice probe.
Allegations of price-fixing that may constitute antitrust violations have dogged the software company since late 2022, when ProPublica published an investigation alleging that RealPage’s software was linked to rent rises in some US cities, as the company used private, aggregated data provided by its customers to suggest rental prices. (In response to ProPublica's reporting, RealPage commented that it “uses aggregated market data from a variety of sources in a legally compliant manner.”)
RealPage’s software is powerful because it anonymizes rental data and can provide landlords and property managers with nonpublic and public data about rentals, which may be different from that advertised publicly on platforms like real estate marketplace Zillow. The company contends that it’s not engaging in price-fixing, as landlords are not forced to accept the rents that RealPage’s algorithm suggests. Sometimes it even recommends landlords lower the rent, RealPage claims. But antitrust enforcers have alleged that even sharing private information via an algorithm and using it for price recommendations can be as conspiratorial as back-room handshake deals, even if landlords don’t end up renting apartments at those rates. The reported antitrust investigation is ongoing.
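To see why regulators focus on the pooling of nonpublic data, consider a deliberately simplified sketch of an algorithmic rent recommendation. This is purely illustrative and is not RealPage's actual model; every number, field name, and markup below is invented. The point is only that the recommendation is driven by competitors' privately submitted lease terms that no individual landlord could otherwise see.

```python
# Deliberately simplified illustration of pooled-data price recommendations.
# Not RealPage's model; all values are invented.

# Each participating landlord privately submits recent signed lease terms.
submitted_leases = [
    {"landlord": "A", "unit_type": "1br", "rent": 1950, "public": False},
    {"landlord": "B", "unit_type": "1br", "rent": 2050, "public": False},
    {"landlord": "C", "unit_type": "1br", "rent": 2100, "public": True},
]

def recommend(unit_type: str, leases: list[dict]) -> float:
    comps = [lease["rent"] for lease in leases if lease["unit_type"] == unit_type]
    # The output reflects rivals' nonpublic signed rents, which is exactly the
    # information sharing that antitrust enforcers object to.
    return round(sum(comps) / len(comps) * 1.03, 2)  # nudge above the pooled mean

print(recommend("1br", submitted_leases))  # 2094.33
```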
RealPage’s algorithmic pricing model is among one of the first subject to scrutiny, perhaps due to its involvement in housing, a necessity that has ballooned in price as housing supply languishes. Typical rent in the US is just under $2,000, according to Zillow, up from around $1,500 in early 2020. “Housing affordability is a national problem created by economic and political forces—not by the use of revenue management software,” Realpage says. But renters can’t tell whether their rates are rising because of algorithms or not.
“It’s almost impossible to know if you are just a spectator or a victim,” says Shanti Singh, legislative and communications director with Tenants Together, a California-based coalition of tenants activists. If tenants call a hotline over raised rent or fees, “we’re not necessarily going to be able to see or connect that their landlord is using RealPage.”
The state of Arizona sued RealPage and nine landlords in February, claiming a conspiracy between the company and landlords led renters in Phoenix and Tucson to pay “millions of dollars” more in rent. That followed a similar lawsuit out of Washington, DC. In the capital’s greater metropolitan area, more than 90 percent of rental units in large apartment buildings were priced using RealPage software, according to DC’s attorney general.
The cases against RealPage put algorithmic pricing to the test; as the technology becomes more common, antitrust law has yet to keep pace. Officials have other concerns around algorithms used for alleged hotel price fixing, as well as e-commerce algorithms. “The concern of regulators that algorithms can be used in ways that harm competition—that idea is here to stay,” says Ed Rogers, a partner at law firm Ballard Spahr who focuses on antitrust cases. “RealPage could end up really being a test case, not just for the real estate rental industry but for this aspect of AI and software and its role in a competitive landscape.”
The impact of algorithmic pricing varies greatly. Amazon has been accused of pushing up prices with a secret algorithm. (Amazon has said the “allegation that we somehow force sellers to use our optional services is simply not true.”) But others operate in plain sight, like dynamic pricing for rideshare costs, and don’t involve multiple companies sharing information. Not all of these algorithms are engaged in activity that may be considered anticompetitive. A Nevada judge in May dismissed a suit brought by hotel guests against several Las Vegas hotel operators, finding there was no agreement among them to fix prices using shared algorithms.
Yardi Systems, another US property management company, is also facing a class action suit regarding antitrust violations for artificially inflating rent prices. The company has said it did “nothing illegal,” as it does not mandate rent prices through its software or make “collusive pricing decisions.”
Typical rental costs in Phoenix have increased by more than about $500 a month from April 2020 to 2024, and by around $400 in Washington, DC, in the same period, according to Zillow.
Renters have also filed numerous class action suits against RealPage and property owners that have been consolidated. Some landlords named in those suits settled claims earlier this year. The court threw out a lawsuit regarding price fixing for student housing but has said the class action from renters can go forward. Attorneys representing some of the plaintiffs in the class action did not respond to requests to comment.
RealPage laid off about 4 percent of staff in June. “RealPage is hyper-focused on innovation and accelerating its business growth in 2024 and beyond, and as a result has made the decision to eliminate a small number of roles within the company,” Jennifer Bowcock, a spokesperson for the company, says. The layoffs were not connected to the antitrust lawsuit, she says. Thoma Bravo, the owner of RealPage, did not respond to a request for comment for this story.
As of 2020, RealPage said it was collecting data on some 16 million rental units across the US. There are 44 million renter households in the US, and nearly 22 million rental units are owned by for-profit businesses. RealPage grew when it acquired Lease Rent Options (LRO) in 2017, after clearing antitrust scrutiny by the Justice Department. The DOJ did not comment on questions from WIRED about its reported investigation into RealPage or its approval of RealPage’s acquisition of Lease Rent Options in 2017.
When asked about the latest in the probe, RealPage referred to a portion of its recent lengthy statement, which said: “The DOJ extensively reviewed LRO and YieldStar in 2017, without objecting to, much less challenging, any feature of the products.” RealPage also says that its “products are fundamentally the same today” as they were when the acquisition received approval.
In June, The New York Times asked assistant US attorney general Jonathan Kanter, the Justice Department’s top antitrust official, if he would view an AI tool communicating pricing information as the same as humans colluding, with the question referencing the reported RealPage investigation. Kanter replied: “I often say that if your dog bites somebody, you’re responsible for your dog biting somebody. If your AI fixes prices, you’re just as responsible.”
The Justice Department also last year filed a statement of interest in the RealPage combined class action lawsuit, as the case could become a precedent setter in algorithmic pricing. The statement mirrored Kanter’s argument that the method of price setting doesn’t matter, and algorithms are just the latest evolution in information gathering and sharing.
“In-person handshakes gave way to phone and fax, and later to email. Algorithms are the new frontier,” the Justice Department argued in a statement of interest it filed in the class action lawsuit against RealPage and landlords. “And, given the amount of information an algorithm can access and digest, this new frontier poses an even greater anti-competitive threat than the last.”
30 notes
Text
A major new study has confirmed that Covid mRNA shots cause “vaccine-induced AIDS” to develop in those who receive the injections.
The study, involving 2.3 million patients, found that mRNA injections trigger a condition known as Vaccine-Acquired Immunodeficiency Syndrome (VAIDS).
The researchers found that VAIDS, or “vaccine-induced AIDS,” is caused by Covid mRNA shots destroying the human immune system by attacking the thyroid function.
The group of leading researchers behind the study is sounding the alarm with an urgent warning about the long-term effects of Covid mRNA injections on thyroid health.
The bombshell study has sent shockwaves through the medical and scientific communities after researchers concluded that Covid mRNA shots have caused a global surge in cases of VAIDS.
The findings have debunked claims from the corporate media and so-called “fact-checkers” that previously dismissed reports of VAIDS as “conspiracy theories.”
The study was led by renowned neurology and radiology experts Drs. Kai-Lun Cheng and Hsiang-Lin Lee, both with Chung Shan Medical University in Taichung, Taiwan.
The groundbreaking study was published by Oxford University Press in the Journal of Clinical Endocrinology & Metabolism.
During the study, the researchers used the TriNetX federated data platform, which aggregates real-world electronic medical records.
10 notes