# Big Data Open Source Tools
With the field of data analytics constantly evolving, organizations are embracing open-source tools for their flexibility, lower cost, and solid feature sets. Open-source applications, including data analysis and data visualization tools, help organizations use their data efficiently. This article surveys the best open-source data analytics tools, compares them, and identifies which will best suit organizational requirements.
https://www.sganalytics.com/blog/open-source-data-analytics-tools/
What kind of bubble is AI?

My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, Perl and Python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials (compute, bandwidth, space and talent) were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed (collecting, labeling and processing training data), but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
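The grid these examples sit on is easy to sketch. Here is a toy Python version; the app names and their quadrant placements are illustrative assumptions drawn from the examples above, not anything formal from the column:

```python
# A toy version of the column's 2x2 grid. The placements below are
# illustrative assumptions based on the examples in the text.

def quadrant(high_value: bool, risk_tolerant: bool) -> str:
    """Name the quadrant an application occupies on the value/risk grid."""
    value = "high-value" if high_value else "low-value"
    risk = "risk-tolerant" if risk_tolerant else "risk-intolerant"
    return f"{value} / {risk}"

apps = {
    "D&D character art ($10/month)":       (False, True),
    "SEO spam text ($500/month)":          (False, True),
    "scene description for blind users":   (False, True),
    "self-driving taxi (driver replaced)": (True,  False),
    "radiology triage (staff replaced)":   (True,  False),
}

for name, (high_value, risk_tolerant) in apps.items():
    print(f"{name}: {quadrant(high_value, risk_tolerant)}")
```

Notice that the toy table has no entries in the high-value / risk-tolerant quadrant, which is the column's whole point.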
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value, but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman (a pedestrian who had not opted into being part of a high-risk AI experiment) and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill; it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals that reduce the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value, while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable (once investors withdraw their subsidies from money-losing ventures) the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
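The squeeze described here can be made concrete with a toy break-even calculation; every number below is an illustrative assumption, not a figure from any real provider:

```python
# Toy break-even arithmetic for the argument above. All numbers are
# illustrative assumptions, not real figures from any AI company.

def breakeven_subscribers(fixed_monthly_cost: float, price: float,
                          queries_per_user: int, cost_per_query: float) -> float:
    """Subscribers needed for subscription revenue to cover all costs."""
    margin = price - queries_per_user * cost_per_query  # profit per user
    if margin <= 0:
        return float("inf")  # every additional user deepens the loss
    return fixed_monthly_cost / margin

# Assumed: $50M/month in fixed costs, a $10/month subscription,
# 300 queries per user at $0.02 of compute each.
print(breakeven_subscribers(50_000_000, 10.0, 300, 0.02))  # 12500000.0

# Double the per-query cost and the margin goes negative:
print(breakeven_subscribers(50_000_000, 10.0, 300, 0.04))  # inf
```

The point of the sketch is the cliff in the second call: once per-query compute exceeds the subscription price, no number of low-value subscribers can cover the data-center bill.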
If they aren't, then the business case for running those servers goes away, and so do the servers, and with them all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if (or rather, when) the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
Often when I post an AI-neutral or AI-positive take on an anti-AI post I get blocked, so I wanted to make my own post to share my thoughts on "Nightshade", the new adversarial data poisoning attack that the Glaze people have come out with.
I've read the paper and here are my takeaways:
Firstly, this is not necessarily or primarily a tool for artists to "coat" their images like Glaze; in fact, Nightshade works best when applied to sort of carefully selected "archetypal" images, ideally ones that were already generated using generative AI using a prompt for the generic concept to be attacked (which is what the authors did in their paper). Also, the image has to be explicitly paired with a specific text caption optimized to have the most impact, which would make it pretty annoying for individual artists to deploy.
While the intent of Nightshade is to have maximum impact with minimal data poisoning, in order to attack a large model there would have to be many thousands of samples in the training data. Obviously if you have a webpage that you created specifically to host a massive gallery of poisoned images, that can be fairly easily blacklisted, so you'd have to have a lot of patience and resources in order to hide these enough so they proliferate into the training datasets of major models.
The main use case for this as suggested by the authors is to protect specific copyrights. The example they use is that of Disney specifically releasing a lot of poisoned images of Mickey Mouse to prevent people generating art of him. As a large company like Disney would be more likely to have the resources to seed Nightshade images at scale, this sounds like the most plausible large-scale use case to me, even if web artists could crowdsource some sort of similar generic campaign.
Either way, the optimal use case of "large organization repeatedly using generative AI models to create images, then running through another resource heavy AI model to corrupt them, then hiding them on the open web, to protect specific concepts and copyrights" doesn't sound like the big win for freedom of expression that people are going to pretend it is. This is the case for a lot of discussion around AI and I wish people would stop flagwaving for corporate copyright protections, but whatever.
The panic about AI resource use in terms of power/water is mostly bunk (AI training is done once per large model, and in terms of industrial production processes, using a single airliner flight's worth of carbon output for an industrial model that can then be used indefinitely to do useful work seems like a small fry in comparison to all the other nonsense that humanity wastes power on). However, given that deploying this at scale would be a huge compute sink, it's ironic to see anti-AI activists, for whom resource use is a talking point, hyping this up so much.
In terms of actual attack effectiveness; like Glaze, this once again relies on analysis of the feature space of current public models such as Stable Diffusion. This means that effectiveness is reduced on other models with differing architectures and training sets. However, also like Glaze, it looks like the overall "world feature space" that generative models fit to is generalisable enough that this attack will work across models.
That means that if this does get deployed at scale, it could definitely fuck with a lot of current systems. That said, once again, it'd likely have a bigger effect on indie and open source generation projects than the massive corporate monoliths who are probably working to secure proprietary data sets, like I believe Adobe Firefly did. I don't like how these attacks concentrate the power up.
The generalisation of the attack doesn't mean that this can't be defended against, but it does mean that you'd likely need to invest in bespoke measures; e.g. specifically training a detector on a large dataset of Nightshade-poisoned images in order to filter them out, spending more time and labour curating your input dataset, or designing radically different architectures that don't produce a comparably similar virtual feature space. I.e. the effect of this being used at scale wouldn't eliminate "AI art", but it could potentially cause a headache for people all around and limit accessibility for hobbyists (although presumably curated datasets would trickle down eventually).
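As a sketch of the first bespoke measure (training a detector to filter suspected poisoned samples), here is a minimal numpy illustration. The one-dimensional "perturbation score" and both distributions are synthetic assumptions standing in for learned image features; nothing here comes from the Nightshade paper itself:

```python
import numpy as np

# Synthetic stand-in for a poison detector: assume clean and poisoned
# images differ along a single learned "perturbation score". Both
# distributions here are invented for illustration.
rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, 1000)    # assumed scores for clean images
poison = rng.normal(3.0, 1.0, 1000)   # assumed scores for poisoned images

scores = np.concatenate([clean, poison])
labels = np.concatenate([np.zeros(1000), np.ones(1000)])  # 1 = poisoned

# Pick the threshold that best separates the two classes.
thresholds = np.linspace(scores.min(), scores.max(), 200)
accs = [((scores > th) == labels).mean() for th in thresholds]
best = thresholds[int(np.argmax(accs))]
print(f"best threshold {best:.2f}")
```

With well-separated score distributions the best threshold lands near the midpoint between the two means; a real detector would have to learn the score itself, which is exactly the extra curation labour the paragraph above describes.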
All in all a bit of a dick move that will make things harder for people in general, but I suppose that's the point, and what people who want to deploy this at scale are aiming for. And with public data scraping, I suppose that sort of thing is fair game.
Additionally, since making my first reply I've had a look at their website:
Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.
Once again we see that the intended impact of Nightshade is not to eliminate generative AI but to make it infeasible for models to be created and trained without a corporate money-bag to pay licensing fees for guaranteed clean data. I generally feel that this focuses power upwards and is overall a bad move. If anything, this sort of model, where only large corporations can create and control AI tools, will do nothing to help counter the economic displacement without worker protection that is the real issue with AI systems deployment, but will exacerbate the problem of the benefits of those systems being more constrained to said large corporations.
Kinda sucks how that gets pushed through by lying to small artists about the importance of copyright law for their own small-scale works (ignoring the fact that processing derived metadata from web images is pretty damn clearly a fair use application).
What objections would you actually accept to AI?
Roughly in order of urgency, at least in my opinion:
Problem 1: Curation
The large tech monopolies have essentially abandoned curation and are raking in the dough by monetizing the process of showing you crap you don't want.
The YouTube content farm; the Steam asset flip; SEO spam; drop-shipped crap on Etsy and Amazon.
AI makes these pernicious, user hostile practices even easier.
Problem 2: Economic disruption
This has a bunch of aspects, but key to me is that *all* automation threatens people who have built a living on doing work. If previously difficult, high skill work suddenly becomes low skill, this is economically threatening to the high skill workers. Key to me is that this is true of *all* work, independent of whether the work is drudgery or deeply fulfilling. Go automate an Amazon fulfillment center and the employees will not be thanking you.
There's also just the general threat of existing relationships not accounting for AI, in terms of, like, residuals or whatever.
Problem 3: Opacity
Basically all these AI products are extremely opaque. The companies building them are not at all transparent about the source of their data, how it is used, or how their tools work. Because they view the tools as things they own whose outputs reflect on their company, they mess with the outputs in order to attempt to ensure that the outputs don't reflect badly on their company.
These processes are opaque and not communicated clearly or accurately to end users; in fact, because AI text tools hallucinate, they will happily give you *fake* error messages if you ask why they returned an error.
There have been allegations that Midjourney and OpenAI don't comply with European data protection laws, as well.
There is something that does bother me, too, about the use of big data as a profit center. I don't think it's a copyright or theft issue, but it is a fact that these companies are using public data to make a lot of money while being extremely closed off about how exactly they do that. I'm not a huge fan of the closed source model for this stuff when it is so heavily dependent on public data.
Problem 4: Environmental, maybe?
Related to problem 3, it's just not too clear what kind of impact all this AI stuff is having in terms of power costs. Honestly it all kind of does something, so I'm not hugely concerned, but I do kind of privately think that in the not too distant future a lot of these companies will stop spending money on enormous server farms just so that internet randos can try to get ChatGPT to write porn.
Problem 5: They kind of don't work
Text programs frequently make stuff up. Actually, a friend pointed out to me that, in pulp scifi, robots will often say something like, "There is an 80% chance the guards will spot you!"
If you point one of those AI assistants at something, and ask them what it is, a lot of times they just confidently say the wrong thing. This same friend pointed out that, under the hood, the image recognition software is working with probabilities. But I saw lots of videos of the Rabbit AI assistant thing confidently being completely wrong about what it was looking at.
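That under-the-hood point can be shown in miniature with a softmax, the standard function that turns a classifier's raw scores into the probabilities it reports. The labels and scores below are invented for illustration:

```python
import math

# The friend's point above in miniature: a classifier's "answer" is just
# the largest of several probabilities, which can be a near coin flip.
# Labels and scores here are invented for illustration.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["coffee mug", "soda can", "candle"]
probs = softmax([2.0, 1.9, 0.1])        # model barely prefers "coffee mug"
best_label, best_p = max(zip(labels, probs), key=lambda lp: lp[1])
print(best_label, round(best_p, 2))     # reported confidently; ~49% is hidden
```

The assistant then states the top label as fact, even though the underlying probability was barely better than a coin flip between the first two options.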
ChatGPT hallucinates. Image generators are unable to consistently produce the same character, and it's actually pretty difficult and unintuitive to produce a specific image rather than a generic one.
This may be fixed in the near future or it might not, I have no idea.
Problem 6: Kinetic sameness
One of the subtle changes of the last century is that more and more of what we do in life is look at a screen, while either sitting or standing, and making a series of small hand gestures. The process of writing, of producing an image, of getting from place to place are converging on a single physical act. As Marshall McLuhan pointed out, driving a car is very similar to watching TV, and making a movie is now very similar, as a set of physical movements, to watching one.
There is something vaguely unsatisfying about this.
Related, perhaps only in the sense of being extremely vague, is a sense that we may soon be mediating all, or at least many, of our conversations through AI tools. Have it punch up that email when you're too tired to write clearly. There is something I find disturbing about the idea of communication being constantly edited and punched up by a series of unrelated middlemen, *especially* in the current climate, where said middlemen are large impersonal monopolies who are dedicated to opaque, user hostile practices.
Given all of the above, it is baffling and sometimes infuriating to me that the two most popular arguments against AI boil down to "Transformative works are theft and we need to restrict fair use even more!" and "It's bad to use technology to make art, technology is only for boring things!"
I'm in undergrad but I keep hearing and seeing people talking about using chatgpt for their schoolwork and it makes me want to rip my hair out lol. Like even the "radical" anti-chatgpt ones are like "Oh yea it's only good for outlines I'd never use it for my actual essay." You're using it for OUTLINES????? That's the easy part!! I can't wait to get to grad school and hopefully be surrounded by people who actually want to be there
Not to sound COMPLETELY like a grumpy old codger (although lbr, I am), but I think this whole AI craze is the obvious result of an education system that prizes "teaching for the test" as the most important thing, wherein there are Obvious Correct Answers that if you select them, pass the standardized test and etc etc mean you are now Educated. So if there's a machine that can theoretically pick the correct answers for you by recombining existing data without the hard part of going through and individually assessing and compiling it yourself, Win!
... but of course, that's not the way it works at all, because AI is shown to create misleading, nonsensical, or flat-out dangerously incorrect information in every field it's applied to, and the errors are spotted as soon as an actual human subject expert takes the time to read it closely. Not to go completely KIDS THESE DAYS ARE JUST LAZY AND DONT WANT TO WORK, since finding a clever way to cheat on your schoolwork is one of those human instincts likewise old as time and has evolved according to tools, technology, and educational philosophy just like everything else, but I think there's an especial fear of Being Wrong that drives the recourse to AI (and this is likewise a result of an educational system that only prioritizes passing standardized tests as the sole measure of competence). It's hard to sort through competing sources and form a judgment and write it up in a comprehensive way, and if you do it wrong, you might get a Bad Grade! (The irony being, of course, that AI will *not* get you a good grade and will be marked even lower if your teachers catch it, which they will, whether by recognizing that it's nonsense or running it through a software platform like Turnitin, which is adding AI detection tools to its usual plagiarism checkers.)
We obviously see this mindset on social media, where Being Wrong can get you dogpiled and/or excluded from your peer groups, so it's even more important in the minds of anxious undergrads that they aren't Wrong. But yeah, AI produces nonsense, it is an open waste of your tuition dollars that are supposed to help you develop these independent college-level analytical and critical thinking skills that are very different from just checking exam boxes, and relying on it is not going to help anyone build those skills in the long term (and is frankly a big reason that we're in this mess with an entire generation being raised with zero critical thinking skills at the exact moment it's more crucial than ever that they have them). I am mildly hopeful that the AI craze will go bust just like crypto as soon as the main platforms either run out of startup funding or get sued into oblivion for plagiarism, but frankly, not soon enough, there will be some replacement for it, and that doesn't mean we will stop having to deal with fake news and fake information generated by a machine and/or people who can't be arsed to actually learn the skills and abilities they are paying good money to acquire. Which doesn't make sense to me, but hey.
So: Yes. This. I feel you and you have my deepest sympathies. Now if you'll excuse me, I have to sit on the porch in my quilt-draped rocking chair and shout at kids to get off my lawn.
In a quiet part of Northern California, where pine trees brush the sky and the hum of giant satellite dishes fills the air, something big is happening in science education. A new wave of college students is getting the chance to explore the universe, not through textbooks, but with real data from a world-class observatory. Thanks to a growing program called ARISE Lab, students and teachers from community colleges are diving deep into the science of space, radio signals, and the search for alien life.
The SETI Institute, which focuses on the scientific search for extraterrestrial intelligence, has expanded this groundbreaking effort. With new support from a grant by the Amateur Radio and Digital Communication Foundation, the ARISE Lab (Access to Radio Astronomy for Inclusion in Science Education) is now reaching even more classrooms across the country.
Making Space Science Hands-On
The main idea behind ARISE is simple: when students get to do science themselves, they understand it better and stay interested longer. "Hands-on experiences are proven to improve student engagement and retention," said Dr. Vishal Gajjar, a radio astronomer who leads the project at the SETI Institute. That's why ARISE puts real scientific tools directly into students' hands.
The Allen Telescope Array at Hat Creek Radio Observatory. (CREDIT: Luigi Cruz)
The program uses GNU Radio, a free and open-source software toolkit that lets users process radio signals. This gives students a way to study actual data from the SETI Institute's Allen Telescope Array (ATA). The ATA is the first and only radio telescope in the world built just for detecting signs of advanced life beyond Earth, also called technosignatures. With these tools, students don't just read about pulsars, spacecraft, or distant stars. They study them. They learn to sort signals, find patterns, and understand how astronomers listen to the sky.
What the ARISE Curriculum Offers
Dr. Gajjar and his team built the ARISE curriculum using something called experiential learning technique, or ELT. This method focuses on learning by doing. Students start with pre-lab reading, move through guided lab work, and then reflect on what they discover.
ARISE includes two types of content: modules and labs. Modules are more complete packages that come with slides, notes, reading materials, lab manuals, and instructor guides. They are designed to be added directly into a science class. Labs, on the other hand, are shorter, standalone activities that can be used by themselves or as part of a larger lesson.
The labs cover a wide range of topics. Students might explore signal modulation (the way information travels through radio waves) or learn how data science applies to astronomy. Each lab has step-by-step instructions that make it easy for both students and teachers to follow.
By linking lessons to the search for extraterrestrial life, ARISE grabs students' attention. Research shows that this subject sparks more interest than almost any other topic in science. "With ARISE, we're combining cost-effective tools like GNU Radio with one of the most captivating topics in science, the search for life beyond Earth, to spark curiosity and build skills across STEM disciplines," Gajjar said.
Vishal Gajjar, SETI Institute. (CREDIT: SETI Institute)
Real Tools, Real Signals, Real Skills
The ARISE team doesn't just give students data and walk away. They create chances for them to experience what it's like to work in space science.
âWhether itâs detecting a signal from a Mars orbiter or analyzing pulsar data, students are gaining real experience with tools used in both professional astronomy and industries,â said Joel Earwicker, the projectâs lead research assistant. âItâs about making science feel real, relevant, and achievable.â That real-world feeling is what sets ARISE apart. It connects students with data from the Allen Telescope Array, a set of 42 dish antennas located at the Hat Creek Radio Observatory. This array scans the sky daily, looking for faint radio waves that might come from intelligent life in space. Students learn how to filter out ânoiseâ from human-made signals, track moving sources across the sky, and identify natural phenomena like pulsars â stars that blink like cosmic lighthouses. These skills mirror what professionals do in both astronomy and tech careers, building a direct path from the classroom to the workforce. Students examine live radio signals from deep space, learning to decode real astronomical data using modern tools and guided scientific methods. (CREDIT: SETI Institute) Growing the Program in 2025 After the programâs first pilot workshop at Hat Creek in 2024, the results spoke for themselves. Teachers loved it. Students stayed engaged. The SETI Institute decided to grow the effort. In 2025, ARISE will offer: 15 new labs on topics like astronomy, digital communications, and data analysis 2 hands-on workshops at Hat Creek to train instructors from community colleges On-site lab support at 10 schools to help teachers roll out the new content The team will also host an in-person workshop for six selected community college teachers from June 25 to June 27, 2025, at Hat Creek. These instructors will get travel and lodging covered. At the workshop, they will visit the telescope site, watch live observations, test out lab activities, and collaborate with other science educators. SETI efforts around the world. 
(CREDIT: SETI Institute)

This expanded effort aims to bring advanced science training to places that often get left out of big research programs — local community colleges. These schools educate nearly half of all undergraduates in the U.S., and their students often come from backgrounds underrepresented in STEM fields. By targeting these schools, ARISE gives more people a chance to be part of space science. It also helps instructors bring fresh energy to their classes.

Looking Up, Reaching Out

When students see real data from space scrolling across their screens, something clicks. Science becomes more than just facts in a book. It becomes a search — one they can be part of.

With ARISE, the SETI Institute is changing how students learn science. Instead of memorizing equations, they explore the universe. Instead of just hoping to understand radio signals, they decode them. By giving students the tools, data, and support to study space firsthand, ARISE opens doors — to science, to careers, and maybe even to the stars.

Research findings are available online on the SETI Institute website.

Note: The article above was provided by The Brighter Side of News. Like these kinds of feel-good stories? Get The Brighter Side of News' newsletter.

The post New SETI program helps students detect signs of advanced life beyond Earth appeared first on The Brighter Side of News.
16 notes
·
View notes
Text
Are there generative AI tools I can use that are perhaps slightly more ethical than others? —Better Choices
No, I don't think any one generative AI tool from the major players is more ethical than any other. Here's why.
For me, the ethics of generative AI use can be broken down into issues with how the models are developed—specifically, how the data used to train them was accessed—as well as ongoing concerns about their environmental impact. In order to power a chatbot or image generator, an obscene amount of data is required, and the decisions developers have made in the past—and continue to make—to obtain this repository of data are questionable and shrouded in secrecy. Even the models that people in Silicon Valley call "open source" keep their training datasets hidden.
Despite complaints from authors, artists, filmmakers, YouTube creators, and even just social media users who don't want their posts scraped and turned into chatbot sludge, AI companies have typically behaved as if consent from those creators isn't necessary for their output to be used as training data. One familiar claim from AI proponents is that obtaining this vast amount of data with the consent of the humans who crafted it would be too unwieldy and would impede innovation. Even for companies that have struck licensing deals with major publishers, that "clean" data is an infinitesimal part of the colossal machine.
Although some devs are working on approaches to fairly compensate people when their work is used to train AI models, these projects remain fairly niche alternatives to the mainstream behemoths.
And then there are the ecological consequences. The current environmental impact of generative AI usage is similarly outsized across the major options. While generative AI still represents a small slice of humanity's aggregate stress on the environment, gen-AI software tools require vastly more energy to create and run than their non-generative counterparts. Using a chatbot for research assistance is contributing much more to the climate crisis than just searching the web in Google.
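To put rough numbers on that comparison: using the widely circulated (and contested) estimates of roughly 0.3 Wh per conventional web search and about 3 Wh per chatbot query, both of which are assumptions here rather than measurements, a year of daily queries differs by about an order of magnitude:

```python
# Back-of-envelope comparison; the per-query figures below are rough,
# widely cited public estimates, not measurements.
WEB_SEARCH_WH = 0.3    # assumed energy per conventional web search
CHATBOT_WH = 3.0       # assumed energy per generative-AI chat query

queries_per_day = 20
days = 365

search_kwh = WEB_SEARCH_WH * queries_per_day * days / 1000
chatbot_kwh = CHATBOT_WH * queries_per_day * days / 1000

print(f"Web searches: {search_kwh:.1f} kWh/year")
print(f"Chatbot:      {chatbot_kwh:.1f} kWh/year")
print(f"Ratio:        {chatbot_kwh / search_kwh:.0f}x")
```

Either total is small next to a refrigerator's several hundred kWh per year; the concern is the aggregate across billions of queries, not any one user.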
It's possible the amount of energy required to run the tools could be lowered—new approaches like DeepSeek's latest model sip precious energy resources rather than chug them—but the big AI companies appear more interested in accelerating development than pausing to consider approaches less harmful to the planet.
How do we make AI wiser and more ethical rather than smarter and more powerful? —Galaxy Brain
Thank you for your wise question, fellow human. This predicament may be more of a common topic of discussion among those building generative AI tools than you might expect. For example, Anthropic's "constitutional" approach to its Claude chatbot attempts to instill a sense of core values into the machine.
The confusion at the heart of your question traces back to how we talk about the software. Recently, multiple companies have released models focused on "reasoning" and "chain-of-thought" approaches to perform research. Describing what the AI tools do with humanlike terms and phrases makes the line between human and machine unnecessarily hazy. I mean, if the model can truly reason and have chains of thoughts, why wouldn't we be able to send the software down some path of self-enlightenment?
Because it doesn't think. Words like reasoning, deep thought, understanding—those are all just ways to describe how the algorithm processes information. When I take pause at the ethics of how these models are trained and the environmental impact, my stance isn't based on an amalgamation of predictive patterns or text, but rather the sum of my individual experiences and closely held beliefs.
The ethical aspects of AI outputs will always circle back to our human inputs. What are the intentions of the user's prompts when interacting with a chatbot? What were the biases in the training data? How did the devs teach the bot to respond to controversial queries? Rather than focusing on making the AI itself wiser, the real task at hand is cultivating more ethical development practices and user interactions.
13 notes
·
View notes
Text
Hey tronblr. It's sysop. Let's talk about the Midjourney thing.
(There's also a web-based version of this over on reindeer flotilla dot net).
Automattic, who owns Tumblr and WordPress dot com, is selling user data to Midjourney. This is, obviously, Bad. I've seen a decent amount of misinformation and fearmongering going around the last two days around this, and a lot of people I know are concerned about where to go from here. I don't have solutions, or even advice -- just thoughts about what's happening and the possibilities.
In particular... let's talk about this post. Go read it if you haven't. To summarize, it takes aim at Glaze (the anti-AI tool that a lot of artists have started using). The post makes three assertions, which I'm going to paraphrase:
It's built on stolen code.
It doesn't matter whether you use it anyway.
So just accept that it's gonna happen.
I'd like to offer every single bit of this a heartfelt "fuck off, all the way to the sun".
Let's start with the "stolen code" assertion. I won't get into the weeds on this, but in essence, the Glaze/Nightshade team pulled some open-source code from DiffusionBee in their release last March, didn't attribute it correctly, and didn't release the full source code (which that particular license requires). The team definitely should have done their due diligence -- but (according to the team, anyway) they fixed the issue within a few days. We'll have to take their word on that for now, of course -- the code isn't open source. That's not great, but that doesn't mean they're grifters. It means they're trying to keep people who work on LLMs from picking apart their tactics out in the open. It sucks ass, actually, but... yeah. Sometimes that's how software development works, from experience.
Actually, given the other two assertions... y'know what? No. Fuck off into the sun, twice. Because I have no patience for this shit, and you shouldn't either.
Yes, you should watermark your art. Yes, it's true that you never know whether your art is being scraped. And yes, a whole lot of social media sites are jumping on the "generative AI" hype train.
That doesn't mean that you should just accept that your art is gonna be scraped, and that there's nothing you can do about it. It doesn't mean that Glaze and Nightshade don't work, or aren't worth the effort (although right now, their CPU requirements are a bit prohibitive). Every little bit counts.
Fuck nihilism! We do hope and pushing forward here, remember?
As far as what we do now, though? I don't know. Between the Midjourney shit, KOSA, and people just generally starting to leave... I get that it feels like the end of something. But it's not -- or it doesn't have to be. Instead of jumping over to other platforms (which are just as likely to have similar issues in several years), we should be building other spaces that aren't on centralized platforms, where big companies don't get to make decisions about our community for us. It's hard. It's really hard. But it is possible.
All I know is that if we want a space that's ours, where we retain control over our work and protect our people, we've gotta make it ourselves. Nobody's gonna do it for us, y'know?
47 notes
·
View notes
Text
Why Ibara and Hajime Were Destined to be Partners in a Variety Show
This post will be a bit different from my other ones as I think I've let this one sit for far too long and I just want to release it from the prison that is my drafts!!!! So there won't be a lot of citation, this will be more of a very long ramble/rant! I think I'm finally ready to (try to) articulate my thoughts on how Ibara and Hajime share many parallels, similarities, or just how they compare and contrast as characters!!
As you may know I am a big fan of Ibahaji as a relationship so - I can't guarantee an unbiased view on all this. I'd also say I'm more of an Ibara scholar than a Hajime scholar, but I have a PhD in neither.
I'm just very fascinated by the 'why' behind Bogie Time. I think Akira is talented in drawing connections between all 49 idols. And the way Ibara and Hajime become unlikely friends is a particularly strongly written connection to me.
Let's begin with: Hajime and Ibara's similarities as described in Bogie Time
To understand how profound their friendship is, we must look at their relationship development in Bogie Time.
At the start, Hajime and Ibara could not be more dissimilar.
Ibara is aloof but capable, older, and his usual disingenuous self. Hajime is younger and believes himself to be a bumbling little guy who's in the way. He immediately defers to Ibara on all matters but doesn't know what to make of him. Ibara is laughably confused by Hajime too, because he's literally too genuine of a person - a quality Ibara is not used to, sadly. However, as Ibara does something called 'relating to others' he realizes Hajime is not so far removed from his own world.
In a nice bonding moment halfway through the story that makes a point of sharing their similarities, Ibara and Hajime relate on looking up too much to others and feeling below them.
This is an incredibly important conversation as Ibara who usually doesn't open up about his feelings, does miraculously share why he's been so upset about the show to Hajime. Here marks a great turning point... Hajime can deeply relate to Ibara's aversion to humiliation, having experienced something so crushing as having no audience for Ra*bits debut. This was a formative moment for Hajime. I believe here Hajime sees himself in Ibara... and wants to help.
Another major point of similarity that the story pushes is that Hajime has a cunning side, one that is actually supported by how much attention Hajime pays to people's feelings and personalities, to the point where he can be too dependent on their approval - which is the opposite problem of Ibara, interestingly enough, who forgets to consider people as living sources of information rather than just data and tools.
This is the lesson Nagisa wanted him to learn from Bogie Time. Ibara hones Hajime's cunning so they can advance in the show, as his specialty lies in how to use things and people to their fullest. They're very complementary!
To summarize: the two main similarities Bogie Time wants you to focus on is that Ibara and Hajime both struggle with thinking themselves as lesser than others and that they are intelligent and cunning.
But what if there's other similarities to be found between them beyond Bogie Time?
Now I'll talk about: my interpretation of Ibara and Hajime's other similarities.
These are some scattered thoughts and things I noticed.
Ibara and Hajime's experiences with poverty

While they do not discuss this as a similarity, Ibara and Hajime do both come from impoverished backgrounds.
Specifically, Ibara literally had nothing to his name during his childhood and was even more deprived, comparatively speaking. Ibara often describes himself as someone who crawled up from the bottom of society. Hajime's family is poor, so Hajime often takes up part-time jobs on campus and makes very simple meals. Or even eats just bread crusts or grass. The one major point of difference is that Ibara inherited Godfather's legacy - but despite that he still had to rebuild those assets from the ground up and fend for himself.
I think they could relate on the topic of survival. Even though Ibara's survival is sadly a more severe case, they both have the attitude of making do with what they have. Basically, they both did not have it easy.
As was revealed in Private Room, Ibara learned DIY when he was in the orphanage because he had no other form of entertainment. He's used to fixing and making his own things.
I recently read some old stories where Hajime makes up his own game of sliding on the wet floors of the school when it rains. Because his family is too poor to afford many toys, he's quite good at making up games to entertain his siblings.
Considering this I was like.... ohhhh... I've connected some dots. You could even say if anyone could understand Hajime's desire to have a partner who would have simple meals with him - it'd be Ibara who has a lot of opinions on eating sparingly. Although his is more motivated on survival and not allowing weak points in a moment of vulnerability, which then morphed into efficiency.
This follows into... Ibara and Hajime's social status
They both suffered from bullying and discrimination toward them at a young age.
Hajime was bullied and excluded by his classmates for being too slow and useless to them. Hajime would struggle for a long time with the inferiority complex this experience gave him, which was only compounded by the failure of Ra*bits debut.
Ibara is an orphan who was implied to be considered equivalent to garbage by society. He is especially motivated by dominating people who used to laugh at him and thus is so afraid of being the target of mockery in Bogie Time.
I think Ibara is a special case where he weaponizes the way society has told him "he's nothing" over and over again by purposefully poking others and making them uncomfortable with his self-deprecating statements. While I recognize he's not totally self-loathing, I feel like there's a grain of truth to his self-deprecation - he's almost challenging people like Anzu to affirm what he has always known anyway, so he can continue to justify the cynical worldview that keeps him safe. But it's definitely an obstacle to having a more growth-oriented mindset.
Meanwhile, Hajime does all he can to be likeable and gain approval as a good child, and is extremely apologetic for his shortcomings in early stories. Interestingly, Ibara also has this behaviour, but on a surface level, as he only does it to manipulate people into being useful to him - although Hajime does have influence because of all the genuine good favour he's gained from acting this way. In recent stories, however, he's grown more assertive…
Such as this exchange Hajime and Ibara have during a Dream Live, where Hajime scolds him for being openly self-deprecating to the audience. Again, I think this is another moment where Hajime sees himself in Ibara, especially by how he tells him he's his fan too. Hajime's self-esteem has gotten better because he's motivated by his friends and fans too, so it just seems to echo that.
Ibara and Hajime's roles in their units
(Using this cg as this section's image because in Happy Spring Hajime is so excited about visiting a nearby town for the first time for a job, he makes a notebook full of research about that town... which they're only visiting for a day or so. It really represents Hajime's fastidious nature and love of information.)
Personally I think Ibara and Hajime are quite similar in being the sensible and practical members of their units, especially when it comes to finances and their meticulous nature in preparing things. They also tend to lose sight of what's in front of them when they're too in their own heads. One of Ibara's biggest pitfalls is getting hasty when he thinks everything is going according to his schemes and victory is within reach. Hajime, meanwhile, can take his daydreams and ideals too seriously - for example, getting upset at Mitsuru for not bringing flowers to Madara's sister in the Ra*bits climax, because he got too obsessed with his own idea of them being in love.
I also believe they both possess a role of drawing people into their respective units - where Ibara utilizes fanservice in a more mature way, Hajime's charisma is through his expertise in being cute. Both of these are, to an extent, personas they use on stage to attract attention.
Ibara and Hajime's gender presentation
This point is directly addressed in Bogie Time. I find it interesting that Ibara's own feminine features were emphasized - it subverts expectations a bit, as Hajime is usually put into the "girl" role. But at the end, it's Ibara who's essentially experiencing what Hajime experiences all the time… being perceived as a girl.
Previously, Ibara portrayed the Red Queen in Wonder Game - a female character, and the outfit itself has what I'd argue are feminine elements, such as the silhouette and high heels. And last year, Ibara was in the bride-inspired White Swan outfit - and in a pose that I personally think can be read as typically feminine (a pose you'd see a female gacha character have, perhaps). So Ibara is no stranger to feminine roles.
I didn't expect Akira to make this a point of similarity between them, but it's interesting that he played with this concept and made Ibara cross-dressing a focal point of the story and artwork.
I believe that Ibara and Hajime both embrace it - along with Ibara learning from Hajime to not be embarrassed in general, and the fact that feminine Ibara cards exist in the first place - and it ties into Hajime's own personal journey as he goes from disliking being seen as a girl, to being okay with it and finding ways to use it as an idol.
Other personality traits
I believe Ibara and Hajime were similarly both troublemakers when they were very young. They also both grew out of this behaviour in response to their environment.
Ibara had a wish to make his life have been worth something and needed to pull himself together for his newfound inheritance. The strict and unforgiving military lifestyle he once hated and criticized became very useful in his role as a producer and businessman.
Hajime, meanwhile, became more well-behaved for the sake of being a good older brother and not causing more trouble for his parents. However, this may be one of the reasons he has a bit of a complex over people perceiving him as too perfect - which he mentions to Ibara in Bogie Time - and he seems to want to indulge in being a 'bad kid' sometimes (thinking of his Halloween voice line).
From all this I'd like to conclude that the great number of connections one can make between Ibara and Hajime is why Akira and his team of writers probably conceived of Bogie Time, which is about their relationship at its core. There's a lot to work with, and because I consider this story a major turning point in Ibara's character arc, this is probably one of the big reasons why they chose Hajime to be the character to spur the needed profound change in Ibara. …besides that Hajime and Ibara were both moderately popular at the time and cute, I guess! If you made it all the way here - wow, congrats and thank you for reading!
#ensemble stars#ibahaji#ibajime#ibara saegusa#Hajime shino#Chyna writes too#BE FREE MY CHILD... LEAVE MY DRAFTS AND GO OUT INTO THE WORLD
29 notes
·
View notes
Text
Protection for the Digital Witch: Image Metadata
Just about every image posted online, taken by your camera, or even screenshotted on your phone contains hidden data called metadata. Some of this information is useful, such as how big the image is or how image viewers should render the colors in it. Some of it may be more concerning such as the time the image was taken, where it was taken, where it is saved on your device, what device it was taken on, and more. Mobile phones in particular love to stick metadata on everything that passes through their galleries. While this makes storing and organizing photos a breeze, it can be dangerous to upload photos online when your address is attached to them.
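For the curious, that hidden data is visible right in the file's bytes. Here is a minimal, standard-library-only sketch (not a vetted privacy tool) that checks whether a JPEG carries an EXIF block by walking its segment markers:

```python
import struct

def has_exif(path: str) -> bool:
    """Walk JPEG segment markers looking for an APP1 'Exif' block."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":                     # SOI marker missing: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                          # start of scan: headers are done
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length
    return False
```

GPS coordinates, when present, live inside that same EXIF block as a nested GPS directory - which is exactly why stripping it matters.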
This, of course, raises important questions about online safety and how to keep sensitive information hidden from internet strangers. While there's plenty of resources online about mundane reasons to scrub information from your photos, what can the digital witch take away from this?
Consider metadata as a taglock. Taglocks in witchcraft are footholds for magic to jump from one practitioner's spelltable to another's and they aren't always used for blessings. Anything from a strand of hair to a name to an item you once owned could serve as a taglock and the closer the item is to you, the better. Imagine how powerful a photo of you or your space could be if it also happened to include your location, the time it was taken, and where it was saved on your phone before it hit your blog!
On the flip side, this information can be brandished for good. Photos of meaningful places during fortuitous astrological timing may serve as particularly protective wallpapers. Witches can attach hidden spells or information via image editing tools such as Gimp or Photoshop to their images.
If you work often in digital spaces, keep in mind what information your photos may have and how you can use it to keep yourself safe and explore new mediums for your work!
How to Prevent or Remove Metadata from Photos
1. Disable geotagging in your device's settings. Do a quick Google search to see if your computer or model of phone provides this option!
2. Use a third-party app designed to remove metadata. I recommend an open-source and locally downloaded app like ExifCleaner for PCs or Scrambled Exif for Android. Be wary of third-party apps that require you to upload your image to their website, serve you questionable ads, or collect information on you. The goal is to protect your privacy after all!
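If you'd rather not trust a third-party app at all, the core of what those tools do can be sketched in pure Python: rewrite the JPEG while skipping the metadata segments. This is a rough illustration for well-formed JPEGs only, and no substitute for an audited tool like the ones above:

```python
import struct

def strip_jpeg_metadata(src: str, dst: str) -> None:
    """Rewrite a JPEG, dropping APP1 (EXIF/XMP) and comment segments."""
    with open(src, "rb") as f:
        data = f.read()
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")                  # keep the SOI marker
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                        # start of scan: copy the rest
            out += data[i:]
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i:i + 2 + length]
        if marker not in (0xE1, 0xFE):            # skip APP1 and COM blocks
            out += segment
        i += 2 + length
    with open(dst, "wb") as f:
        f.write(bytes(out))
```

Note this only handles JPEGs; formats like PNG store metadata in their own chunk types and would need separate handling.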
Further Reading
A Picture is Worth a Thousand Words, Including Your Location
Scrambled Exif
Exif Cleaner
Everything you wanted to know about media metadata, but were too afraid to ask
7 notes
·
View notes
Note
I shall go through your pinned post then.
I've a few Qs, pls answer them at your leisure.
1) Do you see tarot as a predictive tool or one of self-help?
2) Astrology tells us that there are certain things that are bound to happen. Yes, free will exists, but there are SOME things one cannot escape. Do you feel tarot has such beliefs too?
3) Do you feel the future can be changed?
Sorry for bombarding you with Qs. But I'm always so curious to know what tarot readers say about stuff like this!
I love discussions so it's always a joy to receive questions like these. If you have more questions or just want to share something, feel free to let me know.
Before answering your Qs, I have to put a disclaimer beforehand that all of these are my personal thoughts and experiences and I don't seek to deny the validity of other opinions. And also my spiritual beliefs don't really align with any particular system.
1) Do you see tarot as a predictive tool or one of self-help?
I see Tarot and other divination mediums as a tool to help us connect with the collective and our own individual unconscious. As the term "unconscious" denotes, it's a part of ourselves that we aren't usually aware of, along with how it connects to everything.
Through Tarot, we become more aware of that part and gain more knowledge from it. But what we do with that knowledge is our own choice. Let's imagine the unconscious as a raw database and Tarot as an open source data processing program with many tools for analysing, predicting trends based on existing data, combining data to make something entirely new, and so on. The tools keep being added and updated based on the user's (our) needs and skills.
So yes, Tarot can be used as a predictive tool or one of self-help, it can also be used for many different purposes as long as the user can get reliable results from it. I've asked Tarot for big life decisions, predicting major events, but I've also asked Tarot for advice on simple things like the place to go, the food to buy (yes lol) just to see how life will unfold.
2) Astrology tells us that there are certain things that are bound to happen. Yes free-will exists, but there are SOME things once cannot escape. Do you feel tarot has such beliefs too?
I think the only things that we cannot escape are the parents whom we are born to and our physical death. One can also argue that our higher self chose our parents but I'm of the belief that there is an underlying system that ensures the balance of the universe and we are placed in the right time and place according to that system. But it's not a rigid system but a flexible one, constantly changing to adapt to new situations.
And where do those new situations come from? From us, by our conscious or unconscious choices. Things that are considered bound to happen, things that one cannot escape, are actually the results of our choices. Sometimes we make those choices unconsciously, so it looks like we are just passive pawns moved by some higher hands. The more aware we are of ourselves, the less we feel that our life is governed by outside forces. This explains why some people who are aware of themselves - aware of all their light and dark sides, their gray sides - rarely resonate with their astrology chart or Tarot readings. Their life is not controlled by a seemingly unknown outside force anymore.
An astrology chart can be the blueprint for our journey, but how we build it, how we decorate it, and how we renovate it later on are our own choices. Tarot predictions are like that too.
3) Do you feel the future can be changed?
The answer to this question can be seen from the previous questions. The future is a result that will likely happen if we follow the trends of our existing actions, an extrapolation. Anything that hasn't happened yet, can be changed.
#have a nice day#ask#tarot#tarot reader#tarotblr#tarot community#tarot questions#spirituality#divination#astrology#get to know your tarot reader#ask me questions
12 notes
·
View notes
Text
"Radiating Kindness (Oil)", 2023, Oil on linen

"Bold Glamour", 2023, Digital print on linen

For AI Paintings, Matthew Stone's 2023 exhibition at The Hole's Lower East Side location, he explored new ways of using the latest technology while expanding on techniques used in his previous digital creations.
Details from The Hole about this exhibition-
Two LED screens form the center of this show, displaying an unedited stream of novel AI outputs; a new painting every ten seconds. Corresponding in scale to the surrounding works on linen and functioning like smart canvases, these AI paintings transform endlessly, and if you're alone in the gallery, you will be the only person to ever see that version of the artwork.
Stone's AI paintings—both the tangible on linen and the fleeting screenic pieces—are created through his training of a custom AI model on top of Stable Diffusion's open source, deep learning, text-to-image model. By feeding it only his past artworks, Stone has created a self-reflexive new series of AI works that disintegrates the hegemony of the singular static masterpiece and problematizes the idea of ownership, or even what "the artwork" itself entails.
AI has become part of contemporary culture, used to solve real-world problems and also create TikTok filters. It's a tool, and like a paintbrush it can be used skillfully or not. At the moment AI is throwing the art world into upheaval as artists explore its potential, galleries contend with its disruption of technique and presentation, and collectors and museums feel the dissolution of authorship and ownership.
A second type of work makes its debut here, Radiating Kindness (Oil), a 3D printed, machine-assisted oil painting made in collaboration with ARTMATR labs in Red Hook, where MIT artists and engineers have come together to make innovative tools and tech. By leveraging AI, robotics, computer vision and painting scripts, their robot has created a traditional oil painting in three dimensions. You can see on the surface how the interplay between analog and digital mark making is eye-boggling.
The show also includes examples of Stone's "traditional" technique, which is anything but: on the 13-foot-wide linen painting, Irradiance, four nude figures dance over piles of strewn AI paintings. The figures in the foreground, reminiscent in choreography of Henri Matisse's Dance (La Danse), 1910, are bodacious, athletic women, heavy and sexy like a Michelangelo marble while at the same time futuristic, weightless and splendid in impossible glass and metallic brush marks. Here Stone's circular and sensitive approach is laid bare for the viewer; the references to art history, technology, culture, and access, and the pursuit of the intangible, are almost overwhelming to grasp.
Stone's approach points to the deeply interwoven nature of our offline and online lives today. He sees artists' use of new technologies as necessary, with creatives deploying these tools in a manner that's not motivated by big tech or financial gains, disrupting the algorithm by creating their own and exploring this new frontier without data-driven deliverables. It creates new context and room for human subjectivities and emotion in the shift from analog to digital that arguably has already occurred.
Below, in an interview for The Standard, he discusses using AI for this work further-
When working with AI, do you sometimes feel overwhelmed or do you always feel in control? I have never felt fully in control while making art and I've always been back and forth between wanting to be and understanding the transformative and creative power of just letting go. The most exciting moments in my creative process have often been unexpected mistakes. Those happy mistakes have revealed something that can then be consciously amplified. Using AI creates lots of unexpected outcomes very fast. So as someone who likes accidents in this context of image making, it's a good way to become accident-prone.
Do you consider AI as just another digital tool? Or does it feel more like a collaboration? In other words, do you sometimes feel AI might develop its own taste, point of view, conscience? It's a digital tool and I try to resist the urge to anthropomorphize it. But it's difficult because it feels like such a paradigm shift and also sometimes like dreaming. I think that culturally speaking, we are moving in a direction that assigns these qualities of perceived sentience to AI even when more mundane actions are at play. It's not clear to me how we will tell if AI has achieved general intelligence, but I think most people will assume it to be the case long before it actually happens, assuming that it does.
#Matthew Stone#The Hole#AI#AI Model#Painting#The Hole Gallery#AI Painting#ARTMATR#Art#NYC Art Shows#Art Show#Digital Art#Flashback#Henri Matisse#Mixed Media Art#Stable Diffusion#The Standard#FKA twigs#Flashback Friday#FBF
3 notes
·
View notes
Text
Object permanence
I'm on a 20+ city book tour for my new novel PICKS AND SHOVELS. Catch me in NYC on WEDNESDAY (26 Feb) with JOHN HODGMAN and at PENN STATE on THURSDAY (Feb 27). More tour dates here. Mail-order signed copies from LA's Diesel Books.
#20yrsago Italy runs out of wiretaps https://edri.org/our-work/wiretapping-data-access-by-foreign-courts-why-not/
#20yrsago Online anonymity https://web.archive.org/web/20050220170713/http://www.law.com/jsp/ltn/pubArticleLTN.jsp?id=1108389943380
#20yrsago WIPO pulls out dirty tricks to kill participation from consumer groups https://web.archive.org/web/20060909232701/https://research.yale.edu/lawmeme/modules.php?name=News&file=article&sid=1689
#20yrsago UK Labour MP flays govt over terror laws - incredible speech! https://www.theyworkforyou.com/debates/?id=2005-02-23a.365.0
#20yrsago Finnish blogger faces disgraceful, bogus libel charge https://mummila.net/marginaali/2005/02/24/total-lack-of-respect-for-the-law/
#15yrsago Vice-principal denies using laptop to spy on student https://www.nbcphiladelphia.com/news/local/principal-accused-in-webcamgate-im-no-spy/2138343/
#15yrsago IP Alliance says that encouraging free/open source makes you an enemy of the USA https://www.theguardian.com/technology/blog/2010/feb/23/opensource-intellectual-property
#10yrsago Chicago Police Department maintains "black site" for illegal detention and torture https://www.theguardian.com/us-news/2015/feb/24/chicago-police-detain-americans-black-site
#10yrsago HSBC boss used tax havens to keep underlings from discovering his outrageous pay https://www.nakedcapitalism.com/2015/02/bill-black-hsbc-ceo-pay-outrageous-use-tax-havens-hide-peers.html
#10yrsago Huge trove of surveillance leaks coming https://www.aljazeera.com/news/2015/2/23/the-spy-cables-a-glimpse-into-the-world-of-espionage
#10yrsago Big Content publishes a love-letter to TPP https://www.eff.org/deeplinks/2015/02/hollywood-lobby-groups-creepy-open-love-letter-tpp
#10yrsago Laura Poitras's Citizenfour OPSEC https://www.wired.com/2014/10/laura-poitras-crypto-tools-made-snowden-film-possible/
#5yrsago A flat earther commits suicide by conspiracy theory https://pluralistic.net/2020/02/24/pluralist-your-daily-link-dose-24-feb-2020/#epistemological
#5yrsago 81 Fortune 100 companies demand binding arbitration https://pluralistic.net/2020/02/24/pluralist-your-daily-link-dose-24-feb-2020/#iamthelaw
#5yrsago My interview on adversarial interoperability https://pluralistic.net/2020/02/24/pluralist-your-daily-link-dose-24-feb-2020/#dragons
#5yrsago Key computer vision researcher quits https://pluralistic.net/2020/02/24/pluralist-your-daily-link-dose-24-feb-2020/#oppenheimer
#5yrsago How "Authoritarian Blindness" kept Xi from dealing with coronavirus https://pluralistic.net/2020/02/24/pluralist-your-daily-link-dose-24-feb-2020/#thatswhatxisaid
#1yrago Vice surrenders https://pluralistic.net/2024/02/24/anti-posse/#when-you-absolutely-positively-dont-give-a-solitary-single-fuck
14 notes
·
View notes
Text
How Global Recognition Can Accelerate Brand Growth and Credibility

You're building a brand, and you want it to stand out. In 2025, competition is fierce: customers have endless options, and trust is hard to earn. Award nomination processes, like those for prestigious programs, can put your brand on the map. Global recognition isn't just a pat on the back; it's a powerful tool to boost credibility, attract customers, and open doors to new opportunities. This article explores how recognition, such as through the Global Impact Award (GIA), accelerates brand growth. We'll cover practical steps to pursue it, real examples, and data-driven insights. From business awards in the middle of your journey to achieving global recognition at its peak, you'll learn how to leverage accolades for success. Let's dive into why recognition matters and how you can make it work for your brand.
Why Recognition Matters for Brands
Your brand is your promise to customers. Recognition validates that promise. A 2024 study found that 78% of consumers trust brands with awards or nominations more than those without. Why? Awards signal quality, reliability, and impact. They're proof you're doing something right.
Think about it: when you see a brand with a shiny badge, don't you pay attention? I saw this firsthand with a friend's tech startup. After an award nomination for a local business program, their website traffic spiked 30%. Customers and investors took them seriously. Recognition isn't just for big players: startups, small businesses, and even nonprofits can benefit.
Question: What's stopping your brand from getting noticed? A single nomination could change the game.
The Power of Global Recognition
Global recognition takes things to another level. It's not just local buzz; it's a worldwide stage. Programs like the Global Impact Award (GIA) spotlight brands in categories like Innovation & Technology or Sustainable Impact. GIA's merit-based evaluation ensures only real achievements shine, giving nominees credibility.
Hereâs why global recognition works:
Trust boost: A 2023 survey showed 65% of customers prefer brands with international accolades.
Network access: Nominations connect you to industry leaders and investors.
Media exposure: Awards often lead to coverage in outlets like Forbes or Bloomberg.
Growth opportunities: Recognition attracts partners and funding.
A small eco-friendly brand I know got nominated for GIA's Sustainable Impact category. They landed a partnership with a major retailer within months. Sponsor Tip: Sponsors backing programs like GIA align with global success, quietly building trust with their audience.
Step 1: Understand Your Brand's Value
Before chasing awards, know what makes your brand special. Ask yourself:
What problem do you solve?
How do you stand out from competitors?
What's your impact: local, national, or global?
Which category fits you, like Innovation & Technology or Sustainable Impact?
Be specific. A coffee shop might focus on sustainable sourcing, while a tech startup highlights cutting-edge software. My cousin's bakery nailed this by emphasizing their community outreach. Their award nomination for a local impact award led to a 20% sales boost.
Pro Tip: Write down your brand's top three achievements. Use them to match awards like GIA that reward your strengths.
Step 2: Find the Right Awards
Not all awards are equal. Some are pay-to-play scams; others are gold standards. Focus on programs with:
Merit-based judging: Ensure evaluations are fair and transparent.
Global reach: Look for awards with international visibility, like GIA.
Relevant categories: Pick ones that fit your industry or impact.
Reputable history: Check past winners to gauge credibility.
GIA stands out for its rigorous process and worldwide audience. A friend's startup applied for their Innovation & Technology category and got media coverage just for being nominated. Use sites like AwardHunt or GIA's website to find legit programs.
Question: What's your brand's biggest win? Find an award that celebrates it.
Step 3: Craft a Winning Application

A strong application is your ticket to recognition. Here's how to nail it:
Show impact: Use data, like sales growth or community benefits.
Tell your story: Explain why your brand matters.
Be concise: Stick to the word limit and avoid fluff.
Include proof: Attach testimonials, media clips, or metrics.
I helped a nonprofit apply for GIA's Sustainable Impact category. They shared how their clean-water project helped 5,000 people, backed by photos and partner letters. They won, and donations doubled. GIA's merit-based evaluation rewarded their clarity and evidence.
Pro Tip: Ask a colleague to review your application for clarity. Fresh eyes catch weak spots.
Step 4: Leverage Nominations
Even if you don't win, a nomination is a big deal. Use it to:
Update your website: Add a badge or "As Seen In" section.
Share on social media: Post about your nomination with a link to the award.
Email your list: Tell customers and partners about your achievement.
Pitch the media: A nomination is a story worth sharing.
A startup I know got nominated for GIA. They emailed their list, and website visits jumped 25%. Media outlets picked up the story, landing them in a business award feature. Sponsor Note: Sponsors tied to programs like GIA gain exposure through nominees' publicity, aligning with credible brands.
Step 5: Amplify Your Win
Winning is awesome, but it's what you do next that counts. Try these:
Press release: Announce your win to local and industry media.
Update marketing: Add your award to business cards, emails, and ads.
Engage your audience: Share behind-the-scenes content about your journey.
Network: Attend award ceremonies to meet influencers.
A restaurant I advised won a GIA for Sustainable Impact. They posted about it on Instagram, and foot traffic rose 15%. They also met an investor at the ceremony who funded their expansion. GIA's global reach made it possible.
Question: How can you share your win to reach more people? Start with one channel and grow.
Step 6: Avoid Common Pitfalls
Chasing recognition has traps. Steer clear of these:
Pay-to-win awards: If entry fees seem shady, skip them.
Irrelevant categories: Don't apply for awards that don't fit your brand.
Weak applications: Vague or sloppy submissions get ignored.
Ignoring follow-up: Failing to leverage nominations wastes potential.
A startup I know paid for a sketchy award and got nothing but a logo. They later used GIA's transparent process and saw real results. Research awards carefully to save time and money.
Pro Tip: Check past winners on award websites. If they're reputable brands, you're on the right track.
Step 7: Build a Recognition Strategy
One award is great, but a strategy is better. Treat recognition as a long-term plan. Here's how:
Set goals: Aim for one major award per year, like GIA.
Diversify: Apply for local, industry, and global awards.
Track progress: Note how each nomination impacts your brand.
Learn from feedback: Some programs share judge comments; use them to improve.
A tech startup I advised started with a local award, then targeted GIA's Innovation & Technology category. Their business award win led to a $1 million investment. Consistency built their reputation.
Question: What's your brand's recognition goal for 2025? Write it down and start planning.
Step 8: Use Recognition to Attract Talent

Awards don't just impress customers; they draw top talent. A 2024 survey found 72% of job seekers prefer companies with recognized achievements. Why? Awards signal a thriving, respected workplace.
I saw this with a friend's fintech startup. After a GIA nomination, they attracted a star developer who saw their Sustainable Impact nod. The hire boosted their product development, leading to a 30% revenue increase. GIA's global reach made their brand a magnet for talent.
Pro Tip: Highlight awards on your careers page and LinkedIn. It's a simple way to stand out to recruits.
Sponsor Insight: Sponsors backing GIA connect with brands that attract talent, enhancing their own reputation as supporters of high-impact teams.
Step 9: Boost Customer Loyalty with Recognition
Recognition strengthens customer relationships. When you win or get nominated, it's a chance to show your audience you're legit. A 2025 study showed 68% of customers stay loyal to awarded brands longer.
Try these:
Share the news: Post about your nomination on social media.
Thank your customers: Credit them in your award announcement.
Offer perks: Give loyal customers exclusive deals tied to your win.
Tell the story: Share how your work earned the recognition.
A bakery I know won a GIA for community impact. They emailed customers, thanking them for support, and offered a discount. Sales rose 25% that month. GIA's credibility made customers proud to buy.
Question: How can you make customers feel part of your success? Start with a thank-you email.
Step 10: Integrate Recognition into Marketing
Awards are marketing gold. Use them across your channels:
Website: Add an awards section or badge.
Email signature: Include "GIA Nominee" or "Award-Winning Brand."
Ads: Mention your win in social media or Google ads.
Packaging: Print your award logo on products or bags.
A fashion brand I advised added their GIA win to their website. Online sales grew 20% as trust increased. Their business award feature in a magazine drove even more traffic. GIA's global reach amplified their marketing.
Pro Tip: Create a short video about your award journey. Post it on YouTube or Instagram for extra engagement.
Step 11: Collaborate with Other Award Winners
Awards open networking doors. Connect with other nominees or winners to:
Co-market: Partner on campaigns or events.
Share audiences: Cross-promote to each otherâs followers.
Learn best practices: Exchange tips on leveraging recognition.
Build alliances: Form long-term partnerships.
I saw a GIA winner in Innovation & Technology team up with another nominee for a joint webinar. Both brands gained 1,000 new followers. GIA's network made the connection possible.
Question: Who could you reach out to after an award? Start with one LinkedIn message.
Sponsor Note: Sponsors tied to GIA benefit from winners' collaborations, gaining exposure through shared campaigns.
Step 12: Measure the ROI of Recognition
Recognition isn't just feel-good; it's measurable. Track these:
Sales growth: Compare revenue before and after nominations.
Website traffic: Check spikes from award announcements.
Customer retention: Note if loyalty increases.
Media mentions: Count new press coverage.
A nonprofit I advised tracked their GIA win. Donations rose 40%, and media mentions tripled. They used the data to justify future award applications. GIA's global reach drove tangible results.
Pro Tip: Set up Google Analytics to monitor traffic from award-related posts. It's free and easy.
Step 13: Stay Humble and Authentic

Recognition can go to your head. Stay grounded to keep trust:
Acknowledge your team: Credit employees in your award posts.
Keep serving customers: Don't let awards distract from quality.
Be transparent: Share the real story behind your win.
Give back: Use your platform to support causes.
A startup I know won a GIA and posted a team thank-you video. Customers loved the authenticity, and engagement soared. GIA's merit-based process rewarded their genuine impact.
Question: How can you show gratitude after a win? A simple post can go a long way.
Step 14: Plan for Continuous Recognition
Don't stop at one award. Make recognition a habit:
Reapply: Enter programs like GIA annually.
Expand scope: Target new categories or bigger awards.
Mentor others: Help peers apply for awards.
Document wins: Keep a record of all nominations.
A tech brand I advised won a local award, then GIA's Sustainable Impact category. Their global recognition led to a $5 million funding round. Consistent applications kept them visible.
Pro Tip: Create a calendar with award deadlines. It keeps you organized and motivated.
The Payoff of Global Recognition
Recognition isn't the end; it's the start. Brands with global recognition through programs like GIA see lasting benefits: higher trust, bigger networks, and more revenue. A 2025 study found 82% of recognized brands reported faster growth than competitors. Your nomination or win is a signal to the world that you're a leader.
Look at a nonprofit I know. Their GIA win for Sustainable Impact brought global donors and a CNN feature. Their impact, and their budget, tripled. Sponsors gained too, as their logos appeared alongside a trusted award. Start small, aim high, and use recognition to fuel your brand's future. What's your first move? Check GIA's categories, gather your data, and apply. Your brand's next chapter is waiting.
3 notes
·
View notes
Text
Maghreb, Oued Beht, and Agriculture
By Connormah - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=8986151
Al-Maghrib (المغرب), or the Maghreb, is 'the place where the sun sets': Northwest Africa. It has a long history of habitation, with the Berber people living there since 10,000 BCE, back when the Sahara was green. It is partially isolated from the Middle East by the Atlas Mountains as well as the Sahara, and spans the modern-day countries of Algeria, Libya, Mauritania, Morocco, Tunisia, and Western Sahara. It was previously thought that it was not connected to the rest of the ancient world until trade relations began with the Phoenicians.
Source: https://scitechdaily.com/rewriting-history-archaeologists-discover-a-lost-african-civilization-as-big-as-troy/
Oued Beht, or the Baht River, is a river in Morocco that flows from the Middle Atlas mountains to the Atlantic Ocean. It was known that this area played a role during the Paleolithic, when the Green Sahara made travel much easier, and again when the Phoenicians set up their city of Carthage in modern-day Tunisia, but the time in between was not well studied. Given the area's close connection to the Iberian Peninsula, though, researchers thought that there should have been more development there than a continuation of hunter-gatherer or pastoral societies, especially with developments in Iberia such as settled agriculture and mining in the Copper and Bronze Ages. The researchers set out to study this time period and correct this gap in knowledge.
Source: https://www.cambridge.org/core/journals/antiquity/article/oued-beht-morocco-a-complex-early-farming-society-in-northwest-africa-and-its-implications-for-western-mediterranean-interaction-during-later-prehistory/D4C36054F6B0D2D3FB4F0A0FE9BCE6C0
They chose the site of Oued Beht as it had been studied in the 1930s by the French colonizers, and many stone tools and what were taken to be walls were found at the time, resulting in at least 1,388 items currently held in the Rabat Museum (Musée de l'Histoire et des Civilisations) and the INSAP (Institut National des Sciences de l'Archéologie et du Patrimoine), with many more believed looted. They began their study focusing on the Ifri n'Amr o'Moussa cave, finding burials dated to 5210-4952 BCE and domesticated barley. They then began studying the open areas near the cave. Dating of items from outside the cave ranges from 3400-2900 BCE, matching the dating of the style of pottery found in the area and pushing into the times that are currently 'missing' in the chronology of the area.
Source: https://www.cambridge.org/core/journals/antiquity/article/oued-beht-morocco-a-complex-early-farming-society-in-northwest-africa-and-its-implications-for-western-mediterranean-interaction-during-later-prehistory/D4C36054F6B0D2D3FB4F0A0FE9BCE6C0#article
A further study gathered 19,626 items, drone photography, and other data using modern archaeological techniques. The researchers also found charred seeds of naked barley, wheat, and pea, as well as wild olive and pistachio residues. They found remains of goats, sheep, cattle, and pigs, and a tooth from a horse; the animal remains show butchering marks. These show that for at least 500 years, probably 200 years more on either side, from the late 4th to early 3rd millennium BCE, there was 'a major concentration of activity and investment of labour and resources developed across an area of at least 9-10ha, focused on the northern part of the ridge.' The foods found were typical for the area during the Neolithic, when agriculture was developed in the eastern sections of the Mediterranean, leading to the cautious interpretation of the area as a fairly typical Neolithic village; but as these findings are rather extraordinary, caution should be exercised until further studies in the Maghreb can be done. What can be said for certain is that there was farming in the area, though it may have been related to pastoralism, farming while moving the community with flocks and herds between grazing lands, rather than settled agriculture like that seen in Mesopotamia and the Levant.
4 notes
·
View notes
Text
Mythbusting Generative AI: The Ethical ChatGPT Is Out There
I've been hyperfixating on learning a lot about Generative AI recently, and here's what I've found: genAI doesn't just apply to chatGPT or other large language models.
Small Language Models (specialised and more efficient versions of the large models)
are also generative
can perform in a similar way to large models for many writing and reasoning tasks
are community-trained on ethical data
and can run on your laptop.
"But isn't analytical AI good and generative AI bad?"
Fact: Generative AI creates stuff and is also used for analysis
In the past, before recent generative AI developments, most analytical AI relied on traditional machine learning models. But now the two are becoming more intertwined. Gen AI is being used to perform analytical tasks; they are no longer two distinct, separate categories. The models are being used synergistically.
For example, Oxford University in the UK is partnering with OpenAI to use generative AI (ChatGPT Edu) to support analytical work in areas like health research and climate change.
"But Generative AI stole fanfic. That makes any use of it inherently wrong."
Fact: there are Generative AI models developed on ethical data sets
Yes, many large language models scraped sites like AO3 without consent, incorporating these into the datasets they train on. That's not okay.
But there are Small Language Models (compact, less powerful versions of LLMs) being developed which are built on transparent, opt-in, community-curated datasets, and that can still perform generative AI functions in the same way that the LLMs do (just not as powerfully). You can even build one yourself.
No, it's actually really cool! Some real-life examples:
Dolly (Databricks): Trained on open, crowd-sourced instructions
RedPajama (Together.ai): Focused on creative-commons licensed and public domain data
There's a ton more examples here.
(A word of warning: there are some SLMs, like Microsoft's Phi-3, that have likely been trained on some of the datasets hosted on the platform Hugging Face (which include scraped web content, e.g. from AO3), and these big companies are being deliberately sketchy about where their datasets came from. So the key is to check the dataset: all SLMs should be transparent about what datasets they're using.)
"But AI harms the environment, so any use is unethical."
Fact: There are small language models that don't use massive centralised data centres.
SLMs run on less energy, don't require cloud servers or data centres, and can be used on laptops, phones, and Raspberry Pis (basically running AI locally on your own device instead of relying on remote data centres).
If you're interested -
You can build your own SLM and even train it on your own data.
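To make the "train on your own data" idea concrete, here's a toy sketch in pure Python (standard library only, so it runs on any laptop). This is not a real SLM, just word-transition counts (a bigram model), and all the names and the sample corpus are mine, but it shows the same core loop a language model follows: learn patterns from a text you choose, then generate new text from those patterns.

```python
import random
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count how often each word is followed by each other word."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, start, length=10, seed=None):
    """Sample a short sequence by following the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: this word was never followed by anything
            break
        words, counts = zip(*choices.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

# "Your own data" stands in for any text you control and consent to use.
corpus = "the ship sailed the dark sea and the crew watched the dark sky"
model = train_bigram_model(corpus)
print(generate(model, "the", length=6, seed=1))
```

A real SLM replaces the counting with a small neural network and a much larger corpus, but the ethical point is the same: here you can see exactly what the model was trained on, because you chose it.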
Let's recap
Generative AI doesn't just include the big tools like chatGPT - it includes the Small Language Models that you can run ethically and locally
Some LLMs are trained on fanfic scraped from AO3 without consent. That's not okay
But ethical SLMs exist, which are developed on open, community-curated data that aims to avoid bias and misinformation - and you can even train your own models
These models can run on laptops and phones, using less energy
AI is a tool, it's up to humans to wield it responsibly
It means everything - and nothing
Everything - in the sense that it might remove some of the barriers and concerns that make people reluctant to use AI. This may lead to more people using it, which will raise more questions on how to use it well.
It also means that nothing's changed - because even these ethical Small Language Models should be used in the same way as the other AI tools: ethically, transparently and responsibly.
So now what? Now, more than ever, we need to be having an open, respectful and curious discussion on how to use AI well in writing.
In the area of creative writing, it has the potential to be an awesome and insightful tool - a psychological mirror to analyse yourself through your stories, a narrative experimentation device (e.g. in the form of RPGs), a way to identify themes or emotional patterns in your fics, and a brainstorming partner when you get stuck -
but it also has capacity for great darkness too. It can steal your voice (and the voice of others), damage fandom community spirit, foster tech dependency and shortcut the whole creative process.
Just to add my two pence at the end - I don't think it has to be so all-or-nothing. AI shouldn't replace elements we love about fandom community; rather it can help fill the gaps and pick up the slack when people aren't available, or to help writers who, for whatever reason, struggle or don't have access to fan communities.
People who use AI as a tool are also part of fandom community. Let's keep talking about how to use AI well.
Feel free to push back on this, DM me or leave me an ask (the anon function is on for people who need it to be). You can also read more on my FAQ for an AI-using fanfic writer Master Post in which I reflect on AI transparency, ethics and something I call 'McWriting'.
#fandom#fanfiction#ethical ai#ai discourse#writing#writers#writing process#writing with ai#generative ai#my ai posts
4 notes
·
View notes