#video robot software
Text
I decided to take a shot at a game jam for the first time, it being Pirate Software's Game Jam 16, along with a few friends like @redlaserpointers, and I'm so happy with the result after 2 weeks of making the game!
It's about a robot who ends up in a situation where his pilot cannot help, leaving only himself to stop another, rogue robot from attempting to hurt humanity. Robots are awesome and if you like them, you should check it out!
#game jam#game dev#pirate software#indigo satellite#zero and phobos my favorite characters#oc#video game#gaming#robots#robot oc
5 notes
Text
Browsing video games at a Software Etc. store in 2002
2 notes
Text
Warehouse robot collapses after working for 20 hours straight.
#Warehouse robot collapses after working for 20 hours straight.#robotics#robots#robot#slavery#exhaustion#a.i.#artificial intelligence#ai#software#hardware#firmware#infotech#information technology#ausgov#politas#auspol#tasgov#taspol#australia#fuck neoliberals#neoliberal capitalism#anthony albanese#albanese government#videos#video#class war#eat the rich#eat the fucking rich#capitalism
5 notes
Text

#illustration#drawing#concept art#art#sketch#artists on tumblr#character design#tumblr artwork#artwork#ink drawing#armored core 6#armored core#mecha#machine#robot#robot art#from software#video game art#video games
9 notes
Text
The HP Robots Otto is a versatile, modular robot designed specifically for educational purposes. It offers students and teachers an exciting opportunity to immerse themselves in the world of robotics, 3D printing, electronics and programming. The robot was developed by HP as part of their robotics initiative and is particularly well suited to science, technology, engineering and mathematics (STEM) classes.
https://www.youtube.com/watch?v=ok1GaEeTGPA
Key features of Otto:
Modular design: Otto is a modular robot that students can build, program and customize through extensions, which fosters both technical understanding and creativity. Components such as motors, sensors and LEDs can be added or swapped, broadening what students can learn from the kit.
Programmability: The robot can be programmed in several languages: block-based programming for beginners, and Python and C++ for advanced programmers. This range lets students improve their coding skills continuously as the tasks grow more complex.
Sensors and functions: Equipped with ultrasonic sensors for obstacle detection, line-tracking sensors and RGB LEDs, Otto offers numerous interactive possibilities. Students can program tasks such as navigating courses or following lines, with the sensors letting the robot detect its environment and react accordingly.
3D printing and customizability: Students can design Otto's outer parts themselves and produce them on a 3D printer, personalizing the robot further. This creative freedom promotes not only technical understanding but also artistic skills: students can design their own parts and attach sensors wherever they like.
Educational approach: Otto is ideal for use in schools and is aimed at students from the age of 8. Younger students can work under supervision, while students from the age of 14 can build on and extend the robot independently. The kit contains all the components needed for a working robot, including motors, sensors, and a rechargeable battery.
Programming environments: Otto is programmed via a web-based platform that runs on all operating systems and offers several modes:
Block-based programming: Similar to Scratch Jr. and ideal for beginners, this visual mode eases the way into programming and helps students grasp basic concepts such as loops and conditions.
Python: A Python editor is available for advanced users. Python works well for teaching because it is easy to read and write, and students can use it to develop more complex algorithms and expand their programming skills.
C++: Compatible with the Arduino IDE, for users with deeper programming knowledge. C++ offers direct hardware access and a high degree of flexibility for advanced projects of their own.
https://www.youtube.com/watch?v=v5Otdd4fogs&t=3s
Expansion kits: In addition to the Starter Kit there are several expansion kits; all of them require the Starter Kit, since they build on top of it.
Emote Expansion Kit: Includes an LED matrix display, an OLED display, and an MP3 player, letting the robot respond visually and acoustically. It suits creative projects in which Otto acts as an interactive companion: showing emotions, mirroring human interactions, and developing different personalities.
Sense Expansion Kit: Adds sensors for temperature, humidity, light and noise as well as an inclination sensor, enabling a wide range of interactions with the environment. It is ideal for projects that focus on environmental sensing and data analysis.
Interact Expansion Kit: Expands Otto's tactile interaction through push buttons, rotary knobs and accelerometers, enabling precise inputs and reactions as well as acceleration measurement. It is great for playful activities and interactive games.
Invent Expansion Kit: Designed to encourage creativity, it lets users adapt Otto's functionality and design via 3D printing, additional modules and compatible clamping blocks. Users can design and print new accessories to make the robot unique: equip Otto with legs and teach him to walk, or fit him with tracks for off-road use.
https://www.youtube.com/watch?v=k7sb23sKPBM&t=1278s
Use in the classroom: Otto comes with extensive resources developed by teachers. These materials help teachers design effective STEM lessons without any prior knowledge, and the robot can be used both in the classroom and at home. The didactic materials include:
Curricula: Structured lesson plans that help teachers plan and execute lessons.
Project ideas and worksheets: A variety of projects that encourage students to think creatively and expand their skills.
Tutorials and videos: Additional learning materials that help students understand complex concepts.
Conclusion: The HP Robots Otto is an excellent tool for fostering technical understanding and creativity in students. Thanks to its modular design and diverse programming options, it offers a hands-on learning experience in robotics and electronics. Ideal for schools, it gives teachers a comprehensive platform for guiding students on an exciting journey into the world of technology, and its 3D-printed parts and expansion packs make it possible to build a truly personal learning robot.
https://www.youtube.com/watch?v=oumP4L29aDI
https://www.youtube.com/watch?v=_p_OS6dmH7o&t=1236s
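To give a flavor of the kind of program a student might write, here is a stand-alone Python sketch of the obstacle-avoidance logic Otto's ultrasonic sensor enables. This only simulates the control flow; the real robot is programmed through HP's web platform or the Arduino IDE, its actual API will differ, and the threshold value here is an invented example.

```python
# Simulated obstacle-avoidance logic for an ultrasonic distance sensor.
# The real Otto API is not reproduced here; this mirrors only the
# decision loop a student would implement on the robot.

THRESHOLD_CM = 15.0  # invented example value: turn away below this distance

def decide(distance_cm: float) -> str:
    """Return the next action for one ultrasonic distance reading."""
    return "turn" if distance_cm < THRESHOLD_CM else "forward"

def run(readings):
    """Map a stream of sensor readings to a list of actions."""
    return [decide(d) for d in readings]

if __name__ == "__main__":
    # Simulated readings: a clear path, then an obstacle appears and passes.
    print(run([40.0, 30.0, 12.0, 8.0, 25.0]))
    # → ['forward', 'forward', 'turn', 'turn', 'forward']
```

On the actual robot the same structure applies, with the simulated readings replaced by live sensor polls and the returned actions replaced by motor commands.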
#3DPrinting#Development#EducationStudies#English#General#Hardware#MyCollection#Pictures#Programming#RobotsBlogContent#RobotsBlogTimelapseBuilds#RobotsBlogVideoCreations#Software#STEM#Toys#Video#3dprinted#Classroom#DIY#Emote#HP#HPRobots#Interact#Invent#Otto#OttoDIY#Robot#Robotics#school
0 notes
Text
Anthropic's stated "AI timelines" seem wildly aggressive to me.
As far as I can tell, they are now saying that by 2028 – and possibly even by 2027, or late 2026 – something they call "powerful AI" will exist.
And by "powerful AI," they mean... this (source, emphasis mine):
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc. In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world. It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary. It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use. The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with. Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
In the post I'm quoting, Amodei is coy about the timeline for this stuff, saying only that
I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside [...]
However, other official communications from Anthropic have been more specific. Most notable is their recent OSTP submission, which states (emphasis in original):
Based on current research trajectories, we anticipate that powerful AI systems could emerge as soon as late 2026 or 2027 [...] Powerful AI technology will be built during this Administration. [i.e. the current Trump administration -nost]
See also here, where Jack Clark says (my emphasis):
People underrate how significant and fast-moving AI progress is. We have this notion that in late 2026, or early 2027, powerful AI systems will be built that will have intellectual capabilities that match or exceed Nobel Prize winners. They’ll have the ability to navigate all of the interfaces… [Clark goes on, mentioning some of the other tenets of "powerful AI" as in other Anthropic communications -nost]
----
To be clear, extremely short timelines like these are not unique to Anthropic.
Miles Brundage (ex-OpenAI) says something similar, albeit less specific, in this post. And Daniel Kokotajlo (also ex-OpenAI) has held views like this for a long time now.
Even Sam Altman himself has said similar things (though in much, much vaguer terms, both on the content of the deliverable and the timeline).
Still, Anthropic's statements are unique in being
official positions of the company
extremely specific and ambitious about the details
extremely aggressive about the timing, even by the standards of "short timelines" AI prognosticators in the same social cluster
Re: ambition, note that the definition of "powerful AI" seems almost the opposite of what you'd come up with if you were trying to make a confident forecast of something.
Often people will talk about "AI capable of transforming the world economy" or something like that, leaving room for the AI in question to do that in one of several ways, or to do so while still failing at some important things.
But instead, Anthropic's definition is a big conjunctive list of "it'll be able to do this and that and this other thing and...", and each individual capability is defined in the most aggressive possible way, too! Not just "good enough at science to be extremely useful for scientists," but "smarter than a Nobel Prize winner," across "most relevant fields" (whatever that means). And not just good at science but also able to "write extremely good novels" (note that we have a long way to go on that front, and I get the feeling that people at AI labs don't appreciate the extent of the gap [cf]). Not only can it use a computer interface, it can use every computer interface; not only can it use them competently, but it can do so better than the best humans in the world. And all of that is in the first two paragraphs – there's four more paragraphs I haven't even touched in this little summary!
Re: timing, they have even shorter timelines than Kokotajlo these days, which is remarkable since he's historically been considered "the guy with the really short timelines." (See here where Kokotajlo states a median prediction of 2028 for "AGI," by which he means something less impressive than "powerful AI"; he expects something close to the "powerful AI" vision ["ASI"] ~1 year or so after "AGI" arrives.)
----
I, uh, really do not think this is going to happen in "late 2026 or 2027."
Or even by the end of this presidential administration, for that matter.
I can imagine it happening within my lifetime – which is wild and scary and marvelous. But in 1.5 years?!
The confusing thing is, I am very familiar with the kinds of arguments that "short timelines" people make, and I still find Anthropic's timelines hard to fathom.
Above, I mentioned that Anthropic has shorter timelines than Daniel Kokotajlo, who "merely" expects the same sort of thing in 2029 or so. This probably seems like hairsplitting – from the perspective of your average person not in these circles, both of these predictions look basically identical, "absurdly good godlike sci-fi AI coming absurdly soon." What difference does an extra year or two make, right?
But it's salient to me, because I've been reading Kokotajlo for years now, and I feel like I basically understand his case. And people, including me, tend to push back on him in the "no, that's too soon" direction. I've read many, many blog posts and discussions about this sort of thing over the years; I feel like I should have a handle on what the short-timelines case is.
But even if you accept all the arguments evinced over the years by Daniel "Short Timelines" Kokotajlo, even if you grant all the premises he assumes and some people don't – that still doesn't get you all the way to the Anthropic timeline!
To give a very brief, very inadequate summary, the standard "short timelines argument" right now is like:
Over the next few years we will see a "growth spurt" in the amount of computing power ("compute") used for the largest LLM training runs. This factor of production has been largely stagnant since GPT-4 in 2023, for various reasons, but new clusters are getting built and the metaphorical car will get moving again soon. (See here)
By convention, each "GPT number" uses ~100x as much training compute as the last one. GPT-3 used ~100x as much as GPT-2, and GPT-4 used ~100x as much as GPT-3 (i.e. ~10,000x as much as GPT-2).
We are just now starting to see "~10x GPT-4 compute" models (like Grok 3 and GPT-4.5). In the next few years we will get to "~100x GPT-4 compute" models, and by 2030 we will reach ~10,000x GPT-4 compute.
If you think intuitively about "how much GPT-4 improved upon GPT-3 (100x less) or GPT-2 (10,000x less)," you can maybe convince yourself that these near-future models will be super-smart in ways that are difficult to precisely state/imagine from our vantage point. (GPT-4 was way smarter than GPT-2; it's hard to know what "projecting that forward" would mean, concretely, but it sure does sound like something pretty special)
Meanwhile, all kinds of (arguably) complementary research is going on, like allowing models to "think" for longer amounts of time, giving them GUI interfaces, etc.
All that being said, there's still a big intuitive gap between "ChatGPT, but it's much smarter under the hood" and anything like "powerful AI." But...
...the LLMs are getting good enough that they can write pretty good code, and they're getting better over time. And depending on how you interpret the evidence, you may be able to convince yourself that they're also swiftly getting better at other tasks involved in AI development, like "research engineering." So maybe you don't need to get all the way yourself, you just need to build an AI that's a good enough AI developer that it improves your AIs faster than you can, and then those AIs are even better developers, etc. etc. (People in this social cluster are really keen on the importance of exponential growth, which is generally a good trait to have but IMO it shades into "we need to kick off exponential growth and it'll somehow do the rest because it's all-powerful" in this case.)
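The compute multipliers in the argument above are simple enough to sketch; the figures here are the ones quoted in the argument (100x per "GPT number," ~10x GPT-4 for current models, ~10,000x by 2030), not my own estimates.

```python
# Back-of-envelope arithmetic for the scaling story: by convention,
# each "GPT number" uses ~100x the training compute of the last.
GEN_MULTIPLIER = 100  # ~100x per generation (GPT-2 -> GPT-3 -> GPT-4)

def compute_vs_gpt4(generations_past_gpt4: float) -> float:
    """Training compute relative to GPT-4, measured in GPT generations."""
    return GEN_MULTIPLIER ** generations_past_gpt4

# Models like Grok 3 / GPT-4.5 are described as ~10x GPT-4 compute,
# i.e. half a generation past GPT-4:
print(compute_vs_gpt4(0.5))  # → 10.0
# The "by 2030" figure is two full generations out:
print(compute_vs_gpt4(2))    # → 10000
```

The same arithmetic run backwards gives the intuition pump in the argument: GPT-4 sits two generations (~10,000x) past GPT-2.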
And like, I have various disagreements with this picture.
For one thing, the "10x" models we're getting now don't seem especially impressive – there has been a lot of debate over this of course, but reportedly these models were disappointing to their own developers, who expected scaling to work wonders (using the kind of intuitive reasoning mentioned above) and got less than they hoped for.
And (in light of that) I think it's double-counting to talk about the wonders of scaling and then talk about reasoning, computer GUI use, etc. as complementary accelerating factors – those things are just table stakes at this point, the models are already maxing out the tasks you had defined previously, you've gotta give them something new to do or else they'll just sit there wasting GPUs when a smaller model would have sufficed.
And I think we're already at a point where nuances of UX and "character writing" and so forth are more of a limiting factor than intelligence. It's not a lack of "intelligence" that gives us superficially dazzling but vapid "eyeball kick" prose, or voice assistants that are deeply uncomfortable to actually talk to, or (I claim) "AI agents" that get stuck in loops and confuse themselves, or any of that.
We are still stuck in the "Helpful, Harmless, Honest Assistant" chatbot paradigm – no one has seriously broken with it since Anthropic introduced it in a paper in 2021 – and now that paradigm is showing its limits. ("Reasoning" was strapped onto this paradigm in a simple and fairly awkward way; the new "reasoning" models are still chatbots like this, and no one is actually doing anything else.) And instead of "okay, let's invent something better," the plan seems to be "let's just scale up these assistant chatbots and try to get them to self-improve, and they'll figure it out." I won't try to explain why in this post (if you're interested, I kind of tried to here), but I really doubt these helpful/harmless guys can bootstrap their way into winning all the Nobel Prizes.
----
All that stuff I just said – that's where I differ from the usual "short timelines" people, from Kokotajlo and co.
But OK, let's say that for the sake of argument, I'm wrong and they're right. It still seems like a pretty tough squeeze to get to "powerful AI" on time, doesn't it?
In the OSTP submission, Anthropic presents their latest release as evidence of their authority to speak on the topic:
In February 2025, we released Claude 3.7 Sonnet, which is by many performance benchmarks the most powerful and capable commercially-available AI system in the world.
I've used Claude 3.7 Sonnet quite a bit. It is indeed really good, by the standards of these sorts of things!
But it is, of course, very very far from "powerful AI." So like, what is the fine-grained timeline even supposed to look like? When do the many, many milestones get crossed? If they're going to have "powerful AI" in early 2027, where exactly are they in mid-2026? At end-of-year 2025?
If I assume that absolutely everything goes splendidly well with no unexpected obstacles – and remember, we are talking about automating all human intellectual labor and all tasks done by humans on computers, but sure, whatever – then maybe we get the really impressive next-gen models later this year or early next year... and maybe they're suddenly good at all the stuff that has been tough for LLMs thus far (the "10x" models already released show little sign of this but sure, whatever)... and then we finally get into the self-improvement loop in earnest, and then... what?
They figure out how to squeeze even more performance out of the GPUs? They think of really smart experiments to run on the cluster? Where are they going to get all the missing information about how to do every single job on earth, the tacit knowledge, the stuff that's not in any web scrape anywhere but locked up in human minds and inaccessible private data stores? Is an experiment designed by a helpful-chatbot AI going to finally crack the problem of giving chatbots the taste to "write extremely good novels," when that taste is precisely what "helpful-chatbot AIs" lack?
I guess the boring answer is that this is all just hype – tech CEO acts like tech CEO, news at 11. (But I don't feel like that can be the full story here, somehow.)
And the scary answer is that there's some secret Anthropic private info that makes this all more plausible. (But I doubt that too – cf. Brundage's claim that there are no more secrets like that now, the short-timelines cards are all on the table.)
It just does not make sense to me. And (as you can probably tell) I find it very frustrating that these guys are out there talking about how human thought will basically be obsolete in a few years, and pontificating about how to find new sources of meaning in life and stuff, without actually laying out an argument that their vision – which would be the common concern of all of us, if it were indeed on the horizon – is actually likely to occur on the timescale they propose.
It would be less frustrating if I were being asked to simply take it on faith, or explicitly on the basis of corporate secret knowledge. But no, the claim is not that, it's something more like "now, now, I know this must sound far-fetched to the layman, but if you really understand 'scaling laws' and 'exponential growth,' and you appreciate the way that pretraining will be scaled up soon, then it's simply obvious that –"
No! Fuck that! I've read the papers you're talking about, I know all the arguments you're handwaving-in-the-direction-of! It still doesn't add up!
280 notes
Text
Three AI insights for hard-charging, future-oriented smartypantses

MERE HOURS REMAIN for the Kickstarter for the audiobook for The Bezzle, the sequel to Red Team Blues, narrated by @wilwheaton! You can pre-order the audiobook and ebook, DRM free, as well as the hardcover, signed or unsigned. There’s also bundles with Red Team Blues in ebook, audio or paperback.
Living in the age of AI hype makes demands on all of us to come up with smartypants prognostications about how AI is about to change everything forever, and wow, it's pretty amazing, huh?
AI pitchmen don't make it easy. They like to pile on the cognitive dissonance and demand that we all somehow resolve it. This is a thing cult leaders do, too – tell blatant and obvious lies to their followers. When a cult follower repeats the lie to others, they are demonstrating their loyalty, both to the leader and to themselves.
Over and over, the claims of AI pitchmen turn out to be blatant lies. This has been the case since at least the age of the Mechanical Turk, the 18th-century chess-playing automaton that was actually just a chess player crammed into the base of an elaborate puppet, exhibited as an autonomous, intelligent robot.
The most prominent Mechanical Turk huckster is Elon Musk, who habitually, blatantly and repeatedly lies about AI. He's been promising "full self driving" Teslas in "one to two years" for more than a decade. Periodically, he'll "demonstrate" a car that's in full self-driving mode – which then turns out to be a canned, recorded demo:
https://www.reuters.com/technology/tesla-video-promoting-self-driving-was-staged-engineer-testifies-2023-01-17/
Musk even trotted out an autonomous, humanoid robot on-stage at an investor presentation, failing to mention that this mechanical marvel was just a person in a robot suit:
https://www.siliconrepublic.com/machines/elon-musk-tesla-robot-optimus-ai
Now, Musk has announced that his junk-science neural interface company, Neuralink, has made the leap to implanting neural interface chips in a human brain. As Joan Westenberg writes, the press have repeated this claim as presumptively true, despite its wild implausibility:
https://joanwestenberg.com/blog/elon-musk-lies
Neuralink, after all, is a company notorious for mutilating primates in pursuit of showy, meaningless demos:
https://www.wired.com/story/elon-musk-pcrm-neuralink-monkey-deaths/
I'm perfectly willing to believe that Musk would risk someone else's life to help him with this nonsense, because he doesn't see other people as real and deserving of compassion or empathy. But he's also profoundly lazy and is accustomed to a world that unquestioningly swallows his most outlandish pronouncements, so Occam's Razor dictates that the most likely explanation here is that he just made it up.
The odds that there's a human being beta-testing Musk's neural interface with the only brain they will ever have aren't zero. But I give it the same odds as the Raelians' claim to have cloned a human being:
https://edition.cnn.com/2003/ALLPOLITICS/01/03/cf.opinion.rael/
The human-in-a-robot-suit gambit is everywhere in AI hype. Cruise, GM's disgraced "robot taxi" company, had 1.5 remote operators for every one of the cars on the road. They used AI to replace a single, low-waged driver with 1.5 high-waged, specialized technicians. Truly, it was a marvel.
Globalization is key to maintaining the guy-in-a-robot-suit phenomenon. Globalization gives AI pitchmen access to millions of low-waged workers who can pretend to be software programs, allowing us to pretend to have transcended capitalism's exploitation trap. This is also a very old pattern – just a couple decades after the Mechanical Turk toured Europe, Thomas Jefferson returned from the continent with the dumbwaiter. Jefferson refined and installed these marvels, announcing to his dinner guests that they allowed him to replace his "servants" (that is, his slaves). Dumbwaiters don't replace slaves, of course – they just keep them out of sight:
https://www.stuartmcmillen.com/blog/behind-the-dumbwaiter/
So much AI turns out to be low-waged people in a call center in the Global South pretending to be robots that Indian techies have a joke about it: "AI stands for 'absent Indian'":
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
A reader wrote to me this week. They're a multi-decade veteran of Amazon who had a fascinating tale about the launch of Amazon Go, the "fully automated" Amazon retail outlets that let you wander around, pick up goods and walk out again, while AI-enabled cameras totted up the goods in your basket and charged your card for them.
According to this reader, the AI cameras didn't work any better than Tesla's full-self driving mode, and had to be backstopped by a minimum of three camera operators in an Indian call center, "so that there could be a quorum system for deciding on a customer's activity – three autopilots good, two autopilots bad."
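The "quorum system" the reader describes is plain majority voting among (at least) three human operators. A minimal sketch, with the labels invented for illustration:

```python
# Majority vote over human operators' labels of a customer's activity.
# "Quorum" here means: a label wins only if more than half the operators
# agree on it; otherwise no decision is reached.
from collections import Counter

def quorum_decision(votes):
    """Return the majority label among operator votes, or None if no majority."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count > len(votes) / 2 else None

print(quorum_decision(["took item", "took item", "put back"]))  # → took item
print(quorum_decision(["took item", "put back"]))               # → None
```

With three voters a 2-1 split still decides ("three autopilots good"), while two voters who disagree cannot ("two autopilots bad"), which is why a minimum of three operators per decision is needed.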
Amazon got a ton of press from the launch of the Amazon Go stores. A lot of it was very favorable, of course: Mister Market is insatiably horny for firing human beings and replacing them with robots, so any announcement that you've got a human-replacing robot is a surefire way to make Line Go Up. But there was also plenty of critical press about this – pieces that took Amazon to task for replacing human beings with robots.
What was missing from the criticism? Articles that said that Amazon was probably lying about its robots, that it had replaced low-waged clerks in the USA with even-lower-waged camera-jockeys in India.
Which is a shame, because that criticism would have hit Amazon where it hurts, right there in the ole Line Go Up. Amazon's stock price boost off the back of the Amazon Go announcements represented the market's bet that Amazon would evert out of cyberspace and fill all of our physical retail corridors with monopolistic robot stores, moated with IP that prevented other retailers from similarly slashing their wage bills. That unbridgeable moat would guarantee Amazon generations of monopoly rents, which it would share with any shareholders who piled into the stock at that moment.
See the difference? Criticize Amazon for its devastatingly effective automation and you help Amazon sell stock to suckers, which makes Amazon executives richer. Criticize Amazon for lying about its automation, and you clobber the personal net worth of the executives who spun up this lie, because their portfolios are full of Amazon stock:
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
Amazon Go didn't go. The hundreds of Amazon Go stores we were promised never materialized. There's an embarrassing rump of 25 of these things still around, which will doubtless be quietly shuttered in the years to come. But Amazon Go wasn't a failure. It allowed its architects to pocket massive capital gains on the way to building generational wealth and establishing a new permanent aristocracy of habitual bullshitters dressed up as high-tech wizards.
"Wizard" is the right word for it. The high-tech sector pretends to be science fiction, but it's usually fantasy. For a generation, America's largest tech firms peddled the dream of imminently establishing colonies on distant worlds or even traveling to other solar systems, something that is still so far in our future that it might well never come to pass:
https://pluralistic.net/2024/01/09/astrobezzle/#send-robots-instead
During the Space Age, we got the same kind of performative bullshit. On The Well David Gans mentioned hearing a promo on SiriusXM for a radio show with "the first AI co-host." To this, Craig L Maudlin replied, "Reminds me of fins on automobiles."
Yup, that's exactly it. An AI radio co-host is to artificial intelligence as a Cadillac Eldorado Biarritz tail-fin is to interstellar rocketry.

Back the Kickstarter for the audiobook of The Bezzle here!
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/01/31/neural-interface-beta-tester/#tailfins
#pluralistic#elon musk#neuralink#potemkin ai#neural interface beta-tester#full self driving#mechanical turks#ai#amazon#amazon go#clm#joan westenberg
1K notes
Note
Okay so to add to the superfam x neglected! reader! What if the reader is an absolute nerd with engineering (mechanical, computer, electronics, software, and robotics) and as they slowly grew comfortable with Lex Luthor, they just found themselves yapping away about the functions of the Omnitrix, interesting projects they're working on, and what they want to do yk. Like this girl went from quiet, demure, shy who is not used to attention to someone who could talk Lex's ears off for HOURS, and he isn't even mad??? All her ideas are pretty interesting and it's also nice to see her just be in her element. The superfam being regretful because they didn't know Reader could talk this much or be so bright about something. Lois feeling immense guilt because she got so used to the easy way of not having to worry about injuries because Jon, Kon, and Clark don't that she forgot her daughter didn't inherit any of the indestructableness
Also!!! As Reader slowly got used to actually have an adult pay attention to her and encourage her with her interests, plus the praise she receives with her hero persona, she just became more confident. Idk it's like 1 am and the drabble was too good
-
I want to start with the Ben10 reader x Invincible- I don't really feel the vibe for it- like Debbie has bitten into Nolan for not being happy for powerless kiddie Mark, and she'd do it again, and Nolan honestly just gave "angry because I'm getting attacked" vibes. But it'd be funny to see how shocked Battle Beast would be to see a youngling of his race try and battle him :)))
Also- Dad!Lex honestly just gives PTA Mom vibes to me-
(This and the 3 other Tony Stark!Reader stories I read are slowly making me want to do a TS!Neglected!Reader x batfam, Bruce would lose it at his daughter being so much like Brucie and so little like Bruce)
I'M HAPPY Y'ALL LIKE MY RAMBLING!! Most of the time I feel like it has no rhythm or reason :))
Lex: My kid is a mastermind in robotics and alien tech, on her way to have a greater empire than mine- What can your monkey-brained son do besides chase a ball, Janet?!
You: I don't even go here... I don't even know this woman.
Like, once this man gets attached, he goes crazy. He has an important business meeting at the same time you have a school event? At best, he's video chatting from your school, screaming mid-sentence that you did great, or at worst, not even present in the meeting.
Because let's be honest- rich man who knows how to act and can provide proof of neglect vs an above middle class family who didn't even know where the kid was? The rich man wins.
And sure, you may hate it at first, think it's a ploy to get back at Superman, because why would an adult actually care for you? But instead of lowering the anti-supers measures he has in place, he triples them. He asks about school shit, nags you about homework, if you ate, "You shouldn't sit for so long in front of a screen."
It drove you crazy. And you started acting out- missing homework, having sleepless nights, arguing with him over the smallest shit, until he called you for a serious talk. You were ready for him to tell you to pack your shit and go- but he just asked if everything was fine at school. "Because this isn't you."
Your lip may have trembled, but you refuse to acknowledge that you cried, that tears were even a concept. After hugging, definitely not crying in his arms, you got better- even if he suspended you from hero activity for a year "because you should have just come to talk to him" but whatever-
And boy does all of this fuck with the family. Not every place is anti-super proof, so they hear you talking his ear off at the restaurant he takes you to for the weekly family outing, or to celebrate whatever you made that day. And it builds the anger and guilt.
Lois started stalking Lex, not what she'd call it, but she was. Because she knew you wouldn't be far away. And it was amazing to see you talk about whatever machine-thingy-robot- whatever it was. It hurt to realize what she's been missing, and it hurt even more to see you smile so brightly and proudly at the man who, for years, has tried to ruin the family.
And when Jon sees you out and about and tries to follow after you, wanting his sis back, he loses track of you. Lex just finds a note stating that the cloaking tech works, but after 30 seconds, it fried. He doesn't ask, he already knows. But if you told him, he would obediently listen.
Kon is straight up thinking Lex brainwashed you. He refuses to believe you're doing that out of your own free will and is actively plotting with Tim and Young JL.
It was eerie how much you resembled Kon in his early days as "the real, new Superman" but the difference was in the way you actually were strategic in your flirting. Just enough to charm, but not enough to insinuate a possibility of more. Not that Lex would allow anyone to sniff around you, he has a strict list of requirements for the ideal one, and he's sure no one will hit all of them.
Honestly- he may as well start creating an android, the perfect one, to be the right lover. You shut that idea down quick. "Feels wrong to create an android and take away its free will. There are so many movies and games about why that's a bad idea."
All he hears is to create an android with another main mission, and just let the thing fall in love with you on its own. He has mad confidence in your rizz, mainly because he thinks you are the prize, and anyone not seeing that is crazy or blind. (*cough* the superfam *cough*) You could also do no wrong in his eyes, could kill everyone, and he'd still be like "lil baby, innocent, sweet thing", but that's another discussion.
This man will need to take calming pills if you show interest in JL members, aliens like Rook Blonko and Ester, or the witch Charmcaster.
You and Lex:
136 notes
Text
[embedded YouTube video]
Meet the MOLA AUV, a multimodality, observing, low-cost, agile autonomous underwater vehicle that features advanced sensors for surveying marine ecosystems. 🤖🪸
At the core of the MOLA AUV is a commercially available Boxfish submersible, built to the CoMPAS Lab's specifications and enhanced with custom instruments and sensors developed by MBARI engineers. The MOLA AUV is equipped with a 4K camera to record high-resolution video of marine life and habitats. Sonar systems use acoustics to ensure the vehicle can consistently “see” 30 meters (100 feet) ahead and work in tandem with stereo cameras that take detailed imagery of the ocean floor.

Leveraging methods developed by the CoMPAS Lab, the vehicle’s six degrees of freedom enable it to move and rotate in any direction efficiently. This agility and portability set the MOLA AUV apart from other underwater vehicles and allow it to leverage software algorithms developed at MBARI to create three-dimensional photo reconstructions of seafloor environments.
In its first field test in the Maldives, the MOLA AUV successfully mapped coral reefs and collected crucial ocean data. With the MOLA AUV’s open-source technology, MBARI hopes to make ocean science more accessible than ever. Watch now to see MOLA in action.
Learn more about this remarkable robot on our website.
86 notes
Note
I couldn’t resist this thing we’re doing with Homestuck characters with Youtube channels so here we go.
Beta trolls first/
Aradia: 100% she's a spelunker whose video content is exploring abandoned urban areas and cave systems. She even has dusting tools and a pickaxe on hand to look for exposed fossils. Her uploads coincide within a day of a person disappearing or the report of a murder.
Tavros: he’s Alternia’s biggest Pokétuber, or I guess in this sense, a Fidutuber. He likes to discuss the meta for competitive and lore about the fiduspawn series. He also dabbles in fantasy RPGs, especially if there is a girl with magical powers or a protagonist whisked away to a fantastical realm, tying back to his Pupa Pan obsession. What’s cool about Tavros’s playthroughs for his RPG games is they’re narrated and voice acted by his friends! He has Nepeta and Aradia do some of the female cast voice acting, as well as Gamzee and Karkat for male voice acting!
Sollux: twitch streaming speedrunner, very popular in the speedrunning community for his TAS tech and glitch hunting prowess. His uploading schedule is very infrequent due to his struggle with sensing the imminently deceased.
Karkat: 100% a movie reviewer in the vein of the Nostalgia Critic. His review gimmick is that he tries to violently destroy movies he considers “COMPLETE SHIT FESTIVITIES” by using threshecutioner-style combat on the DVD boxes, or, if the movie was digitally downloaded, corrupting it with one of his broken, shitty viruses.
Nepeta: survivalist vlogger who gives tips on living in a cave, animal hunting, preparing meals from the meats of different wild animals, and how to keep contact with the civilized world. Notably Nepeta has collaborated with Aradia to guide her through particularly hard to navigate caves.
Kanaya: fashion and aesthetics channel. She is a lifestyle blogger dedicated to showing you how everything can be shaped, colored, and placed to fit your personality. She’s got playlists for landscaping and gardening, fashion, and hell, even how to make the food on your plate look appetizing!
Terezi: skit and parodies channel. Her on hand plushes make her a plush skit channel similar to SuperMarioLogan, and she loves to invite her friends to cast as different pyralsprites, action figures, paper drawings and even an occasional animal carcass for her new episodes. When her plushes are worn out, Terezi instead uses GMod and editing software to make her secondary skit show, think SMG4 but now the characters type in leetspeak and reference bugs and grubs a lot. For using copyrighted characters she has been sued many times and won every case. She uploads legal advice to a secondary channel on how to avoid getting copyright claimed and how to win in court.
Vriska: e-thot and competitive gamer. She plays a lot of ranked team games like Overwatch, Marvel Rivals, Fortnite, and even Team Fortress 2! She’s toxic, and has been known to call out and swear at her teammates and opponents, has doxxed her own chat moderators, and once sent a virus to Tavros’s computer for wiping the floor with her at FiduFLARP, a modded game of FLARP that adds in fiduspawn monsters as catchable enemies. The reason she’s not banned is simple: she always streams with a crop top that’s worn low.
Equius: hybrid channel for electrical engineering and combat training. He’s like electroboom in that he gets shocked quite a lot, but different in that he could build a Boston dynamic robot in less than an hour. He has tutorials on constructing the control systems of military aircraft! His combat videos focus on a lot of boxing and hand-to-hand techniques, and especially how to concentrate the force of your blow. When he does demonstrations for weapon combat, he invites Karkat and Tavros over for battle strategies for Threshecutioners and Cavalreapers.
Gamzee: naturalist and spiritual healing channel. He streams often for calls with chat members asking for their ailments and providing healing advice or even trying a through-the-screen hypnosis method to cure chat members. He’s a fraud, but people love his calm demeanor and positive attitude so much people go to him for vibes, and his “cures” work often enough that some people even believe he has healing magic.
Eridan: strategy games and nautical technology. World of Warships’ no. 1 advertising advocate. He also dabbles in human games like Hearts of Iron IV and Sid Meier’s Civilization series. He thinks they’re actually really, really easy games. When he reviews ships he likes to go to museums to review and describe warships and how effective they were at sea. Sometimes he can even swim to shipwrecks if he feels like it, which is… rarely.
Feferi: Vlogger for ocean diving and nature documentation. Her positive attitude and natural optimism towards the unknown makes her view even the most ugly and aggressive deep sea life seem cute and misunderstood. Surprisingly her favorite sea life is the shark! When Eridan needs to explore a shipwreck, he uses Feferi as a guide to get him safely diving down to the wreckage, and so he doesn’t feel alone in the dark waters. Deep sea diving actually burns a LOT of calories, especially with how long her videos can get (2-4 hours) so on the side she does shorter Mukbang videos! Commenters are in awe that she’s so skinny despite eating half her weight in food every time she does a Mukbang.
Beta kids/
John: illusions and pranks channel, loves to live-record strangers falling for his obvious tricks and deceits. Don’t worry, it’s all for fun and no one gets hurt :) His magic even extends to cool programs you can do on your computer to make your desktop do something cool, or customize your pointer (yeah, to this day he’s still an amateur at coding). When Karkat is reviewing bad movies, John is usually invited for skits where he’s the stand-in for a strawman of the movie’s fandom explaining why the movie is actually good.
Rose: she’s like those atheist skeptic channels but instead of just debunking God and flat earth theory she also uses her magic to prove and convince her subscribers the horrorterrors are the only real cosmic entities who exist beyond the physical universe. In the case there’s a video going around of something crazy happening that could be a hoax, she invites John over so he can rant and expose the magic tricks the video uses to make it look real.
Dave: shitposter. He posts whatever he feels like whenever he feels like. Tony Hawk combo score, Sweet Bro and Hella Jeff comic dubs, Smosh skits with John, and a lot of Youtube Poops. He’s less a poster and more an editor. He’s done video editing for a bunch of other Youtubers, like Karkat, Kanaya, Sollux, Terezi, John, and Rose. He has a lot of time on his hands and he uses it for the best editing gags and cutoffs you could imagine. Like Caddicarus but even funnier.
Jade: she’s a mystery. You’ll see her everywhere and yet her main channel has fewer than 10k subscribers. She has made a cameo with EVERY character I’ve described so far. She’s dug up bones for Aradia, competed against Tavros in fiduspawn (and narrates for his RPG playthroughs, voice acting for some female characters too!), she playtests Sollux’s speedrun strats to see if they’re humanly viable, she’s done skits in Karkat’s videos where she parodies prominent female characters alongside John, Dave, and Sollux. She is used as a practice combatant for Nepeta to demonstrate fighting various wildlife from foxes to bears. She’s done in-depth explanations for various plant life and their living conditions in Kanaya’s horticulture videos, she plays in Terezi’s skits as a character who’s a stereotypical furry, she’s Vriska’s top pick for playing casual multiplayer games like UNO, or Worms, or Smash Bros. Jade was a featured teacher for how to build a homemade Nuclear Reactor with Equius. She was interviewed by Gamzee for her dog ears and her ability to see the future. She demonstrates how to handle various firearms in Eridan’s videos, and has even done 2-person mukbangs with Feferi. So after all that, what does Jade post on her main account? Squiddles character AMVs.
If I’m gonna do the alpha kids and trolls it has to be a separate ask, this is a very long one.
Hot damn! These are all so good!
#homestuck#Beta Kids#Beta Trolls#John Egbert#Dave Strider#Rose Lalonde#Jade Harley#Aradia Megido#Tavros Nitram#Sollux Captor#Karkat Vantas#Nepeta Leijon#Kanaya Maryam#Terezi Pyrope#Vriska Serket#Equius Zahhak#Gamzee Makara#Eridan Ampora#Feferi Peixes#Influencerstuck
36 notes
Note
May I ask *how* Mammon from Helluva Boss is bad asexual representation?
I'm ace myself, & have a video script about asexuality I hope to one day be able to produce once I get some recording & editing equipment/software, & analyzed Mammon's appearances so far (so just S2) for it, & I came to the conclusion that he's not as bad as most people say he is when it comes to asexual representation. His ace portrayal is DEFINITELY FLAWED, but not to the degree I think most people make it out to be. To summarize...
1.) He actually has some moments in his debut episode that subtly imply he's ace (while still staying in character with him being a jerk), they're easy to miss to the point one might be able to write them off as plausible unrelated coincidences though; these moments including his deflection about making robots of whoever wins his clown pageant being seen as "weird" at the beginning of the episode, calling his customers "sick degenerate adults" during the Robo Fizz ad, & possibly him not quite understanding the implications when he tells Fizz during the fan-meet, "The better the impression, the more they’ll want a piece of you to take home & fuck! Don’t you want that, Fizzie? To be fucked?!"
2.) When he says in Mastermind, “Oh shut up, you two. We all know you enjoy slumming it with the lower class plebs. Unlike the rest of us, heh. Right Levy?” definitely read as ace to me the first viewing alone.
Most of his moments DEFINITELY needed another pass, especially by an experienced & asexual writer since the stuff he says to Fizz during the fan-meet doesn't read as ace at all, the way he doesn't say anything to Beelzebub about him being ace when she says stuff that shows she either doesn't understand asexuality or just doesn't care to learn anything about him in Mastermind in favor of unnatural dialogue between the two mindlessly insulting each other, & Mammon's dialogue & body language/expressions when he flirts with Leviathan don't read as asexual in the slightest, at least to me.
So what I'd like to know, if possible, is what evidence is there that he's as bad of asexual representation as so many folks seem to imply? The way everyone talks about him (without explaining their reasoning alongside the facts), makes it sound like he's clearly harmful representation with loads of misinformation, but I couldn't find any evidence of something like that when I wrote that part of my script. If there's something really bad about him I've missed, I'd love to add it to my script!
I understand Medrano's personal opinions of asexuality over the years is full of red flags & misinformation (that hopefully she'll finally bury with her potentially being on the ace-spectrum & actively trying to learn more about it), but just looking at Mammon's portrayal alone, is there something I'm missing? I'm approaching this as someone who just wants to learn, so if you can teach me something here & don't mind taking the time to do so, please do!
My main beef with Mammon and his asexuality representation is that it feels like a stereotype of what some think asexual people are: selfish jerks who don’t care about anyone but themselves, inflated egos, etc. It also doesn’t help that Mammon is portrayed as this greedy, disgusting pig (metaphorically and literally) who uses others when it benefits him. I get it, he’s the sin of greed, but with the asexual representation added on top of that, it is not a good look. Hopefully Vivziepop does a better job handling it when seasons 3 and 4 come.
38 notes
Note
how did you do LttM's voice effects for the Sliver of Straw animation i need to know i need to voice act moon for the funnies pls north
I edit the audio in my video editing program for the simpler UI, but I'm pretty sure you could use most software for this.
I start out by making two overlapping audio tracks with the exact same audio, and then fiddling with the pitch shift settings until I have something that sounds moon-esque enough. Typically I pitch up one of the tracks and pitch down the other. This time I asked egguca to make moon sound like she's having some trouble speaking, which led to a bit of a deeper voice, so I left one of the tracks with the original pitch and pitched up the other one.
Overlaying two of the same audio with different pitches is what gives the audio the robot-like quality. I also added some additional effects (chorus, distortion) to the track with the normal pitched audio.
Subtlety is key. All of these effects could easily be overdone, which will make the lines difficult to comprehend. They're at their best when they're barely audible. (Or you could just skip them, too.)
After doing all this, I manually adjust the audio to add a stutter effect.
I'm trying to make it sound like how moon's voice stutters in the game. Practically I just sometimes duplicate the beginning of a word or sentence. It sounds more fun if it's slightly different on the two tracks.
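For anyone who'd rather script the effect than click through an editor, the two-track pitch-shift trick plus the stutter can be sketched with numpy. To be clear, this is an illustration and not the exact workflow above: the shift amounts, the 440 Hz stand-in "voice", and the sample counts are all made-up values, and a real pipeline would load a WAV instead of a sine wave.

```python
import numpy as np

def pitch_shift(samples, semitones):
    # Crude pitch shift by resampling; it also changes the clip's
    # duration, which is usually acceptable for short voice lines.
    factor = 2 ** (semitones / 12)
    idx = np.arange(0, len(samples) - 1, factor)
    return np.interp(idx, np.arange(len(samples)), samples)

def robot_voice(samples, up=3, down=-3):
    # Overlay two copies of the same line at different pitches;
    # the detune between them is what reads as "robotic".
    a = pitch_shift(samples, up)
    b = pitch_shift(samples, down)
    n = min(len(a), len(b))
    return 0.5 * (a[:n] + b[:n])

def stutter(samples, head=2000):
    # Duplicate the first chunk of the line to fake a glitchy repeat.
    return np.concatenate([samples[:head], samples])

# Stand-in "voice": a 440 Hz tone, one second at 22050 Hz.
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
voice = np.sin(2 * np.pi * 440 * t)
processed = robot_voice(stutter(voice))
```

Chorus, distortion, and reverb would still be applied afterward in the editor; as noted above, keeping everything subtle matters more than the exact numbers.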
This is how it sounds in practice:
But I don't do the effects the same every time :D I basically just fiddle with the settings until I have something that sounds nice. For contrast, when I was voicing moon myself, I gave it some reverb and pitched one of the tracks down.
Hope this helps!
143 notes
Text
A Strange Blue-Twintail Ghost?!
Hello, and welcome back to Project Sekai News! I'm Hoshino Ichika, and this is our faithful journalist, Tenma Saki!
Hi everyone! Glad to finally be back after such a hiatus!
We apologize for being gone for so long. Though this is no excuse, school has become stressful lately, as we are second years. Nevertheless, we are back to report on events around Shibuya Sekai.
As for today's report, I'm sure you've all heard of Hatsune Miku, Sekai's beloved virtual idol.
I have, Ich - Hoshino-san, but what about the people that haven't? Care to share with us who she is?
Of course. She's something called a Vocaloid, developed by Crypton Future Media and codenamed CV01. A Vocaloid, according to the Vocaloid Wiki, is "a singing voice synthesizer software product": basically, software that lets users synthesize a singing voice. Hatsune Miku is depicted as a sixteen-year-old girl with long blue twintails, though there is debate over what color they actually are. Some people say they're teal or green. She was released in August 2007, and has since become the most popular, cutest girl ever, able to spread hope to all of her fans across the world...
Haha, Hoshino-san! You're getting carried away!
A-ah, sorry! Um, anyway... people have been claiming that they've seen Hatsune Miku around Shibuya Sekai, walking around as if she were a normal human being. Obviously, this isn't possible, since you can only see Miku on a screen. They report they only saw her out of the corner of their eye, and when they tried to look at her directly, she vanished.
Woah?! Are we sure they're not hallucinating?
Most of the people who saw her have never hallucinated anything before, which is strange. A first year high school student reported that she thought she saw Miku behind the corner of a building, watching her, but when she blinked, Miku disappeared, as if she were just imagining it. Multiple people reported a similar situation, actually.
But considering the amount of sightings, it's probably not her imagination, right?
No.
Is it not a cosplayer, or someone that just happens to look like Miku?
If it was a human, no human should be able to simply disappear like that.
Then is she... a ghost?!
Not exactly - she could be a hologram. Miku is a robot, after all. Though how she's appearing throughout Shibuya is a mystery. But since it's Miku and not Kagamine Le - I mean, some really creepy monster from the Backrooms - I assume we don't have anything to worry about! After all, Miku could never cause any real harm to the citizens of Shibuya Sekai.
What did Len ever do to you -
The police are trying to investigate this sudden appearance of Hatsune Miku, but so far have had no luck figuring out what exactly she is. We ask that, for now, you do not approach the Hatsune Miku hologram, and report any sightings of her to the police immediately. Until we know what she is, we need to take the utmost precaution.
Even if she is your favorite idol!
U-um! Ichi - Hoshino-san, Tenma-san - is that over there - the Miku hologram?
Where?! - Azusawa, that's red hair, not blue. Oh - it's gone. Well, the vanishing part wasn't a lie...
N-no, there was blue right next to it, I think..
I saw it too...
Um, Ichi? Why do you look so scared?
...
Ok, signing off! Bye everyone! It was nice talking to you again!
Goodbye!
Kohane, turn off the camera right -
------------------------------------------------------------------------------
Hey, did you see Project Sekai News' report on the Hatsune Miku ghost?
Yeah.
I did! I'm glad they're back, but it was a really weird report, wasn't it? I haven't seen Miku as a hologram yet... and what happened at the end?
I rewatched the video multiple times, but I couldn't see anything wrong. We can ask Amia about it when she arrives. Maybe she'll know something.
...
Is something wrong, Yuki?
Nothing.
...Okay. I'll get to work, then. I'm almost finished with the demo...
Alright.
#project sekai news#context: mafuyu and kanade live together#pjsk#project sekai#proseka#prsk#colorful stage#pjsekai#hoshino ichika#tenma saki#azusawa kohane
21 notes
Note
What objections would you actually accept to AI?
Roughly in order of urgency, at least in my opinion:
Problem 1: Curation
The large tech monopolies have essentially abandoned curation and are raking in the dough by monetizing the process of showing you crap you don't want.
The YouTube content farm; the Steam asset flip; SEO spam; drop-shipped crap on Etsy and Amazon.
AI makes these pernicious, user hostile practices even easier.
Problem 2: Economic disruption
This has a bunch of aspects, but key to me is that *all* automation threatens people who have built a living on doing work. If previously difficult, high skill work suddenly becomes low skill, this is economically threatening to the high skill workers. Key to me is that this is true of *all* work, independent of whether the work is drudgery or deeply fulfilling. Go automate an Amazon fulfillment center and the employees will not be thanking you.
There's also just the general threat of existing relationships not accounting for AI, in terms of, like, residuals or whatever.
Problem 3: Opacity
Basically all these AI products are extremely opaque. The companies building them are not at all transparent about the source of their data, how it is used, or how their tools work. Because they view the tools as things they own, whose outputs reflect on them, they mess with those outputs to try to ensure nothing reflects badly on the company.
These processes are opaque and not communicated clearly or accurately to end users; in fact, because AI text tools hallucinate, they will happily give you *fake* error messages if you ask why they returned an error.
There have been allegations that Midjourney and OpenAI don't comply with European data protection laws, as well.
There is something that does bother me, too, about the use of big data as a profit center. I don't think it's a copyright or theft issue, but it is a fact that these companies are using public data to make a lot of money while being extremely closed off about how exactly they do that. I'm not a huge fan of the closed source model for this stuff when it is so heavily dependent on public data.
Problem 4: Environmental maybe? Related to problem 3, it's just not too clear what kind of impact all this AI stuff is having in terms of power costs. Honestly it all kind of does something, so I'm not hugely concerned, but I do kind of privately think that in the not too distant future a lot of these companies will stop spending money on enormous server farms just so that internet randos can try to get Chat-GPT to write porn.
Problem 5: They kind of don't work
Text programs frequently make stuff up. Actually, a friend pointed out to me that, in pulp scifi, robots will often say something like, "There is an 80% chance the guards will spot you!"
If you point one of those AI assistants at something, and ask them what it is, a lot of times they just confidently say the wrong thing. This same friend pointed out that, under the hood, the image recognition software is working with probabilities. But I saw lots of videos of the Rabbit AI assistant thing confidently being completely wrong about what it was looking at.
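The "working with probabilities" point is easy to make concrete. Image classifiers typically end in a softmax over class scores, and nothing stops the top probability from being both high and wrong. A toy sketch, where the labels and numbers are invented rather than taken from any real model:

```python
import math

def softmax(logits):
    # Convert raw scores into probabilities that sum to 1,
    # subtracting the max first for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for ["cat", "dog", "toaster"] on a photo of a toaster.
probs = softmax([4.0, 1.0, 0.5])
top = max(range(len(probs)), key=lambda i: probs[i])
# The model reports roughly 93% confidence in "cat": confidently wrong.
```

The assistant never shows you those numbers, of course; it just states its best guess as if it were fact.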
Chat-GPT hallucinates. Image generators are unable to consistently produce the same character and it's actually pretty difficult and unintuitive to produce a specific image, rather than a generic one.
This may be fixed in the near future or it might not, I have no idea.
Problem 6: Kinetic sameness.
One of the subtle changes of the last century is that more and more of what we do in life is look at a screen, while either sitting or standing, and make a series of small hand gestures. The processes of writing, of producing an image, and of getting from place to place are converging on a single physical act. As Marshall McLuhan pointed out, driving a car is very similar to watching TV, and making a movie is now very similar, as a set of physical movements, to watching one.
There is something vaguely unsatisfying about this.
Related, perhaps only in the sense of being extremely vague, is a sense that we may soon be mediating all, or at least many, of our conversations through AI tools. Have it punch up that email when you're too tired to write clearly. There is something I find disturbing about the idea of communication being constantly edited and punched up by a series of unrelated middlemen, *especially* in the current climate, where said middlemen are large impersonal monopolies who are dedicated to opaque, user hostile practices.
Given all of the above, it is baffling and sometimes infuriating to me that the two most popular arguments against AI boil down to "Transformative works are theft and we need to restrict fair use even more!" and "It's bad to use technology to make art, technology is only for boring things!"
90 notes
Text
https://youtu.be/v5Otdd4fogs?si=97RhphkHTJFgsiin https://youtube.com/shorts/55SNSr_Om38?si=TPoQ4hGycJTYC7PS https://youtube.com/shorts/YfI50sOm1g4?si=0s9ZxfgJlMcHOAC4
#3DPrinting#Development#EducationStudies#English#General#German#Hardware#MyCollection#Pictures#Programming#RobotsBlogContent#RobotsBlogVideoCreations#Software#STEM#Toys#Video#3dDruck#3DPrintedRobot#HP#HPRobots#MINT#Otto#OttoDIY#OttoRobot#Robot#Roboter#Sensors
0 notes