yasskaydee · 1 year
Text
youtube
Great conversation between a track legend and a rising track star. I especially appreciate how it highlights the intelligence of the athletes behind the physical feats.
2 notes · View notes
yasskaydee · 1 year
Text
Another long read from a personal POV about being on social media (before it was even called that) since the '80s. The comments are nice to read as well.
0 notes
yasskaydee · 1 year
Text
Solid read on the root problems of all these large internet companies.
0 notes
yasskaydee · 1 year
Text
A perfect case for governments (local, state, federal) to set up their own emergency response services rather than rely on privately-owned companies, whose motivations are always suspect and can be easily re-aligned against the public good. The open-source ActivityPub protocol and Mastodon sound geared exactly for that, since they could provide a unified basis for these regional servers to communicate.
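As a rough illustration of what that federation could look like (a minimal sketch only: the agency domains below are hypothetical, and a real deployment would also need HTTP signature authentication and actual inbox delivery per the ActivityPub spec):

```python
import json

def make_alert_activity(actor_domain, alert_text, target_domains):
    """Build a minimal ActivityPub-style "Create" activity announcing an alert.

    Each regional agency would run its own server; federating just means
    POSTing JSON like this to the other servers' inbox endpoints.
    """
    actor = f"https://{actor_domain}/actor"
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        # Deliver to each neighboring agency's inbox
        "to": [f"https://{d}/inbox" for d in target_domains],
        "object": {
            "type": "Note",
            "attributedTo": actor,
            "content": alert_text,
        },
    }

# Hypothetical county server alerting its neighbors and the state agency:
activity = make_alert_activity(
    "ema.examplecounty.gov",
    "Flash flood warning for the river valley until 21:00.",
    ["ema.neighborcounty.gov", "ema.examplestate.gov"],
)
print(json.dumps(activity, indent=2))
```

Since every server speaks the same vocabulary, a county, state, or federal instance could consume these messages without any vendor in the middle.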
0 notes
yasskaydee · 2 years
Text
This article stood out to me for two reasons:
This excerpt and its associated link: "As noted in “Stochastic Parrots,” the famous paper critiquing AI language models that led to Google firing two of its ethical AI researchers, “coherence is in the eye of the beholder.”"
An excellent comment, which also included supporting references one can look up for further research (I included the mentioned research paper link at the bottom), that is worth sharing:
pj_camp (comment author)
Aristotle believed the heart was all-important and the brain was simply a radiator to keep the heart cool. Descartes, impressed by the hydraulic action of fountains in the royal gardens, developed a hydraulic analogy for the action of the brain. Thomas Henry Huxley thought of the brain as analogous to a steam engine. Now we think brains are computers and so, therefore, computers are brains.
However, leaving aside the fact that even if an argument is true, its converse does not have to be true, there is exactly zero evidence and exactly zero theoretical reason to believe that computers can be what brains are. To believe it nonetheless is an article of faith, not an article of science. There are, in fact, some reasons to believe that they are not.
The computer/brain analogy is compelling because computers are able to do some things we find extremely difficult, things for which we attribute high intelligence to people who can do them. Playing chess, for example. Most of these things involve what amounts to effective lookup of useful information in a large database. The things that we find simple to do, like perceiving and navigating through a complex world, computers find extraordinarily difficult. Somehow, we do not see that as a lack of intelligence.
Computers are fundamentally dualist. Brains are not. By that I mean that computers are a hardware substrate on which an algorithm created by an external entity executes. That at least suggests that the analogy between brains and computers could be just that -- an analogy. As French neuroscientist Yves Frégnac put it, "big data is not knowledge."
LLMs are an elaborate way of accessing big data. What strikes me about what enthusiasts are eliding from LLMs is that what they do is not driven by knowledge. It is driven by a pastiche of things that have been said in the past by humans. So when they argue that LLMs indicate the imminent arrival of true artificial intelligence, they are in effect claiming that intelligence does not depend in any way on actual knowledge. That strikes me as nonsense.
Brains are not radiators. They are not fountains. They are not steam engines. They may not even be computers. No one really knows that yet. Another French neuroscientist, Romain Brette, has challenged this metaphor in some detail. Brette points out that in thinking of brains as running code, researchers unconsciously drift between different meanings of the word "code." Starting from a technical sense, in which code means there is a link between a stimulus and the activity of a neuron, they drift into a very different, representational, meaning in which neural codes represent that stimulus, without justifying, or even consciously acknowledging, that shift.
This is dangerously close to a homunculus model. The unstated implication, using the representational meaning of code, is that the activity of neural networks is presented to an ideal observer or reader within the brain, often described as "downstream structures" that have access to optimal ways of decoding the signals. With LLMs, it is pretty obvious that the downstream structure is us.
The cognitive revolution in psychology, starting in the 1970s, has pretty clearly demonstrated that viewing the brain as a passive computer that responds to inputs and processes data is wrong. Brains exist in bodies. Those bodies are interacting with and intervening in the world, and a considerable portion of whatever it is that brains do is based on sensorimotor metaphors derived from these interactions.

And I should point out here that the meaning of metaphor is not the usual "How shall I compare thee to a summer's day" sense. Rather, the cognitive theory of metaphor involves wholesale export of reasoning methods from one domain into a completely different one, e.g. using the ability of the brain to reason about navigation to instead think about mathematics. This is what a number line is. When the metaphor changes (as it did in mathematics from numbers as enumeration of objects to numbers as labels for positions along a path), the meaning changes as well (the enumeration metaphor excluded zero and the irrational numbers from the world of numbers, the Pythagorean position, while the path metaphor requires them to be numbers since otherwise those positions along the path lack labels).
In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die. Less dramatically, but drawing similar conclusions, two decades earlier the historian S Ryan Johansson argued that “endlessly debating the truth or falsity of a metaphor like ‘the brain is a computer’ is a waste of time. The relationship proposed is metaphorical, and it is ordering us to do something, not trying to tell us the truth.”
Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists, Eric Jonas and Konrad Paul Kording*, decided to actually do the experiment on a real (and simple) computer chip, the MOS 6507 processor that was used in popular video games in the '70s and '80s. Things did not go as expected.
They deployed the entire analytical armament of modern neuroscience to attempt reverse engineering the CPU. Despite the fact that there is a clear explanation for how the chip works, they were unable to detect from outside the hierarchy of information processing that occurs inside it. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.” This is directly related to neural networks in general as they are the blackest of black boxes. No one knows how they convert input into output, and this experiment suggests that such knowledge cannot be obtained with current techniques. Absent that knowledge, claims of "sentience" or "intelligence" are specious.
*Could a Neuroscientist Understand a Microprocessor?
0 notes
yasskaydee · 2 years
Text
youtube
Just enjoyable to listen to someone passionate about their work who's clearly put a lot of thought into what they do and why.
Also, his emotional reaction to machine-learning-generated animation (around the 22-minute mark) is relevant considering all the latest news about ML-generated art and content.
0 notes
yasskaydee · 2 years
Text
For anyone who's been following the ML (incorrectly called "AI") art/writing/code generation space, this is an article I read with some good discussion and a few gem comments scattered through the thread. The two things that stood out to me the most:
This link (click PDF under "Download" in the right-hand menu to read the paper) examining how ML art generators do, in fact, directly plagiarize artwork: https://arxiv.org/abs/2212.03860
And these comments from "Longmile149" (page 6 of comment thread) which highlight the contrast between Silicon Valley techbro "move fast, break things" aloofness and real people who don't dedicate their lives to tech fetishism:
Post #1 ------------------------------------------------------------------------------
I worked in a school for a year post-pandemic doing IT support as they rolled out 1:1 devices for the kids. Not a lot of professional cred, I'll admit, but actual first-hand experience nonetheless. What I saw was a school that bought the sizzle but got a chewed-up plastic hamburger.

The super cool ultra-high-tech hyper-personalized education software OF THE FUTURE made everything worse. Teachers couldn't teach if the network went down or if a license wasn't renewed or a vendor updated their app and broke the damn thing for a day in the process. With all the activities being splooged out of some SV assembly line somewhere, and all the student records being stored safely on the cloud, any disruption was essentially a death sentence for the day's learning... which was made even worse in the classes where they had students all at different points in a lesson, because even if the teachers did still have materials to work from, their students were all over the map and it wasn't possible to do anything coherent.

Administrators routinely hammered that teachers must use the tools regardless of educational outcomes, because the school had paid for the license to a software package, and if they wanted to keep App A they also had to show they were using Apps B and C to maintain the funding. Added bonus: teachers who focused on teaching the material instead of teaching the software got hit on their end-of-year reviews. Added added bonus: teachers also got dinged on parent reviews because the educational software that they don't control didn't work and was confusing.

Hey, you know how older people and people who aren't technophiles aren't great with new technology, especially stuff like license-based SaaS kinda products that change constantly? Guess how good most of the teachers in the building were at figuring out wtf was going on and adapting, or even calling the right person when their SSO broke?
Also super fun trying to help teachers understand the difference between whitelisting and blacklisting websites, how to tell when their screen monitoring software had failed to release the previous students so that another teacher could work with that class, and what to do about it.

Know what a chatbot is absolute garbage at doing? Understanding. Anything. At all. That's not what they do or how they work. The chatbots can't tell if a kid knows the subject; they can only tell if the kid knows how to use the app the subject is being presented on. When we did finals, most of the kids in K-5 absolutely bombed the first time through because the testing software was different from the learning software and they were being asked to interact with the material in a fundamentally different way than what they'd been taught for the whole year.

You wanna see kids break down and cry? Watch them sob uncontrollably because they thought they did everything right but then their entire test gets wiped out by a laptop battery dying. Watch them freak all the way out when their test software flags them for cheating and locks them out and their teacher can't help them unlock it.

I could go on and on about how utterly fucked the whole thing was, but the point here is that educational software as a tool for educators to work with is fine (to a point), but the reality is that schools are going to push more and more into using educational software as a pillar for education to be built on, and that's really bad. I wish I had any reason to believe that slapdash, ever-changing, top-down software dressed up with a chatbot on the front end (or whatever other trendy nonsense is hot in SV at the time) was gonna be good for kids, but I just don't... and that's about the only kind that exists. The digital divide is real and it's not just about access to shiny new laptops.
There is a massive gap between the technical skills most people actually have and the technical skills the tech bros think everyone should have, and "do more computers harder" is gonna make that way, way, way worse, not better.
Post #2 ------------------------------------------------------------------------------
We’re comparing anecdata, so there’s no real right or wrong here. I feel you on the virtual and the self-pacing. My 7th grader is doing virtual school through our local district. It’s great in some ways, trash in others. He can rocket ahead…when the software is working. His teacher is legit and she’s very responsive and he can go in for face-to-face help if he needs it. It only works because I know enough to keep things working. I can troubleshoot, my wife and I can point him to other resources, we have other computers and reliable internet. We’re on the relatively privileged side of the digital divide. We are absolutely the exception.

I work at a library now. One of my main jobs is helping people bridge that divide. People in their 20s who don’t know how to find the pdf they downloaded on their phone so they can print it. Kids who don’t understand why their school email on their Chromebook won’t let them send a document to the library for us to print. Parents who can’t figure out how to reconnect their phone to their email account. Immigrants who use Facebook Messenger to talk to family overseas who want to call someone at Facebook on the phone to beg for help because their account is locked and they don’t know how to fix it. Retirees who finally got comfortable with using their phone and then some asshole software guys decided to rearrange the UI and redecorate in an update and now they’re lost and giving up because it’s too much to keep relearning over and over.

I’m glad that the tools exist, man, but I suspect it’s a fantasy to think that my experience was abnormal. A couple decades working in factory automation and a few years in school/library/public service IT work make me think it’s way, way more likely that most places are going to be badly rolled out. How can they not be? The culture of the tech industry fetishizes change for change’s sake. The vast majority of society can’t actually adapt to that fast enough to keep up. It’s why everything is a kludge.
I dunno. I think there are gems to be mined from the stuff being developed for schools. There’s real need for reform and there are tech solutions that could be powerful tools if we actually slow down and regulate and standardize them and teach and support the teachers who’ll need to use them, but that seems…optimistic.
0 notes
yasskaydee · 2 years
Text
Informative long-read about all the major people involved in the US-China relationship regarding virus research and taking a critical eye towards whether the lab-leak theory has credibility.
*edit 5 months later* Here's an additional article from VF examining more evidence regarding the lab-leak theory. Its title is: COVID-19 Origins: Investigating a “Complex and Grave Situation” Inside a Wuhan Lab by Katherine Eban and Jeff Kao.
1 note · View note
yasskaydee · 2 years
Text
1 note · View note
yasskaydee · 2 years
Text
Worth a read, some excerpts: "By early May, Gilbert said, he had emailed about a half-dozen experts, including Appel and Harri Hursti, the co-founder and co-organizer of the Voting Machine Hacking Village at DEF CON, the annual hacking conference in Las Vegas. He’d issued his challenge: Could they hack the machine? ... By September, Gilbert still hadn’t heard from Hursti. In fact, nobody had agreed to test the machine."
The article closes with this:
"Moving forward, Gilbert is planning to write a new paper detailing his design, and still hoping to find a hacker willing to test out the machine. The experience has left him sounding jaded about the world of election hacking — a world, he suggested, that often seems focused more on performatively tearing apart machines than actually working toward solutions.
“They only look at things they know they can break,” Gilbert said.
“If you have something that you can’t on the face of it figure out before you touch it,” he continued, “they’re not going to touch it.”
My thoughts:
And the prof has now realized the problem with the "tech world": it's driven by tech fetishism instead of tech-enabled solutions. Anyway, there are more details in the article about the design and its evolution, so give it a read.
1 note · View note
yasskaydee · 2 years
Video
youtube
Haven’t posted in ages but here’s another favorite from Laurent. A different camera angle: https://www.youtube.com/watch?v=a2K88FpTT9k
5 notes · View notes
yasskaydee · 3 years
Video
youtube
The art speaks for itself
2 notes · View notes
yasskaydee · 4 years
Video
youtube
More unsung black history. He ran junkyard parts against factory-stock cars and won. Very inspiring story.
0 notes
yasskaydee · 4 years
Video
youtube
*edit* - I don’t know what happened to the higher quality video but here’s a repost.
I’ve said it before but Laurent has more depth than Larry when he turns on and Larry knows it. Look at everyone’s face when he goes in, haha
1 note · View note
yasskaydee · 4 years
Link
Some really cool behind-the-scenes animation work for two characters from Streets of Rage 4.
0 notes
yasskaydee · 4 years
Link
Demonstrating the importance of competent governance.
0 notes
yasskaydee · 4 years
Link
Looks like AMD are back in a big way.
I had an AMD laptop a couple of years ago, and the only real problem with it was its battery life. It handled user interaction and multi-tasking without a hitch, and AMD's integrated graphics have been better than Intel's for some time. Now it looks like they have solved the battery issue and more.
Competition is always good.
0 notes