#neural responses
Explore tagged Tumblr posts
ytvideoseo · 1 year ago
Video
youtube
Explore the fascinating parallels between human structure and stimulation, and their intriguing connection with our canine companions in this captivating YouTube video. Discover how the intricate workings of the human body mirror aspects of canine physiology, shedding light on our shared evolutionary journey. From neural responses to sensory experiences, delve into the captivating realm where humans and dogs intersect, offering insights into the profound bond between species. Join us on a journey of discovery and appreciation for the wonders of human and canine biology. If you want to know more about this, Click here 
#credentialedcoachtraining #credentialedcoach #coachingtraining #coachingskills #coachtrainingprogram #coachtraininginstitute #accreditedcoachtraining #humanstructure #caninecompanions #parallels #stimulation #dogs #connection #physiology #neuralresponses #sensoryexperiences #intersect #fascinating #intricateworkings #sharedtraits #understanding #relationship #neuroscience #evolutionarybiology #neurobiology #humananimalbond #comparativephysiology #appreciation #coaching #discoveries
2 notes · View notes
catgirlredux · 1 year ago
Text
The cocky hotshot daredevil pilot to brainfried emaciated half-machine veteran pipeline is REAL and it can happen to you!!!
CLICK HERE to learn more!!
45 notes · View notes
anythingbyadriannelenker · 6 months ago
Text
i did it. i completed my annotated bibliography. it only took 3 days and 30+ mental damage points. onwards and upwards folks
3 notes · View notes
clonememesfrikyeah · 1 year ago
Text
You know what would be the worst? If at the end of the war when all is said and done, after the clones lost every little thing they had, after Vader's rise and the Jedi's fall, after all that death and hardship and misery? It would be terrible to be a clone and wake up like suddenly coming out of a coma, in a stasis chamber that they grew up in and rarely left, there was the craziest dream just before and there's the lingering feeling something important just happened, this is Kamino 35BBY, all the information they were just fed is already neatly stored in their perfect flash-memory brain. No one's died yet, all of that was a simulation based on a calculation of events to install orders and hone the discipline of troops. It's dark, there's no way of telling if anyone or anything exists beyond the boundaries of the inside. There's a designated call sign and designation along with vitals displayed in the line of vision, it's also counting down the seconds to when a new simulation is set to begin.
8 notes · View notes
jcmarchi · 2 years ago
Text
Fruit flies could hold the key to building resiliency in autonomous robots - Technology Org
New Post has been published on https://thedigitalinsider.com/fruit-flies-could-hold-the-key-to-building-resiliency-in-autonomous-robots-technology-org/
Tumblr media
Mechanical Engineering Assistant Professor Floris van Breugel has been awarded a $2 million National Science Foundation (NSF) grant to adapt autonomous robots to be as resilient as fruit flies.
Resiliency in autonomous robotic systems is crucial, especially for robotics systems used in disaster response and surveillance, such as drones monitoring wildfires. Unfortunately, modern robots have difficulty responding to new environments or damage to their bodies that might occur during disaster response, van Breugel wrote in his grant application. In contrast, living systems are remarkably adept at quickly adjusting their behavior to new situations thanks to redundancy and flexibility within their sensory and muscle control systems.
Scientific discoveries in fruit flies have helped shed light on how these insects achieve resiliency in flight, according to van Breugel. His project will translate that emerging knowledge on insect neuroscience to develop more resilient robotic systems.
“This is a highly competitive award on a topic with tremendous potential impact, which also speaks of the research excellence of the investigator and Mechanical Engineering at UNR,” Petros Voulgaris, Mechanical Engineering department chair, said.
This research aligns with the College of Engineering’s Unmanned Vehicles research pillar.
Engineering + flies
The intersection of engineering and flies has long been of interest to van Breugel.
“As an undergrad, I did research where my main project was designing a flying, hovering thing that birds or insects vaguely inspired,” he said. “Throughout that project, I realized that the hard part, which was more interesting to me, is once you have this mechanical thing that can fly, how do you control it? How do you make it go where you want it to go? If it gets broken, how do you adapt to that?”
Van Breugel says he is examining how “animals can repurpose or reprogram their sensorimotor systems ‘on the fly’ to compensate for internal damage or external perturbations quickly.”
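In control-engineering terms, the "compensate on the fly" idea resembles online adaptation of a controller's internal model of its own actuators. Below is a deliberately minimal, hypothetical sketch (not the project's actual method, and all numbers are invented): a tracking controller whose actuator secretly loses half its output mid-run, and which recovers by re-estimating the actuator's effectiveness from observed effects.

```python
def run(steps, damage_at):
    """Track a target while re-estimating actuator effectiveness online."""
    position, target = 0.0, 10.0
    effectiveness_est = 1.0     # the controller's belief about its actuator
    true_effectiveness = 1.0    # reality; drops when damage occurs
    for t in range(steps):
        if t == damage_at:
            true_effectiveness = 0.5     # simulated mid-mission damage
        command = (target - position) / effectiveness_est
        actual = true_effectiveness * command    # effect actually produced
        position += 0.1 * actual                 # simple first-order dynamics
        if abs(command) > 1e-9:
            # Adapt: nudge the belief toward the observed command->effect ratio.
            observed = actual / command
            effectiveness_est += 0.2 * (observed - effectiveness_est)
    return position, effectiveness_est

position, estimate = run(steps=200, damage_at=100)
print(round(position, 2), round(estimate, 2))
```

Without the adaptation step, the damaged controller would simply track the target at half speed; with it, the effectiveness estimate converges to the true post-damage value and tracking recovers.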
Working with van Breugel on the grant are experts in insect neuroscience, including Michael Dickinson, professor of bioengineering and aeronautics at the California Institute of Technology (and van Breugel's Ph.D. advisor), and Yvette Fisher, assistant professor of neurobiology at U.C. Berkeley. Both have pioneered the brain-imaging discoveries and technology in flies that van Breugel is utilizing in this research project. Also on the project: Bing Brunton, associate professor of biology at the University of Washington, who brings her expertise in computational neuroscience.
The importance of flies in the realm of both engineering and neuroscience stems from the combination of their sophisticated behavior with brains that are numerically simple enough to be studied in detail. This "goldilocks" combination, van Breugel said, makes it feasible to distill properties of their neural processing into fundamental engineering principles that can be applied to robotics systems.
As part of the grant, research experiences will be offered to middle school, high school and undergraduate students to participate in both neuroscience and robotics research. Van Breugel and his team also will develop open-source content to help bring neuroscience fluency to engineering students. This aligns with the College of Engineering’s Student Engagement operational pillar.
Source: University of Nevada, Reno
3 notes · View notes
sashayed · 5 months ago
Text
imagine being a bird watching hot ones. you'd be like "wtf is 'hot sauce,' peppers don't do that" bc you don't have the neural receptors that cause a burning sensation if your tissues come in contact with capsaicin; you can stick your whole head in a ghost pepper no problem, so you'd think, wow these bald apes are full of shit they are faking a whole big physiological response to normal food for entertainment. weird. and then you'd be like wait, what are they eating? what are they eating?
#yg
15K notes · View notes
yesthatsatumbler · 1 year ago
Text
I tend to think of AI responses as being a lot like those D+ students who get asked something at an exam and aren't actually very sure of the answers, and have to quickly make up something that vaguely sounds like it makes sense and hope it's close enough to count.
And, like, sometimes their association web is good enough that they stumble into the right answer (and sometimes the right answer was something obvious all along so they just happen to guess correctly). But a lot of the time it's just a pile of nonsense that they think sounds vaguely right.
...silly thought: I guess the way AI training works is pretty much sending them through gazillions of simulated exams and grading them on whether their replies are close enough to correct answers to count, and then hoping that by trial and error they build up enough of the right association web to get correct(ish) answers more often than not. But they're still fundamentally making stuff up every single time.
(And it only works at all because they're doing absolutely insane amounts of said trial-and-error.)
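That trial-and-error picture can be caricatured in a few lines of code — a hypothetical toy, nothing like a real language model's gradient-based training, but it shows the same loop: guess, get graded, reinforce the associations that scored well. The cue words, candidate answers, and reward values are all made up for illustration.

```python
import random

random.seed(0)

# Toy "exam" questions: each is a set of cue words plus the answer that
# counts as correct when graded.
EXAMS = [
    ({"capital", "france"}, "paris"),
    ({"capital", "japan"}, "tokyo"),
    ({"boiling", "water"}, "100c"),
]
ANSWERS = ["paris", "tokyo", "100c", "london", "50c"]

# The "student": association weights from cue word -> candidate answer.
weights = {}

def answer(cues):
    """Pick whichever answer currently has the strongest associations --
    the student is always guessing, never looking anything up."""
    scores = {a: sum(weights.get((w, a), 0.0) for w in cues) for a in ANSWERS}
    best = max(scores.values())
    return random.choice([a for a, s in scores.items() if s == best])

# Trial and error over many simulated exams: grade the guess, then
# strengthen or weaken the associations that produced it.
for _ in range(2000):
    cues, correct = random.choice(EXAMS)
    guess = answer(cues)
    reward = 1.0 if guess == correct else -1.0
    for w in cues:
        weights[(w, guess)] = weights.get((w, guess), 0.0) + reward

print(answer({"capital", "france"}))    # a learned association
print(answer({"capital", "atlantis"}))  # still guesses something plausible-sounding
```

Note the last line: asked about a "capital" it has never seen, the student doesn't say "I don't know" — it confidently picks whatever its association web rates highest, which is the improv behavior described above.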
AI doesn't know things.
AI is playing improv.
This is a key difference and should shape how you think about AI and what it can do.
566 notes · View notes
ytvideoseo · 1 year ago
Text
Human Structure and Stimulation, & Dogs
Explore the fascinating parallels between human structure and stimulation, and their intriguing connection with our canine companions in this captivating YouTube video. Discover how the intricate workings of the human body mirror aspects of canine physiology, shedding light on our shared evolutionary journey. From neural responses to sensory experiences, delve into the captivating realm where…
youtube
View On WordPress
0 notes
prokopetz · 1 year ago
Text
The whole "the brain isn't fully mature until age 25" bit is actually a fairly impressive piece of pseudoscience, given how incredibly stupid its misreading of the underlying data is.
Okay, so: there's a part of the human brain called the "prefrontal cortex" which is, among other things, responsible for executive function and impulse control. Like most parts of the brain, it undergoes active "rewiring" over time (i.e., pruning unused neural connections and establishing new ones), and in the case of the prefrontal cortex in particular, this rewiring sharply accelerates during puberty.
Because the pace of rewiring in the prefrontal cortex is linked to specific developmental milestones, it was hypothesised that it would slow down and eventually stop in adulthood. However, the process can't directly be observed; the only way to tell how much neural rewiring is taking place in a particular part of the brain is to compare multiple brain scans of the same individual performed over a period of time.
Thus, something called a "longitudinal study" was commissioned: the same individuals would undergo regular brain scans over a period of many years, beginning in early childhood, so that their prefrontal development could accurately be tracked.
The longitudinal study was originally planned to follow its subjects up to age 21. However, when the predicted cessation of prefrontal rewiring was not observed by age 21, additional funding was obtained, and the study period was extended to age 25. The predicted cessation of prefrontal development wasn't observed by age 25, either, at which point the study was terminated.
When the mainstream press got hold of these results, the conclusion that prefrontal rewiring continues at least until age 25 was reported as prefrontal development finishing at age 25. Critically, this is the exact opposite of what the study actually concluded. The study was unable to identify a stopping point for prefrontal development because no such stopping point was observed for any subject during the study period. The only significance of the age 25 is that no subjects were tracked beyond this age because the study ran out of funding!
It gets me when people try to argue against the neuroscience-proves-everybody-under-25-is-a-child talking point by claiming that it's merely an average, or that prefrontal development doesn't tell the whole story. Like, no, it's not an average – it's just bullshit. There's no evidence that the cited phenomenon exists at all. If there is an age where prefrontal rewiring levels off and stops (and it's not clear that there is), we don't know what age that is; we merely know that it must be older than 25.
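The statistical point here is right-censoring: when observation stops before the event occurs, the data bound the event from below rather than locating it. A toy simulation with made-up numbers (not the real study's data or model) makes the distinction concrete:

```python
# Hypothetical rewiring curve: the rate declines with age but never
# actually reaches zero within a human lifespan in this toy model.
def rewiring_rate(age):
    return 100.0 * (0.9 ** age)

def observed_cessation_age(max_observed_age, threshold=0.5):
    """Return the first observed age at which rewiring drops below the
    threshold, or None if no cessation is seen in the observation window."""
    for age in range(5, max_observed_age + 1):
        if rewiring_rate(age) < threshold:
            return age
    return None

print(observed_cessation_age(21))  # original study window: nothing observed
print(observed_cessation_age(25))  # extended window: still nothing observed
print(observed_cessation_age(60))  # a longer study would have found a value
```

Both the 21- and 25-year windows return None. Reading that None as "cessation happens at the window's edge" is exactly the press misreading described above: the window's endpoint is a property of the funding, not of the brain.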
27K notes · View notes
joncronshawauthor · 7 months ago
Text
Tech Bros Versus Zombies: A Story of Disruption Gone Wrong
Have you ever wondered what might happen if Silicon Valley accidentally triggered a zombie apocalypse? Not the shambling, brain-eating kind – but the perfectly synchronised, engagement-metrics-obsessed, neural-interface-gone-wrong sort. Well, wonder no more. I’m excited to introduce my latest story, Tech Bros Versus Zombies, now available for free on my Patreon…
0 notes
max1461 · 4 months ago
Text
War man goes to vietnam and kills innocent women and children it harms his psyche and the level of adrenaline in his nervous system causes improperly weighted bayesian updating in the neural inhibitory response he comes home and yells and hoots and hollars at the restaurant embarasses his wife when the server comes and he says youll never take me alive viet cong and runs out the door. They divorce he lives on the street psychologist says you have ptsd we used to call it shell sock. Pyschologist says your flashbacks are triggered by people saying hi and shit. Young woman comes to the psychologist shes bullied and shit the other girls tied her up in the basement and called her ugly. pyschologist in 1980 says your like that soldier guy becase your nervious system. And your triggered by women that you think are attracytive. Pych says take these pills its the internet year two thousand and they say im gay I was shoved in the lockered and the bullies called me a fag the other website man says your triggered like a vietnam war big time killer. Fuck man im trigger by them calling me a faggot on the web but its the water I swim in. its 2010 and theres femininsm in the video games and some young women on the new twitter say it can trigger people to call them a gay fag all day. but some young men don't like feminsm in their video games it makes them upset in themselves and all frustrated so they say your triggered, are you triggered lib your a triggered lib. Well some clever guy one of the cleverest ever to live says more like your a triggered conservative lol your a triggered trumpter. Well now your calling them triggered and there calling you triggered and now its 2025 and im watching a video about small apartmerts in japan and the guy says "wow a lot of people in the comments are triggered by small apartments apparently" their triggered like a major vietnam killer. their triggered like the guys who did my lai and shit.
4K notes · View notes
sufrimientilia · 9 months ago
Text
incapacitation
content warning
drugs that make a character woozy and disoriented. slurring words and falling slack, everything too heavy and confusing and muffled
blown pupils, wandering eyes, breathing too much or too little. sweating, shaking, puking, so limp and pale it’s almost like they’re dead
fevers so high a character's mind just turns to mush. glossy eyes tracking the ceiling, listless and unaware until eventually there's sweat sticking all over the sheets and they start mumbling some vague responses to caretaker's questions
tranquilizer dart that brings a character down all at once. one sudden jerk or look of confusion, not enough time to glance at it much less pull it out before eyes are rolling back and they collapse into the dirt
tranquilizer dart that comes on slowly. pulling it out and running and running until each step becomes too uncoordinated, stumbling or getting dragged along by a teammate until even their begging to stay awake, let's go, becomes hazy and distant
struck so hard that everything rings in one ugly roar. staggering or falling, told to sit down, just stay down. so confused and lost, repeating the same questions and forgetting the answer over and over and over again
character so messed up they struggle to follow any part of the conversation. everything too heavy and confusing and muffled, just useless and incoherent and completely oblivious to the situation
nervous prodding or pleading by caretaker, begging them to just stay awake or focus
jostled around by captor, told to get the fuck up and follow orders, easily manhandled and restrained
mumbling nonsense and spilling secrets. stoic characters without any masks, so confused and broken and vulnerable, slipping and powerless in every sort of way
"you're okay, i promise you're okay"
“ah, shit. you’re a mess—”
“I guess you won’t remember this anyways…”
gaze drifting and blank, too faraway to track anything caretaker/captor is saying. nudged and prodded and pleaded at to no avail, just incoherent and out of it
too weak to move. beaten absolutely senseless or bleeding all over the place, a character just hurting and spent beyond means sprawled flat against the ground
getting dragged along or stepped on, pinned down as if they're in any state to go anywhere
hypnotized and stunned into mindlessness. repeated mantras and rewired thoughts, a character made pliable and blank and used like a puppet
paralyzed but fully aware, left slack and useless and desperate with limp muscles and depressed breathing. assumed dead and abandoned, grieved over or dumped aside like a corpse, forced to watch and unable to do anything
poisoned and just getting worse and worse. teammates desperately looking for a cure while character deteriorates, puking and passing out and getting high fevers, hallucinating and begging for relief
characters taken out of commission when they're otherwise the strongest one. exposed to a weakness, given magical restraints or cuffs with neural suppressors to keep them docile, targeted and taken out
vertigo taking a character side to side, brought down and useless
5K notes · View notes
jcmarchi · 1 year ago
Text
Building ChatGPT-style tools with Earth observation - Technology Org
New Post has been published on https://thedigitalinsider.com/building-chatgpt-style-tools-with-earth-observation-technology-org/
Imagine being able to ask a chatbot, “Can you make me an extremely accurate classification map of crop cultivation in Kenya?” or “Are buildings subsiding in my street?” And imagine that the information that comes back is scientifically sound and based on verified Earth observation data.
ESA, in conjunction with technology partners, is working to make such a tool a reality by developing AI applications that will revolutionise information retrieval in Earth observation.
A digital helping hand for data
Earth observation generates vast volumes of vital data every day, but it is difficult for humans alone to ensure that we obtain the best value from that data. Fortunately, AI can help us interact with such large and complex datasets, identifying key features and presenting the information in a user-friendly format.
For example, I*STAR, an activity co-funded by the ESA InCubed programme, developed a platform that uses AI to monitor current events like earthquakes or volcano eruptions so that satellite operators can automatically plan the next data acquisitions for customers.
The SaferPlaces AI tool, again supported by InCubed, creates flood maps for disaster response teams by merging in situ measurements with satellite data. SaferPlaces was crucial to damage assessment efforts during last year’s floods in Emilia-Romagna in Italy.
In the last few years, the progress of AI has accelerated tremendously, with the advance of tools such as ChatGPT and Gemini even surprising experts in the field. To take advantage of this transformative innovation and capture the opportunities enabled by this technology, a natural next step is to build a ChatGPT-style text-based enquiry tool for Earth observation data.
Along with various partners from the fields of space, computing and meteorology, ESA is currently developing an Earth observation digital assistant that will understand human queries and respond with human-like answers – known as natural language capabilities.
Not surprisingly though, there are a number of pieces of the jigsaw puzzle to complete to create such a digital assistant, starting with the powerhouse that underpins it, the foundation model.
The motor roaring under the bonnet
AI models work by training and improving over time, but in more traditional machine learning, the machine has to be fed with large sets of data that have been labelled, often by a human.
Enter foundation models, which take a very different approach. A foundation model is a machine learning model that trains, largely without human supervision, on sizeable and varied sources of unlabelled data. Foundation models are quite general, but can be tailored to specific applications.
The result is a flexible, powerful AI engine, and since their inception in 2018 foundation models have contributed to a huge transformation in machine learning, impacting many industries and society as a whole.
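The difference from label-hungry supervised learning can be caricatured in a few lines: in self-supervised training, the "labels" are manufactured from the unlabelled data itself, for instance by hiding a word and predicting it from its context. This is a toy counting model, not a real foundation model, and the corpus is invented for illustration:

```python
from collections import Counter, defaultdict

# Unlabelled "corpus": no human-written labels anywhere.
corpus = [
    "clouds bring rain",
    "clouds bring snow",
    "sun brings heat",
    "sun brings light",
]

# Self-supervised objective: hide one word and learn to predict it from
# its neighbours. The training "label" comes from the data itself.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, target in enumerate(words):
        context = tuple(w for j, w in enumerate(words) if j != i)
        context_counts[context][target] += 1

def fill_mask(masked_sentence):
    """Predict the word hidden behind [MASK], given the visible context."""
    context = tuple(w for w in masked_sentence.split() if w != "[MASK]")
    candidates = context_counts.get(context)
    return candidates.most_common(1)[0][0] if candidates else None

print(fill_mask("[MASK] bring rain"))
```

Real foundation models replace the counting table with a large neural network, which is what lets them generalise to contexts they have never seen; the principle of generating supervision from unlabelled data is the same.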
ESA Φ-lab has several ongoing initiatives for creating foundation models dedicated to Earth observation-related tasks. These models use data to provide information on environmentally critical topics such as methane leaks and extreme-weather-event mitigation.
PhilEO recognises features like Richat
One foundation model project, PhilEO, started at the beginning of 2023 and is now reaching maturity. An evaluation framework based on global Copernicus Sentinel-2 data, and soon the PhilEO model itself, are being released to the Earth observation community in order to stimulate a collaborative approach, advance development in the field and ensure the derived foundation model is extensively validated.
The image above shows the Richat Structure, the type of feature that the PhilEO model has learnt to recognise without human supervision.
The human interface
Separate ESA initiatives are looking into the human end of the jigsaw puzzle – creating the digital assistant that will take a natural language question from a user, process the right data through Earth observation foundation models and produce the answer in text and/or images.
A precursor Digital Twin of Earth project has recently demonstrated that its digital assistant prototype can carry out multimodal tasks, searching among multiple data archives such as Sentinel-1 and 2 to compare information.
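One small piece of such an assistant is routing: deciding which data archives a natural-language question should be answered from. A hypothetical keyword-based sketch (nothing like ESA's actual implementation — the keyword lists are invented, and a real system would use a language model rather than string matching) might look like:

```python
# Toy mapping from archive name to trigger keywords (illustrative only).
ARCHIVE_KEYWORDS = {
    "Sentinel-1": {"subsiding", "subsidence", "radar", "flood", "deformation"},
    "Sentinel-2": {"crop", "cultivation", "vegetation", "land", "classification"},
}

def route_query(question):
    """Return the archives whose keywords appear in the question."""
    words = set(question.lower().replace("?", "").split())
    return sorted(name for name, kws in ARCHIVE_KEYWORDS.items() if words & kws)

print(route_query("Can you map crop cultivation in Kenya?"))
print(route_query("Are buildings subsiding in my street?"))
```

The two example questions are the ones from the top of this article; each routes to the archive whose sensor type suits it (radar interferometry for subsidence, optical imagery for crops).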
An ESA Φ-lab activity due to start in April will explore natural language processing for extracting and analysing information from verified Earth observation text sources, together with interpreting queries from both experts and general users. This activity will ultimately lead to the creation of a fully functioning digital assistant.
“The concept of an Earth observation digital assistant that can provide a broad range of insight from varied sources is a tantalising prospect, and as these initiatives show, there are a number of fundamental building blocks to put in place to achieve that aim,” comments Head of ESA Φ-lab Giuseppe Borghi.
“Given the extremely encouraging progress already achieved with PhilEO and the digital assistant precursor, I fully expect the new projects to yield game-changing results in the near future.”
Source: European Space Agency
0 notes
krissym72 · 1 year ago
Text
Mastering the Art of Crafting AI Image Prompts: A Comprehensive Guide
In the dynamic landscape of artificial intelligence (AI), the fusion of technology and creativity has birthed a remarkable phenomenon: AI image prompts. These prompts serve as catalysts for AI systems to generate visual content autonomously, igniting a revolution in creative AI applications. Defining AI Image Prompts:AI image prompts are carefully crafted instructions or stimuli designed to…
Tumblr media
View On WordPress
1 note · View note
valtsv · 9 months ago
Text
whoever wired the body's pain and pleasure responses along many of the same neural pathways thank you thank you thank you thank you
5K notes · View notes
guitarbomb · 1 year ago
Text
Neural DSP announces Cortex Control Launch
Neural DSP Technologies has just unveiled Cortex Control, a cutting-edge desktop platform that revolutionizes the way users interact with the Quad Cortex amp modeler. This launch marks a significant milestone in the world of amp modeling, offering a range of user-experience enhancements and creative control options. Cortex Control Developed through extensive research and shaped by feedback from…
Tumblr media
View On WordPress
0 notes