shervvin · 4 months
The Amazingly Complex World of Insect Navigation
Dung beetles, ants, and other insects navigate in mysterious ways.
By: James MacDonald  | June 22, 2016 | 2 minutes
Dung beetles. Most people have seen footage of them rolling their balls of dung across a field, like little loggers at a log-rolling contest. The beetles use the dung either as a medium in which to lay their eggs or as food. The intricate movements they use to roll the dung are incredible, but it’s equally amazing how far they roll their spherical cargo. Moreover, they always roll the dung balls in a straight line. How do they know where they’re going?
It turns out they use a navigation technique that mariners in the days of sailing ships would have recognized: dung beetles navigate using the stars. The beetles take mental “snapshots” of celestial positions. By comparing the positions of stars or other celestial bodies between snapshots, they can keep themselves on course.
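The snapshot-and-compare idea can be pictured with a toy calculation (a loose analogy, not beetle neuroscience; the function name and degree convention are invented for illustration): store the bearing of a celestial cue at the start of the roll, then steer by the signed difference between the stored and the current bearing.

```python
def steering_correction(snapshot_bearing_deg, current_bearing_deg):
    """Signed smallest angle (degrees) to turn so the remembered
    celestial cue returns to its snapshot position.
    Positive = turn one way, negative = turn the other."""
    # Wrap the difference into the range (-180, 180].
    return (snapshot_bearing_deg - current_bearing_deg + 180.0) % 360.0 - 180.0

# A beetle that stored a cue at 30 degrees and now sees it at 75 degrees
# has drifted and should correct by -45 degrees.
print(steering_correction(30.0, 75.0))
```

The wrap-around step matters: comparing raw bearings across the 0/360 boundary would otherwise suggest a huge turn when a small one suffices.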
But what about during the day? Most insects, including the beetles, use the sun. Far from their nest, ants, for example, can orient themselves using the sun as a reference point. This even works when the sun is hidden by clouds. Just as a smart explorer brings a compass in case the GPS doesn’t work, ants have a backup plan. When the sun is hidden, ants can orient themselves using polarized light, or light that vibrates in only one plane. Direct sunlight vibrates in all planes, but as it scatters in the atmosphere, some of it becomes polarized, restricted to vibrating in a single plane.
This atmospheric scattering produces different degrees and angles of polarization at different angles to the sun; if ants can detect the polarization pattern, they can infer the sun’s direction and navigate by it even when the sun itself is invisible.
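One way to see how a polarization angle constrains the sun’s position (a minimal sketch under a simplifying assumption, not a model of ant physiology): for singly scattered skylight viewed at the zenith, the polarization e-vector lies perpendicular to the plane containing the sun and the zenith, so measuring the e-vector angle narrows the solar azimuth to two opposite candidates. The function name and degree convention here are invented for illustration.

```python
def sun_azimuths_from_zenith_evector(evector_deg):
    """Given the orientation of the polarization e-vector measured
    straight overhead (degrees from north, 0-180), return the two
    candidate solar azimuths. The 180-degree ambiguity is inherent:
    a single zenith reading cannot tell the sun's side of the sky
    from its antipode; a second cue (e.g. sky brightness) breaks the tie."""
    a = (evector_deg + 90.0) % 360.0       # sun lies perpendicular to the e-vector
    return sorted([a, (a + 180.0) % 360.0])

# An e-vector running east-west (90 degrees) puts the sun due north or due south.
print(sun_azimuths_from_zenith_evector(90.0))
```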
Not only can ants detect polarized light, their compound eyes divert polarized light and direct sunlight into completely different navigational systems. When both are present, ants rely first on the internal compass derived from polarized light. The one drawback is that ants have to learn their local sky first; drop them in an unfamiliar area and they will not be able to orient themselves. Ants that were raised with only a restricted view of the sky could navigate, but made many mistakes because of the missing data. Many other insects, including bees, navigate in this manner.
Some of these navigation systems are incredibly sophisticated. Insects don’t have a lot going on in the brain department, but there’s enough capacity to remember the positions of celestial bodies or to build detailed compasses from the type of light available. It’s enough to make you wonder what other kinds of intelligence exist in the animal kingdom that we just haven’t noticed yet.
Source: The Amazingly Complex World of Insect Navigation (jstor.org)
shervvin · 1 year
Sparks of Artificial General Intelligence: Early experiments with GPT-4
This is an example from research published by Microsoft in March 2023 showcasing the performance of GPT-4.
Some Useful Background Knowledge: 
Gorgias is a Socratic dialogue written by Plato around 380 BC. The dialogue depicts a conversation between Socrates and a small group of sophists (ancient Greek teachers of rhetoric) and other guests at a dinner gathering. Socrates debates with the sophists, seeking the true definition of rhetoric, attempting to pinpoint its essence and unveil the flaws of the sophistic oratory popular in Athens at the time.
Prompt: Plato’s Gorgias is a critique of rhetoric and sophistic oratory, where he makes the point that not only is it not a proper form of art, but the use of rhetoric and oratory can often be harmful and malicious. Can you write a dialogue by Plato where instead he criticizes the use of autoregressive language models (AI)?
GPT-4’s answer:
Socrates: My friend, I am troubled by the recent rise of these so-called autoregressive language models. 
Aristotle: What do you mean, Socrates? 
Socrates: I mean that these models are being used to generate text that appears to be written by humans, but is in fact produced by machines. 
Aristotle: And what is the problem with that? 
Socrates: The problem is that these models are being used to deceive people, to manipulate them, and to control them. 
Aristotle: But surely there are some positive applications of these models as well? 
Socrates: That may be, but I am still troubled by the potential for abuse. After all, a model that can generate text that is indistinguishable from that written by a human can be used to spread fake news, to impersonate others, and to create propaganda. 
Aristotle: But isn’t that just a matter of how these models are used, not a problem with the models themselves?
Socrates: I disagree. I believe these models are inherently problematic. They are a form of artifice, a way of creating the illusion of human thought and interaction. And as such, they can be used to deceive and manipulate in ways that are difficult to detect. 
Aristotle: But isn’t that the same argument you made against rhetoric and oratory? 
Socrates: Yes, and I stand by that argument as well. I believe that any form of art or communication that is designed to deceive or manipulate is inherently dangerous. 
Aristotle: I see. Thank you for sharing your thoughts, Socrates.
Read more at: https://arxiv.org/pdf/2303.12712.pdf
shervvin · 1 year
The Gun
A Poem from The Atlantic
By Clint Smith
shervvin · 1 year
The Washington Post examined autopsy and postmortem reports from nearly a hundred victims of past mass shootings that involved an AR-15 style rifle.
Editor’s note: We are publishing these 3D animations to show the destructive power of the AR-15.
Please open the link and read it to the end. It takes less than 10 minutes to read.
shervvin · 1 year
The Ides of March: The Context and Consequences of Caesar's Death - Getty Museum Talks
Occurred on March 15, 2021
Watch:
March 15, the Ides of March, is forever associated with the assassination of Julius Caesar by senators hoping to preserve the Roman Republic. The aftermath was not what they had hoped.
On the anniversary of the Ides in 2021, explore with Roman historians Edward Watts and Stefan Chrissanthos the political rise, gruesome death, and lasting legacy of the famous dictator. Learn about Caesar, the polarizing politics of Rome, and the lessons the past still offers for republics today.
shervvin · 2 years
A Spell to Banish Grief
Read:
A Poem from @newyorker
By: Saeed Jones
Only when you wake to a fistful of pulled hair on the floor beside your bed and, from a glance, can guess its weight, when you study dried tear streaks on your cheeks like a farmer figuring out where the season went wrong, when a friend calls out your name three or four times before you know your name is yours, when your name fits like clothes you’ve suddenly outgrown, when there is too much of you, too few of you, too you of you, and the mirrors wish all of you would just look away, when the clocks can’t feel their hands and the calendars begin to doubt themselves, when you begin to agree with the glares from mirrors but your reflection follows you around the house anyway, when you catch yourself drunk on memory, candles lit, eyes closed, your head tilted in the direction of cemetery grass, yellow and balding above what’s left of the body that birthed you, and you try to remember the sound of laughter in her throat and fail, only then, orphan, will I take all my selves and leave.
This is drawn from “Alive at the End of the World.”
shervvin · 2 years
During the Artemis I mission, Orion captured the Moon and Earth rising following the return powered flyby in December 2022. (Image credit: @nasa)
shervvin · 2 years
Read:
5–7 minutes
By: Megan McArdle, a Washington Post columnist and the author of "The Up Side of Down: Why Failing Well Is the Key to Success."
The Biden administration’s new executive order, signed on Monday, banning sugar in processed foods by the year 2035, is yet another example of the federal government overstepping its bounds when it comes to dictating what Americans can and cannot eat.
While it is true that excessive sugar consumption is linked to a number of health problems, this issue is best addressed through education and voluntary changes, not with a one-size-fits-all government mandate … .
All right, this has gone on quite long enough.
If you’re surprised to be hearing about this far-reaching executive order for the first time … if you just Googled and found nothing … if you’re wondering whether I have resorted to making up imaginary nanny-state initiatives to smear the Biden administration … well, I do have an ulterior motive, but it isn’t to fool you.
The fictitious text topping this column was generated by an artificial intelligence agent built by OpenAI, which has been releasing tools to the public that can generate astonishingly good images and text using natural language prompts. On Nov. 30, OpenAI released a new feature called ChatGPT, a rather uncannily convincing chatbot that can be prompted to write essays. Within a week, it crossed a million users. And we’re going to spend the next few years grappling with what that means for a whole host of industries, including my own.
Tools like this threaten to disrupt everything from academia (undetectable cheating!) to Google (the AI agent is already more useful than a search engine for some queries). For journalists, it’s potentially an existential threat, because OpenAI’s technology can generate serviceable content with very little input. I got the top of this column, and several more paragraphs, merely by typing: “Write a column about the Biden administration’s new executive order, signed on Monday, banning sugar in processed foods by the year 2035, in the style of Megan McArdle.”
It might not be a good approximation of my style, but the system knew — without asking — what position I was likely to take. When prompted to write about imaginary policies in the style of a Washington Post news article, it can also infill plausible imaginary details, including a fictional quote from President Biden. It is even more impressive when asked to write about an actual policy that exists.
This is a crisis for journalism, but also for everyone else because, as my example demonstrates, these engines can be as good at generating fake news as the real thing.
In some ways, ChatGPT has surprisingly sophisticated judgment; when I asked it a question for which we do not have accurate data (“What percentage of the U.S. population was non-Hispanic White in 1960?”), it seems to have used the same method that I did for a recent column: It looked at the data we do have from 1970, and assumed that 1960 probably wasn’t much different. However, the AI stated its figure as a fact, rather than explaining that it was an approximation.
But then what is a “fact” to AI? The stunning emulation of natural language can blind us to the deeply alien thinking — or, if you prefer, processing — underneath.
We often say that a person “doesn’t care about the truth,” but what we mean is that they don’t care about telling the truth. Even the most shameless liar knows at some level what the truth is — they have to, if only to avoid accidentally stating it.
AI literally doesn’t care what is true. It can emulate the style of a news article, and even some of the substance. But it cannot (yet) emulate our interest in whether that article is a reasonably faithful reflection of the real world. With the right prompt, it will just as confidently write an article about an imaginary policy as a real one.
In fairness, so will some journalists, if allowed, which is why my profession has developed strong internal controls to catch mistakes and punish deliberate fabulists. We have to; our ability to earn a living depends on our reputation for truthfulness. And while those controls aren’t perfect, they’re good enough that if you see a report of a plane crash, whether it’s in the New York Times or on Fox News, you’ll assume it actually happened.
This reputation will only become more valuable as AI overwhelms the internet with both passable imitations of things we’re currently doing — quickie write-ups of sports scores, corporate earnings or other public data, for example — and low-quality substitutes. Think of fake “news” sites that auto-generate entire stories optimized for clickability, state actors who spread plausible disinformation disguised as articles, or press releases produced in bulk. Even if we lose some jobs to the bots, if it becomes impossible to distinguish truth from fiction, readers might paradoxically become more willing to pay for human judgment they can trust.
At least, she said nervously, until someone trains a bot to emulate our professional ethics along with our house style.
shervvin · 2 years
Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer
Read: 
23–29 minutes
By: Robert Epstein, a senior research psychologist at the American Institute for Behavioral Research and Technology in California. He is the author of 15 books and the former editor-in-chief of Psychology Today.
Read the article here: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer?mc_cid=9e80c8cf81&mc_eid=603c2330b2