#multimode
Explore tagged Tumblr posts
Text
https://www.xcalable.org/product/1000mbps-multi-mode-rugged-sfp-glc-sx-mm-rgd-882658074189/
#youtube#@inphi#@lucent#@marvell#@nokia#@alcatel#@xcalable#amazon#ebay#Cisco#1000mbps#multimode#rugged#sfp#glc
0 notes
Text
did you know pittsburgh has regional transit that desperately needs funding like every other transit network in the us. take your local public transit!!!! please!!!!!!! fund them please!!!!
[no bars versions below]
#digital art#fan art#art#my art#the pitt#everyone embrace multimodal transportation NOW!!!!!!#some of them may be rich enough to have a car but who wants to drive in pgh??? No One
2 notes
Text

MBTA Green Line Type 7 3709 and DE60LFR 1216 (May 14 2025)
#mbta#transit#public transportation#green line#light rail#mbta type 7#new flyer#de60lfr#bus#trainposting#huntington ave beacon of multimodality
3 notes
Text
i actually fucking hate editing
#and i hate texting because ive started to text like i have tags to explain my every thought process#what an insane red flag to have#so obsessed with clear communication re: underlying feelings that you can only communicate via tumblr post#idiots#multimodality
3 notes
Text
I’m at the election results. I’m at the altered kink headspace. I’m at the combination election results altered kink headspace.
#Lemme tell u I’m enjoying one of those sensation much more than the other#Vibes so bad I needed multimodal posting
2 notes
Text
Pegasus 1.2: High-Performance Video Language Model

Pegasus 1.2 advances long-form video AI with high accuracy and low latency. This commercial tool supports scalable video querying.
TwelveLabs and Amazon Web Services (AWS) have announced that Amazon Bedrock will soon offer Marengo and Pegasus, TwelveLabs' cutting-edge multimodal foundation models. Amazon Bedrock is a managed service that gives developers access to top AI models from leading organisations through a single API. With seamless access to TwelveLabs' comprehensive video comprehension capabilities, developers and companies can transform how they search, evaluate, and derive insights from video content, backed by AWS's security, privacy, and performance. AWS is the first cloud provider to offer TwelveLabs' models.
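Since Bedrock routes every hosted model through the same runtime API, a Pegasus call would presumably look like any other `invoke_model` request. The sketch below builds such a request with plain Python; the model ID and body fields are assumptions, since TwelveLabs' Bedrock schema had not been published at the time of writing — the real values would come from the Bedrock model catalog.

```python
import json


def build_pegasus_request(video_uri: str, prompt: str) -> dict:
    """Assemble a hypothetical Bedrock invoke_model payload for Pegasus.

    The model ID and body field names below are illustrative assumptions,
    not TwelveLabs' published schema.
    """
    return {
        "modelId": "twelvelabs.pegasus-1-2",  # assumed identifier
        "body": json.dumps({
            "video_s3_uri": video_uri,        # assumed field name
            "prompt": prompt,
        }),
    }


request = build_pegasus_request(
    "s3://my-bucket/talk.mp4",
    "Summarise the key moments in this video.",
)
print(request["modelId"])
```

The resulting dict would then be passed to `boto3.client("bedrock-runtime").invoke_model(**request)`, which is the single entry point Bedrock exposes for all its hosted models.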
Introducing Pegasus 1.2
Unlike many academic settings, real-world video applications face two challenges:
Real-world videos can range from a few seconds to several hours in length.
They demand proper temporal understanding.
To meet these commercial demands, TwelveLabs is announcing Pegasus 1.2, a substantial upgrade to its industry-grade video language model. Pegasus 1.2 interprets long videos at a state-of-the-art level: the model handles hour-long videos with low latency, low cost, and best-in-class accuracy. Its embedded storage caches videos, making it faster and cheaper to query the same video repeatedly.
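The embedded-storage idea described above is essentially an embed-once, query-many pattern: the expensive video encoding runs a single time, and every later question reuses the stored embedding. A minimal sketch of that pattern, with purely illustrative names (this is not TwelveLabs' API):

```python
from typing import Callable


class VideoIndex:
    """Embed-once, query-many caching pattern (illustrative only)."""

    def __init__(self, embed: Callable[[str], list]):
        self._embed = embed                 # expensive: runs the video encoder
        self._cache: dict = {}              # video id -> stored embedding

    def query(self, video_id: str, answer: Callable) -> object:
        # Only the first query per video pays the embedding cost;
        # repeat queries reuse the cached embedding.
        if video_id not in self._cache:
            self._cache[video_id] = self._embed(video_id)
        return answer(self._cache[video_id])


encoder_runs = []
def fake_embed(video_id):
    encoder_runs.append(video_id)           # count how often the encoder runs
    return [0.1, 0.2]

index = VideoIndex(fake_embed)
index.query("lecture.mp4", lambda emb: "summary")
index.query("lecture.mp4", lambda emb: "chapter list")
print(len(encoder_runs))  # → 1: the encoder ran only once for two queries
```

The design choice is the point of the article's pricing claims: once embeddings persist in storage, the marginal cost of each additional question about the same video is tiny.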
Pegasus 1.2 delivers business value through an intelligent, focused system architecture and excels in production-grade video processing pipelines.
Superior video language model for extended videos
Businesses need to handle long videos, yet processing time and time-to-value are key concerns. As input videos grow longer, a standard video processing and inference pipeline must handle orders of magnitude more frames, making it unsuitable for broad adoption and commercial use. A commercial system must also answer prompts and queries accurately across longer time spans.
Latency
To evaluate Pegasus 1.2's speed, TwelveLabs compared its time-to-first-token (TTFT) on 3–60-minute videos against the frontier model APIs GPT-4o and Gemini 1.5 Pro. Pegasus 1.2 shows consistently low time-to-first-token latency for videos up to 15 minutes and responds faster on longer material thanks to its video-focused model design and optimised inference engine.
Performance
Pegasus 1.2 is compared to frontier model APIs on VideoMME-Long, a subset of Video-MME that contains videos longer than 30 minutes. Pegasus 1.2 outperforms all the flagship APIs, demonstrating state-of-the-art performance.
Pricing
Pegasus 1.2 provides best-in-class commercial video processing at low cost. Rather than trying to do everything, TwelveLabs focuses on long videos and accurate temporal understanding. With this focused approach, its highly optimised system performs well at a competitive price.
Better still, the system can generate many video-to-text outputs at little additional cost. Pegasus 1.2 produces rich video embeddings from indexed videos and saves them in its database for future API queries, allowing clients to build on them continually at low cost. Google Gemini 1.5 Pro's context cache costs $4.5 per hour of storage for 1 million tokens, roughly the token count of an hour of video, which works out to thousands of dollars per month. TwelveLabs' integrated storage costs $0.09 per video hour per month, about 36,000 times less. This design benefits customers with large video archives who need to understand all of it cheaply.
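The "x36,000" figure follows from the mismatched billing units: Gemini's cache is billed per hour of storage, while the integrated storage price is per month. A quick check of that arithmetic, taking roughly 730 hours in a month:

```python
# Prices as quoted in the comparison above.
GEMINI_CACHE_PER_HOUR = 4.50   # $ per 1M tokens (~1 video hour) per hour of storage
INTEGRATED_PER_MONTH = 0.09    # $ per video hour per month
HOURS_PER_MONTH = 730          # ~24 * 30.4

# Keeping one video-hour cached for a full month at Gemini's hourly rate:
gemini_monthly = GEMINI_CACHE_PER_HOUR * HOURS_PER_MONTH   # $3285 per month

ratio = gemini_monthly / INTEGRATED_PER_MONTH
print(round(ratio))  # ≈ 36500, the "x36,000 less" in the comparison
```

So the comparison assumes a video stays cached continuously; for short-lived caches the gap would be correspondingly smaller.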
Model Overview & Limitations
Architecture
Pegasus 1.2's encoder-decoder architecture for video understanding comprises a video encoder, a tokeniser, and a large language model. Though efficient, its design allows for full analysis of textual and visual data.
These components form a cohesive system that can understand both long-term context and fine-grained detail. The architecture shows that small models can interpret video well when careful design decisions creatively address the fundamental difficulties of multimodal processing.
Restrictions
Safety and bias
Pegasus 1.2 includes safety protections, but like any AI model it can produce objectionable or harmful material without sufficient oversight and control. Safety and ethics for video foundation models are still being studied; TwelveLabs will provide a complete assessment and ethics report after further testing and feedback.
Hallucinations
Occasionally, Pegasus 1.2 may produce incorrect results. Despite improvements since Pegasus 1.1 that reduce hallucinations, users should be aware of this limitation, especially for tasks that demand precision and factual accuracy.
#technology#technews#govindhtech#news#technologynews#AI#artificial intelligence#Pegasus 1.2#TwelveLabs#Amazon Bedrock#Gemini 1.5 Pro#multimodal#API
2 notes
Text
Jim Andrews. Still from NeoNio, a revamp of Nio, the original 2000 work and an important text in the history of electronic literature.
#Jim Andrews#Nio#multimodality#digital poetics#electronic literature#digital literature#typographic art
4 notes
Note
how's the orvposting are you having fun with the orvposting
ill be honest ningy. the orvposting is one of the only things keeping me going.
2 notes
Text
Just saw that a YMS(+) bus had "Department of Transport<br>National Capital Territory of Delhi" on its side instead of the cool DTC logo and name, what the fuck? is there a third (fourth) hidden bus operator? does the department of transport run its own fuckin bus system??? what?????
#just confused#it was a JBM greenlife bus#already been struggling to find out fucking anything about the “Delhi Transport” logo on cluster buses#why is this such a fucking confusing mess#consolidate all the buses in Delhi and just fucking hand it over to the DTC#ALL OF THEM#just the DTC#actually MERGE the DMRC and DTC#integrated multimodal transport ny fucking ASS#would explain why the YMS bus is called that honestly#the whole not following the three int bus number system
2 notes
Text
Sotrigender Pride Flag
Sotrigender or tritrisogender/trisotrigender: trimodal trigender in which someone is iso, trans, and cis; being trisogender as a result of being trigender; or being trigender as a result of being trisomodal.
#ap#sotrigender#trigender#trimodal#trisomodal#multimodal#gender modality#multimodality#trimodality#genders#neogender#gender umbrella#mogai coining#liom coin#pride flag#isogender#transgender#trans#cis#iso#cisgender#trisgender#isotrans#isocis#tris#trismodal#trisogender
38 notes
Text
Being, Thinking, and Knowing in a Hypertext Age
The speculative rhetorical model posits that we can only know the world in ways bounded and contextualized by our own experience of being. For this reason, a speculative rhetoric approach tries to pay careful attention to the perspectives, roles, and experiences of nonhumans, since communication inevitably takes place among a vast array of nonhuman actants. Speculative rhetorician Andrew Reid asserts that “A speculative rhetoric begins with recognizing that language is nonhuman.” At first, I couldn’t begin to imagine what this must mean. Sure, animals communicate, but surely language—expressive, symbolic communication with defined rules—must be an exclusively human phenomenon.
I read Reid’s short list of scholars cited (Alexander Galloway, Richard Grusin, Bruno Latour, Alan Lui, and Quentin Meillasoux) aloud to GPT-4 and asked it to tell me what they were known for, in hopes that knowing the background Reid was drawing from would help me contextualize such a bizarre statement.
It confirmed that Bruno Latour is best known for actor-network theory, as I had thought. Meillassoux it introduced as a speculative realist philosopher. Liu it defined as a scholar of “language as a digital-cultural phenomenon, influenced by both human creativity and digital technology.” Grusin, it said, was known for proposing that new technologies “remediate” and refashion older ones. Galloway, it said, “explores how digital protocols, the rules and standards governing digital networks, shape interactions and communications.” A quick look at Google Scholar and the scholars’ university webpages confirmed that its characterizations were fairly accurate.
Altogether, I could only conclude that these scholars affirm language as a constructed, constantly evolving phenomenon, although I still couldn’t see how the ability to influence human actions would equate to an equal ownership of language. It may be old-fashioned, but at present I’m still prepared to embrace Kenneth Burke’s definition of man as “the symbol-using animal.” As far as I know, there’s no evidence that animals can grasp the abstract symbolism inherent in language as well as we can.
However, I do think Gunther Kress’s “Multimodality” afforded me another avenue for making sense of Reid’s perspective, at least. Kress asserts that “all texts are multimodal”, where ‘text’ seems to be doing a great deal of heavy lifting to encompass practically anything into which meaning can be encoded and decoded. For him, the multimodality of verbal speech arises from its inclusion of “pitch variation; pace; stress; phonological units (produced by a complex of organs); lexis; sequencing (as syntax); etc.” In other words, any element which can have a role in imparting meaning is part of the mode (or means) of linguistic communication. Since some animals can intentionally adapt these facets of communication to a rhetorical context (e.g. cats using a less babyish meow with one another than with humans), I can see the argument that many animals possess a kind of language in that way.
But since Kress’s many example pictures and diagrams stress the representational quality of human languages (in which he apparently includes visuals, which he says can develop a kind of grammar) even when it’s completely divorced from written or spoken words, I’m still inclined to say that animals have communicative skills but not language. I’m curious whether anyone knows of any animals capable of abstraction.
Similarly, I wonder at what point we could consider the product of generative AI to be language (or perhaps I should say a form of communication, period). There’s no conscious intent behind it, it’s an actant and not an actor, but it arguably works entirely in abstractions (it doesn’t have meaningful, individual experience of what anything is!) and it certainly considers its modal elements, as many generative AI models will show by displaying alternate response options.
5 notes
Text
i want to believe instead i look at the sky and i feel nothing
little song lyric art piece that i did for school today! words are from Chinese Satellite by Phoebe Bridgers.
#this song inspired a very clear image in my head when i first listened to it#so when i was offered the chance to pretty much make whatever multimodal piece of art i wanted of course i had to make this#thanks to aly for introducing me to phoebe bridgers' music! as you can see i appreciate it#featuring my continuing battle with backgrounds#phoebe bridgers#song lyric artwork#art#light pollution is a bitch#the geek draws
5 notes
Text
i have to come up with a research topic to cover across multiple assignments for the term help
#argument research and multimodal composition so it has to be like... a topic i can construct an argument around#and i literally have zero ideas
5 notes
Text
any experienced multimodal analysts have any Thoughts on ELAN? i'm on a hunt for a mac-compatible software for annotating vids that allows for a customizable coding scheme. lots of the ones i've seen are for conversation analysis -- which is awesome, but not aligned with my needs
#michelle's thesis#yes a new tag lol#gradblr#ELAN#multimodal#LOL i have no idea what to tag this so ppl can see it#conversation analysis#studyblr#research#phdblr#graduate school#grad student#grad school#grad studies#救命#for context i'm analyzing long-form video essays -- a descriptive sort of component descriptive analysis?#so the often crazy and chaotic multimodal/semiotic entanglements ... warrant a software#personal
2 notes
Text
Content of Multimodality

The image attached above is the graphic I created as a multimodal resource. The image displays the eight concepts of rhetoric, serving as a guide to the complexities of writing: specifically, how multiple variables influence the literary technique of the writer and the receptive perception of the viewer. Arranged in a well-orchestrated diagram, the graphic shows the viewer the framework of each concept in relation to the others, displaying how rhetoric isn’t effective if one piece is missing from the “symmetric” image. The definitions were added as “mini notes” for the individual concepts of rhetoric, for people like me who may be unfamiliar with one or two terms. As someone who never really knew what a discourse community was, I found the graphic helpful for remembering its premise through a memorable layout.
5 notes
Text
told my boss today that i couldn't stay late because i had to take a psychology test tonight (like. for school. for my psychology class) and he thought i meant that i had to go see a psychologist and get tests done ON ME???? what the hell dude
#SIR.....#i can't stop laughing thinking about it like wtf#btw the test was. fine.#i got a 90 which is ok. but it was an open book test#i do not grasp the idea of multimodal perception#very confusing to me
3 notes