#Computer Vision
Text
By Mobotato
Music on
#nestedneons#cyberpunk#cyberpunk art#cyberpunk aesthetic#cyberpunk artist#art#cyberwave#scifi#control#concept#3dart#3d illustration#concept art#cctv#cctv camera#computer vision#the dark side of ai#surveillance#cryptoart#crypto art
47 notes
Text
Bayesian Active Exploration: A New Frontier in Artificial Intelligence
The field of artificial intelligence has seen tremendous growth in recent years, with new techniques and paradigms emerging to tackle complex problems in machine learning, computer vision, and natural language processing. Two concepts that have attracted particular attention are active inference and Bayesian mechanics. Although both have been researched separately, their synergy has the potential to revolutionize AI by creating more efficient, accurate, and effective systems.
Traditional machine learning algorithms take a passive approach: the system receives data and updates its parameters without influencing the data collection process. This can be limiting, especially in complex and dynamic environments. Active inference, on the other hand, allows an AI system to take an active role in selecting the most informative data points or actions. In this way, the system can adapt to changing environments, reducing the need for labeled data and improving the efficiency of learning and decision-making.
One of the first milestones in active learning was the "query by committee" algorithm, introduced by Seung, Opper, and Sompolinsky in 1992 and analyzed in depth by Freund et al. in 1997. It maintains a committee of models and queries the data points on which the committee disagrees most, laying the foundation for later active learning techniques. Another important milestone was "uncertainty sampling", introduced by Lewis and Gale in 1994, which queries the data points about which the current model is most uncertain.
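Uncertainty sampling is easy to sketch. The following is a minimal, generic illustration (entropy-based selection with NumPy; the toy probabilities are made up, not drawn from any paper):

```python
import numpy as np

def uncertainty_sampling(probs, k):
    """Pick the k pool points whose predicted class distribution
    has the highest Shannon entropy (most ambiguous)."""
    probs = np.asarray(probs, dtype=float)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # sort by entropy, descending, and keep the top k indices
    return np.argsort(entropy)[::-1][:k]

# toy pool: predicted class probabilities for 4 unlabeled points
pool = [[0.98, 0.02],   # confident
        [0.55, 0.45],   # ambiguous
        [0.90, 0.10],
        [0.50, 0.50]]   # most ambiguous
print(uncertainty_sampling(pool, 2))  # -> [3 1]
```

The labeler is then asked to annotate only those high-entropy points, which is where the savings in labeling effort come from.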
Bayesian mechanics, on the other hand, provides a probabilistic framework for reasoning and decision-making under uncertainty. By modeling complex systems with probability distributions, it enables AI systems to quantify uncertainty and ambiguity, and thus to make more informed decisions from incomplete or noisy data. Bayesian inference, the process of updating a prior distribution with new data to obtain a posterior, is a powerful tool for learning and decision-making.
One of the first milestones in Bayesian mechanics was Bayes' theorem, published posthumously in 1763, which provides a mathematical rule for updating the probability of a hypothesis in light of new evidence. Another important milestone was the introduction of Bayesian networks by Judea Pearl in 1988, which offered a structured approach to modeling complex systems with probability distributions.
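Bayes' theorem in action, as a toy numeric example (the diagnostic-test numbers are hypothetical, chosen only to show the update):

```python
# Posterior for a disease test: P(disease | positive)
# via Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
prior = 0.01           # P(disease)
sensitivity = 0.95     # P(positive | disease)
false_pos = 0.05       # P(positive | no disease)

# P(E): total probability of a positive result
evidence = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / evidence
print(round(posterior, 3))  # -> 0.161
```

Even with a 95%-sensitive test, a positive result only raises the probability from 1% to about 16%, because the prior is low; this is exactly the kind of uncertainty quantification the post describes.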
While active inference and Bayesian mechanics each have their strengths, combining them has the potential to create a new generation of AI systems that can actively collect informative data and update their probabilistic models to make more informed decisions. The combination of active inference and Bayesian mechanics has numerous applications in AI, including robotics, computer vision, and natural language processing. In robotics, for example, active inference can be used to actively explore the environment, collect more informative data, and improve navigation and decision-making. In computer vision, active inference can be used to actively select the most informative images or viewpoints, improving object recognition or scene understanding.
Timeline:
1763: Bayes' theorem
1988: Bayesian networks (Pearl)
1992: Query by committee (Seung, Opper, and Sompolinsky)
1994: Uncertainty sampling (Lewis and Gale)
1997: Query-by-committee analysis (Freund et al.)
2017: Deep Bayesian active learning
2019: Bayesian active exploration
2020: Active Bayesian inference for deep learning
2020: Bayesian active learning for computer vision
The synergy of active inference and Bayesian mechanics is expected to play a crucial role in shaping the next generation of AI systems. Some possible future developments in this area include:
- Combining active inference and Bayesian mechanics with other AI techniques, such as reinforcement learning and transfer learning, to create more powerful and flexible AI systems.
- Applying the synergy of active inference and Bayesian mechanics to new areas, such as healthcare, finance, and education, to improve decision-making and outcomes.
- Developing new algorithms and techniques that integrate active inference and Bayesian mechanics, such as Bayesian active learning for deep learning and Bayesian active exploration for robotics.
Dr. Sanjeev Namjosh: The Hidden Math Behind All Living Systems - On Active Inference, the Free Energy Principle, and Bayesian Mechanics (Machine Learning Street Talk, October 2024)
youtube
Saturday, October 26, 2024
#artificial intelligence#active learning#bayesian mechanics#machine learning#deep learning#robotics#computer vision#natural language processing#uncertainty quantification#decision making#probabilistic modeling#bayesian inference#active inference#ai research#intelligent systems#interview#ai assisted writing#machine art#Youtube
6 notes
Text
Awesome First Project Back - Powerful Sound Analysis with Artificial Intelligence and Python!!
This is a project I just completed that does powerful sound analysis on the Star-Spangled Banner using librosa, matplotlib, and numpy with Python! It's so cool, I am really excited to have completed it!
This is the program I wrote. I also have a guide on how to do this project on Medium, which you can find here:
These are the results of the program, which are pretty cool! Here are the plots that were generated, thanks to the power of Matplotlib!!
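The original program uses librosa, but the core idea of this kind of sound analysis can be sketched with NumPy alone (a synthetic 440 Hz tone stands in for the actual recording; this is not the post's code):

```python
import numpy as np

SR = 22050                     # sample rate in Hz (librosa's default)
t = np.arange(SR) / SR         # one second of audio
# stand-in signal: an A4 (440 Hz) tone plus a quieter octave overtone
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

# real-input FFT gives the magnitude spectrum; rfftfreq maps bins to Hz
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / SR)
dominant = freqs[np.argmax(spectrum)]
print(round(dominant, 1))  # -> 440.0
```

With a real file you would replace the synthetic signal with the loaded waveform and plot `spectrum` against `freqs` with Matplotlib.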
If you want to play around with it, feel free; I am going to be uploading the code later to my GitHub!! :D My GitHub is: https://github.com/Newt93
#github#python 3#python#python programming#python programmer#computer vision#python project#python tutorial#tutorials#programming tutorial#programming#programmer#programmers#machine learning#artificial intelligence#sound engineering#sound design#music#music analysis#audio analysis#audio#data science#data analysis
32 notes
Text
Termovision HUD from The Terminator (1984). A head-up display (HUD) is a transparent display that presents data without requiring the viewer to look away from their usual viewpoint. "Termovision" refers to the HUD used by Terminators to display analyses and decision options.
#terminator#computer vision#pattern recognition#red#sci fi#hud#OCR#termovision#natural language processing
55 notes
Text
Amanda Wasielewski, Computational Formalism: Art History and Machine Learning
7 notes
Text

Running Form
This running form illustration shows an athlete's running technique during various stages of the gait cycle. Overlays highlight the relative positions of various body parts, an essential step in analyzing running form. Visit: https://www.movaia.com
#Running Form#Movaia#Running Form Analysis#Gait Assessment#Running Technique Running Gait Analysis#AI#Computer Vision#Improve running form
6 notes
Text
Flexible circuit board manufacturing (JLCPCB, 2023)
The inkjet print head uses a fiducial camera to register the FPC panel; after alignment, it prints graphics with UV-curable epoxy in two passes.
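The registration step can be sketched mathematically: the fiducial camera observes known marks, and a transform is fit that maps camera coordinates onto the panel's design coordinates. A minimal NumPy least-squares sketch (illustrative only; not JLCPCB's actual software, and the fiducial coordinates are made up):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping fiducial positions
    seen by the camera (src) onto design coordinates (dst)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # augment with a column of ones so translation is fit too: [x, y, 1]
    A = np.hstack([src, np.ones((len(src), 1))])
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs  # 3x2: first two rows linear part, last row translation

# three fiducials on a panel that is shifted by (5, -2), no rotation
src = [[0, 0], [10, 0], [0, 10]]
dst = [[5, -2], [15, -2], [5, 8]]
M = fit_affine(src, dst)
print(np.round(M[2], 3))  # translation row -> [ 5. -2.]
```

In a real machine the fitted transform is applied to every print coordinate so the two epoxy passes land exactly on the aligned panel.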
#jlcpcb#pcb#fpc#manufacturing#manufacture#factory#electronics#fiducial camera#camera#cv#opencv#computer vision
137 notes
Text

2023.08.31
i have no idea what i'm doing!
learning computer vision concepts on your own is overwhelming, and it's even more overwhelming to figure out how to apply those concepts to train a model and prepare your own data from scratch.
context: the public university i go to expects the students to self-study topics like AI, machine learning, and data science, without the professors teaching anything TT
i am losing my mind
based on what i've watched on youtube and understood from articles i've read, i think i have to do the following:
data collection (in my case, images)
data annotation (to label the features)
image augmentation (to increase the diversity of my dataset)
image manipulation (to normalize the images in my dataset)
split the training, validation, and test sets
choose a model for object detection (YOLOv4?)
train the model using my custom dataset
evaluate the trained model's performance
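a rough sketch of the split step from the list above, in plain python (illustrative only; the file names are made up):

```python
import random

def split_dataset(items, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle a list of image paths and split it into
    train / validation / test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)  # seeded, so the split is reproducible
    n_val = int(len(items) * val_frac)
    n_test = int(len(items) * test_frac)
    val = items[:n_val]
    test = items[n_val:n_val + n_test]
    train = items[n_val + n_test:]
    return train, val, test

paths = [f"img_{i:04d}.jpg" for i in range(100)]
train, val, test = split_dataset(paths)
print(len(train), len(val), len(test))  # -> 80 10 10
```

the important part is shuffling before splitting (so each subset is representative) and fixing the seed (so results are reproducible).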
so far, i've collected enough images to start annotation. i might use labelbox for that. i'm still not sure if i'm doing things right 🥹
if anyone has any tips for me or if you can suggest references (textbooks or articles) that i can use, that would be very helpful!
55 notes
Text
How do you read a scroll you can’t open?
With lasers!

In 79 AD, Mount Vesuvius erupts and buries the library of the Villa of the Papyri in hot mud and ash
The scrolls are carbonized by the heat of the volcanic debris. But they are also preserved. For centuries, as virtually every ancient text exposed to the air decays and disappears, the library of the Villa of the Papyri waits underground, intact
Then, in 1750, our story continues:
While digging a well, an Italian farm worker encounters a marble pavement. Excavations unearth beautiful statues and frescoes – and hundreds of scrolls. Carbonized and ashen, they are extremely fragile. But the temptation to open them is great; if read, they would more than double the corpus of literature we have from antiquity.
Early attempts to open the scrolls unfortunately destroy many of them. A few are painstakingly unrolled by an Italian monk over several decades, and they are found to contain philosophical texts written in Greek. More than six hundred remain unopened and unreadable.

Using X-ray tomography and computer vision, a team led by Dr. Brent Seales at the University of Kentucky reads the En-Gedi scroll without opening it. Discovered in the Dead Sea region of Israel, the scroll is found to contain text from the book of Leviticus.
This achievement shows that a carbonized scroll can be digitally unrolled and read without physically opening it.
But the Herculaneum Papyri prove more challenging: unlike the denser inks used in the En-Gedi scroll, the Herculaneum ink is carbon-based, affording no X-ray contrast against the underlying carbon-based papyrus.
To get X-rays at the highest possible resolution, the team uses a particle accelerator to scan two full scrolls and several fragments. At 4–8 µm resolution, with 16 bits of density data per voxel, they believe machine learning models can pick up subtle surface patterns in the papyrus that indicate the presence of carbon-based ink.
In early 2023, Dr. Seales's lab achieves a breakthrough: their machine learning model successfully recognizes ink in the X-ray scans, demonstrating that virtual unwrapping can be applied to the Herculaneum scrolls using the scans obtained in 2019, and even uncovering some characters in hidden layers of papyrus.
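As a toy illustration of the underlying idea (the lab's actual system is a trained deep network, not this): extract a density feature around candidate voxels and compare an inked region against bare papyrus.

```python
import numpy as np

def patch_features(volume_slice, centers, size=3):
    """Mean density in a (size x size) patch around each candidate
    voxel -- the crudest stand-in for the learned surface features
    a real ink-detection model would use."""
    half = size // 2
    feats = []
    for r, c in centers:
        patch = volume_slice[r - half:r + half + 1, c - half:c + half + 1]
        feats.append(patch.mean())
    return np.array(feats)

# toy 16-bit-style density slice: faint "ink" blob on papyrus background
rng = np.random.default_rng(0)
slice_ = rng.normal(30000, 200, size=(32, 32))
slice_[10:13, 10:13] += 1500          # subtle density bump from ink
feats = patch_features(slice_, [(11, 11), (25, 25)])
print(feats[0] > feats[1])  # -> True
```

The real difficulty is that the Herculaneum ink bump is far subtler than this synthetic one, which is why a learned model, rather than a simple density threshold, was needed.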
On October 12th, the first words are officially discovered in an unopened Herculaneum scroll! The computer scientists win $40,000 for their work, giving hope that the $700,000 grand prize is within reach: to read an entire unwrapped scroll!

The grand prize: first team to read a scroll by December 31, 2023
#vesuvius#Vesuvius challenge#papyrus#ancient rome#ancient greece#x ray tomography#machine learning#computer vision
62 notes
Text
NanoDog is learning to balance and to follow various types of instructions, using machine learning algorithms and computer vision applications as the dog's eyes.
NanoDog is powered by a Jetson Nano, 13 servo motors, a PCA9685 I2C driver, and a CSI camera. The final goal is to deploy a LiDAR and a 3D camera for a fully autonomous AI dog for EdTech.
👏🏻👏🏻
#machine learning#computer vision#computer engineering#electronics#electricalengineer#engineers#technologytrends#robotics#technology#artificialintelligenceai#electronics engineering#programming#coding#automation#engineeringstudent#electronicslover
2K notes
Text
Sub-15 ms latency in-flight CV pipeline
36 notes
Text
Shaping a Sustainable Future by Advancing ESG Goals
The U3Core application of DigitalU3 redefines how organizations approach Environmental, Social, and Governance (ESG) strategy by combining real-time data, advanced analytics, and AI-powered automation.
Email us at: [email protected]
youtube
#Smart Cities#Green Future#Sustainable operations#Technology for Sustainability#Asset Management#Digital Transformation#Computer Vision#AI for Sustainability#IoT for Sustainability#Youtube#intel
3 notes
Text
Day 19/100 days of productivity | Fri 8 Mar, 2024
Visited University of Connecticut, very pretty campus
Attended a class on Computer Vision, learned about ResNet, a residual neural network architecture for image processing from Microsoft Research
Learned more about the grad program and networked
Journaled about my experience
Y’all, UConn is so cool! I was blown away by the gigantic stadium they have in the middle of campus (forgot to take a picture) for their basketball games, because apparently they have the best women’s collegiate basketball team in the US?!? I did not know this, but they call themselves the Huskies, and the branding everywhere on campus is on point.
#100 days of productivity#grad school#computer vision#resnet#neural network#deep learning#UConn#uconn huskies#uconn women’s basketball#university of Connecticut
11 notes