# Machine Vision Technologies

Has cybercane been done yet?
delirium fuelled arcane x cyberpunk doodles
#jayce uses the technology while viktor is turning himself into it lol#jayce has night vision lenses where he had viktors fingerprints#this one kinda leans into league more#jayvik#arcane#jayce talis#viktor arcane#cyberpunk#cyberpunk 2077#my art#its very rough and hard to read my apologies#league of legends#lol#machine herald#doodle#arcane x cyberpunk#cybercane
126 notes
Tom and Robotic Mouse | @futuretiative
Tom's job security takes a hit with the arrival of a new, robotic mouse catcher.
#TomAndJerry #AIJobLoss #CartoonHumor #ClassicAnimation #RobotMouse #ArtificialIntelligence #CatAndMouse #TechTakesOver #FunnyCartoons #TomTheCat
Keywords: Tom and Jerry, cartoon, animation, cat, mouse, robot, artificial intelligence, job loss, humor, classic, Machine Learning, Deep Learning, Natural Language Processing (NLP), Generative AI, AI Chatbots, AI Ethics, Computer Vision, Robotics, AI Applications, Neural Networks
Tom was the first guy who lost his job because of AI
(and what you can do instead)
⤵
"AI took my job" isn't a story anymore.
It's reality.
But here's the plot twist:
While Tom was complaining,
others were adapting.
The math is simple:
➝ AI isn't slowing down
➝ Skills gap is widening
➝ Opportunities are multiplying
Here's the truth:
The future doesn't care about your comfort zone.
It rewards those who embrace change and innovate.
Stop viewing AI as your replacement.
Start seeing it as your rocket fuel.
Because in 2025:
➝ Learners will lead
➝ Adapters will advance
➝ Complainers will vanish
The choice?
It's always been yours.
It goes even further: AI has now been trained to create consistent content.
//
Repost this ⇄
//
Follow me for daily posts on emerging tech and growth
#ai#artificialintelligence#innovation#tech#technology#aitools#machinelearning#automation#techreview#education#meme#Tom and Jerry#cartoon#animation#cat#mouse#robot#artificial intelligence#job loss#humor#classic#Machine Learning#Deep Learning#Natural Language Processing (NLP)#Generative AI#AI Chatbots#AI Ethics#Computer Vision#Robotics#AI Applications
4 notes

Week in Review
October 23rd-29th
Welcome to Fragile Practice, where I attempt to make something of value out of stuff I have to read.
My future plan is to do longer-form original pieces on interesting topics or trends. For now, I'm going to make the weekly reviews habitual and see if I have any time left.
Technology
OpenAI forms team to study ‘catastrophic’ AI risks, including nuclear threats - TechCrunch; Kyle Wiggers
OpenAI launched a new research team, Preparedness, to assess and guard against catastrophic risks from artificial intelligence, including cybersecurity threats and chemical, biological, radiological, and nuclear threats.
Note: Same energy as “cigarette company funds medical research into smoking risks”.
Artists Allege Meta’s AI Data Deletion Request Process Is a ‘Fake PR Stunt’ - Wired; Kate Knibbs
Artists who participated in Meta’s Artificial Intelligence Artist Residency Program accused the company of failing to honor their data deletion requests and claim that Meta used their personal data to train its AI models without their consent.
Note: Someday we will stop being surprised that corporate activities without obvious profit motive are all fake PR stunts.
GM and Honda ditch plan to build cheaper electric vehicles - The Verge; Andrew J. Hawkins
General Motors and Honda canceled their joint venture to develop and produce cheaper electric vehicles for the US market, citing the chip shortage, rising battery material costs, and changing market conditions.
Note: What are the odds this isn’t related to the $7 billion the US government just announced to create hydrogen hubs?
'AI divide' across the US leaves economists concerned - The Register; Thomas Claburn
A new study by economists from Harvard University and MIT reveals a significant gap in AI adoption and innovation across different regions in the US.
The study finds that AI usage is highest in California's Silicon Valley and the San Francisco Bay Area, but was also noted in Nashville, San Antonio, Las Vegas, New Orleans, San Diego, and Tampa, as well as Riverside, Louisville, Columbus, Austin, and Atlanta.
Nvidia to Challenge Intel With Arm-Based Processors for PCs - Bloomberg; Ian King
Nvidia is using Arm technology to develop CPUs that would challenge Intel processors in PCs, and which could go on sale as soon as 2025.
Note: I am far from an NVIDIA fan, but I’m stoked for any amount of new competition in the CPU space.
New tool lets artists fight AI image bots by hiding corrupt data in plain sight - Engadget; Sarah Fielding
A team at the University of Chicago created Nightshade, a tool that lets artists fight AI image bots by adding undetectable pixels into an image that can alter how a machine-learning model produces content and what that finished product looks like.
Nightshade is intended to protect artists’ work and has been tested on both Stable Diffusion and an in-house AI model built by the researchers.
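Nightshade's actual method optimizes perturbations against specific image models, which is beyond a quick sketch; the underlying idea, though — that pixel edits far below human perception still change every value a model ingests — can be illustrated with a few lines of NumPy (a toy illustration only, not Nightshade itself):

```python
import numpy as np

# Toy illustration, NOT Nightshade's algorithm: show that a perturbation of
# at most +/-2 intensity levels out of 255 is invisible to people, yet
# changes most of the raw values a machine-learning model would ingest.

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.int16)

# Perturb each channel by a value in {-2, -1, 0, 1, 2}.
perturbation = rng.integers(-2, 3, size=image.shape, dtype=np.int16)
poisoned = np.clip(image + perturbation, 0, 255)

max_change = int(np.abs(poisoned - image).max())      # at most 2 of 255
fraction_changed = float((poisoned != image).mean())  # yet most pixels differ
```

Real poisoning tools choose these perturbations adversarially rather than randomly, which is what makes them effective against training pipelines.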
IBM's NorthPole chip runs AI-based image recognition 22 times faster than current chips - Tech Xplore; Bob Yirka
NorthPole combines the processing module and the data it uses in a two-dimensional array of memory blocks and interconnected CPUs, and is reportedly inspired by the human brain.
NorthPole can currently only run specialized AI processes and not training processes or large language models, but the researchers plan to test connecting multiple chips together to overcome this limitation.
Apple’s $130 Thunderbolt 4 cable could be worth it, as seen in X-ray CT scans - Ars Technica; Kevin Purdy
Note: These scans are super cool. And make me feel somewhat better about insisting on quality cables. A+.

The Shifting Web
On-by-default video calls come to X, disable to retain your sanity - The Register; Brandon Vigliarolo
Video and audio calling is limited to anyone you follow or who is in your address book, if you granted X permission to comb through it.
Calling other users also requires that they’ve sent at least one direct message to you before.
Only premium users can place calls, but everyone can receive them.
Google Search Boss Says Company Invests to Avoid Becoming ‘Roadkill’ - The New York Times; Nico Grant
Google’s senior vice president overseeing search said that he sees a world of threats that could humble his company at any moment.
Google Maps is getting new AI-powered search updates, an enhanced navigation interface and more - TechCrunch; Aisha Malik
Note: These AI recommender systems are going to be incredibly valuable advertising space. It is interesting that Apple decided to compete with Google in maps but not in basic search, yet has so far not placed ads in its Maps search results.
Reddit finally takes its API war where it belongs: to AI companies - Ars Technica; Scharon Harding
Reddit met with generative AI companies to negotiate a deal for being paid for its data, and may block crawlers if no deal is made soon.
Note: Google searches for info on Reddit often seem more effective than searching Reddit itself. If they are unable to make a deal, and Reddit follows through, it will be a legitimate loss for discoverability but also an incredibly interesting experiment to see what Reddit is like without Google.
Bandcamp’s Entire Union Bargaining Team Was Laid Off - 404 Media; Emanuel Maiberg
Bandcamp’s new owner (Songtradr) offered jobs to just half of existing employees, with cuts disproportionately hitting union leaders. Every member of the union’s eight-person bargaining team was laid off, and 40 of the union's 67 members lost their jobs.
Songtradr spokesperson Lindsay Nahmiache claimed that the firm didn’t have access to union membership information.
Note: This just sucks. Bandcamp is rad, and it’s hard to imagine it continuing to be rad after this. I wonder if Epic had ideas for BC that didn’t work out.

Surveillance & Digital Privacy
Mozilla Launches Annual Digital Privacy 'Creep-o-Meter'. This Year's Status: 'Very Creepy' - Slashdot
Mozilla gave the current state of digital privacy a 75.6/100, with 100 being the creepiest.
They measured security features, data collection, and data sharing practices of over 500 gadgets, apps, and cars to come up with their score.
Every car Mozilla tested failed to meet their privacy and security standards.
Note: It would be great if even one auto brand would take privacy seriously.
EPIC Testifies in Support of Massachusetts Data Privacy and Protection Act - Electronic Privacy Information Center (EPIC)
The Massachusetts version of the ADPPA.
Note: While it may warm my dead heart to see any online privacy protections in law, scrambling to do so in response to generative AI is unlikely to protect Americans in any meaningful way from the surveillance driven form of capitalism we’ve all been living under for decades.
Complex Spy Platform StripedFly Bites 1M Victims - Dark Reading
StripedFly is a complex platform disguised as a cryptominer that evaded detection for six years by using a custom version of the EternalBlue exploit, a built-in Tor network tunnel, and trusted services like GitLab, GitHub, and Bitbucket to communicate with C2 servers and update its functionality.
iPhones have been exposing your unique MAC despite Apple's promises otherwise - Ars Technica
A privacy feature that claimed to hide the Wi-Fi MAC address of iOS devices when joining a network had been broken since iOS 14; it was finally patched in 17.1, released on Wednesday.
Note: I imagine this bug was reported a while ago but wasn’t publicly disclosed until the fix was released, as a term of Apple’s bug bounty program.
What the !#@% is a Passkey? - Electronic Frontier Foundation
Note: I welcome our passkey overlords.
#surveillance#tech#technology#news#ai#generative ai#machine vision#electric vehicles#evs#hydrogen#futurism#Apple#iphone#twitter#bandcamp#labor unions#digital privacy#data privacy#espionage#passkeys
11 notes
hi i'm the print technician watching your professor scan pdfs while i unjam the laminator for the third time this week. here is a recreation of what happened:
university professors love to create the most fucked up pdf ever known to mankind. it's enrichment for them.
#not pictured:#ranting to me about technology and how ''kids these days know everything and i can never learn it because i'm old''#''why can't we just use those ditto machines with the crank anymore'' (pro tip: it's because the ink faded to nothing in less than a month)#angry that the scanner can't read all pages of their tome at once via magic x ray vision (they swear this exists. they saw it on facebook)#mad that their ''usb drive'' isn't working (it is an empty microsd reader)#these are all things that have happened. this week#i am very tired
55K notes
Discover Rydot's ConvAI Platform, where generative AI transforms ideas into reality. Unleash the potential of intelligent assistance for your projects today.
0 notes
#AI Applications#AI Technology#Computer Vision#Ethical AI#facts#Future of AI#life#Machine Learning#Podcast#Privacy and AI#serious#straight forward#Surveillance Technology#truth#upfront#Visual Recognition#website
0 notes
Teaching AI to communicate sounds like humans do
Whether you’re describing the sound of your faulty car engine or meowing like your neighbor’s cat, imitating sounds with your voice can be a helpful way to relay a concept when words don’t do the trick.
Vocal imitation is the sonic equivalent of doodling a quick picture to communicate something you saw — except that instead of using a pencil to illustrate an image, you use your vocal tract to express a sound. This might seem difficult, but it’s something we all do intuitively: To experience it for yourself, try using your voice to mirror the sound of an ambulance siren, a crow, or a bell being struck.
Inspired by the cognitive science of how we communicate, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have developed an AI system that can produce human-like vocal imitations with no training, and without ever having “heard” a human vocal impression before.
To achieve this, the researchers engineered their system to produce and interpret sounds much like we do. They started by building a model of the human vocal tract that simulates how vibrations from the voice box are shaped by the throat, tongue, and lips. Then, they used a cognitively-inspired AI algorithm to control this vocal tract model and make it produce imitations, taking into consideration the context-specific ways that humans choose to communicate sound.
The model can effectively take many sounds from the world and generate a human-like imitation of them — including noises like leaves rustling, a snake’s hiss, and an approaching ambulance siren. Their model can also be run in reverse to guess real-world sounds from human vocal imitations, similar to how some computer vision systems can retrieve high-quality images based on sketches. For instance, the model can correctly distinguish the sound of a human imitating a cat’s “meow” versus its “hiss.”
In the future, this model could potentially lead to more intuitive “imitation-based” interfaces for sound designers, more human-like AI characters in virtual reality, and even methods to help students learn new languages.
The co-lead authors — MIT CSAIL PhD students Kartik Chandra SM ’23 and Karima Ma, and undergraduate researcher Matthew Caren — note that computer graphics researchers have long recognized that realism is rarely the ultimate goal of visual expression. For example, an abstract painting or a child’s crayon doodle can be just as expressive as a photograph.
“Over the past few decades, advances in sketching algorithms have led to new tools for artists, advances in AI and computer vision, and even a deeper understanding of human cognition,” notes Chandra. “In the same way that a sketch is an abstract, non-photorealistic representation of an image, our method captures the abstract, non-phonorealistic ways humans express the sounds they hear. This teaches us about the process of auditory abstraction.”
“The goal of this project has been to understand and computationally model vocal imitation, which we take to be the sort of auditory equivalent of sketching in the visual domain,” says Caren.
The art of imitation, in three parts
The team developed three increasingly nuanced versions of the model to compare to human vocal imitations. First, they created a baseline model that simply aimed to generate imitations that were as similar to real-world sounds as possible — but this model didn’t match human behavior very well.
The researchers then designed a second “communicative” model. According to Caren, this model considers what’s distinctive about a sound to a listener. For instance, you’d likely imitate the sound of a motorboat by mimicking the rumble of its engine, since that’s its most distinctive auditory feature, even if it’s not the loudest aspect of the sound (compared to, say, the water splashing). This second model created imitations that were better than the baseline, but the team wanted to improve it even more.
To take their method a step further, the researchers added a final layer of reasoning to the model. “Vocal imitations can sound different based on the amount of effort you put into them. It costs time and energy to produce sounds that are perfectly accurate,” says Chandra. The researchers’ full model accounts for this by trying to avoid utterances that are very rapid, loud, or high- or low-pitched, which people are less likely to use in a conversation. The result: more human-like imitations that closely match many of the decisions that humans make when imitating the same sounds.
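That final reasoning layer can be loosely sketched as choosing the imitation that maximizes similarity minus an effort penalty. The names and numbers below are invented for illustration; the actual CSAIL model reasons over articulatory vocal-tract parameters, not toy scores:

```python
# Toy sketch of effort-penalized imitation choice: pick the candidate
# whose similarity to the target sound best outweighs its production cost.

def pick_imitation(candidates, effort_weight=0.5):
    """Return the candidate maximizing similarity minus weighted effort."""
    return max(candidates, key=lambda c: c["similarity"] - effort_weight * c["effort"])

candidates = [
    {"name": "perfect but strained", "similarity": 0.95, "effort": 0.90},
    {"name": "good and easy",        "similarity": 0.80, "effort": 0.20},
    {"name": "lazy mumble",          "similarity": 0.40, "effort": 0.05},
]

# 0.80 - 0.5*0.20 = 0.70 beats 0.95 - 0.5*0.90 = 0.50, so the easier,
# slightly less accurate imitation wins -- mirroring how people trade
# accuracy for effort in conversation.
best = pick_imitation(candidates)
```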
After building this model, the team conducted a behavioral experiment to see whether the AI- or human-generated vocal imitations were perceived as better by human judges. Notably, participants in the experiment favored the AI model 25 percent of the time in general, and as much as 75 percent for an imitation of a motorboat and 50 percent for an imitation of a gunshot.
Toward more expressive sound technology
Passionate about technology for music and art, Caren envisions that this model could help artists better communicate sounds to computational systems and assist filmmakers and other content creators with generating AI sounds that are more nuanced to a specific context. It could also enable a musician to rapidly search a sound database by imitating a noise that is difficult to describe in, say, a text prompt.
In the meantime, Caren, Chandra, and Ma are looking at the implications of their model in other domains, including the development of language, how infants learn to talk, and even imitation behaviors in birds like parrots and songbirds.
The team still has work to do with the current iteration of their model: It struggles with some consonants, like “z,” which led to inaccurate impressions of some sounds, like bees buzzing. They also can’t yet replicate how humans imitate speech, music, or sounds that are imitated differently across different languages, like a heartbeat.
Stanford University linguistics professor Robert Hawkins says that language is full of onomatopoeia and words that mimic but don’t fully replicate the things they describe, like the “meow” sound that very inexactly approximates the sound that cats make. “The processes that get us from the sound of a real cat to a word like ‘meow’ reveal a lot about the intricate interplay between physiology, social reasoning, and communication in the evolution of language,” says Hawkins, who wasn’t involved in the CSAIL research. “This model presents an exciting step toward formalizing and testing theories of those processes, demonstrating that both physical constraints from the human vocal tract and social pressures from communication are needed to explain the distribution of vocal imitations.”
Caren, Chandra, and Ma wrote the paper with two other CSAIL affiliates: Jonathan Ragan-Kelley, MIT Department of Electrical Engineering and Computer Science associate professor, and Joshua Tenenbaum, MIT Brain and Cognitive Sciences professor and Center for Brains, Minds, and Machines member. Their work was supported, in part, by the Hertz Foundation and the National Science Foundation. It was presented at SIGGRAPH Asia in early December.
#Accounts#ai#ai model#algorithm#Algorithms#ambulance#Art#artificial#Artificial Intelligence#artists#Asia#bees#Behavior#birds#box#Brain#Brain and cognitive sciences#brains#Building#cats#Center for Brains Minds and Machines#cognition#communication#computer#Computer Science#Computer Science and Artificial Intelligence Laboratory (CSAIL)#Computer science and technology#Computer vision#content#creators
1 note
Compact Form Factor: 38x38mm GigE Camera Solutions Explained

Compact, effective imaging solutions are crucial in today's fast-paced industrial and automation settings. GigE camera interfaces, which offer dependable, high-speed connections over conventional Ethernet networks, are transforming the way organizations capture and distribute high-quality visual data. Compact variants, such as 38x38mm GigE cameras, provide an optimal blend of size and performance, making them a great fit for applications with limited space or strict power budgets. This blog examines the uses and advantages of these compact yet capable GigE camera solutions.
The Advantages of Compact GigE Camera Interfaces in Automation
The GigE camera interface offers high-speed data transmission with bandwidths of up to 1 Gbps, supporting real-time video capture. These compact 38x38mm cameras are especially valuable for automation systems, where space-saving components are essential. Their design enables easy installation in robotics, production lines, and inspection systems, while Ethernet-based connectivity reduces cabling costs compared to traditional setups.
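The 1 Gbps figure translates directly into a frame-rate ceiling. A rough back-of-the-envelope calculation (ignoring GigE Vision protocol overhead, so real throughput runs somewhat lower):

```python
# Back-of-the-envelope frame-rate ceiling for a GigE camera link.

GIGE_BANDWIDTH_BPS = 1_000_000_000  # 1 Gbps raw link rate

def max_fps(width, height, bits_per_pixel=8, bandwidth_bps=GIGE_BANDWIDTH_BPS):
    """Frames per second the link can carry, ignoring protocol overhead."""
    return bandwidth_bps / (width * height * bits_per_pixel)

# A 1280x1024 8-bit mono sensor tops out around 95 fps on a saturated
# link; moving to 12-bit pixels or a larger sensor cuts that proportionally.
fps = max_fps(1280, 1024)
```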
Why 38x38mm GigE Cameras Are Perfect for Embedded Systems
Embedded applications benefit from the compact form factor of 38x38mm GigE cameras, as their small size makes them easy to integrate into sensors, kiosks, and compact devices. These cameras offer flexibility, supporting custom lenses and sensors while maintaining compatibility with GigE protocols. This adaptability makes them ideal for machine vision systems, smart devices, and industrial Internet of Things (IIoT) applications.
Low Latency and Power Efficiency: Key Features of GigE Interfaces
One of the critical selling points of the GigE camera interface is its ability to transmit data with minimal latency, which is essential for real-time image processing in critical environments. Additionally, compact GigE cameras are optimized for low power consumption, making them suitable for battery-operated systems or installations where energy efficiency is a priority.
Applications of 38x38mm GigE Cameras in Robotics and Surveillance
In robotics, compact GigE cameras play a crucial role in providing precise vision capabilities for autonomous operations. They also enhance surveillance systems by offering high-resolution images even in space-constrained areas, such as drones or compact security units. Their ability to transmit uncompressed data over long distances without compromising image quality gives them a distinct edge over USB or analog alternatives.
Explore More: How GigE Technology Enhances 4K and HDR Imaging
If you’re looking to enhance your imaging capabilities, combining GigE camera interfaces with 4K and HDR sensors opens up new possibilities. These cameras can support higher resolutions and advanced imaging features, making them ideal for industries such as healthcare, manufacturing, and smart city projects.
Ready to learn how a compact GigE camera interface could enhance your project? View our entire selection of camera systems designed for embedded and industrial use, and get in touch with us today to find out how these compact yet effective cameras can meet your needs.
0 notes
What Is the Future of Robotics in Everyday Life?
As technology continues to evolve at a rapid pace, many are asking, what is the future of robotics in everyday life? From automated vacuum cleaners to advanced AI assistants, robotics is steadily becoming an integral part of our daily routines. The blending of artificial intelligence with mechanical engineering is opening doors to possibilities that seemed like science fiction just a decade…
#Agriculture#AI#AI Assistants#AI future#AI healthcare#AI integration#AI Robots#artificial intelligence#automation#autonomous vehicles#Cobots#Collaborative Robots#Computer Vision#Domestic Robots#Drone Delivery#drones#education#environmental monitoring#ethics#everyday life#Exoskeletons#future tech#Future Technology#Healthcare#home automation#home security#Industrial Robots#Industry 4.0#job displacement#machine learning
1 note
Simplify Art & Design with Leonardo's AI Tools!
Leonardo AI is transforming the creative industry with its cutting-edge platform that enhances workflows through advanced machine learning, natural language processing, and computer vision. Artists and designers can create high-quality images and videos using a dynamic user-friendly interface that offers full creative control.
The platform automates time-consuming tasks, inspiring new creative possibilities while allowing us to experiment with various styles and customized models for precise results. With robust tools like image generation, canvas editing, and universal upscaling, Leonardo AI becomes an essential asset for both beginners and professionals alike.



#LeonardoAI
#DigitalCreativity
#Neturbiz Enterprises - AI Innovations
#Leonardo AI#creative industry#machine learning#natural language processing#computer vision#image generation#canvas editing#universal upscaling#artistic styles#creative control#user-friendly interface#workflow enhancement#automation tools#digital creativity#beginners and professionals#creative possibilities#sophisticated algorithms#high-quality images#video creation#artistic techniques#seamless experience#innovative technology#creative visions#time-saving tools#robust suite#digital artistry#creative empowerment#inspiration exploration#precision results#game changer
1 note
STOP Using Fake Human Faces in AI
#GenerativeAI#GANs (Generative Adversarial Networks)#VAEs (Variational Autoencoders)#Artificial Intelligence#Machine Learning#Deep Learning#Neural Networks#AI Applications#CreativeAI#Natural Language Generation (NLG)#Image Synthesis#Text Generation#Computer Vision#Deepfake Technology#AI Art#Generative Design#Autonomous Systems#ContentCreation#Transfer Learning#Reinforcement Learning#Creative Coding#AI Innovation#TDM#health#healthcare#bootcamp#llm#youtube#branding#animation
1 note
Intsoft Tech applies optical character recognition to production lines
#machine vision system integrators#industrial automation applications#technology in quality control#automated test equipment companies#an optical inspection system is used to distinguish#inspection in production line
1 note
Improved Accuracy with Machine Vision Technology | iCore
iCore's machine vision technology uses advanced algorithms to execute complex tasks and render precise decisions in machine vision systems. With the latest high-power LED spot lights, you can experience the power of illumination with maximum brightness and energy efficiency. iLight is a continuous-current LED light source whose integrated over-driving function achieves the industry's highest LED driving power, up to 1,000W. iLight's intensity feedback feature enables easy, linear brightness control, and even at large currents it responds to current changes extremely quickly: after an external trigger input, the LED can be driven in less than 0.5 µs, and current pulses as short as 0.5 µs can be produced.
Designed specifically for machine vision systems, the iLight source makes the most of its built-in LED lighting management to drive the LED at a sudden 1,000W and produce lighting pulses as short as 0.5 µs. Its fast 0.5 µs response time also lets it increase system speed while synchronizing with external signals.
Hybrid Spot Light:
Hybrid spot lighting is an alternative to xenon and regular LED lighting that offers brightness greater than LEDs alone. The combined 10,000 hours of life provided by laser and LED technology avoids the short life and brightness variations of xenon lamps.
Drop Watcher:
Drop Watcher is a program that images ink droplets as soon as they leave the printhead. High brightness and very short illumination durations are required to record flight parameters at sub-micron resolution.
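The reason such short pulses matter is motion blur: the blur equals droplet velocity times exposure time. A quick estimate (the 6 m/s jet velocity below is a typical inkjet figure assumed for illustration, not an iCore spec):

```python
# Motion blur during one strobe pulse: distance = velocity x exposure time.
# Conveniently, (m/s) x (microseconds) yields micrometers directly.

def motion_blur_um(velocity_m_per_s, exposure_us):
    """Micrometers a droplet travels during one illumination pulse."""
    return velocity_m_per_s * exposure_us

# A 6 m/s droplet smears about 3 um during a 0.5 us pulse; halving the
# pulse halves the blur, which is why sub-microsecond strobes are needed
# to sharply image droplets in flight.
blur = motion_blur_um(6.0, 0.5)
```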
If you are looking for a strobe controller in Korea, you can find one at iCore.
Click here to contact iCore
View more: Improved Accuracy with Machine Vision Technology
0 notes

Hubble Space Telescope: Exploring the Cosmos and Making Life Better on Earth
In the 35 years since its launch aboard space shuttle Discovery, the Hubble Space Telescope has provided stunning views of galaxies millions of light years away. But the leaps in technology needed for its look into space have also provided benefits on the ground. Here are some of the technologies developed for Hubble that have improved life on Earth.
Image Sensors Find Cancer
Charge-coupled device (CCD) sensors have been used in digital photography for decades, but Hubble’s Space Telescope Imaging Spectrograph required a far more sensitive CCD. This development resulted in improved image sensors for mammogram machines, helping doctors find and treat breast cancer.

Laser Vision Gives Insights
In preparation for a repair mission to fix Hubble’s misshapen mirror, Goddard Space Flight Center required a way to accurately measure replacement parts. This resulted in a tool to detect mirror defects, which has since been used to develop a commercial 3D imaging system and a package detection device now used by all major shipping companies.

Optimized Hospital Scheduling
A computer scientist who helped design software for scheduling Hubble’s observations adapted it to assist with scheduling medical procedures. This software helps hospitals optimize constantly changing schedules for medical imaging and keep the high pace of emergency rooms going.

Optical Filters Match Wavelengths and Paint Swatches
For Hubble’s main cameras to capture high-quality images of stars and galaxies, each of its filters had to block all but a specific range of wavelengths of light. The filters needed to capture the best data possible but also fit on one optical element. A company contracted to construct these filters used its experience on this project to create filters used in paint-matching devices for hardware stores, with multiple wavelengths evaluated by a single lens.
Make sure to follow us on Tumblr for your regular dose of space!
2K notes
How Can You Extend an Image with AI?

#ai art#ai creativity#ai generator#ai image#AI image processing#AI tools#art inspiration#Computer vision#creative technology#Deep Learning#Deep learning algorithms#Digital Creativity#Image editing#Image enlargement#Image extension#Machine Learning
0 notes

EXISTING JOBS + HUMAN
A: Jobs today that humans do, but machines will eventually do better.
EXISTING JOBS + MACHINE
B: Current jobs that humans can't do but machines can.
NEW JOBS + HUMAN
C: Jobs that only humans will be able to do, at first.
NEW JOBS + MACHINE
D: Robot jobs that we can't even imagine yet.
#ideas#my screenshots#screenshot#thoughts#life#change#mindset#growth#art#love#human#machine#jobs#future#present#tech#technology#existing#robot#vision#create
0 notes