1000 AI-powered machines: Vision AI on an industrial scale
This article is based on Bart Baekelandt’s brilliant talk at the Computer Vision Summit in London. Pro and Pro+ members can enjoy the complete recording here. For more exclusive content, head to your membership dashboard.
Hi, I’m Bart Baekelandt, Head of Product Management at Robovision.
Today, we’re going to talk about the lessons we’ve learned over the last 15 years of applying AI to robots and machines at scale in the real world.
Robovision’s journey: From flawed to fantastic
First, let’s look at what these machines were like in the past.
15 years ago, our machine was very basic with extremely rudimentary computer vision capabilities. It used classical machine vision techniques and could only handle very basic tasks like recognizing a hand. Everything was hard-coded, so if you needed the machine to recognize something new, you’d have to recode the entire application. It was expensive and required highly skilled personnel.
Nowadays, we don’t have just one machine – we have entire populations of machines, with advanced recognition capabilities. There’s a continuous process of training AI models and applying them to production so the machines can tackle the problem at hand.
For example, there are machines that can take a seedling from a conveyor belt and plant it in a tray. We have entire fleets of these specialized machines. One day they’re trained to handle one type of seedling, and the next day they’re retrained to perform optimally for a different variety of plant.
So yeah, a lot has happened in 15 years. We’ve gone from initially failing to scale AI, to figuring out how to apply AI at scale with minimal support from our side. As of today, we’ve produced over 1000 machines with game-changing industrial applications.
Let’s dive into a few of the key lessons we’ve picked up along the way.
Lesson one: AI success happens after the pilot
The first lesson is that AI success happens after the pilot phase. We learned this lesson the hard way in the initial stages of applying AI, around 2012.
Let me share a quick anecdote. When we were working on the machine that takes seedlings from a conveyor belt and plants them in trays, we spent a lot of time applying AI and building the algorithm to recognize the right breaking point on each seedling and plant it properly.
Eventually, we nailed it – the algorithm worked perfectly. The machine builder who integrated it was happy, and the customer growing the seedlings was delighted because everything was functioning as intended.
However, the congratulations were short-lived. Within two weeks, we got a call – the system wasn’t picking the seedlings well anymore. What had happened? They were now trying to handle a different seedling variety, and the images looked just different enough that our AI model struggled. The robot started missing the plants entirely or planting them upside down.
We got new image data from the customer’s operations and retrained the model. Great, it worked again! But sure enough, two weeks later, we got another call reporting the same problem all over again.
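This failure mode, where a model quietly degrades as the input distribution shifts, is exactly what a new seedling variety causes. As a minimal, hypothetical sketch (none of this is Robovision’s actual code), a rolling-confidence monitor is one simple way a deployed vision system can detect that the data has drifted and retraining is due:

```python
from collections import deque

class DriftMonitor:
    """Flags when a vision model's rolling mean confidence drops,
    e.g. after a new seedling variety appears on the line."""

    def __init__(self, window=200, threshold=0.80):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def record(self, confidence: float) -> bool:
        """Record one prediction's confidence; return True if retraining
        should be triggered."""
        self.scores.append(confidence)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough evidence yet
        mean = sum(self.scores) / len(self.scores)
        return mean < self.threshold

monitor = DriftMonitor(window=5, threshold=0.8)
for score in [0.95, 0.93, 0.90, 0.60, 0.55]:  # confidence sags on the new variety
    retrain = monitor.record(score)
print(retrain)  # True once the rolling mean falls below the threshold
```

In practice the trigger would feed into a labeling and retraining queue rather than a print statement, but the idea is the same: the system notices the problem before the robot starts planting seedlings upside down.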
This highlighted a key problem. The machine builder wanted to sell to many customers, but we couldn’t feasibly support each one by perpetually retraining models on their unique data. That approach doesn’t scale.
That painful lesson was the genesis of our products. We realized the end customers needed to be able to continuously retrain the models themselves without our assistance. So, we developed tooling for them to capture new data, convert it to retrained models, deploy those models to the machines, and interface with the machines for inference.
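The capture → retrain → deploy → infer cycle described above can be sketched roughly as follows. Everything here, class and function names included, is a hypothetical toy illustration of the loop, not Robovision’s actual tooling:

```python
class RetrainingLoop:
    """Hypothetical sketch of an operator-driven retraining cycle:
    capture labeled images, retrain, deploy, then serve inference."""

    def __init__(self, train_fn):
        self.train_fn = train_fn  # maps a dataset to a model (a callable)
        self.dataset = []
        self.model = None

    def capture(self, image, label):
        """Operator captures and labels a new image from the line."""
        self.dataset.append((image, label))

    def retrain_and_deploy(self):
        """Convert the accumulated data into a fresh model and deploy it."""
        self.model = self.train_fn(self.dataset)

    def infer(self, image):
        """Machine-facing inference interface."""
        if self.model is None:
            raise RuntimeError("no model deployed yet")
        return self.model(image)

# Toy 'training': memorize labels by image key (stand-in for a real trainer).
def toy_train(dataset):
    table = dict(dataset)
    return lambda image: table.get(image, "unknown")

loop = RetrainingLoop(toy_train)
loop.capture("seedling_variety_b.png", "grip_at_stem")
loop.retrain_and_deploy()
print(loop.infer("seedling_variety_b.png"))  # grip_at_stem
```

The key design point is that the operator, not the vendor, drives every step of the cycle: the vendor ships the loop, the customer turns the crank.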
Our product philosophy stems directly from those harsh real-world lessons about what’s required to successfully scale AI in real-world production.
Lesson two: It’s about getting the odd couple to work together
When you’re creating working AI solutions at scale, there are typically two types of people involved. They’re your classic “odd couple,” but they need to be able to collaborate effectively.
On one hand, you have the data scientists – they generally have advanced degrees such as a Master’s in Engineering or even a PhD. Data scientists are driven by innovation. They live to solve complex problems and find solutions to new challenges.
Once they’ve cracked the core issue, however, they tend to lose interest. They want to move on to the next big innovation, rather than focusing on continuous improvement cycles or incremental optimizations.
On the other hand, you have the machine operators who run the manufacturing systems and processes where AI gets applied at scale – whether that’s a factory, greenhouse, or another facility.
The machine operators have intricate knowledge of the products being handled by the machines. If you’re deploying AI to handle seedlings, for example, no one understands the nuances, variations, and defects of those plants better than the operator.
NattiKay fursuit 3.0: digitigrade edition is in the works 👀 still in the planning/materials gathering stage atm but hoping to start on the actual work within the next week or so. If all goes according to plan she should be ready in time for AWU in a few months!
NattiKay 1.0 was made in 2017, followed shortly by 2.0 in 2018. I'd wanted a digi suit even back then but I was new to fursuit making and figured digitigrade might be a bit above my level, so I stuck with plantigrade. Welp, since then I've upgraded almost every part of 2.0 at least once (at this point it's probably more like 2.7 or smth lol)...EXCEPT for the wings and bodysuit, which is still that same plantigrade one I made in 2018, so it's been in use for nearly six years now. The most recent upgrade was her current head, which was...I think two years ago?
SOOO...after getting inspired by seeing some of @happyfoxx-art's WIPs, I've decided...it's finally time. No more 2.2, 2.5, 2.8. It's time for 3.0!! Gonna make every single part fresh!! And finally gonna try to upgrade to digi style legs!! Been wanting to for years and now the time is right!!
I'm a little apprehensive because I know it's gonna be a ton of work and I'm wary of getting burnt out (has happened before, usually because of rushing to finish before a con) but I am DETERMINED to pace myself and take my time and do it RIGHT even if that means slow. I'm starting early enough that there should still be plenty of time to get it done before AWU without feeling rushed, and if I'm gonna put in all the time effort to make a whole new suit from scratch instead of just a replacement here and there I want it to look GOOD. Been looking up tons of tips and tutorials and such and definitely hype! I'm trying to temper my expectations because of course my execution is not gonna be pro level BUT there's definitely a lot of potential and it should at least be decent!!
I will admit I'm "cheating" with the head by using a premade base this time though. Found a really adorable expanding-foam-cast one for sale and I'm hoping the fact that it'll be all smooth and symmetrical will help with the furring, especially around where the muzzle connects to the head, which is a spot I've always struggled with sculpting and furring in the past. And I suppose if I end up not liking it for whatever reason I could always go back and sculpt my own anyways. We shall see!
NDI Launches NDI 6, Unlocking HDR Support and Expanding WAN Connectivity - Videoguys
NDI, the global video-over-IP connectivity standard, announced the release of NDI 6, a key update set to significantly impact broadcasting and content creation. NDI 6 introduces native HDR support and expands WAN connectivity for hardware, addressing critical industry requirements and pushing the boundaries of visual quality and remote workflows.
“The highly anticipated NDI 6 introduces 10+ bit HDR support to answer growing quality demands coming from our users,” said Nick Mariette, Director of Product Management, NDI. “The feedback from our customers and beta testers played a pivotal role in developing NDI 6. Many of our partners want to adopt NDI workflows more profoundly, and further improved image quality will enhance the usage of NDI. Now, anyone seeking high-end quality can stream in HDR with the flexibility, efficiency, and interoperability NDI has always offered.”
Equipped with native HDR and 10+ bit color support, NDI 6 is a powerful option for Tier 1 broadcast, meeting industry demands for professional-grade video streaming over IP and making broadcast-quality streaming widely available. The update gives broadcasters and content creators higher contrast with expanded brightness headroom, a wide color gamut with minimal color banding for seamless transitions and gradients, and broad compatibility, with support for the PQ and HLG formats extending streaming to most HDR and non-HDR devices.
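The PQ (SMPTE ST 2084) transfer function mentioned here is publicly standardized, so the encoding side can be shown directly. As a sketch in plain Python (with luminance normalized so 1.0 equals the 10,000-nit PQ peak), this is roughly how linear light is mapped to the 10+ bit code values that keep banding minimal:

```python
# PQ (SMPTE ST 2084) inverse EOTF: linear light (1.0 == 10,000 nits)
# -> nonlinear signal in [0, 1], ready for 10/12-bit quantization.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(y: float) -> float:
    """Map normalized linear luminance y to a PQ code value in [0, 1]."""
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

def quantize(signal: float, bits: int) -> int:
    """Quantize a [0, 1] signal to an integer code at the given bit depth."""
    return round(signal * (2 ** bits - 1))

# 100 nits (0.01 of peak) lands near mid-scale on the perceptual curve.
print(quantize(pq_encode(0.01), 10))
```

The point of the perceptual curve is that each of the 1,024 codes available at 10 bits covers a luminance step small enough to be invisible; truncating the same 0–10,000-nit range to 8 bits leaves only 256 steps, which is where visible banding comes from.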
WAN connectivity is now embedded into cameras through an NDI Bridge utility for hardware. This plug-and-play solution promises a transformative impact on the flexibility of remote productions and setups, offering a seamless experience for anyone sending and receiving video, regardless of the device, platform, or location. The new feature allows devices to send encrypted NDI streams over a WAN, facilitating secure, remote real-time collaboration between locations. NDI-enabled cameras with the Bridge utility can join remote networks without depending on additional software or tools.
Since Q4 of 2023, the core technology update has been under beta testing with industry-leading product developers like Autodesk, Chyron, Kiloview, Lumens, Matrox, Panasonic and Vizrt as part of the NDI Beta program. Results have led to the successful integration of NDI 6 into both existing and forthcoming products.
“With NDI 6, NDI emerges as the undisputed standard for cloud-based live production, ushering in a new era of possibility for Vizrt’s suite of production solutions. With its advanced bit depth and expanded color range, NDI 6 elevates the visual quality of our products to unprecedented levels, enhancing every aspect of the content creation process. The new Bridge utility feature also opens doors to seamless connectivity, enabling our customers to effortlessly integrate NDI devices into their remote networks and cloud-based workflows. This shift transforms NDI from a primarily local technology to a cornerstone of global networked broadcasting,” said Ulrich Voigt, Global Head of Product Management, Vizrt.
NDI 6 will be showcased at the highly anticipated 2024 NAB Show where Comprimato, Chyron, Kiloview, Magewell, Telycam and Vizrt will demonstrate the latest features of the core tech update.
“With a longstanding commitment to the NDI standard, Chyron enthusiastically welcomes the progression of NDI connectivity and its advanced iterations through the latest release of NDI 6. Through these innovations, such as NDI HDR HLG, Chyron PRIME, with live graphics, video walls, touchscreen graphics, clips and branding, broadens support for clients seeking to merge top-quality production visuals with flexible network architectures. NDI HDR offers substantial enhancement in image quality and color depth, ultimately delivering viewers a more captivating and immersive visual experience just in time for some of the most prominent global sports broadcasting events planned this summer,” said Nikole McStanley, Product Portfolio Director, Chyron.
NDI 6 is now available for testing and integration by all product developers, with downloads accessible at ndi.video/ndi6/.
ABOUT NDI
NDI, a fast-growing tech company, is removing the limits to video connectivity. NDI – Network Device Interface – is used by millions of customers worldwide and has been adopted by more media organizations than any other IP standard, creating the industry’s largest IP ecosystem of products.
NDI allows multiple video systems to identify and communicate with one another over IP; it can encode, transmit and receive many streams of high-quality, low-latency, frame-accurate video and audio in real-time. The growth of NDI is backed by a growing community of installers, developers, AV professionals, and users who are deeply engaged with the company through community events and initiatives. NDI is a part of Vizrt. For more information: https://ndi.video/