Smart Speaker Prototype AIY Voice Kit Unboxing
Google AIY is a do-it-yourself artificial intelligence program from Google. There are currently two kits: a Voice Kit and a Vision Kit. So far I have only bought the AIY Voice Kit, so let's unbox it and see what's inside.
The outer packaging is very simple: the front shows the finished product, and the back shows the parts inside, including the main Voice HAT circuit board, a microphone board, a speaker, an arcade-style button, two cardboard sheets, and some connecting wires.
There is a very thick manual with detailed assembly instructions and some project ideas; it is practically a magazine. In fact, this kit was originally given away with an issue of The MagPi magazine.
The Voice HAT is the core of the kit: the speaker, microphone, button, and so on all connect to it, and it in turn plugs into a Raspberry Pi, which I bought specifically for this project.
Now to assemble it according to the instructions. First, plug the Voice HAT into the Raspberry Pi.
Next, connect the speaker. The green terminal block on the Voice HAT takes a speaker: insert the positive and negative wires into the screw terminals and tighten the screws. A second speaker could in principle be connected on the right side, but that position has no connector soldered on, which is a pity.
Now connect the microphone. This one is easier: just plug it in and it's done.
The most complicated part, the origami, starts here. First comes the inner cardboard frame.
Then fold the outer cardboard box and tuck the inner frame inside.
OK, one last thing: install the arcade-style button. It is a really nice button; there is even an LED inside.
Then fix the microphone board in place, aligned with the holes in the cardboard. The manual says to use double-sided tape; I didn't have any, so I just used ordinary tape.
OK, seal it up, and the assembly is complete.
The next step is to insert the SD card and power it on, and... there it is, a smart speaker. Compared with the Google Home Mini, it is a bit big.
Which Is Better: the Google Coral USB Accelerator or Intel's 2nd-Generation Neural Compute Stick?
As artificial intelligence (AI) and machine learning (ML) gradually move from science fiction into real life, we need a fast and convenient way to prototype such systems. Desktop computers are more than sufficient for many AI/ML workloads, and even single-board computers such as the Raspberry Pi can handle them. But what if you just want a simple plug-in device to make your system run faster and more capably?
Don't worry, you have several choices, including Google's Coral Edge TPU series USB Accelerator (hereinafter CUA) and Intel's Neural Compute Stick 2 (NCS2). Both are compute devices that plug into a host via USB. The NCS2 uses a vision processing unit (VPU), while the Coral USB Accelerator uses a tensor processing unit (TPU); both are processors dedicated to machine learning. Today I will test and compare them: What is the difference between the two? As a developer, should you choose Coral or the NCS2? Read on.
Coral USB Accelerator
-ML accelerator: Edge TPU ASIC (application-specific integrated circuit) designed by Google, providing high-performance ML inference for TensorFlow Lite models (MobileNet V2 at 400+ FPS, per the latest official figures)
-USB 3.1 port and cable (SuperSpeed, 5 Gbps transfer speed)
-Dimensions: 30 x 65 x 8 mm
-Official price: $74.99
Intel Neural Compute Stick 2
-Processor: Intel Movidius Myriad X Visual Processing Unit (VPU)
-USB 3.0 Type-A
-Size: 72.5 x 27 x 14mm
-Official price: $87.99
1. Comparison of processor and acceleration performance
Unlike comparing traditional computer CPUs, comparing these processors/accelerators is more subtle; the details depend on how you plan to use them. Although the two report results in slightly different formats (per-inference time versus frames per second), we can still compare the devices' overall performance.
When evaluating AI models and hardware platforms for real-time deployment, the first thing to look at is speed. In computer vision tasks, benchmarks are usually measured in frames per second (FPS), and higher numbers indicate better performance. For real-time video streaming, at least about 10 FPS is required for video to appear smooth. (The two metrics convert directly: 100 ms per inference equals 10 FPS, and 25 ms per inference equals 40 FPS.)
Compute performance: First, adding the CUA to a desktop CPU increases performance roughly 10x, which is quite good (the exact multiple fluctuates with the CPU model). The NCS2 paired with an older Atom processor increases processing speed by nearly 7x. When paired with a more powerful processor, however, the NCS2's results are unremarkable.
The NCS2 can theoretically perform inference at 4 TOPS. Curiously, the CUA quotes exactly the same rate, even though the two use different operations to perform ML. Intel also claims the NCS2 delivers 8 times the performance of the original Neural Compute Stick (which is still available at a lower price, if you prefer it).
The NCS2 can run a MobileNet-v2 classification model at 30 FPS, which is not bad. Object detection at 11 FPS, however, is a bit of a struggle: a frame rate around 10 FPS may not be sufficient for real-time object tracking, especially of high-speed motion, and many objects may be lost, so developers need very good tracking algorithms to fill that hole. (Of course, official benchmark results are not entirely credible; these companies often compare their own hand-optimized software against competitors' out-of-the-box models.)
User evaluation
Power consumption: The NCS2 has lower power consumption. For the CUA, the official figure is 0.5 watts per TOPS. Users can also run the CUA at the default clock speed or at maximum (twice the default) as needed.
It is worth noting that Google's official documentation clearly warns that, at maximum clock speed and maximum ambient temperature, the device can get hot enough to burn your skin. So personally, unless you really need the extra processing power, it is best to run in the default mode.
It is also worth remembering that Python is not the first choice when you want top performance on these devices. Both support a C++ API, which is also how I squeezed the best performance out of each device in my tests.
2. Software support
The NCS2 works with Ubuntu, CentOS, Windows 10, and other operating systems. It supports TensorFlow, Caffe, and Apache MXNet, plus PyTorch and PaddlePaddle via ONNX (Open Neural Network Exchange) conversion.
The CUA does not support Windows, but it runs under Debian 6.0 or higher (or any derivative, such as Ubuntu 10.0+). It is worth mentioning that the CUA officially runs only TensorFlow Lite models.
3. Comparison of size, prototype design and other details
Having covered software support, computing power, and power consumption, how do the two compare when you actually build a product prototype?
Frankly, both devices look very cool. The CUA has a silver-white patterned body, partially transparent, with what appears to be a heat sink. The NCS2 is a sleek design whose blue body and integrated heat sink look more fashionable.
Of course, appearance is secondary. What matters is that the NCS2, like the CUA, gets hot during operation. Its housing, however, lets you hold it by the cooler integrated heat sink rather than gripping the middle with your fingers, which is clever.
The NCS2's design lets users combine multiple compute sticks to increase processing power; you can line them up neatly in a vertical USB docking station. A host can likewise run multiple CUAs, but you may need to find your own way to mount each one. Note that although the two have similar footprints, the NCS2 (14 mm) is almost twice as thick as the CUA. It also plugs in directly via a USB plug (like an oversized thumb drive) rather than via a flexible cable like the CUA, which means that in some scenarios the NCS2 will give you real trouble with space, forcing heavy use of extension cables and docking stations. This is something to consider before you choose.
Finally, the NCS2 and CUA both appear to be dedicated devices designed for edge computing applications. If you need to run on Windows, or outside the TensorFlow Lite framework, the NCS2 has a clear advantage. For its part, the Coral USB Accelerator belongs to a broader hardware family: the straightforward Dev Board, a PCIe accelerator built around the Coral Edge TPU, and a SoM module similar to the development board. If your goal is to bring a product prototype to market quickly, Coral is your best choice, and it is more attractive to developers.
Coral USB Accelerator development environment requirements: a Linux computer with a USB port; Debian 6.0 or higher, or a derivative system (such as Ubuntu 10.0+); an x86-64 or ARM64 system architecture with the ARMv8 instruction set.
By these requirements the Coral USB Accelerator does support the Raspberry Pi, but it must be a Raspberry Pi 2/3 Model B/B+ running Raspbian (or another Debian derivative).
At this point it is clear the two are very similar in function. If you want to add AI/ML to a Raspberry Pi or a similar project, either device will work.
Many pre-compiled network models let you get good results easily and quickly. Nevertheless, fully quantizing your own network is still an advanced task: conversion requires an in-depth understanding of the network and how it operates. In my tests, the accuracy loss going from FP32 to FP16, and from FP16 to UINT8, was also significant. Note that the Myriad handles half-precision (FP16) floating point while the CUA handles only 8-bit integer math, which means the Myriad can achieve higher accuracy.
Intel and Google have clearly adopted two completely different playbooks. Google's advantage is that its products help developers build prototypes easily, and it promotes a complete pipeline from Google Cloud Platform down to the Edge TPU; I personally like how all the components work together. Intel, on the other hand, provides OpenVINO plugins that developers can use to optimize a network so it runs on a variety of hardware; OpenVINO currently supports Intel CPUs, GPUs, FPGAs, and VPUs. Intel's challenge is that such a combination makes it hard to extract the optimal performance from each individual component.
The Coral ecosystem supports on-device retraining of network models, which is essential for transfer learning; clearly Google believes its pre-trained networks plus transfer learning give developers an efficient combination. For its part, the Intel NCS2's Myriad X includes hardware support for up to three stereo-depth camera pairs, which is valuable in many use cases (such as obstacle avoidance).
Application scenarios:
The Intel NCS2 supports prototyping, validation, and deployment of DNNs. For drones, driverless vehicles, and IoT devices, low power consumption is essential, and for those who wish to develop deep-learning inference applications, the NCS2 is one of the most energy-efficient and lowest-cost USB devices available.
For example, the upcoming Titanium AIX has a built-in Intel Movidius Myriad X acceleration chip, which lowers the threshold for AI learning and development and helps AI enthusiasts and developers quickly build applications and solutions that can listen, talk, and see.
Google Coral is more than hardware: it combines custom hardware, open software, and state-of-the-art AI algorithms into high-quality AI solutions. Coral has many industrial application cases, including predictive maintenance, anomaly detection, robotics, machine vision, and voice recognition, and it has great application value in manufacturing, healthcare, retail, smart spaces, on-premises monitoring, and transportation.
Why All the Fuss About 5G? Talking About Edge AI and Model Play
That 5G will change the world should shock no one, and it is not just hype. Why? Because 5G's capabilities can transform existing technologies in previously unimaginable ways.
Research shows that by 2035, 5G is expected to generate $12.3 trillion in global economic output and support 22 million jobs worldwide; the potential is huge. The technology will not only support devices but can also change lives. Beyond mobile devices, artificial intelligence, the Internet of Things, and robotics will all be affected by 5G. In this article, we explore 5G's potential in these areas.
Self-driving cars
As the Internet of Things binds our physical world to digital platforms, 5G is critical to its sustainability. From spotting obstacles, interacting with smart signage, and following maps to communicating with cars from other manufacturers, these vehicles carry an enormous responsibility.
All of this can only happen when large amounts of data are transmitted and processed in real time. That requires a network of matching speed and capacity, and 5G appears able to provide it: high capacity, low latency, and safety, all essential to putting self-driving cars on the road.
Smart City
The cities of the future will be different from the cities we live in today. They will include connected devices, interactive autonomous vehicles, on-demand smart buses, driverless taxis, and more. Smart cities will also include smart buildings, which let companies increase efficiency by regulating energy consumption.
Data from these cities will help us understand how resources are used in a given area and how to optimize that use. The possibilities are endless, but we will need the next-generation network, 5G, to make them a reality.
IoT technology
The Internet of Things has already begun to change the world, but integrating 5G will change it completely, connecting billions more devices to the Internet. The consumer IoT has great potential, but the real prize is the Industrial Internet of Things.
From manufacturing and agriculture to retail, healthcare, and beyond, the Internet of Things will be omnipresent, and 5G will greatly expand its reach. In healthcare, for example, 5G will enable robotic surgery, personalized medicine, wearable health devices, and more.
Robotics
We all know the potential robotics brings to industry, but many people may not know what 5G adds. To operate efficiently, robots need to exchange large amounts of data with systems and with workers, and that requires the capacity and capabilities of a 5G network.
For example, in agriculture, robots can monitor the condition of crop fields and send near real-time video and information back to farmers. On receiving instructions, a robot can perform the required operations, such as trimming, spraying, or harvesting crops. Robots can also take measurements and transmit them to remote scientists.
Why is this so important? The world's population is growing, and so are our needs. To maintain food supplies, new technologies need to be brought to the field.
AI entertainment
One obvious use of 5G networks is to meet the growing demand for mobile video. The network's data capacity, speed, and low latency will also enable innovative entertainment, including virtual reality and augmented reality. We are likely to see a lot of innovation in AR and VR, and not only in entertainment; companies will see benefits too.
AI, Internet of Things and 5G – Why?
There is also plenty of confusion hovering around AI and the IoT. One thing we all understand is that it all boils down to data, and to processing large amounts of data in real time.
However, we do not yet have a network that can fully support this. 5G promises to:
· Use little power, so that IoT sensors can last a long time
· Support far more devices than 4G
· Provide incredibly high-speed data connections
· Deliver data with low latency, so that more data can be processed
From predictive maintenance and cost reduction to diagnosing problems and making necessary changes, 5G will revolutionize industry.
Network optimization and distribution
For example, 5G will enable network slicing, in which a portion of the network's bandwidth can be prioritized for specific needs. The network can thus be sliced and distributed among participants according to task priority and dedicated to a given task.
5G's low latency
Remember, 5G is not just about raw speed. Low latency lets 5G networks provide near real-time video transmission for sports or security purposes. In industries such as construction and healthcare, where regular real-time coordination is key, this feature may prove extremely beneficial.
In construction, low latency enables effective video conferencing among team members to get work done.
In medical care, providers can monitor patients with the same efficiency even when the patients are outside the hospital.
Edge artificial intelligence with low latency, high efficiency and low consumption
The Edge TPU complements Google Cloud TPU and Google Cloud services, providing end-to-end, cloud-to-edge, hardware-plus-software infrastructure that makes it easier for customers to deploy AI-based solutions.
The Edge TPU can be used in a growing number of industrial application scenarios, such as predictive maintenance, anomaly detection, machine vision, robotics, and voice recognition, across manufacturing, on-premises deployment, healthcare, retail, smart spaces, transportation, and more.
LG's internal IT services department has tested the Edge TPU and plans to use it in inspection equipment on its production lines. Shingyoon Hyun, chief technology officer of the LG CNS organization, said LG's current inspection devices process more than 200 display-panel images per second, with all problems inspected manually; the existing system's accuracy is about 50%, and Google's AI can raise it to 99.9%.
Model Play is an AI model resource platform for global developers, with a variety of built-in AI models. It is compatible with Tiorb AIX and supports the Google Edge TPU edge AI chip, accelerating professional development.
In addition, Model Play provides a complete, easy-to-use transfer-learning training tool and a wealth of example models, which pair perfectly with Tiorb AIX for rapid development of all kinds of AI applications. Its autonomous transfer-learning function is built on Google's open-source neural network architectures and algorithms: users do not need to write code and can complete AI model training by selecting pictures and defining model and category names, making AI genuinely easy to learn and easy to develop.
Google TensorFlow Team: How to Use AI to Train Dogs?
Dog training usually requires a human trainer. But without a trainer, how can training happen? A company called Companion Labs has invented the first dog-training device driven by artificial intelligence (AI). Computer vision is the key to how the machine operates: it detects a dog's behavior in real time and adjusts its delivery of rewards to reinforce desired behaviors, for example spotting when the dog is doing the right thing and issuing a reward.
This trainer, called CompanionPro, looks like a Soviet-era space heater, with an image sensor, a Google Edge TPU AI processor, wireless connectivity, lights, speakers, and a proprietary "dog food transmitter."
In this article, the TensorFlow team shares how a system was developed to understand a dog's behavior and influence it through training, as well as our process of miniaturizing the computer to fit a B2B product.
Improve the lives of our pets through technology
Today's technology has the potential to improve our pets' lives. As a personal example, I am the pet parent of a 4-year-old rescue, Adele. Like many other pet parents, I work full time and leave my cat home alone for up to 8 hours a day. When I am at work or away from home, I worry about her, because I never want her to feel lonely or bored, to be without water or food, or to get sick while I'm gone.
Fortunately, while I am away, available technology lets me check on her and improve her welfare. For example, I have an automatic pet-food dispenser that serves Adele two meals a day, so she always gets the right amount on time; I have a self-cleaning litter box so her litter stays clean throughout the day; and I use a Whistle GPS tracker to help make sure she never gets lost.
Although these devices help me keep my pet healthy, I have always been curious about pushing technology further to solve animal-welfare problems. What if we could use technology to communicate with animals? How can we keep them happy when we are not around? Every day millions of dogs stay home alone for hours, and millions more sit in shelters receiving little individual care. Can we use this time to improve the quality of life of both people and pets? These are the problems our Companion team is interested in solving.
For the past two years, our team of animal-behavior experts and engineers has been working on a new device that can interact with dogs and train them automatically when people can't be with them. Our first product, CompanionPro, interacts with dogs through lights, sounds, and a treat dispenser to teach basic obedience behaviors such as sit, down, stay, and recall.
Our users are dog shelters and businesses that want help improving their dog-training services. Although there is evidence that dogs that respond to basic obedience commands are more likely to be adopted into permanent homes, shelters are often under-resourced and unable to provide training for every dog. Doggy day cares tell us it is hard to find enough trainers to meet the demand for training services at their facilities. Institutions that specialize in dog training tell us they want their trainers to focus on teaching dogs advanced tasks, yet the trainers spend much of their time repeating the same basic-obedience exercises. Fortunately, machines are good at performing repetitive tasks with perfect consistency and unlimited patience, which put us on the path to creating autonomous training equipment.
Building this product posed many challenges. We first had to run experiments proving we could train a dog without a human trainer, then build models to understand the dog's behavior and determine how to react to it, and finally miniaturize our technology so it was suitable for productization and could be sold to businesses.
At Companion, we have developed a device using Google's TensorFlow Lite (and the Edge TPU) that can automatically train dogs to respond reliably to multiple cues, including sit, down, and stay. We believe that by reducing costs this will bring rich care and training to all dogs, with a very positive impact on adoption and retention rates at under-resourced shelters across the United States. The Google Edge TPU lets us understand the dog's behavior as it interacts with the device, and that understanding lets the CompanionPro apply precise positive reinforcement, dispensing a food reward when the dog exhibits the desired behavior. Ultimately, it can help us understand and improve the lives of pets.
More real-world examples of Google Edge TPU and TensorFlow
The Model Play and Tiorb AIX launched by Gravitylink also fully support the Edge TPU. AIX is AI hardware integrating two core functions, computer vision and intelligent voice interaction. Model Play is an AI model resource platform for global developers with a variety of built-in AI models; combined with Tiorb AIX and built on Google's open-source neural network architectures and algorithms, it offers autonomous transfer learning: without writing code, you can complete AI model training by selecting pictures and defining model and category names.
Google Coral USB Accelerator Installation Guide
The Google Coral USB Accelerator is a USB device that provides an Edge TPU as a co-processor for your computer. Connected to a Linux, Mac, or Windows host, it speeds up machine learning inference.
All you need to do is install the Edge TPU runtime and the TensorFlow Lite library on the computer the USB Accelerator is connected to. Then you can use the sample application to perform image classification.
System Requirements:
A computer with one of the following operating systems:
· Linux: Debian 6.0 or higher, or any of its derivatives (such as Ubuntu 10.0+), with an x86-64 or ARM64 system architecture (Raspberry Pi is supported, but Google has tested only the Raspberry Pi 3 Model B+ and Raspberry Pi 4)
· macOS 10.15 with MacPorts or Homebrew installed
· Windows 10
· A usable USB port (for best performance, use a USB 3.0 port)
· Python 3.5, 3.6, or 3.7
Operating Procedures
1. Install Edge TPU runtime
The Edge TPU runtime is required to communicate with the Edge TPU. You can install it on a Linux, Mac, or Windows host by following the instructions below.
1) Linux system
① Add the official Debian package to your system;
② Install Edge TPU runtime:
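For reference, Google's official guide at the time of writing used the following shell commands for these two steps (check coral.ai for the current versions):
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install libedgetpu1-std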
Then connect the USB Accelerator to the computer using the included USB 3.0 cable. If it was already plugged in, remove it and re-insert it so the newly installed udev rules take effect.
※ Install at maximum working frequency (optional)
The commands above install the standard Edge TPU runtime for Linux, which runs the device at the default clock frequency. Alternatively, you can install a runtime version that runs at the maximum frequency (twice the default). This increases inference speed but also increases power consumption, and the USB Accelerator will become very hot.
If you are not sure whether your application needs the extra performance, use the default operating frequency. Otherwise, install the maximum-frequency runtime as follows:
sudo apt-get install libedgetpu1-max
You cannot have both versions of the runtime installed at the same time, but you can switch by simply installing the other runtime, as shown above.
Note: When operating the device at maximum frequency, the metal on the USB Accelerator may become very hot and can cause burns. To avoid injury, keep the device out of reach while running at maximum frequency, or use the default frequency.
2) Mac system
① Download and unzip the Edge TPU runtime
② Install Edge TPU runtime
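For reference, the Mac steps reduce to a download, unzip, and install script; the archive name below reflects the runtime release current when this was written and will change over time:
curl -O https://dl.google.com/coral/edgetpu_api/edgetpu_runtime_20200128.zip
unzip edgetpu_runtime_20200128.zip
cd edgetpu_runtime
sudo bash install.sh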
The installation script will ask whether you want to enable the maximum operating frequency. Running at maximum frequency increases inference speed but also increases power consumption and makes the USB Accelerator very hot. If you are not sure your application needs the extra performance, type "N" to use the default operating frequency.
You can read more about performance settings in the official USB Accelerator data sheet.
Now, use the included USB 3.0 cable to connect the USB Accelerator to the computer. Then continue to install the TensorFlow Lite library.
3) Windows system:
① Click to download the latest official compressed package. Unzip the ZIP file, and then double-click the install.bat file.
A console window will open to run the installation script, which will ask whether you want to enable the maximum operating frequency. Running at maximum frequency increases inference speed but also increases power consumption and makes the USB Accelerator very hot. If you are not sure your application needs the extra performance, type "N" to use the default operating frequency.
You can read more about performance settings in the Coral USB Accelerator data sheet provided by Google.
Now, use the included USB 3.0 cable to connect the USB Accelerator to the computer.
2. Install the TensorFlow Lite library
There are multiple ways to install the TensorFlow Lite API, but to get started with Python the easiest option is to install the tflite_runtime library. This library provides the bare minimum code needed to run inference from Python (essentially the Interpreter API), saving a lot of disk space.
To install it, follow the TensorFlow Lite Python quickstart, then return to this page after running the pip3 install command.
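As an illustration, the quickstart's pip3 command for a Raspberry Pi running Python 3.7 looked like this at the time of writing (the wheel URL varies by platform and Python version):
pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl
Once installed, a minimal sketch of loading an Edge TPU model through the Interpreter API looks like the following; the model filename is a placeholder:
from tflite_runtime.interpreter import Interpreter, load_delegate

# Attach the Edge TPU delegate so inference runs on the accelerator
# (Linux library name shown; it differs on Mac and Windows).
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()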
3. Use the TensorFlow Lite API to run the model
Now you can run inference on the Edge TPU. Let's perform image classification using the sample code and model:
1) Download the sample code from GitHub
2) Download the bird-classifier model, labels file, and a bird photo
3) Run the image classifier on the bird photo
(A consolidated version of these steps appears below.)
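Putting the three steps together, a consolidated session based on Google's public example repository looks roughly like this (paths and file names follow that repo and may change):
mkdir coral && cd coral
git clone https://github.com/google-coral/tflite.git
cd tflite/python/examples/classification
bash install_requirements.sh      # fetches the model, labels, and bird photo
python3 classify_image.py \
  --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
  --labels models/inat_bird_labels.txt \
  --input images/parrot.jpg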
Inference speed may vary depending on the host system and whether a USB 3.0 connection is used.
To run other types of neural networks, check out the official sample projects, including examples of performing real-time object detection, pose prediction, key phrase detection, on-device transfer learning, and more.
AI hardware and software supporting Google Edge TPU
The Model Play and Tiorb AIX developed by Gravitylink fully support the Edge TPU. AIX is AI hardware integrating two core functions, computer vision and intelligent voice interaction; its built-in AI accelerator chip (Coral Edge TPU / Intel Movidius) supports deep-learning inference at the edge with reliable performance.
Model Play is an AI model resource platform for global developers with a variety of built-in AI models. Combined with Tiorb AIX and built on Google's open-source neural network architectures and algorithms, it offers autonomous transfer learning: without writing code, you can complete AI model training by selecting pictures and defining model and category names.
The First Global AI Model Platform Based on Google Edge TPU Chip Comes Out
With the wave of artificial intelligence sweeping the world, intelligence is said to be the core force of the fourth industrial revolution, and artificial intelligence has become one of the hottest research topics in academia and industry.
What is the Edge TPU chip?
The Edge TPU is an ASIC designed specifically to run TensorFlow Lite ML models at the edge. It can be used in a growing number of industrial scenarios, such as predictive maintenance, anomaly detection, machine vision, robotics, and speech recognition, across fields including manufacturing, on-premises deployment, healthcare, retail, smart spaces, and transportation. It is small and low-power yet performs excellently, making it possible to deploy high-accuracy AI at the edge. The Edge TPU complements CPUs, GPUs, FPGAs, and other ASIC solutions for running AI at the edge.
If artificial intelligence is a tank, the model is its engine. Models are what allow AI technology to actually land in production practice and drive industrial development, and they are an important part of the AI ecosystem. Now the first Chinese model platform based on the Google Edge TPU chip, the Model Play Chinese platform, has officially launched.
Global AI model sharing platform: Model Play
Model Play is an AI model resource exchange and trading platform for global users. It provides a rich variety of functional models for machine learning and deep learning, and supports mainstream smart terminal hardware such as TiOrb AIX, helping users quickly create and deploy models, significantly improving model development and application efficiency, and lowering the threshold for AI development and application.
The AI models on the Model Play platform are compatible with the mainstream edge-computing AI chips on the market, including Google Coral Edge TPU, Intel Movidius, and NVIDIA Jetson Nano. For the Google Coral Edge TPU in particular, a downloaded AI model can run directly on Tiorb AIX.
Share ideas with global developers
As a global AI developer ecosystem platform created by Gravitylink, Model Play provides AI model trading plus sharing of data, cases, and other content, effectively connecting participants across the AI development chain. Users can subscribe to models of interest in the AI model market, retraining them or deploying them as inference services, or freely upload their own AI models to share ideas with developers worldwide.
Diversified AI models, extensive landing scenarios
Built on a rich and diverse model library, Model Play suits a wide range of AI application scenarios. Whether you are a smart-product design team, a manufacturer, part of the education industry, or an individual developer, you can find valuable content here. Users can not only publish their trained AI models but also download models they are interested in, retrain them to extend their AI ideas, and take them from idea to prototype to product.
In addition, Model Play has launched a model-solicitation campaign for global developers; anyone interested can give it a try.
TensorFlow and PyTorch: Which is the best development framework?
The Gradient recently published a blog post highlighting the rise and adoption of PyTorch in academia (based on the number of papers published at CVPR, ICLR, ICML, NIPS, ACL, ICCV, and elsewhere). The data show that PyTorch was clearly in the minority in 2018; by 2019 it had been widely embraced by academic researchers.
TensorFlow and PyTorch: History of development
Both libraries are open source and contain licenses suitable for commercial projects.
TensorFlow was first developed by the Google Brain team in 2015 and is used by Google for both research and production.
PyTorch, on the other hand, was originally developed by Facebook on top of the popular Torch framework, initially as a GPU-capable alternative to NumPy. In early 2018, Caffe2 (Convolutional Architecture for Fast Feature Embedding) was merged into PyTorch, effectively extending PyTorch's reach from data analysis into deep learning. PyTorch is one of the newer deep learning frameworks and is popular for its simplicity and ease of use, as well as its dynamic computation graphs and efficient memory usage. Dynamic graphs are ideal for certain use cases, such as processing text. PyTorch is easy to learn and easy to code.
TensorFlow and PyTorch: growing in popularity
TensorFlow is widely used and has strong community and forum support. The TensorFlow team also released TensorFlow Lite, which runs on mobile devices.
To speed up TensorFlow processing, you can use dedicated hardware such as the Tensor Processing Unit (TPU), accessible on Google Cloud Platform, or the Edge TPU, an ASIC for running TensorFlow Lite models on edge devices.
TensorFlow is Google's open-source machine learning library and currently the most popular AI library. It is popular thanks to its distributed training support, scalable production deployment options, and support for a variety of devices (such as Android). One of TensorFlow's best features is TensorBoard visualization: during training you usually have to run the model many times to tune hyperparameters or identify data problems, and TensorBoard makes it easy to view and spot issues.
PyTorch, TensorFlow's challenger, feels familiar to most Python developers.
PyTorch is native to Python and integrates easily with other Python packages, which makes it an easy choice for researchers. Many researchers use PyTorch because the API is intuitive and easy to learn, letting you experiment quickly without constantly reading documentation.
It can also stand in for NumPy, the industry-standard general-purpose array package. Since PyTorch and NumPy have very similar interfaces, Python developers adapt to it easily.
TensorFlow and PyTorch: technical differences
1) Dynamic computation graphs
The real highlight of PyTorch is that it uses dynamic rather than static (as in classic TensorFlow) computation graphs. Deep learning frameworks use a computation graph to define the order of computations performed in a neural network.
A static graph must be compiled before the model can be tested, which is tedious and ill-suited to rapid prototyping. With TensorFlow, for example, the entire computation graph must be defined before the model can run.
With PyTorch, graphs can be defined and manipulated on the fly. This greatly improves developer productivity and is useful when working with variable-length inputs in recurrent neural networks (RNNs). TensorFlow has since added support for dynamic behavior as well, first through the TensorFlow Fold library for dynamically structured inputs and later natively via eager execution (the default in TensorFlow 2.x).
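To make "dynamic" concrete, here is a minimal PyTorch sketch in which the graph is rebuilt on every call, so ordinary Python control flow can depend on runtime tensor values:
import torch

def f(x):
    # The branch taken depends on the actual data; the graph is traced anew
    # each forward pass, so this needs no special graph-building API.
    if x.sum() > 0:
        return (x * 2).sum()
    return (-x).sum()

x = torch.randn(3, requires_grad=True)
y = f(x)
y.backward()       # gradients flow through whichever branch actually ran
print(x.grad)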
2) Save and load the model
Both libraries handle saving and loading models well. PyTorch has a simple API that saves a model's weights for easy replication.
TensorFlow also handles save/load well: an entire model can be saved as a protocol buffer, including parameters and operations. The model can be saved in one language and loaded in another (such as C++ or Java), which is critical for deployment stacks where Python is not an option.
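A minimal sketch of both APIs, using throwaway one-layer models purely for illustration:
import torch
import torch.nn as nn
import tensorflow as tf

# PyTorch: save and restore just the weights (the state dict).
torch_model = nn.Linear(4, 2)                 # any nn.Module works the same way
torch.save(torch_model.state_dict(), "weights.pt")
torch_model.load_state_dict(torch.load("weights.pt"))

# TensorFlow: SavedModel bundles the graph and parameters, so it can be
# reloaded from other languages (e.g. C++ or Java) as well as Python.
keras_model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
keras_model.save("saved_model_dir")
restored = tf.keras.models.load_model("saved_model_dir")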
3) Deployment method
The traditional interface to an AI/ML model is a REST API. For most Python models, a simple Flask server is created to provide convenient access.
Both libraries can easily be wrapped in a Flask server. For mobile and embedded deployments, however, TensorFlow is by far the better option: with tools such as TensorFlow Lite, it integrates easily into Android and even iOS apps.
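A minimal sketch of such a Flask wrapper; predict_fn is a placeholder for whatever model you have loaded:
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_fn(features):
    # Stand-in for a real model's inference call.
    return {"label": "example", "n_features": len(features)}

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()             # JSON body with the inputs
    return jsonify(predict_fn(features))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)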
TensorFlow Serving is another great feature. Models become stale over time and need retraining on new data; TensorFlow Serving lets you replace an old model with a new one without shutting down the service.
So what are the latest developments in TensorFlow and PyTorch?
TensorFlow 2.1.0: finally arriving on Windows?
In the latest TensorFlow release, the tensorflow pip package includes GPU support for Linux and Windows by default (the same as tensorflow-gpu), and it runs on machines with or without an NVIDIA GPU. tensorflow-gpu is still available, and users who care about package size can download the CPU-only tensorflow-cpu package.
To take advantage of the new /d2ReducedOptimizeHugeFunctions compiler flag, official tensorflow pip packages are now built with Visual Studio 2019 version 16.4. Whether TensorFlow becomes popular on Windows remains to be seen.
AI chip designed to run TensorFlow Lite on the edge
The Edge TPU is an ASIC designed specifically to run TensorFlow Lite ML models at the edge. It can be used in a growing number of industrial scenarios, such as predictive maintenance, anomaly detection, machine vision, robotics, and speech recognition, across fields including manufacturing, on-premises deployment, healthcare, retail, smart spaces, and transportation. It is small and low-power yet performs excellently, allowing high-accuracy AI to be deployed at the edge.
The first global AI model platform based on the Google Edge TPU chip: Model Play
Model Play is an AI model resource exchange and trading platform for global users. It provides a rich variety of functional models for machine learning and deep learning, and supports mainstream smart terminal hardware such as Titanium (TiOrb) AIX, helping users quickly create and deploy models, significantly improving model development and application efficiency, and lowering the threshold for AI development and application.
The AI models on the Model Play platform are compatible with the mainstream edge-computing AI chips on the market, including Google Coral Edge TPU, Intel Movidius, and NVIDIA Jetson Nano. For the Google Coral Edge TPU in particular, a downloaded AI model can run directly on TiOrb AIX.
The Titanium AI Market launched by Gravitylink, a global distribution partner for the Google Edge TPU, is also live. It is a global marketplace for AI algorithms and solutions created by Google AI technology promotion partner Gravitylink, committed to helping outstanding AI service providers and customers worldwide connect more efficiently and to accelerating the landing and application of AI technology across fields.
Google Edge TPU New Hardware Device Coral Mini PCIe Accelerator Overview
Mini PCIe Accelerator: a PCIe device that easily integrates the Edge TPU into existing systems.
The Coral Mini PCIe Accelerator is a PCIe module that integrates the Edge TPU chip into existing systems and products.
The Mini PCIe Accelerator is a half-size Mini PCIe module that fits any standard Mini PCIe slot. This form factor makes it easy to integrate into ARM and x86 platforms, so you can add on-device ML acceleration to products such as embedded platforms, small PCs, and industrial gateways.
Perform high-speed ML inference
The onboard Edge TPU coprocessor can perform 4 trillion operations per second (4 TOPS), using 0.5 watts per TOPS (2 TOPS per watt). It can execute state-of-the-art mobile vision models at fairly low cost, for example MobileNet v2 at 400 FPS.
Use with Debian Linux
It can be integrated with any Debian-based Linux system and compatible card module slots.
Support TensorFlow Lite
No need to build models from scratch: TensorFlow Lite models can be compiled to run on the Edge TPU.
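For a model that is already fully 8-bit quantized, compilation is a single command with Google's edgetpu_compiler tool (the filename here is illustrative):
edgetpu_compiler mobilenet_v2_quant.tflite
# writes mobilenet_v2_quant_edgetpu.tflite plus a compilation log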
Support AutoML Vision Edge
With AutoML Vision Edge, you can easily build and deploy high-speed, high-precision custom image classification models to your device.
Features:
· Google Edge TPU Machine Learning Accelerator
· Standard half-size Mini PCIe module card
· Supports Debian Linux hosts on a range of CPU architectures
System Requirements:
The Coral Mini PCIe Accelerator must be connected to a host system with the following specifications:
· Any Linux computer with a compatible Mini PCIe module slot
· Debian 6.0 or higher, or any derivative (such as Ubuntu 10.0+)
· x86-64, or ARM32/64 with the ARMv8 instruction set
Machine learning accelerator: Google Edge TPU coprocessor
Connector: Half-size Mini PCIe
Dimensions: 30 mm x 26.8 mm
About Google Coral
Last year Google released Coral, a platform of hardware components and software tools that lets developers easily take on-device AI products from prototype to scale. The Coral Edge TPU product portfolio, including the Coral Dev Board, USB Accelerator, and PCIe Accelerator, is now available in 36 countries and regions. Since launch, developers have built on Coral across many industries, including healthcare, agriculture, and smart cities.
About Edge TPU:
The Edge TPU is Google's ASIC designed specifically to run AI at the edge. It is small and low-power yet performs excellently, which makes widespread deployment of high-quality, high-accuracy AI at the edge possible. And the Edge TPU is not just a hardware solution: it combines custom hardware, open-source software, and state-of-the-art AI algorithms into high-quality, easy-to-deploy edge AI solutions. It serves a growing range of industries, with uses such as predictive maintenance, anomaly detection, machine vision, robotics, and speech recognition, across manufacturing, on-premises deployment, healthcare, retail, smart spaces, transportation, and more.
If you are interested in the Google Coral Edge TPU hardware line, visit the Google Coral distributor Gravitylink's online store (https://store.gravitylink.com/global) to order. Enterprise users can also obtain discounted pricing for bulk orders by email. ([email protected] / [email protected])
Edge AI performance: Google Edge TPU chip Vs. NVIDIA Nano
Hardware
The devices I am interested in are the new NVIDIA Jetson Nano (128 CUDA cores) and the Google Coral Edge TPU (USB Accelerator). I will also test an i7-7700K + GTX 1080 (2560 CUDA cores), a Raspberry Pi 3B+, and my old friend, a 2014 MacBook Pro with an i7-4870HQ (no CUDA-capable cores).
Software
I will use MobileNetV2 as the classifier, pre-trained on the ImageNet dataset, taking the model directly from Keras on the TensorFlow backend: floating-point weights for the GPUs, and the 8-bit quantized tflite version for the CPUs and the Coral Edge TPU.
First I load the model and the magpie image. Then I run 1 prediction as a warm-up (because I noticed the first prediction is always much slower than the following ones) and let the script sleep for 1 second so all threads are finished. Then the script runs flat out and classifies the same image 250 times. By using the same image for every classification, we keep the data close to the bus throughout the test; after all, we are interested in inference speed, not in how fast random data can be loaded.
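A minimal reconstruction of that measurement loop might look like this; the random array stands in for the preprocessed magpie photo used in the real test:
import time
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

model = MobileNetV2(weights="imagenet")     # pre-trained floating-point classifier
image = preprocess_input(np.random.rand(1, 224, 224, 3).astype("float32") * 255)

_ = model.predict(image)                    # warm-up: the first prediction is slow
time.sleep(1)                               # let initialization threads finish

N = 250
start = time.monotonic()
for _ in range(N):
    model.predict(image)                    # same image: measures inference, not I/O
elapsed = time.monotonic() - start
print(f"{elapsed / N * 1000:.1f} ms per inference, {N / elapsed:.1f} FPS")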
The quantized tflite model on the CPU produces different scores, but it always seems to return the same prediction as the others, so I take it as a quirk of the model and am fairly sure it doesn't affect performance.
The results differ so much across platforms that they are hard to picture, so here are some charts; choose your favorite...
Analysis
Three bars jump out of the first chart. (Yes, the first chart, linear-scale FPS, is my favorite, because it shows the differences among the high-performance results.) Of those 3 bars, 2 come from the Google Coral Edge TPU USB Accelerator, and the third is a full NVIDIA GTX 1080 assisted by an Intel i7-7700K.
Look carefully and you will see that the GTX 1080 is actually beaten by the Coral. Let that sink in for a few seconds, then prepare to be blown away: the GTX 1080's maximum power draw is 180 W, absolutely huge compared with the Coral's 2.5 W.
Next we see that NVIDIA's Jetson Nano does not score highly. Although it has a CUDA-capable GPU, it is not actually much faster than my old i7-4870HQ. But that is the point: "not much faster" still means faster than a 50 W, quad-core, hyper-threaded CPU, while the Jetson Nano's short-term average power draw never exceeded 12.5 W. That is 75% less power for 10% more performance.
Obviously the Raspberry Pi on its own is not an impressive performer: the floating-point model crawls, and even the quantized model doesn't help much. But hey, I had the files ready anyway, so it ran the test; the more the merrier. And it is still interesting, because it shows the difference between the ARM Cortex-A53 in the Pi and the Cortex-A57 in the Jetson Nano.
NVIDIA Jetson Nano
The Jetson Nano does not deliver an impressive FPS rate with the MobileNetV2 classifier. But as I said, that does not mean it is not a very useful project: it is cheap and does not need much energy to run. Perhaps its most important attribute is that it runs TensorFlow-gpu (or any other ML platform) just like any other machine you have used before.
As long as your script does not dig deep into the CPU architecture, you can run exactly the same script as on an i7 + CUDA GPU, and you can even train! I still think NVIDIA should pre-load L4T with TensorFlow, but I will try not to be angry about it; after all, they have a good explanation of how to install it (don't be fooled: TensorFlow 1.12 is not supported, only 1.13.1).
Google Coral Edge TPU
The Edge TPU is an ASIC (application-specific integrated circuit), meaning the one thing it needs to do, speeding up inference, is implemented by a combination of small electronic components such as FETs burned directly into the silicon.
Inference, yes: the Edge TPU cannot perform backpropagation.
The logic behind this sounds more complicated than it is. (Actually creating the hardware and making it work is a completely different matter and very, very complicated, but the logical function is much simpler.) If you are really interested in how this works, look up "digital circuits" and "FPGAs"; you may find enough information to keep you busy for the next few months. It can be complicated at first, but it's really fun!
And this is exactly why the Coral stands apart on performance per watt: it is a pile of electronics designed to perform exactly the bitwise operations required, with virtually no overhead.
Why doesn't the GPU have an 8-bit model?
A GPU is essentially designed as a fine-grained parallel floating-point calculator, so floating-point math is exactly what it was created for and where its advantage lies. The Edge TPU, by contrast, is designed for 8-bit operations, and CPUs also have ways of handling 8-bit integers faster than full-width floats, since they must deal with 8-bit data in many situations.
Why choose MobileNetV2?
I can give you many reasons why MobileNetV2 is a good model, but the main reason is that it is one of the pre-compiled models provided by Google for Edge TPU.
What other models does the Edge TPU support?
It used to be limited to various versions of MobileNet and Inception, but as of last weekend Google released an update that lets us compile custom TensorFlow Lite models. The limitation remains, and probably always will, that it must be a TensorFlow Lite model. That is different from the Jetson Nano, which can run anything you can imagine.
Raspberry Pi + Coral compared to others
Why does the Coral look much slower when connected to the Raspberry Pi? The answer is simple: the Raspberry Pi has only USB 2.0 ports, while the other hosts have USB 3.0. And since the i7-7700K drives the Coral faster than the Jetson Nano does, yet still doesn't reach the score NVIDIA measured for the Coral Dev Board, we can conclude that the bottleneck is the data rate, not the Edge TPU.
I think that's long enough for me, and maybe for you too. I am shocked by how powerful the Google Coral Edge TPU is, but to me the most interesting setup is the NVIDIA Jetson Nano combined with the Coral USB Accelerator. I will definitely use this combination; it feels like a dream.
Speaking of Google Coral's Dev Board and the Edge TPU: Model Play is built around the Coral Dev Board. Developed by a team in China, it is an AI model sharing market for AI developers worldwide. Model Play not only gives global developers a platform to showcase and discuss AI models, it also pairs with the Edge TPU-equipped Coral Dev Board to accelerate ML inference, preview a model's behavior in real time from a mobile phone, and help AI grow from prototype to product.
Developers can publish their trained AI models, and can subscribe to and download models that interest them, retraining and extending their AI ideas to go from idea to prototype to product. Model Play also comes preloaded with commonly used AI models such as MobileNetV1 and InceptionV2, and supports submitting and publishing retrainable models so users can optimize and fine-tune on their own business data.
Just as Google called on developers at this year's I/O to contribute to the developer community, the Model Play team is issuing an AI model call to developers worldwide, soliciting TensorFlow-based deep learning models that can run on the Google Coral Dev Board, encouraging more developers to participate so that tens of thousands of AI developers can share ideas together.
April update from Coral
Beta API for model pipelining with multiple Edge TPUs
Posted by the Coral Team
We've just released an updated Edge TPU Compiler and a new C++ API to enable pipelining a single model across multiple Edge TPUs. This can improve throughput for high-speed applications and can reduce total latency for large models that cannot fit into the cache of a single Edge TPU. To use this API, you need to recompile your model to create separate .tflite files for each segment that runs on a different Edge TPU.
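Concretely, segmentation appears to be done with the compiler's --num_segments flag; the model name and segment count below are illustrative, and the compiler emits one _edgetpu.tflite file per segment:
edgetpu_compiler --num_segments=4 inception_v3_quant.tflite
# produces one compiled .tflite segment per Edge TPU in the pipeline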
Here are all the changes included with this release:
The Edge TPU Compiler is now version 2.1. You can update by running sudo apt-get update && sudo apt-get install edgetpu-compiler, or follow the instructions here.
The model pipelining API is available as source in GitHub. (Currently in beta and available in C++ only.) For details, read our guide about how to pipeline a model with multiple Edge TPUs.
New embedding extractor models for EfficientNet, for use with on-device backpropagation.
Minor update to the Edge TPU Python library (now version 2.14) to add new size parameter for run_inference().
New Colab notebooks to build C++ examples.
Send us any questions or feedback at [email protected]
Manufacturing more efficiently
Local AI can maximize throughput and increase safety in manufacturing processes
Edge AI use cases in industries are wide ranging, from quality control in manufacturing lines to safety monitoring of human-machine interaction. These applications need fast, low latency inference without compromising on accuracy.
QUALITY CONTROL
Imagine spotting defects the human eye can’t see.
Quality control in manufacturing can be complex, especially where high precision is needed. Component defects can be difficult or impossible for human eyes to see, which makes the error rate on this type of inspection very high. Missed defects can be costly, and industries are deploying local AI at a rapid pace. Coral can enable visual inspection systems that detect faults with high accuracy in situations where human vision falls short.
PREDICTIVE MAINTENANCE
Imagine extending the operating life of expensive machines.
Downtime of a production line or critical machine can lead to slower production, costly repairs, or even catastrophic failure. With Coral, equipment manufacturers can incorporate features that monitor and analyze machine behavior and warn of impending failures. That can inform a system of predictive maintenance to avoid expensive downtime.
WORKER SAFETY
Imagine a worksite that can see accidents before they happen.
Many worksite injuries are due to preventable accidents — workers falling, failing to see heavy equipment, or failing to be seen by machinery. Using Coral enabled cameras and other local sensors monitoring a job site, operators can give robots and vehicles the ability to operate safely alongside human workers, preventing collisions and making collaborative work with machines a reality. Incident and avoidance data pooled into predictive models allow site managers to anticipate activities that may prove dangerous and make process changes.
Cooperation between Benchmark Electronics Inc. and Gravitylink Technology Co., Ltd.
Gravitylink Technology Co., Ltd. today announced a new partnership with Benchmark Electronics Inc., opening a new chapter of mutually beneficial, win-win cooperation. With the signing of this contract, Benchmark and Gravitylink launch regular, wide-ranging cooperation for mutual benefit.
As a global distribution partner for Google's AI chipset, Gravitylink has made great progress along the way, launching sales in more than 40 countries and regions and reaching cooperation agreements with many well-known enterprises. Brand strength is a key enabler and one of its core competitive advantages. Seen in this light, the cooperation between Gravitylink and Benchmark is a natural step that can deliver mutual benefit and promote the rapid development of the artificial intelligence industry.
Benchmark Electronics Inc. is an EMS, ODM, and OEM company based in Tempe, Arizona, in the Phoenix metropolitan area, providing contract manufacturing services. While computing-related products made up the majority of its earlier product lines, the company also manufactures telecommunications equipment and medical devices, and it has expanded into precision technologies, offering extensive precision machining capabilities.