#neuralprocessingunit
Photo
Intel: New CEO plans comprehensive realignment in manufacturing and AI
Text
What Is a Neural Processing Unit (NPU)? How Does It Work?

What is a Neural Processing Unit?
A Neural Processing Unit (NPU) mimics how the brain processes information. NPUs excel at deep learning, machine learning, and AI neural network workloads.
In contrast to general-purpose central processing units (CPUs) and graphics processing units (GPUs), NPUs are designed to accelerate AI operations and workloads, such as computing neural network layers built from scalar, vector, and tensor arithmetic.
NPUs, often referred to as AI chips or AI accelerators, are typically used in heterogeneous computing designs that combine several processor types (such as CPUs and GPUs). In most consumer devices, including laptops, smartphones, and other mobile devices, the NPU is integrated with other coprocessors on a single semiconductor microchip known as a system-on-chip (SoC). Large data centers, however, can use standalone NPUs attached directly to a system’s motherboard.
By adding a dedicated NPU, manufacturers can offer on-device generative AI programs that execute AI workloads, AI applications, and machine learning algorithms in real time with comparatively low power consumption and high throughput.
Key features of NPUs
Deep learning algorithms, speech recognition, natural language processing, photo and video processing, and object detection are just a few of the tasks that NPUs excel at, all of which call for low-latency parallel computing.
The following are some of the main characteristics of NPUs:
Parallel processing: NPUs can decompose complex problems into smaller ones and work on them simultaneously, allowing the processor to execute several neural network operations at once.
Low-precision arithmetic: To reduce computational complexity and improve energy efficiency, NPUs frequently operate on 8-bit (or lower-precision) values; a quantization sketch follows this list.
High-bandwidth memory: Many NPUs include high-bandwidth on-chip memory so they can efficiently handle AI workloads that involve large datasets.
Hardware acceleration: Advances in NPU design have incorporated dedicated acceleration approaches such as systolic array topologies and enhanced tensor processing.
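To make the low-precision point concrete, here is a minimal Python sketch of symmetric 8-bit quantization, the kind of reduced-precision arithmetic the list above refers to. It uses NumPy purely for illustration; real NPUs implement this in fixed-function hardware, and the function names here are hypothetical.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of float32 values to int8."""
    scale = np.abs(x).max() / 127.0                      # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q_weights, scale = quantize_int8(weights)

# The int8 copy uses a quarter of the memory and enables integer-only math,
# at the cost of a small rounding error:
print("max abs error:", np.abs(weights - dequantize(q_weights, scale)).max())
```

The trade-off shown here, a small rounding error in exchange for much cheaper integer math and lower memory traffic, is exactly why 8-bit (and smaller) formats dominate NPU designs.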
How NPUs work
Modeled on the brain's neural networks, NPUs work by mimicking the behavior of neurons and synapses at the circuit level. This makes it possible to execute deep learning instruction sets in which a single command completes the processing of a group of virtual neurons.
Unlike conventional processors, NPUs are not optimized for exact calculations. Instead, they are built to solve problems and can improve over time by learning from different kinds of inputs and data. By using machine learning, AI systems with NPUs can produce personalized results faster and with less manual programming.
A notable aspect of NPUs is their parallel processing capability, which speeds up AI workloads by relieving high-capacity cores of the burden of handling many jobs. An NPU includes dedicated modules for decompression, activation functions, 2D data operations, and multiplication and addition. The multiplication-and-addition module carries out matrix multiplication and addition, convolution, dot products, and other operations central to neural network processing.
An NPU may perform a comparable function with a single instruction, whereas a conventional processor needs thousands of instructions to accomplish this kind of neuron processing. NPUs also merge computation and storage for greater operational efficiency through synaptic weights: adjustable computational values assigned to network nodes that indicate the likelihood of a “correct” or “desired” output and that can change, or “learn,” over time.
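As a rough illustration of the multiply-and-accumulate work described above (not any vendor's actual implementation), the following NumPy sketch computes a quantized layer with integer multiply-accumulates in a wide accumulator and rescales to floating point only at the end, which is approximately the pattern an NPU's MAC array executes in hardware.

```python
import numpy as np

def int8_matmul(a_q, b_q, a_scale, b_scale):
    """Integer multiply-accumulate: int8 inputs, int32 accumulator, float rescale.

    a_q, b_q           : int8 matrices (already quantized)
    a_scale, b_scale   : the per-tensor scales used during quantization
    """
    acc = a_q.astype(np.int32) @ b_q.astype(np.int32)    # wide accumulator avoids overflow
    return acc.astype(np.float32) * (a_scale * b_scale)  # single rescale back to real units

# Toy activation and weight tensors, quantized as in the previous sketch.
rng = np.random.default_rng(0)
act = rng.standard_normal((1, 8)).astype(np.float32)
wts = rng.standard_normal((8, 4)).astype(np.float32)

a_scale = np.abs(act).max() / 127.0
w_scale = np.abs(wts).max() / 127.0
a_q = np.clip(np.round(act / a_scale), -127, 127).astype(np.int8)
w_q = np.clip(np.round(wts / w_scale), -127, 127).astype(np.int8)

print(int8_matmul(a_q, w_q, a_scale, w_scale))   # approximately equal to act @ wts
```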
Testing has revealed that some NPUs can outperform a comparable GPU by more than 100 times while using the same amount of power, even though NPU research is still ongoing.
Key advantages of NPUs
NPUs are not intended to replace traditional CPUs and GPUs. Rather, an NPU's architecture complements both designs, offering more effective parallelism for machine learning. Paired with CPUs and GPUs, NPUs provide a number of significant benefits over conventional systems; they can enhance general operations, although they are best suited to specific AI-focused tasks.
Among the main benefits are the following:
Parallel processing
As noted above, NPUs can decompose complex problems into smaller ones and solve them in parallel. While GPUs are also very good at parallel processing, an NPU's specialized design can outperform a comparable GPU while using less energy and taking up less space.
Enhanced efficiency
NPUs can carry out comparable parallel processing with significantly higher power efficiency than GPUs, which are frequently used for high-performance computing and AI workloads. As AI and other high-performance computing become more prevalent and energy-demanding, NPUs offer a practical way to reduce power consumption.
Multimedia data processing in real time
NPUs are built to process and respond more effectively to a wider variety of data inputs, such as speech, video, and graphics. When response time is critical, NPU-equipped devices such as wearables, robots, and Internet of Things (IoT) hardware can provide real-time feedback, reducing operational friction and delivering crucial feedback and solutions.
Neural Processing Unit Price
Smartphone NPUs: These processors are built into smartphones; high-end handsets that include them usually cost between $800 and $1,200.
Edge AI NPUs: Google Edge TPU and other standalone NPUs cost $50–$500.
Data Center NPUs: The NVIDIA H100 costs $5,000–$30,000.
Read more on Govindhtech.com
#NeuralProcessingUnit#NPU#AI#NeuralNetworks#CPUs#GPUs#artificialintelligence#News#Technews#Technology#Technologynews#Technologytrends#govindhtech
Link
#AdrenoGPU#AI#artificialintelligence#automateddriving#digitalcockpits#Futurride#generativeAI#Google#Hawaii#largelanguagemodels#LiAuto#LLMs#Maui#Mercedes-Benz#multimodalAI#neuralprocessingunit#NPU#OryonCPUSnapdragonDigitalChassis#Qualcomm#QualcommOryon#QualcommSnapdragonSummit#QualcommTechnologies#SDVs#SnapdragonCockpitElite#SnapdragonRideElite#SnapdragonSummit#software-definedvehicles#sustainablemobility
Text
Durabook Unlocks New AI PC Experiences With Its Z14I Rugged Laptop Powered By Intel Core Ultra Processors And NVIDIA RTX A500 GPU

Durabook recently unveiled an impressive update to its Z14I laptop, now considered the most robust and dependable fully rugged AI PC globally. The enhanced Z14I boasts Intel® Core™ Ultra 5 and 7 processors with Intel vPro®, an integrated neural processing unit (NPU), and an optional NVIDIA RTX™ A500 professional GPU, making it perfect for professionals in various fields like architecture, engineering, public safety, military, and construction.
The upgraded Z14I introduces AI capabilities to the field, enabling professionals to leverage AI for tasks such as architectural design, crime scene analysis, and factory floor virtual assistants. This advancement boosts productivity and efficiency by providing access to workflows and insights that were previously limited to office settings. Durabook Americas President Sasha Wang highlights that the next-gen Z14I is tailored for "anytime, anywhere AI," ushering in a new era for AI PCs that enhance user experiences both personally and professionally.
Security and reliability are paramount in the Z14I's design, ensuring functionality in extreme conditions and harsh environments. The laptop holds certifications like IP66, MIL-STD-810H, and 6-foot drop resistance, guaranteeing durability against elements like dust, water, and temperature fluctuations. Additionally, the Z14I features Coolfinity™ fanless cooling, enhancing reliability by eliminating a power-consuming component that is prone to failure. This cooling system, combined with the laptop's battery options, offers up to 21.5 hours of battery life and enables hot-swapping batteries without any downtime.
Read More - https://www.techdogs.com/tech-news/business-wire/durabook-unlocks-new-ai-pc-experiences-with-its-z14i-rugged-laptop-powered-by-intel-core-ultra-processors-and-nvidia-rtx-a500-gpu
#Durabook#ComputerHardware#RuggedComputingSolutions#3DArchitecturalIllustrations#NeuralProcessingUnit#ArtificialIntelligence
Link
Get ready for a revolution in PC performance and AI capabilities. At Computex 2024, AMD unveiled its groundbreaking Zen 5 architecture, powering the next generation of Ryzen processors. This exciting lineup includes the all-new Ryzen 9000 series for desktop PCs and the 3rd generation Ryzen AI processors for ultrabooks.
A New Era of Desktop Processing: The Ryzen 9000 Series
AMD has taken the crown for the most advanced desktop processors with the Ryzen 9000 series. Built on the AM5 platform, these processors boast cutting-edge features like PCIe 5.0 and DDR5 support. They also deliver a significant 16% improvement in instructions per clock (IPC) compared to their Zen 4 predecessors. Here's a closer look at the Ryzen 9000 family:
Flagship performance: The Ryzen 9 9950X reigns supreme with 16 cores, 32 threads, and a blazing-fast clock speed reaching up to 5.7 GHz. This powerhouse surpasses the competition in graphics bandwidth and AI acceleration, translating to impressive performance gains in creative applications like Blender (up to 56% faster) and high frame rates in demanding games (up to 23% improvement).
Multiple options: The Ryzen 9000 series caters to diverse needs with the Ryzen 9 9900X, Ryzen 7 9700X, and Ryzen 5 9600X processors. All models boast impressive core counts, thread counts, and clock speeds, ensuring smooth performance for gamers, content creators, and professionals alike.
Availability: Gear up for an upgrade! The Ryzen 9000 series is slated for release in July 2024.
Ryzen AI 300: Unleashing On-Device AI Power for Next-Gen Laptops
The future of AI-powered computing is here with the Ryzen AI 300 series. Designed for ultrabooks, these processors integrate a powerful dedicated Neural Processing Unit (NPU) capable of delivering a staggering 50 trillion operations per second (TOPS). This translates to impressive on-device AI experiences, including:
Real-time translation: Break down language barriers effortlessly with real-time translation powered by the NPU.
Live captioning: Never miss a beat with live captioning that keeps you in the loop during meetings or lectures.
Co-creation: Unleash your creativity with AI-assisted tools that enhance your workflow.
The Ryzen AI 300 series comes in two variants:
Ryzen AI 9 HX 370: This flagship model offers the full power of the NPU with 50 TOPS and 16 compute units, ideal for demanding AI workloads.
Ryzen AI 9 365: Offering exceptional value, this processor delivers 40 TOPS of AI performance with 10 CPU cores, catering to a wide range of AI applications.
Look forward to experiencing the power of Ryzen AI 300 in upcoming Copilot+ PCs and AI+ PCs starting July 2024.
Frequently Asked Questions
Q: When will the Ryzen 9000 series and Ryzen AI 300 processors be available?
A: Both processor lines are expected to hit the market in July 2024.
Q: What are the key benefits of the Ryzen 9000 series?
A: The Ryzen 9000 series offers significant advantages, including increased performance with a 16% IPC improvement over Zen 4 processors, support for cutting-edge technologies like PCIe 5.0 and DDR5, and a wide range of processor options for various needs and budgets.
Q: What kind of AI experiences can I expect with the Ryzen AI 300 series?
A: The Ryzen AI 300 series unlocks a new level of on-device AI capabilities, including real-time language translation, live captioning for videos and meetings, and AI-powered co-creation tools for enhanced creativity.
Q: Which laptops will feature the Ryzen AI 300 processors?
A: Look for the Ryzen AI 300 series in upcoming Copilot+ PCs and AI+ PCs from various manufacturers.
#AIPCs#AItasks#AM5platform#amdradeongraphics#AMDRyzenAI9HX370#amdzen5#Computex2024#CPU#DDR5#desktopprocessors#gamingPCs.#GPU#NeuralProcessingUnit#PCIe5.0#RDNA3.5architecture#Ryzen9000series#RyzenAI300series#TOPS#ultrabooks
Link
#AIPrivacy#ComputerArchitecture#cybersecurity#dataethics#GDPRCompliance#hardwareinnovation#Microsoft#neuralprocessingunits
Text
HiSilicon Kirin SoCs in Huawei and Honor Smartphones: A Review
Of course, the chipset is one of the main components of a modern smartphone and has a major effect on its performance. Many buyers therefore choose a smartphone based on the capabilities of its chipset, and all smartphone manufacturers pay great attention to this aspect. A System-on-a-Chip (SoC) is an integrated circuit that combines all the components of an electronic system. Typically, a SoC contains one or more DSP (digital signal processing) cores, and almost all modern smartphones use multiprocessor systems (MPSoC). A modern SoC also often includes a GPU, a memory controller, GSM, 3G, and 4G LTE modules, and blocks for GPS, USB, NFC, Bluetooth, and cameras.
The leaders in the SoC platform segment include the American Qualcomm Snapdragon (Qualcomm Inc), Nvidia Tegra (NVIDIA Corporation), and TI OMAP (Texas Instruments), and the Chinese MediaTek (MediaTek Inc). In addition, some large companies develop their own chipsets, including ST-Ericsson NovaThor (Sony), Samsung Exynos (Samsung), Apple Ax (iPhone, Apple Inc), and Kirin (Huawei and Honor, HiSilicon Technologies Co., Ltd).
In 1991, Huawei formed the HiSilicon Technologies division; in 2004, it became an independent company and began to create its own RISC processors based on a license from the British ARM. Today, HiSilicon Technologies Co., Ltd develops high-performance mobile processors that successfully compete even with Qualcomm Snapdragon. Its latest chipsets of 2018 and 2019 include the Kirin 970, 980, 990, 990 5G, and Kirin 810.
HiSilicon Kirin 970
In Berlin at IFA 2017, Huawei introduced the flagship single-chip HiSilicon Kirin 970. According to the company, the Kirin 970 was the first chipset with a built-in Neural Processing Unit (NPU), or AI accelerator. NPUs of this type use the clustered asynchronous architecture developed at Cornell University. Unlike traditional computing architectures, this highly specialized logic is designed to implement different types of artificial neural networks. Each core usually contains a task scheduler, its own SRAM-type memory, and a router for communication with other cores; a single chip can contain several thousand cores.
The Kirin 970 is manufactured on a 10nm process, placing 5.5 billion transistors in an area of about a square centimeter. Its configuration includes four ARM Cortex-A73 cores (2.4 GHz) and four ARM Cortex-A53 cores (1.8 GHz), combined in the standard big.LITTLE scheme. The Kirin 970 was also the first to include the 12-core Mali-G72 MP12 GPU (an ARM GPU). The built-in LTE Advanced Pro (4.5G) modem supports up to 1.2 Gb/s, and a dual DSP provides motion capture, face detection, four-level hybrid autofocus, and improved shooting of moving objects even in low light. In addition, the Kirin 970 supports LPDDR4X RAM (1883 MHz) and UFS 2.1, a 4K video codec, HDR10, a separate security engine with TEE and inSE support, an i7 sensor coprocessor, and HiFi audio.
HiSilicon Kirin 980
A year later, at IFA 2018, Huawei introduced the new flagship Kirin 980, its first 7nm chipset. The process node directly affects energy consumption and, accordingly, how much the processor heats up during operation. The Kirin 980 has 6.9 billion transistors, which the company states is 1.6 times more than the Kirin 970. Its eight cores are divided into three clusters: two Cortex-A76 cores at 2.6 GHz, two Cortex-A76 cores at 1.92 GHz, and four Cortex-A55 cores at 1.8 GHz. It also uses the integrated Mali-G76 MP10 GPU. According to the company, the performance increase of the Kirin 980 over the 970 reaches 58-75% for the CPU and 46% for the GPU, with a 178% improvement in energy efficiency. In addition, the company doubled the number of NPU units to accelerate AI apps; the neuromorphic Huawei processor became dual-core, capable of recognizing up to 4,500 images per minute. The Kirin 980 also uses a fourth-generation ISP (image signal processor). The new SoC debuted in the flagship Mate 20 and Mate 20 Pro smartphones of 2018.
Kirin 810
In 2019, the company continued to improve its chipsets, introducing the Kirin 810, 990, and 990 5G. In the summer, it introduced the new 7nm 8-core Kirin 810 with two ARM Cortex-A76 cores clocked up to 2.27 GHz and six ARM Cortex-A55 cores clocked up to 1.88 GHz. It also has an ARM Mali-G52 MP6 GPU and an AI task processing unit based on the new Huawei DaVinci architecture, which reduces energy consumption. Key features:
- 7nm technology;
- high-quality built-in modems that provide a very stable connection;
- the improved Mali-G52 MP6 GPU significantly improves gaming performance, outperforming even the Adreno 618 (Qualcomm Snapdragon) in the GFXBench test;
- the neural processor module based on the Huawei DaVinci architecture, combined with the Huawei HiAI 2.0 platform, surpasses even the Snapdragon 855 and Kirin 980 in AI performance tests.
Kirin 990 and Kirin 990 5G
At IFA 2019, the company introduced the new flagship Kirin 990 and Kirin 990 5G. This new 7nm SoC includes, for the first time, a 5G modem with support for NSA and SA networks. Engineers were able to place 10.3 billion transistors on it thanks to innovative EUV lithography, which forms circuit elements smaller than 45 nm. The chipset also contains a neuromorphic coprocessor with the DaVinci architecture and a 16-core graphics core.
The advanced adaptive receiver can work on any 5G NR (New Radio) network, including Non-Standalone (NSA) and Standalone (SA). Today, companies are building 5G mainly to improve mobile broadband (eMBB) by increasing data transmission speeds; this first stage of deployment reuses the 4G LTE infrastructure and operates in the millimeter range, and such networks are called non-standalone. The next stage involves building a new infrastructure with its own network core that will additionally use low and medium frequencies, increasing 5G coverage and the stability of high-speed data transmission; such networks are called standalone. Huawei phones already support Standalone networks. The Kirin 990 also supports two SIM cards (2G + 3G + 4G + 5G or 2G + 3G + 4G).
The Kirin 990 5G uses a 2 + 2 + 4 configuration with two Cortex-A76 cores (2.86 GHz), two Cortex-A76 cores (2.36 GHz), and four Cortex-A55 cores (1.95 GHz). The Kirin 990 is manufactured without EUV lithography, does not have a 5G modem, operates at lower frequencies, and its neuromorphic processor uses only one high-performance core.
Conclusion
Today we can note the rapid improvement of HiSilicon Kirin chipsets. In just three years, the company introduced five new chipsets, each of which justifiably claims a place at the highest level. As a result, HiSilicon Kirin today successfully competes even with Qualcomm Snapdragon. For example, the Kirin 990 5G shows roughly the same performance as the Qualcomm Snapdragon 855, but it has fewer memory channels (which affects the speed of data exchange), lacks NX-bit (No-eXecute), and does not use HMP (heterogeneous multiprocessing). The price/quality ratio, however, compensates for these differences.
Of course, Kirin chipsets largely underpin the success of Huawei and Honor smartphones, complementing the new Harmony OS and excellent cameras developed in collaboration with the famous German Leica Camera AG. In addition, Harris Interactive studied smartphone reliability by analyzing 130,050 after-sales service requests from customers of Darty, a large European retail network. According to its results, Huawei and Apple have the lowest percentage of failures and breakdowns, so Huawei, Honor, and iPhone smartphones can today be considered among the most reliable. All these factors preserve excellent prospects for the Chinese giant's smartphones.
This video offers a speed, gaming, and screen test comparison of the Huawei Mate 30 with Kirin 990 5G vs the iPhone 11 with A13 Bionic. Read the full article
#4GLTEinfrastructure#EUVlithography#heterogeneousmultiprocessor#HiSiliconKirinchipsets#HMP#HuaweiMate30#Kirin810#Kirin970#Kirin980#Kirin990#Kirin9905G#LTEAdvancedPro(4.5G)modem#Mali-G52MP6GPU#NeuralProcessingUnit#No-eXecute#Non-Standalone#NX-bit#Standalone
Text
Intel Core Ultra 200V Series CPUs Improve AI PC Performance

For the AI PC Age, New Core Ultra Processors Offer Groundbreaking Performance and Efficiency.
Intel Core Ultra
Leading laptop makers can benefit from the exceptional AI performance, interoperability, and power efficiency of Intel Core Ultra 200V series CPUs. The Intel Core Ultra 200V series processors are the most efficient x86 CPU family that Intel has ever released: they deliver outstanding performance, revolutionary x86 power efficiency, a significant improvement in graphics performance, uncompromised application compatibility, heightened security, and unparalleled AI compute.
With more than 80 consumer designs from more than 20 of the biggest manufacturing partners in the world, including Acer, ASUS, Dell Technologies, HP, Lenovo, LG, MSI, and Samsung, the technology will power the most comprehensive and powerful AI PCs on the market.
Preorders open today, and beginning on September 24, systems will be sold both online and in-store at more than 30 international shops. Beginning in November, all designs with Intel Core Ultra 200V series CPUs and the most recent version of Windows are eligible for a free upgrade that includes Copilot+ PC capabilities.
“Intel’s most recent Core Ultra processors dispel myths about x86 efficiency and set the industry standard for mobile AI and graphics performance. With our relationships with OEMs, ISVs, and the larger tech community, only Intel has the reach to provide customers an AI PC experience that doesn’t compromise.”
Customers of today are more and more producing, interacting, playing, and learning while on the road. They need a system with outstanding performance, extended battery life, uncompromised application compatibility, and improved security. It should also be able to use AI hardware via widespread software enablement.
Intel Core Ultra Platform
With up to 50% lower package power and up to 120 total platform TOPS (tera operations per second) across the central processing unit (CPU), graphics processing unit (GPU), and neural processing unit (NPU), Intel Core Ultra 200V series processors were designed to deliver the most performant and compatible AI experiences across models and engines. With up to four times the power of its predecessor, the fourth-generation NPU is ideal for performing AI tasks energy-efficiently over extended periods.
As part of its AI PC Acceleration Program, Intel works with over 100 independent software vendors (ISVs) and developers to enable industry-leading platform TOPS in more than 300 AI-accelerated features.
Through carefully calibrated power management and entirely redesigned Performance-cores (P-core) that are optimized for performance per power per area, the new processors provide efficient and remarkable core performance. Additionally, Intel’s most potent Efficient-cores (E-cores) can now handle a greater workload, guaranteeing silent and cool operation.
With a 30% average performance boost, Intel’s new Xe2 graphics microarchitecture, which is included in the Intel Core Ultra 200V line of CPUs, represents a considerable improvement in mobile graphics performance. The integrated Intel Arc GPU includes support for three 4K displays, eight new 2nd Gen Xe-cores, eight upgraded ray tracing units, and new integrated Intel XMX AI engines with up to 67 TOPS. Enhanced XeSS support allows the AI engines to power creative applications and improve gaming performance.
Intel Core Ultra 200V series processors
A PC must be a great PC before it can be a great AI PC. With up to three times the performance per thread, an 80% peak performance boost, and up to 20 hours of battery life in productivity scenarios, Intel Core Ultra 200V series processors are productivity powerhouses. These PCs are the next step in the AI PC’s progression. With over 500 optimized AI models, extensive ecosystem support, and collaborations with top ISVs, PCs equipped with the newest Intel Core Ultra CPUs enable customers to fully benefit from AI. The new CPUs, with their several powerful AI engines, deliver:
Content Generation: To make video editing simpler and quicker, work more quickly by automatically recognizing changes in the video scene. Use word prompts to unleash your imagination and create beautiful vector and raster art.
Safety: Check whether videos on the internet have been manipulated by using local AI deep-fake detection. AI screening, identification, and safeguarding of important files against dangerous programs and users may protect your PC’s personal data.
Efficiency: One-time video presentation recordings save time, and regenerated audio and video, including refreshed dialogue, minimize the need for retakes.
Video games: Enhance gaming experiences and increase frames-per-second performance by using AI to provide upscaled, high-quality pictures.
About Intel Evo Edition with the latest Intel Core Ultra processors: The majority of laptop designs with Intel Core Ultra 200V series CPUs will be Intel Evo Edition models, which are rigorously tested and co-engineered with Intel’s partners to provide the best possible AI PC experience.
These laptops are designed to help eliminate latency, limit distractions, and lessen reliance on battery charges by integrating essential platform technologies with system improvements. This ensures amazing experiences from any location. Intel Evo designs, which are new this year, have to achieve improved metrics for quieter and cooler operation.
Features consist of:
Performance and responsiveness in ultra-thin designs that are cooler and quieter.
Extended battery life in practice.
Integrated security that reduces vulnerabilities and helps stop malware attacks.
Integrated Intel Arc graphics provide faster game development and more fluid gameplay, even while playing on the fly.
Connectivity that is lightning fast thanks to Intel Wi-Fi 7 (5 Gig).
The ability to use Thunderbolt Share to charge a PC, transmit data, and connect it to numerous displays.
Wake up instantly and charge quickly.
The highest accreditation for sustainability, EPEAT Gold.
What’s Next: Starting today, consumers may pre-order consumer devices equipped with Intel Core Ultra 200V series processors. Commercial products based on the Intel vPro platform will be released next year.
IFA 2024 conference
The next generation of Intel Core Ultra processors, code-named Lunar Lake, was introduced ahead of the IFA 2024 conference by Jim Johnson, senior vice president and general manager of the Client Business Group, and Michelle Johnston Holthaus, executive vice president and general manager of Intel’s Client Computing Group. Partners from Intel joined them in launching a line of processors that redefines mobile AI performance.
The executives of Intel demonstrated how the new processors’ remarkable core performance, remarkable x86 power efficiency, revolutionary advances in graphics performance, and AI processing capacity provide users everything they need to create, connect, play, or study on the move.
Read more on govindhtech.com
#mobileai#wifi7#IntelCoreUltra#intelvpro#intelevo#iocalai#ai#gpu#intelgpu#neuralprocessingunit#CopilotPC#CoreUltraProcessors#86cpu#cpu#pc#technology#technews#news#govindhtech
Link
Microsoft has unveiled a bold new chapter in its Surface lineup, introducing the Surface Pro 10 and Surface Laptop 6 specifically designed for business users. These innovative devices boast the title of "the first Surface AI PCs built exclusively for business," hinting at their potential to revolutionize the way businesses operate through the power of artificial intelligence (AI). Let's delve deeper into the features and functionalities of these exciting new additions to the Surface family.
Surface Pro 10 for Business: Geared for Flexibility and On-the-Go Productivity
The Surface Pro 10 for Business caters to the needs of professionals who value flexibility and mobility. It retains the signature 2-in-1 design that has made the Surface Pro series a popular choice, seamlessly transforming from a traditional laptop with a kickstand and keyboard to a convenient tablet for tasks like note-taking or presentations.
Performance and Power: Microsoft equips the Surface Pro 10 for Business with the latest generation of Intel Core Ultra processors, offering users a choice between Core Ultra 5 135U and Core Ultra 7 165U options. These processors are paired with a base configuration of 8GB RAM, expandable up to an impressive 64GB, and a speedy 256GB Gen4 solid-state drive (SSD). This combination assures smooth multitasking, fast boot times, and efficient handling of demanding applications. Additionally, Microsoft promises an impressive battery life of up to 19 hours, allowing users to stay productive throughout their workday without the need for frequent charging.
Enhanced Display and User Experience: The Surface Pro 10 for Business features a 13-inch display with an upgraded anti-reflective coating, ensuring clear visuals even in bright environments. Microsoft has also boosted the display's brightness by 33%, further enhancing the user experience. While some rumors suggested an OLED display, Microsoft opted for a high-quality LCD panel; however, there's a hint that a consumer-focused version with an OLED display might be unveiled in the future. For improved video conferencing and presentations, the Surface Pro 10 for Business boasts a significantly upgraded front-facing camera with a wider 114-degree field of view and a crisp 1440p resolution.
Security and Innovation: The Surface Pro 10 for Business prioritizes user security through the inclusion of an NFC reader. This allows for convenient and secure authentication using devices like YubiKey NFC security keys. Furthermore, Microsoft is exploring the possibility of introducing a 5G option for this device, catering to users who require seamless connectivity on the go.
Surface Laptop 6 for Business: Powerhouse Performance for Demanding Tasks
For users who prioritize a traditional laptop experience for their work environment, the Surface Laptop 6 for Business offers a compelling solution. This clamshell design provides a stable platform for extended typing sessions and features a larger keyboard compared to the Surface Pro 10.
Unleashing Desktop-Grade Power: The Surface Laptop 6 for Business packs a performance punch with the integration of Intel's Core Ultra H-series processors. These processors, designed for demanding workloads, empower users to run complex software and manage heavy multitasking with ease. Customers can choose between Core Ultra 5 135H and Core Ultra 7 165H options, ensuring they have the processing power to tackle even the most challenging tasks. The device also boasts a wide range of configurable memory options, starting from 8GB and scaling up to 64GB of RAM, to cater to varying user needs. Storage options start at a 256GB Gen4 SSD and extend to a spacious 1TB, providing ample space for storing important files and applications.
Designed for Connectivity: The Surface Laptop 6 for Business comes in two sizes: 13.5-inch and 15-inch. The larger 15-inch model boasts two USB-C Thunderbolt 4 ports, while the more compact 13.5-inch version features a single USB-C Thunderbolt 4 port. These ports provide high-speed data transfer capabilities and can be used for connecting external displays or peripherals. Both models come equipped with a suite of essential ports, ensuring users have the necessary connections for a versatile and productive work environment.
FAQs:
Q: What are the key features of the Surface Pro 10 for Business?
A: The Surface Pro 10 for Business prioritizes mobility and offers a 2-in-1 design, the latest Intel Core Ultra processors, a long-lasting battery, an upgraded display with an anti-reflective coating and 1440p front-facing camera, an NFC reader for secure login, and potential future 5G connectivity.
Q: What are the benefits of the Surface Laptop 6 for Business?
A: The Surface Laptop 6 for Business is ideal for users who prefer a traditional laptop form factor. It boasts desktop-grade performance with Intel Core Ultra H-series processors, a wide range of RAM and storage options, a high-resolution display with an anti-reflective coating, a 1080p front-facing camera with Windows Studio Effects, and multiple USB-C Thunderbolt 4 ports for enhanced connectivity.
Q: When will consumer versions of the Surface Pro 10 and Surface Laptop 6 be available?
A: Consumer versions of these devices are expected to be released in May 2024.
Q: Do the Surface Pro 10 and Surface Laptop 6 support AI features?
A: Both devices are equipped with a Neural Processing Unit (NPU) and a Copilot key, suggesting they are designed to integrate with upcoming AI features in Windows 11. Microsoft is expected to reveal more details about these features shortly.
#AIpoweredComputing#BusinessLaptops#IntelCoreUltraprocessors#MicrosoftCopilotKey#MicrosoftSurfaceLaptop6#MicrosoftSurfacePro10#MicrosoftUnveilsSurfacePro10Laptop6#NeuralProcessingUnit#SurfaceAIPCs#SurfaceLaptopServiceability#SurfaceProKeyboard#Windows11AIFeatures
Text
How The AI Inferencing Circuitry Powers Intelligent Machines

AI Inferencing
AI inferencing circuitry expands the capabilities of PCs and paves the way for much more advanced future AI applications.
AI PCs
The debut of “AI PCs” has produced a deluge of news and marketing over the last several months, and the enthusiasm and buzz around them is undeniable. Finding clear-cut, practical advice on how to fully capitalize on their advantages as a customer, however, can feel like searching for a needle in a haystack. It’s time to close this knowledge gap and give people the tools they need to fully use this innovative technology.
All-inclusive Guide
Dell Technologies’ goal is to offer a thorough guide that closes the knowledge gap around AI PCs, the capabilities of AI-accelerating hardware such as GPUs and neural processing units (NPUs), and the developing software ecosystem that makes use of these devices.
All PCs can, in fact, process AI features, but without the newer specialized AI processing circuitry they cannot match its efficiency or performance. These dedicated circuits can complete difficult AI tasks more quickly and with less energy, and this PC technology breakthrough opens the door to advances in AI applications.
In addition, independent software vendors (ISVs) are producing cutting-edge GenAI-powered software and rapidly adding AI-based features and functionality to existing software.
To maximize the benefits of this new hardware and software, it is critical to understand whether new software features are processed locally on the PC or in the cloud. With this knowledge, organizations can be confident they are getting the most out of their technology investments.
Quick AI Functions
Microsoft Copilot is a clear example. Currently, Microsoft Copilot’s AI capabilities are processed in the Microsoft cloud, so any PC can benefit from its time- and productivity-saving features. In contrast, Copilot+ provides distinctive, incremental AI capabilities that can only be processed locally on a Copilot+ AI PC, which is characterized, among other things, by a more powerful NPU. More on that later.
Remember that even before AI PCs with NPUs were introduced, ISVs were chasing locally accelerated AI capabilities. In 2018, NVIDIA released the RTX GPU line, which included Tensor Cores, specialized AI acceleration hardware. As NVIDIA RTX GPUs gained popularity in these areas, graphics-specific ISV apps, such as games, professional video, 3D animation, CAD, and design software, started experimenting with incorporating GPU-processed AI capabilities.
AI workstations with RTX GPUs quickly became the perfect sandbox environment for data scientists looking to get started with machine learning and GenAI applications. This allowed them to experiment with private data behind their corporate firewall and realized better cost predictability than virtual compute environments in the cloud where the meter is always running.
Processing AI
All of these GPU-powered AI use cases prioritize speed over energy efficiency, and they often involve workstation users running professional NVIDIA RTX GPUs. NPUs bring something new to the market: energy-efficient AI processing for everyday AI features.
For customers to benefit, ISVs must do the laborious coding required to support some or all of the processing domains: NPU, GPU, or cloud. Certain functions may work only on the NPU, others only on the GPU, and still others only in the cloud. Getting the most out of your AI processing hardware depends on understanding the ISV programs you use every day.
AI acceleration hardware is characterized by a few key attributes that affect processing speed, workflow compatibility, and energy efficiency.
Neural Processing Unit (NPU)
Now let’s talk about NPUs. Relatively new to the AI processing industry, NPUs often exist as a section of the circuitry inside a PC CPU. Integrated NPUs are a feature of the most recent CPUs from Qualcomm and Intel. This circuitry accelerates AI inferencing, that is, the use of AI features. Integer arithmetic is at the core of AI inferencing, and NPUs excel at the integer math it requires.
They are perfect for using AI on laptops, where battery life is crucial for portability, since they can perform inferencing with very little energy use. While NPUs are most often found as circuitry inside the newest generation of CPUs, they can also be purchased separately and serve the same purpose of accelerating AI inferencing. Discrete NPUs are appearing on the market in the form of M.2 or PCIe add-in cards.
Because NPUs were only recently introduced to the market, ISVs are just now starting to deliver software updates or versions with NPU-backed AI capabilities. NPUs already enable intriguing new possibilities, and the number of ISV features and applications is expected to grow quickly.
Discrete and Integrated NVIDIA GPUs
NVIDIA RTX GPUs may be purchased as PCIe add-in cards for PCs and workstations or as a separate chip for laptops. They lack the NPU's energy efficiency, but they offer a wider spectrum of AI performance and cover more use cases. Metrics comparing the AI performance of NPUs and GPUs appear later in this piece. GPUs also provide more scalable AI processing performance for sophisticated workflows than NPUs do, thanks to the variety of models available and the flexibility to add multiple cards to desktop, tower, and rack workstations.
Another advantage of NVIDIA RTX GPUs is that, in addition to excelling at integer arithmetic and inferencing, they can be used to develop and train GenAI large language models (LLMs). This is a consequence of their acceleration of floating-point computations and their wide support in the tool chains and libraries commonly used by data scientists and AI software developers.
Bringing It to Life for Your Company
Trillions of operations per second, or TOPS, is the metric most often used to quantify AI performance. TOPS quantifies the maximum possible AI inferencing performance, taking into account the processor’s design and frequency. It is important to distinguish this metric from TFLOPS, which measures a computer system’s capacity to execute one trillion floating-point computations per second.
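As a rough worked example of where a peak TOPS number comes from (the unit count and clock below are illustrative assumptions, not the specification of any particular chip), peak integer throughput is commonly estimated as 2 × the number of MAC units × the clock frequency, since each multiply-accumulate counts as two operations:

```python
# Illustrative peak-TOPS estimate: 2 ops per MAC * MAC count * clock (Hz).
# The unit count and frequency below are made-up example values.
mac_units = 16_384          # hypothetical number of int8 MAC units in the NPU
clock_hz = 1.5e9            # hypothetical 1.5 GHz clock

peak_ops_per_second = 2 * mac_units * clock_hz
print(f"peak = {peak_ops_per_second / 1e12:.1f} TOPS")   # about 49.2 TOPS
```

Real-world throughput is always lower than this theoretical ceiling, which is why TOPS should be read as a relative indicator rather than a guarantee.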
Dell's AI workstations and PCs span a broad range of AI inferencing scalability, and adding more RTX GPUs to desktop and tower AI workstations extends inferencing capability much further. In the accompanying chart, a light blue overlay indicates which AI workstation models are best suited for AI development and training operations. Remember that TOPS is a relative performance indicator; actual performance depends on the particular application running in that environment.
To fully use the hardware's capacity, the particular application or AI feature must also support the relevant processing domain. In systems with a CPU, NPU, and RTX GPU, it may become feasible for a single application to route AI processing across all available AI hardware as ISVs continue to enhance their apps.
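One plausible way an application can route work across whatever AI hardware is present is to ask its runtime which execution back ends are available and fall back in priority order. The sketch below uses ONNX Runtime as one example of such a runtime; the specific provider names listed and the "model.onnx" path are assumptions that depend on which runtime build, drivers, and model are actually installed.

```python
import onnxruntime as ort

# Preference order: Qualcomm NPU, then DirectML (GPU), then CUDA GPU, then CPU.
# Which of these appear depends on the installed onnxruntime build and drivers.
PREFERRED = ["QNNExecutionProvider", "DmlExecutionProvider",
             "CUDAExecutionProvider", "CPUExecutionProvider"]

available = ort.get_available_providers()
providers = [p for p in PREFERRED if p in available] or ["CPUExecutionProvider"]

# "model.onnx" is a placeholder path for whatever network the application ships.
session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])
```

The same fallback pattern applies to other runtimes; the point is that the application, not the user, decides at load time which processing domain each AI feature lands on.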
VRAM
TOPS is not the only factor that matters for AI. Memory is just as crucial, particularly for GenAI LLMs, and the amount of memory available to LLMs can vary greatly depending on how they are run. Integrated NPUs, such as those found in Qualcomm Snapdragon and Intel Core Ultra CPUs, use a portion of the system's RAM. Because of this, it makes sense to buy as much RAM as you can afford for an AI PC, since it will help with general computing, graphics work, and multitasking between apps in addition to the AI processing that is the subject of this article.
Discrete NVIDIA RTX GPUs, in both mobile and stationary AI workstations, have dedicated memory for each model, varying somewhat in TOPS performance and memory capacity. AI workstations can scale to the most advanced inferencing workflows thanks to VRAM capacities of up to 48GB, as demonstrated by the RTX 6000 Ada, and the ability to accommodate four such GPUs in the Precision 7960 Tower for 192GB of VRAM.
Additionally, these workstations offer a high-performance AI model development and training sandbox for customers who might not be ready for the even greater scalability found in the Dell PowerEdge GPU AI server range. Similar to system RAM with the NPU, RTX GPU VRAM is shared across GPU-accelerated computation, graphics, and AI processing, and multitasking applications will place even more strain on it. If you often multitask with programs that take advantage of GPU acceleration, aim to purchase AI workstations with the largest GPU (and VRAM) within your budget.
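To put these memory figures in perspective, a quick back-of-the-envelope estimate (illustrative numbers only; real memory use also includes activations, the KV cache, and framework overhead) multiplies a model's parameter count by the bytes needed per parameter at a given precision:

```python
# Rough weight-only memory footprint of a language model at different precisions.
# Ignores activations, KV cache, and runtime overhead, so treat as a lower bound.
params_billion = 7                      # hypothetical 7B-parameter model

for label, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gib = params_billion * 1e9 * bytes_per_param / 2**30
    print(f"{label}: ~{gib:.1f} GiB of VRAM just for the weights")
```

A rough rule of thumb that follows from this arithmetic: a 7B-parameter model at FP16 needs on the order of 13 GiB for weights alone, which is why quantized formats and large-VRAM GPUs both matter for local LLM work.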
With a little knowledge, the potential of AI workstations and PCs becomes much easier to understand and unlock. AI features today offer more than time-saving efficiency and the capacity to create a wide range of creative material, and they are quickly spreading across all software applications, whether in-house custom-developed solutions or commercial packaged software. Optimizing the setup of your AI workstations and PCs can help you get the most out of these experiences.
Read more on Govindhtech.com
#AI#AIPCs#GPUs#neuralprocessingunits#NPUs#CPUs#PC#AIcapabilities#NVIDIARTXGPUs#PCCPU#AIinferencing#AIprocessing#GenAI#largelanguagemodels#LLMs#news#technews#technology#technologynews#technologytrends#govindhtech