#PCProcessors
govindhtech · 4 months
Intel Xeon 6 E-core Processors for Gamers and Creators
Intel Xeon 6 E-core
At Computex today, Intel revealed state-of-the-art technologies and architectures with the potential to significantly accelerate the AI ecosystem, from the data centre, cloud, and network to the edge and the PC. With increased processing power, leading power efficiency, and an affordable total cost of ownership (TCO), customers can now take advantage of the full potential of AI systems.
AI Data Centres Benefit from Intel Xeon 6 Processors
As digital transformations pick up speed, companies are under increasing pressure to modernise ageing data centre systems in order to maximise physical floor and rack space, cut costs, meet sustainability targets, and develop new digital capabilities across the organisation.
The Xeon 6 platform and processor family were designed with these challenges in mind, offering both Intel Xeon 6 E-core (Efficient-core) and P-core (Performance-core) SKUs to address a wide range of use cases and workloads, from AI and other high-performance compute needs to scalable cloud-native applications. E-cores and P-cores share a compatible architecture, built on a common software stack and an open ecosystem of hardware and software vendors.
The Intel Xeon 6 E-core processor, code-named Sierra Forest, is the first of the Xeon 6 CPUs to launch and is available starting today. The Xeon 6 P-core processors, code-named Granite Rapids, are expected to launch next quarter.
The Intel Xeon 6 E-core processor combines high core density with strong performance per watt, enabling efficient computing at significantly lower energy costs. That combination of performance and power efficiency is ideal for the most demanding high-density, scale-out workloads, such as cloud-native applications, content delivery networks, network microservices, and consumer digital services.
Furthermore, compared to 2nd Gen Intel Xeon processors on media-transcoding workloads, the Intel Xeon 6 E-core's density advantage enables rack-level consolidation of 3-to-1, giving customers a rack-level performance gain of up to 4.2x and a performance-per-watt gain of up to 2.6x. By consuming less power and rack space, Xeon 6 processors free up compute capacity and infrastructure for new AI applications.
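To make those multipliers concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the 4.2x and 2.6x figures quoted above; the equal-throughput framing is an assumption for illustration, not Intel's published test configuration.

```python
# Back-of-the-envelope arithmetic for the quoted rack-level claims.
# Only the 4.2x and 2.6x multipliers come from the text; the scenario is assumed.

rack_perf_gain = 4.2       # quoted rack-level performance gain vs 2nd Gen Xeon
perf_per_watt_gain = 2.6   # quoted rack-level performance-per-watt gain

# To deliver a fixed amount of media-transcoding throughput:
racks_needed_ratio = 1 / rack_perf_gain        # ~0.24x the racks
energy_needed_ratio = 1 / perf_per_watt_gain   # ~0.38x the energy

# Power draw of one new rack relative to one old rack while doing 4.2x the work:
relative_rack_power = rack_perf_gain / perf_per_watt_gain  # ~1.6x

print(f"Racks needed for the same throughput: {racks_needed_ratio:.2f}x")
print(f"Energy needed for the same throughput: {energy_needed_ratio:.2f}x")
print(f"Relative power of one fully loaded new rack: {relative_rack_power:.1f}x")
```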
Intel Gaudi AI Accelerators Improve GenAI Performance at Lower Cost
Generative AI is becoming cheaper and faster to deploy. As the industry-standard infrastructure, x86 runs at scale in almost every data centre environment and provides the foundation for integrating AI capabilities, offering cost-effective interoperability and the benefits of an open community of developers and users.
Intel Xeon processors make an ideal CPU head node for AI workloads when paired with Intel Gaudi AI accelerators, which are purpose-built for AI. Together, the two provide a powerful solution that integrates smoothly with existing infrastructure.
For training and inference of large language models (LLMs), the Gaudi architecture is the only MLPerf-benchmarked alternative to the Nvidia H100, delivering the GenAI performance customers want with a price-performance advantage that offers choice, fast deployment, and a lower total cost of operation.
System providers can purchase a standard AI kit for $65,000 that includes eight Intel Gaudi 2 accelerators and a universal baseboard (UBB); this is anticipated to be one-third less expensive than comparable competitor platforms. A kit with eight Intel Gaudi 3 accelerators and a UBB will list for $125,000, around two-thirds less than comparable competitor platforms.
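As a rough sanity check on those price points, the short sketch below backs out what "one-third less" and "two-thirds less" would imply about comparable competitor platforms. The implied prices are inferences from the quoted discounts, not published vendor list prices.

```python
# Implied competitor platform pricing from the quoted discounts (illustrative only).

gaudi2_kit_price = 65_000    # 8x Intel Gaudi 2 + universal baseboard (UBB)
gaudi3_kit_price = 125_000   # 8x Intel Gaudi 3 + UBB

# "One-third less expensive" => the kit costs about 2/3 of a comparable platform.
implied_competitor_vs_gaudi2 = gaudi2_kit_price / (1 - 1 / 3)   # ~$97,500

# "Around two-thirds less" => the kit costs about 1/3 of a comparable platform.
implied_competitor_vs_gaudi3 = gaudi3_kit_price / (1 - 2 / 3)   # ~$375,000

print(f"Implied comparable platform price (vs Gaudi 2 kit): ${implied_competitor_vs_gaudi2:,.0f}")
print(f"Implied comparable platform price (vs Gaudi 3 kit): ${implied_competitor_vs_gaudi3:,.0f}")
```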
Intel Gaudi 3 accelerators will let businesses extract more value from their own data through notable performance gains on training and inference workloads for leading GenAI models. Intel Gaudi 3 is projected to deliver up to 40% faster time-to-train in an 8,192-accelerator cluster compared with an equivalently sized Nvidia H100 GPU cluster, and up to 15% higher training throughput in a 64-accelerator cluster compared with Nvidia H100 on the Llama2-70B model. It is also anticipated to deliver inference up to twice as fast on average as Nvidia H100 when running widely used LLMs such as Mistral-7B and Llama-70B.
To make these AI systems widely available, Intel is collaborating with at least ten leading international system providers, including six new companies that just announced they will offer Intel Gaudi 3. New partners Asus, Foxconn, Gigabyte, Inventec, Quanta, and Wistron join leading system providers Dell, HPE, Lenovo, and Supermicro, expanding the available production options.
Revolutionary Laptop AI Architecture Triples AI Compute While Improving Power Efficiency
Intel is expanding its AI presence outside of the data centre, both in the PC and at the edge. Intel has been enabling enterprise choice for decades with more than 200 million CPUs deployed to the ecosystem and more than 90,000 edge deployments.
The AI PC category is revolutionising every facet of the computing experience, and Intel is leading the charge in this category-creating moment. It is no longer just about faster processing speeds or sleeker designs: the goal now is to create devices that learn and adapt in real time, anticipating user needs and preferences and ushering in a new era of productivity, efficiency, and creativity.
According to Boston Consulting Group, AI PCs are expected to account for 80% of PC sales by 2028. Intel has moved quickly, enabling more than 100 independent software vendors (ISVs), 300 features, and support for 500 AI models across its Core Ultra platform to provide the strongest hardware and software platform for the AI PC.
Building on these advantages, the company today unveiled the Lunar Lake architecture, the flagship processor for the next generation of AI PCs. With a significant leap in graphics and AI processing power and an emphasis on power-efficient compute for the thin-and-light segment, Lunar Lake is expected to deliver up to 40% lower SoC power and more than three times the AI compute. It is anticipated to ship in the third quarter of 2024, in time for the holiday shopping season.
The brand-new Lunar Lake architecture features:
New Performance-cores (P-cores) and Efficient-cores (E-cores) that improve performance and energy efficiency.
A fourth-generation Intel NPU with up to 48 tera-operations per second (TOPS) of AI performance. This powerful NPU delivers up to 4x the AI compute of the previous generation, enabling improvements in generative AI.
A new Battlemage GPU design that combines Xe2 GPU cores for graphics with Xe Matrix Extension (XMX) arrays for AI. The Xe2 GPU cores improve gaming and graphics performance by 1.5x over the previous generation, while the new XMX arrays act as a second AI accelerator with up to 67 TOPS of performance for exceptional throughput in AI content creation (see the sketch after this list for how these figures add up).
An advanced low-power island, an innovative compute cluster, and Intel innovations that handle background and productivity tasks with extreme efficiency, enabling exceptional laptop battery life.
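The sketch below simply adds up the per-engine AI throughput figures listed above. The NPU and GPU numbers come from the text; the small CPU contribution and the resulting platform total are assumptions included only to show how a "platform TOPS" figure is typically composed.

```python
# Composing an illustrative "platform TOPS" figure for Lunar Lake (INT8).

npu_tops = 48   # fourth-generation NPU, quoted above
gpu_tops = 67   # XMX arrays in the Xe2 GPU, quoted above
cpu_tops = 5    # assumed contribution from CPU AI/vector instructions (not quoted)

platform_tops = npu_tops + gpu_tops + cpu_tops
print(f"Illustrative platform total: ~{platform_tops} TOPS")
```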
While others prepare to enter the AI PC market, Intel is already shipping at scale, delivering more AI PC processors through the first quarter of 2024 than all competitors combined. Lunar Lake will power more than 80 distinct AI PC designs from 20 original equipment manufacturers (OEMs), and Intel expects to ship more than 40 million Core Ultra processors this year.
Read more on govindhtech.com
americanfinancenews1 · 11 months
Tech titans collide! NVIDIA challenges Intel's supremacy in the PC processor market. Is this the future of computing? Find out in our latest article. #NVIDIA #Intel #TechNews #PCProcessors
Hashtags: #NVIDIA #Intel #TechNews #PCProcessors
Keywords: NVIDIA, Intel, tech news, computing, PC processors, competition
govindhtech · 5 months
Apple M4 chip powers new iPad Pro’s groundbreaking design
Apple M4 chip
The new iPad Pro’s innovative design and breathtaking display are made possible by M4, which also offers a massive performance boost.
M4 chip release date
On May 7, 2024, Apple formally unveiled the M4 chip together with the new iPad Pro. Sadly, for the moment, it is limited to the iPad Pro. We’ll probably have to wait until later this year or even early 2025 for Macs and MacBooks.
The Apple M4 is the newest chip powering the revolutionary iPad Pro with incredible performance. Built with second-generation 3-nanometer technology, the M4 system-on-a-chip (SoC) advances Apple silicon's industry-leading power efficiency and makes the iPad Pro's extraordinarily thin design possible. It also features an entirely new display engine to drive the astounding precision, colour, and brightness of the ground-breaking Ultra Retina XDR display on iPad Pro.
A new CPU offers up to 10 cores, while the 10-core GPU builds on the next-generation graphics architecture introduced with the M3, bringing Dynamic Caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to iPad for the first time. The M4 also has Apple's fastest Neural Engine ever, capable of up to 38 trillion operations per second and faster than the neural processing unit in any AI PC on the market today.
With next-generation machine learning (ML) accelerators in the CPU, a high-performance GPU, and higher memory bandwidth, the Apple M4 chip makes the new iPad Pro an outrageously powerful device for artificial intelligence.
“Creating best-in-class custom silicon enables breakthrough products,” stated Johny Srouji, senior vice president of Hardware Technologies at Apple, pointing to the new iPad Pro with M4. The M4's power-efficient performance and new display engine make the iPad Pro's innovative display and thin design possible, while significant upgrades to the CPU, GPU, memory system, and Neural Engine make the M4 ideally suited for the latest AI-enabled applications. Thanks to this new chip, the iPad Pro is now the most powerful device of its kind.
New Technologies Enabling the Latest iPad Pro
The M4 iPad Pro delivers a significant performance improvement over the M2 model. The chip comprises 28 billion transistors built on second-generation 3-nanometer technology, further improving the power efficiency of Apple silicon. An all-new display engine in the M4 drives the Ultra Retina XDR display, a state-of-the-art display that combines the light of two OLED panels for remarkable precision, colour accuracy, and brightness uniformity.
A new CPU with ten cores
The new Apple M4 chip features an up-to-10-core CPU with four performance cores and six efficiency cores. The next-generation cores provide improved branch prediction, with wider decode and execution engines in the performance cores and deeper execution engines in the efficiency cores. Both core types also include enhanced, next-generation ML accelerators.
Compared with the powerful M2 in the previous iPad Pro, the Apple M4 chip delivers up to 1.5x faster CPU performance. The M4 improves performance across professional workflows, whether working with intricate orchestral music files in Logic Pro or adding demanding effects to 4K video in LumaFusion.
GPU Gives the iPad Pro New Features
The Apple M4 chip's new 10-core GPU builds on the next-generation graphics architecture of the M3 family. It features Apple's Dynamic Caching, which allocates local memory dynamically in hardware and in real time, significantly raising average GPU utilization and greatly improving performance in even the most demanding pro apps and games.
Hardware-accelerated ray tracing comes to iPad for the first time, adding lifelike reflections and shadows to games and other visually rich content. The GPU's hardware-accelerated mesh shading speeds up geometry processing and supports more graphically sophisticated scenes in games and graphics-intensive applications. The Apple M4 chip also significantly improves pro rendering performance in tools such as Octane, now up to four times faster than on M2. The CPU and GPU enhancements in the M4 maintain Apple silicon's industry-leading performance per watt: the M4 can match the M2's performance using only half the power, and it needs just a quarter of the power to match the newest PC processor in a thin-and-light laptop.
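Those efficiency claims translate directly into performance-per-watt ratios. The arithmetic below is a minimal sketch using only the "half the power" and "quarter of the power" figures quoted above; the comparison baselines are as stated in the text, not independently measured.

```python
# Performance-per-watt implied by the quoted efficiency claims (illustrative).

# M4 matches M2 performance at half the power:
perf_ratio_vs_m2 = 1.0
power_ratio_vs_m2 = 0.5
print(f"Implied perf/W advantage vs M2: {perf_ratio_vs_m2 / power_ratio_vs_m2:.0f}x")   # 2x

# M4 matches a thin-and-light laptop PC processor at a quarter of the power:
perf_ratio_vs_pc = 1.0
power_ratio_vs_pc = 0.25
print(f"Implied perf/W advantage vs that PC chip: {perf_ratio_vs_pc / power_ratio_vs_pc:.0f}x")  # 4x
```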
The Most Powerful Neural Engine Ever
The M4 contains Apple's fastest Neural Engine yet, an IP block purpose-built to accelerate AI workloads. Capable of a staggering 38 trillion operations per second, it is 60 times faster than the first Neural Engine in the A11 Bionic. Combined with the high-performance GPU, higher-bandwidth unified memory, and next-generation ML accelerators in the CPU, the Neural Engine makes the Apple M4 an incredibly potent chip for AI.
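The two Neural Engine figures above also pin down the implied throughput of the original A11 Bionic's Neural Engine. The quick check below is just that division; the result is an inference from the quoted numbers, not an official Apple specification.

```python
# Back-calculating the A11 Bionic Neural Engine from the quoted M4 figures.

m4_neural_engine_tops = 38   # trillion operations per second, quoted above
speedup_vs_a11 = 60          # "60 times faster than the first Neural Engine in the A11 Bionic"

implied_a11_tops = m4_neural_engine_tops / speedup_vs_a11
print(f"Implied A11 Neural Engine throughput: ~{implied_a11_tops:.2f} TOPS")  # ~0.63
```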
Additionally, the new iPad Pro enables users to complete incredible AI tasks quickly and on-device thanks to iPadOS AI features like Live Captions, which provides real-time audio captions, and Visual Look Up, which recognises items in images and videos.
With just a tap, the iPad Pro with the Apple M4 chip can isolate a subject from the background of 4K video in Final Cut Pro, and it can listen to a piano performance in StaffPad and automatically transcribe the musical notation in real time. Inference workloads also run efficiently on-device, with minimal impact on battery life, app memory, and app performance. The M4's Neural Engine is Apple's most powerful to date, more capable than the neural processing unit in any AI PC on the market today.
Cutting-Edge Media Engine for Smooth, Efficient Streaming
The M4 features the most advanced Media Engine ever in an iPad. In addition to supporting the most widely used video codecs, such as H.264, HEVC, and ProRes, it brings hardware acceleration for AV1 to iPad for the first time, enabling more power-efficient playback of high-resolution video from streaming services.
Better for the Environment
Apple sets high standards for energy efficiency, and thanks to the power-efficient performance of the Apple M4 chip, the all-new iPad Pro meets those standards while delivering all-day battery life. As a result, it spends less time plugged in and uses less energy over its lifetime.
Apple is currently carbon neutral for worldwide corporate operations and expects to be so by 2030 for its manufacturing supply chain and product life cycle.
Read more on govindhtech.com