#aibenchmark
govindhtech · 4 months
Aurora Supercomputer Sets a New Record for AI Speed!
Intel Aurora Supercomputer
Together with Argonne National Laboratory and Hewlett Packard Enterprise (HPE), Intel announced at ISC High Performance 2024 that the Aurora supercomputer has broken the exascale barrier at 1.012 exaflops and is now the world's fastest AI system for open science, achieving 10.6 AI exaflops. Intel will also discuss how open ecosystems are essential to advancing AI-accelerated high-performance computing (HPC).
Why This Is Important:
From the beginning, Aurora was intended to be an AI-centric system that would enable scientists to use generative AI models to hasten scientific discoveries. Early AI-driven research at Argonne has advanced significantly. Among the many achievements are the mapping of the 80 billion neurons in the human brain, the improvement of high-energy particle physics by deep learning, and the acceleration of drug discovery and design using machine learning.
Analysis
The Aurora supercomputer comprises 166 racks, 10,624 compute blades, 21,248 Intel Xeon CPU Max Series processors and 63,744 Intel Data Center GPU Max Series units, making it one of the world's largest GPU clusters. Its 84,992 HPE Slingshot fabric endpoints form the largest open, Ethernet-based supercomputing interconnect on a single system.
The Aurora supercomputer crossed the exascale barrier at 1.012 exaflops using 9,234 nodes, just 87% of the system, and came in second on the high-performance LINPACK (HPL) benchmark. Aurora placed third on the HPCG benchmark at 5,612 TF/s using 39% of the machine. HPCG is designed to evaluate more realistic workloads that offer insight into memory access and communication patterns, two crucial aspects of real-world HPC systems. It complements benchmarks such as LINPACK by providing a fuller picture of a system's capabilities.
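A quick back-of-envelope check of the figures above (treating each compute blade as one node, and assuming linear scaling, which a real HPL run would not achieve exactly):

```python
# Back-of-envelope from the reported figures. Assumes one node per
# compute blade and linear scaling -- a simplification, since real
# HPL runs do not scale perfectly.
nodes_used = 9_234
total_blades = 10_624          # total compute blades in Aurora
hpl_exaflops = 1.012           # measured on the partial run

fraction = nodes_used / total_blades
projected = hpl_exaflops / fraction

print(f"fraction of system used: {fraction:.0%}")          # ~87%
print(f"naive full-system projection: {projected:.2f} exaflops")
```

This reproduces the "87% of the system" figure quoted above and suggests why full-system runs are expected to push the score higher.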
How AI is Optimized
The Intel Data Center GPU Max Series is the brains behind the Aurora supercomputer. At the core of the Max Series is the Intel Xe GPU architecture, which includes specialised hardware such as matrix and vector compute blocks that are ideal for AI and HPC applications. The unmatched computational performance delivered by the Intel Xe architecture is why the Aurora supercomputer topped the high-performance LINPACK mixed-precision (HPL-MxP) benchmark, which best illustrates the importance of AI workloads in HPC.
The parallel processing power of the Xe architecture excels at the complex matrix-vector operations at the heart of neural-network computing. Deep learning models rely heavily on matrix operations, and these compute cores are essential for accelerating them. Alongside a rich collection of performance libraries, optimised AI frameworks and Intel's suite of software tools, including the Intel oneAPI DPC++/C++ Compiler, the Xe architecture supports an open developer ecosystem distinguished by adaptability and scalability across a range of devices and form factors.
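To illustrate why matrix engines matter for deep learning, here is a minimal sketch of a dense (fully connected) layer: its forward pass is exactly the kind of matrix-vector product that dedicated matrix hardware accelerates. This is a generic illustration, not code from Aurora's software stack.

```python
import numpy as np

def dense_layer(W, b, x):
    """Forward pass of one dense layer: a matrix-vector product
    plus bias, followed by a ReLU nonlinearity. Hardware matrix
    engines accelerate exactly this W @ x operation."""
    return np.maximum(W @ x + b, 0.0)

W = np.ones((3, 2))            # 3 output units, 2 input features
b = np.zeros(3)
x = np.array([1.0, 2.0])

y = dense_layer(W, b, x)
print(y)  # [3. 3. 3.]
```

Stacking many such layers, with far larger matrices, is what makes deep learning workloads dominated by matrix arithmetic.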
Enhancing Accelerated Computing with Open Software and Capacity
Intel will stress the value of oneAPI, which provides a consistent programming model across architectures. Built on open standards, oneAPI lets developers write code that runs across a variety of hardware platforms without significant changes or vendor lock-in. To overcome proprietary lock-in, Arm, Google, Intel, Qualcomm and others are pursuing this goal through the Linux Foundation's Unified Acceleration Foundation (UXL), which is creating an open environment for all accelerators and unified heterogeneous compute built on open standards. The UXL Foundation continues to expand its coalition with new members.
Meanwhile, Intel Tiber Developer Cloud is growing its compute capacity, adding new, cutting-edge hardware platforms and new service features that enable developers and businesses to evaluate the newest Intel architectures, rapidly innovate and optimise AI workloads and models, and then deploy AI models at scale. New hardware offerings include large-scale Intel Gaudi 2-based and Intel Data Center GPU Max Series-based clusters, as well as previews of Intel Xeon 6 E-core and P-core systems for select customers. New features include Intel Kubernetes Service for multiuser accounts and cloud-native AI training and inference workloads.
Next Up
Intel's commitment to advancing HPC and AI is demonstrated by the new supercomputers being deployed with Intel Xeon CPU Max Series and Intel Data Center GPU Max Series technologies. The Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) CRESCO 8 system will help advance fusion energy; the Texas Advanced Computing Center (TACC) system is fully operational and will enable data analysis ranging from biology to supersonic turbulence flows and atomistic simulations of a wide range of materials; and the United Kingdom Atomic Energy Authority (UKAEA) will solve memory-bound problems that underpin the design of future fusion power plants. These systems also include the Euro-Mediterranean Centre on Climate Change (CMCC) Cassandra climate-change modelling system.
The results of the mixed-precision AI benchmark will serve as the basis for Falcon Shores, Intel's next-generation GPU for AI and HPC. Falcon Shores will combine Intel Gaudi's best features with the next-generation Intel Xe architecture, an integration that enables a single programming interface.
In comparison to the previous generation, early performance results on the Intel Xeon 6 with P-cores and Multiplexer Combined Ranks (MCR) memory at 8800 megatransfers per second (MT/s) deliver up to 2.3x performance improvement for real-world HPC applications, such as Nucleus for European Modelling of the Ocean (NEMO). This solidifies the chip’s position as the host CPU of choice for HPC solutions.
Read more on govindhtech.com
aicentury · 3 months
💥 Big News
AI Releases From Big Tech
OpenAI’s newly launched CriticGPT catches ChatGPT’s coding errors (& outperforms humans 60% of the time).
Runway’s Gen-3 Alpha AI video generator is now available to everyone.
Amazon’s new AI solution promises to reduce CPMs by up to 34% & improve CPC by 8.8% by delivering targeted ads based on what consumers are viewing in real time.
Eleven Labs’ new Reader App narrates articles, PDFs, ePubs, newsletters, your own content, & more aloud — right from your phone.
Anthropic launched a funding program that promotes the creation of more reliable AI benchmarks around societal impact, security risks, & more.
Figma unveiled new AI tools in limited beta (join the waitlist by navigating to the bottom of your Figma screen, then selecting ‘?’ > ‘Join UI3 + AI waitlist’).
Snap introduced new AI features that enhance personalization.
Meta’s “AI Info” (formerly “Made with AI”) label now specifies whether an image is entirely AI-generated or was simply edited using AI tools.
YouTube now lets you request removal of AI-generated content that resembles your face or voice — bolstering privacy precautions.
eurekakinginc · 5 years
"[P] AI Benchmark: A New Standard for ML Performance Assessment of CPUs, GPUs and TPUs" - Detail: AI Benchmark is an open-source Python library for evaluating the AI performance of various hardware platforms, including CPUs, GPUs and TPUs. The benchmark relies on the TensorFlow machine learning library and provides a lightweight solution for assessing inference and training speed for key deep learning models, including:

MobileNet-V2, Inception-V3, Inception-V4, Inception-ResNet-V2, ResNet-V2-50, ResNet-V2-152, VGG-16, SRCNN 9-5-5, VGG-19, ResNet-SRGAN, ResNet-DPED, U-Net, Nvidia-SPADE, ICNet, PSPNet, DeepLab, Pixel-RNN, LSTM, GNMT

It is currently distributed as a Python pip package; installation instructions can be found at https://ift.tt/2Yk4rsi and https://pypi.org/project/ai-benchmark/

Note: fast installation (if TensorFlow is already installed):

    pip install ai-benchmark

Run the benchmark using the following Python code:

    from ai_benchmark import AIBenchmark
    results = AIBenchmark().run()

Detailed information about the test setups: https://ift.tt/2xkt5xm. A short preliminary ranking is available here: http://ai-benchmark.com/ranking_cpus_and_gpus.html

A global ranking with the results of various hardware and software platforms, drivers/configs and TF builds should be available soon. Original post: https://ift.tt/2Yk4rsi. Caption by aiff22. Posted By: www.eurekaking.com
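For readers curious about what such a benchmark measures under the hood, here is a minimal, generic sketch of an inference-timing loop. It is not code from the ai-benchmark library itself; a numpy matrix multiply stands in for a real TensorFlow model, and the helper name `benchmark_inference` is made up for illustration.

```python
import time
import numpy as np

def benchmark_inference(model_fn, batch, n_runs=10, warmup=2):
    """Time repeated forward passes and return mean latency in ms.
    Warm-up runs are excluded so one-time setup costs are not timed."""
    for _ in range(warmup):
        model_fn(batch)
    start = time.perf_counter()
    for _ in range(n_runs):
        model_fn(batch)
    elapsed = time.perf_counter() - start
    return elapsed / n_runs * 1000.0  # mean ms per batch

# Stand-in "model": one dense layer (matrix multiply) plus ReLU.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
model = lambda x: np.maximum(x @ W, 0.0)

latency_ms = benchmark_inference(model, rng.standard_normal((32, 256)))
print(f"mean latency: {latency_ms:.3f} ms")
```

Real benchmarks like ai-benchmark do the same thing at scale: run each network repeatedly on fixed inputs and aggregate the per-run timings into a score.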
moneydjnews · 5 years
Richard Yu (Yu Chengdong): If Google services remain unavailable, the Huawei P40 may be the first phone to ship with HarmonyOS (Hongmeng)
According to ITHome (IT之家), Richard Yu, CEO of Huawei's consumer business, said in response to questions at an IFA 2019 media briefing that the HarmonyOS (Hongmeng) system is essentially ready, but that Huawei will not rush to use it, since related decisions and partnerships still need to be weighed. However, if Huawei's phones continue to be barred from using Google services, the company will consider adopting HarmonyOS, and the first phone to ship with it could be the Huawei P40, expected to launch in March 2020.
Asked whether Huawei would consider selling its Kirin chips, Yu said that many people have raised this question. At present, Kirin chips are produced solely for Huawei's own use, but the company is also considering selling chips into other industries, such as the IoT (Internet of Things) sector; for now, Huawei is still hesitating and the matter remains under discussion.
Separately, according to MyDrivers (快科技), a mysterious device labelled "Huawei Dev Phone" has appeared on AI Benchmark. It is powered by the Kirin 990 5G chip and posted an AI score of 76,206, more than double that of the second-place Kirin 810. Leaker Slashleaks suggests the Huawei Dev Phone may be the Huawei Mate…