#HighPerformanceComputing (HPC)
alexanderrogge · 4 months
Hewlett Packard Enterprise - One of two HPE Cray EX supercomputers to exceed an exaflop, Aurora is the second-fastest supercomputer in the world:
https://www.hpe.com/us/en/newsroom/press-release/2024/05/hewlett-packard-enterprise-delivers-second-exascale-supercomputer-aurora-to-us-department-of-energys-argonne-national-laboratory.html
#HewlettPackard #HPE #Cray #Supercomputer #Aurora #Exascale #Quintillion #Argonne #HighPerformanceComputing #HPC #GenerativeAI #ArtificialIntelligence #AI #ComputerScience #Engineering
govindhtech · 13 days
SK Hynix Unveils PEB110 E1.S SSD For Efficient Data Centers
SK Hynix Inc. today announced the development of PEB110 E1.S, a high-performance solid-state drive (SSD) for data centers.
The AI era has increased customer demand for ultra-fast DRAM chips, such as high bandwidth memory (HBM), and for high-performance NAND solutions, such as data center SSDs. Following this trend, the company designed the new product to the fifth-generation (Gen5) PCIe specification for increased data processing speed and power efficiency.
Having successfully brought PS1010 into mass production, SK Hynix aims to serve a wider range of customer demands with a more comprehensive SSD portfolio through the release of PEB110.
Peripheral Component Interconnect Express (PCIe): A high-speed, serial input/output interface found on digital device motherboards.
PS1010: Based on PCIe Gen5 E3.S and U.2/3, this data center/server-oriented SSD offers ultra-high performance and high capacity.
The company is currently in the qualification process with a global data center customer. Subject to qualification, mass production of the product is scheduled to start in the second quarter of next year.
PEB110 can transfer data at up to 32 gigatransfers per second (GT/s) thanks to PCIe Gen5, which is applied to the new device and offers double the bandwidth of Gen4. As a result, PEB110 can deliver twice the performance of the previous generation while improving power efficiency by more than 30%.
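As a rough sanity check, the GT/s figure can be converted into usable bandwidth. The four-lane link width typical of E1.S drives and PCIe's 128b/130b encoding are assumptions here, not stated in the announcement:

```python
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
    """Approximate one-direction PCIe bandwidth in GB/s.

    PCIe Gen3 and later use 128b/130b encoding: 128 payload bits
    are carried in every 130 bits transferred on the wire.
    """
    payload_bits_per_s = gt_per_s * 1e9 * lanes * 128 / 130
    return payload_bits_per_s / 8 / 1e9  # bits -> bytes -> GB

gen4 = pcie_bandwidth_gbps(16, lanes=4)  # Gen4: 16 GT/s per lane
gen5 = pcie_bandwidth_gbps(32, lanes=4)  # Gen5: 32 GT/s per lane
print(f"Gen4 x4: {gen4:.1f} GB/s, Gen5 x4: {gen5:.1f} GB/s")
```

This reproduces the "double the bandwidth of Gen4" claim exactly, since the encoding overhead cancels in the ratio.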
To significantly strengthen information security, SK Hynix has also applied Security Protocol and Data Model (SPDM) technology to PEB110, a first for its data center SSDs.
SPDM, a critical security solution focused on safeguarding server systems, provides secure server monitoring and authentication. Given the surge in attacks targeting data centers, the company anticipates that PEB110 with SPDM will satisfy clients' demanding information security requirements.
For broad compatibility across worldwide data centers, the product will be available in three capacities: 2 terabytes (TB), 4 TB, and 8 TB. It also meets the OCP version 2.5 requirements.
Open Compute Project (OCP): A global consortium of leading data center providers that develops specifications for eSSDs, software, and hardware to build highly energy-efficient data centers.
Ahn Hyun, Head of the N-S Committee at SK Hynix, stated, “The new product builds on the company’s best-in-class 238-high 4D NAND, boasting the most competitive standards in the industry in terms of cost, performance, and quality.”
Ahn stated, “Going forward, SK Hynix is on track to move forward with customer qualification and volume production to firmly establish their position as the leading global supplier of AI memory in the steadily expanding SSD market for data centers.”
News Highlights
By utilizing 5th-generation PCIe, the SSD offers 2x the performance and over 30% better power efficiency than the previous version.
Mass production will start in the second quarter of next year, after customer qualification.
SK Hynix will expand its data center SSD line to serve a wider range of customers.
Following its success with HBM, the company will strengthen its position as the top global supplier of AI memory in NAND solutions such as SSDs.
The PEB110 E1.S solid-state drive (SSD) from SK Hynix was created to meet the increasing needs of data centers, especially in the era of artificial intelligence. Utilizing the fifth-generation PCIe requirements, this new device offers notable gains in efficiency and performance.
Principal attributes and advantages
Improved Performance
Compared to its predecessors, the PEB110 E1.S offers a significant performance boost: it can transfer data at up to 32 gigatransfers per second (GT/s) over PCIe Gen5. The resulting faster data processing makes it well suited to demanding applications such as AI training and inference.
Greater Power Efficiency
One of the most notable features of the PEB110 E1.S is its power efficiency. For data centers looking to reduce costs and environmental impact, the SSD's combination of low power consumption and high performance is essential.
Enhanced Security
The PEB110 E1.S integrates the Security Protocol and Data Model (SPDM) technology in response to the growing concerns around data security. Strong authentication and monitoring features offered by SPDM aid in preventing unwanted access to critical data.
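The core idea behind SPDM's authentication is a challenge/response exchange: the host sends a fresh random challenge and the device must prove it holds a provisioned credential. The sketch below is a heavily simplified stand-in. Real SPDM uses X.509 certificate chains and asymmetric signatures, not the pre-shared HMAC key used here, which is only an assumption to keep the example self-contained:

```python
import hashlib
import hmac
import os

# Toy stand-in for a device-provisioned credential (real SPDM: cert chain + key pair).
DEVICE_KEY = b"provisioned-device-secret"

def device_respond(challenge: bytes) -> bytes:
    """Device signs the host's random challenge (the CHALLENGE_AUTH idea)."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def host_verify(challenge: bytes, response: bytes) -> bool:
    """Host checks the response against what a genuine device would return."""
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(32)  # a fresh nonce per exchange prevents replay attacks
assert host_verify(challenge, device_respond(challenge))      # genuine device accepted
assert not host_verify(challenge, b"\x00" * 32)               # forged response rejected
```

The fresh nonce is the essential design point: even if an attacker records one exchange, the recorded response is useless against the next challenge.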
Expanded Capacity Options
SK Hynix will offer the PEB110 E1.S in 2 TB, 4 TB, and 8 TB capacities, letting data centers choose the option that best suits their needs and workloads.
Possible Uses
AI and machine learning
The strong performance and low latency of the PEB110 E1.S make it an excellent choice for workloads such as AI training and inference, helping to speed model development and deployment.
High-Performance Computing (HPC)
The SSD can effectively manage big datasets and intricate simulations in HPC environments.
Cloud computing and data centers
The PEB110 E1.S can assist in enhancing overall performance and efficiency as data centers continue to expand in size and complexity.
In summary
The SK Hynix PEB110 E1.S represents a notable advance in data center SSD technology. Its combination of high performance, power efficiency, and improved security makes it an appealing option for businesses looking to upgrade their storage infrastructure.
Read more on govindhtech.com
jjbizconsult · 1 year
"Unleashing Innovation: NVIDIA H100 GPUs and Quantum-2 InfiniBand on Microsoft Azure"
mikyit · 6 months
Discover CINECA, a non-profit #consortium of #Italian 🇮🇹 #universities 🎓 and #research 🔬 institutions. Proudly operating the world's 4th most powerful #supercomputer 🖥️💥 #Leonardo, #CINECA is at the forefront of advancing #research 👨‍🔬 and #innovation 🛰️. #CINECA's contributions play a crucial role in advancing #scientific #research and #innovation in #Italy and beyond. The consortium continues to evolve its services and #infrastructure 🏗️ to meet the growing demands of the scientific #community.
#HighPerformanceComputing (#HPC): It operates and manages supercomputing systems that are among the most powerful in Europe. These systems are used for a wide range of scientific simulations, modeling, and data-intensive research projects.
#Supercomputers: These supercomputers are designed to handle complex computations and simulations, enabling researchers to tackle scientific challenges in areas such as physics, chemistry, biology, and engineering.
#ResearchandInnovation: It collaborates with academic and industrial partners to advance scientific knowledge and technological capabilities. The consortium supports projects that leverage advanced computing resources to address complex problems.
#InternationalCollaboration: CINECA collaborates with other European and international research organizations and consortia.
#DataManagement and #Services: Apart from HPC, CINECA provides services related to data management, storage, and analysis, supporting researchers in handling and processing the large datasets generated by their experiments and simulations.
kickstandwilly · 2 years
5 benefits of investing in high-performance computing (HPC) - #incaseyoumissedit #ICYMI #Dell #HPC #workplace #modernization #highperformancecomputing #Zones | #RoadmapForSuccess
For corporate IT leaders, having an intense focus on workplace modernization is nothing new. Whether it’s deploying new devices, new apps, or new data management platforms, bringing in top-notch technology has always been a priority. But right now, modern technology is reaching an entirely new level. 5G. Smart devices. High-speed connectivity. The Internet of Things. All of these innovations are…
edchicago · 7 years
Supercomputer #hpc #highperformancecomputing #nersc #cori #latergram #almostexascale #latergram (at NERSC)
govindhtech · 4 months
Aurora Supercomputer Sets a New Record for AI Speed!
Intel Aurora Supercomputer
Together with Argonne National Laboratory and Hewlett Packard Enterprise (HPE), Intel announced at ISC High Performance 2024 that the Aurora supercomputer has broken the exascale barrier at 1.012 exaflops and is now the world's fastest AI system for open science, achieving 10.6 AI exaflops. Intel will also discuss how open ecosystems are essential to advancing AI-accelerated high performance computing (HPC).
Why This Is Important:
From the beginning, Aurora was designed as an AI-centric system that would let scientists use generative AI models to accelerate scientific discovery. Early AI-driven research at Argonne has already advanced significantly: achievements include mapping the 80 billion neurons of the human brain, improving high-energy particle physics with deep learning, and accelerating drug discovery and design with machine learning.
Analysis
The Aurora supercomputer has 166 racks, 10,624 compute blades, 21,248 Intel Xeon CPU Max Series processors, and 63,744 Intel Data Center GPU Max Series units, making it one of the world's largest GPU clusters. Its 84,992 HPE Slingshot fabric endpoints form the largest open, Ethernet-based supercomputing interconnect on a single system.
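The component counts quoted above are internally consistent, and dividing them out recovers the per-blade and per-rack layout (derived only from the figures in this paragraph):

```python
racks, blades = 166, 10_624
cpus, gpus, endpoints = 21_248, 63_744, 84_992

print(f"blades per rack:     {blades / racks:.0f}")      # 64
print(f"CPUs per blade:      {cpus // blades}")          # 2
print(f"GPUs per blade:      {gpus // blades}")          # 6
print(f"endpoints per blade: {endpoints // blades}")     # 8
```

So each compute blade pairs two Xeon Max processors with six GPU Max units, 64 blades to a rack.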
The Aurora supercomputer crossed the exascale barrier at 1.012 exaflops using 9,234 nodes, just 87% of the system, placing second on the high-performance LINPACK (HPL) benchmark. Aurora also placed third on the HPCG benchmark at 5,612 TF/s using 39% of the machine. HPCG is designed to evaluate more realistic workloads that offer insight into memory access and communication patterns, two crucial aspects of real-world HPC systems, and so provides a fuller view of a system's capabilities, complementing benchmarks such as LINPACK.
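The quoted percentage can be reproduced from the node counts (assuming the 10,624 compute blades are the "nodes" in question, which the text implies):

```python
total_nodes = 10_624
hpl_nodes = 9_234
hpl_exaflops = 1.012

fraction = hpl_nodes / total_nodes
print(f"HPL run used {fraction:.0%} of the system")  # 87%

# Average throughput per node implied by the HPL score (exaflops -> teraflops):
tf_per_node = hpl_exaflops * 1e6 / hpl_nodes
print(f"about {tf_per_node:.0f} TF/s per node on HPL")
```

This also gives a feel for the scale involved: each node sustains on the order of a hundred teraflops during the HPL run.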
How AI is Optimized
The Intel Data Center GPU Max Series is the brains behind the Aurora supercomputer. At the core of the Max Series is the Intel Xe GPU architecture, which includes specialized hardware such as matrix and vector compute blocks that are ideal for AI and HPC applications. The unmatched computational performance delivered by the Xe architecture is why the Aurora supercomputer won the high-performance LINPACK mixed-precision (HPL-MxP) benchmark, which best illustrates the importance of AI workloads in HPC.
The parallel processing power of the Xe architecture excels at the complex matrix-vector operations that are integral to neural network computing. Deep learning models rely heavily on matrix operations, which these compute cores are essential for accelerating. Alongside a rich collection of performance libraries, optimized AI frameworks, and Intel's suite of software tools, including the Intel oneAPI DPC++/C++ Compiler, the Xe architecture supports an open developer ecosystem distinguished by adaptability and scalability across a range of devices and form factors.
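To make the point concrete, a dense neural-network layer reduces to exactly the kernel described above: a matrix-vector product plus a nonlinearity. The pure-Python version below is for illustration only; frameworks dispatch this inner loop to hardware matrix engines like those in the GPU:

```python
def dense_layer(weights, bias, x):
    """y = relu(W @ x + b): the matrix-vector kernel at the heart of deep learning."""
    y = []
    for row, b in zip(weights, bias):
        # One dot product per output neuron: this is the work matrix engines accelerate.
        acc = sum(w * xi for w, xi in zip(row, x)) + b
        y.append(max(0.0, acc))  # ReLU activation
    return y

W = [[0.5, -1.0], [2.0, 0.25]]   # 2x2 weight matrix (illustrative values)
b = [0.1, -0.2]
print(dense_layer(W, b, [1.0, 2.0]))
```

Every layer of a large model repeats this pattern at vastly larger dimensions, which is why matrix throughput dominates both training and inference performance.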
Enhancing Accelerated Computing with Open Software and Capacity
Intel will stress the value of oneAPI, which provides a consistent programming model across architectures. Based on open standards, oneAPI gives developers the freedom to write code that works across a variety of hardware platforms without significant changes or vendor lock-in. To overcome proprietary lock-in, Arm, Google, Intel, Qualcomm, and others are pursuing this objective through the Linux Foundation's Unified Acceleration Foundation (UXL), which is creating an open environment of unified heterogeneous compute on open standards for all accelerators. The UXL Foundation is expanding its coalition by adding new members.
Meanwhile, Intel Tiber Developer Cloud is growing its compute capacity with new, cutting-edge hardware platforms and service features that let developers and businesses evaluate the newest Intel architectures, rapidly innovate and optimize AI workloads and models, and then deploy AI models at scale. The new hardware offerings include large-scale Intel Gaudi 2-based and Intel Data Center GPU Max Series-based clusters, as well as previews of Intel Xeon 6 E-core and P-core systems for select customers. New features include Intel Kubernetes Service for multiuser accounts and cloud-native AI training and inference workloads.
Next Up
Intel's commitment to advancing HPC and AI is demonstrated by the new supercomputers being deployed with Intel Xeon CPU Max Series and Intel Data Center GPU Max Series technologies. The Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) CRESCO 8 system will help advance fusion energy; the Texas Advanced Computing Center (TACC) system is fully operational and will enable research from data analysis in biology to supersonic turbulence flows and atomistic simulations of a wide range of materials; and the United Kingdom Atomic Energy Authority (UKAEA) will solve memory-bound problems that underpin the design of future fusion power plants. These systems also include the Euro-Mediterranean Center on Climate Change (CMCC) Cassandra climate change modelling system.
The results of the mixed-precision AI benchmark will serve as the basis for Falcon Shores, Intel's next-generation GPU for AI and HPC. Falcon Shores will combine the best features of Intel Gaudi with the next-generation Intel Xe architecture, an integration that enables a single programming interface.
In comparison to the previous generation, early performance results on the Intel Xeon 6 with P-cores and Multiplexer Combined Ranks (MCR) memory at 8800 megatransfers per second (MT/s) deliver up to 2.3x performance improvement for real-world HPC applications, such as Nucleus for European Modelling of the Ocean (NEMO). This solidifies the chip’s position as the host CPU of choice for HPC solutions.
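The 8800 MT/s figure translates into peak memory bandwidth as sketched below. The 12-channel count and 8-byte channel width are assumptions about the Xeon 6 platform, not stated in the text, so treat the absolute numbers as illustrative:

```python
def mem_bandwidth_gbs(mt_per_s: float, channels: int, bytes_per_transfer: int = 8) -> float:
    """Peak DRAM bandwidth in GB/s: transfers/s x channels x bytes per transfer."""
    return mt_per_s * 1e6 * channels * bytes_per_transfer / 1e9

mcr  = mem_bandwidth_gbs(8800, channels=12)  # MCR DIMMs at 8800 MT/s
ddr5 = mem_bandwidth_gbs(6400, channels=12)  # conventional DDR5-6400 for comparison
print(f"MCR: {mcr:.0f} GB/s vs DDR5-6400: {ddr5:.0f} GB/s ({mcr / ddr5:.2f}x)")
```

The bandwidth uplift from MCR alone is well under the quoted 2.3x application speedup, which is consistent with the gain coming from the whole platform (cores, caches, and memory together) rather than memory speed in isolation.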
Read more on govindhtech.com