#nersc
Explore tagged Tumblr posts
govindhtech · 29 days ago
Text
Dell NERSC 10 Supercomputer With NVIDIA Vera Rubin & DOE
Tumblr media
Our Nobel Prize-Winning Supercomputer Accelerates Science
Dell Technologies will build NERSC 10, the next flagship supercomputer of the National Energy Research Scientific Computing Center (NERSC), under a new DOE contract. NERSC is housed at Lawrence Berkeley National Laboratory, a DOE user facility site. Secretary of Energy Chris Wright made the announcement at Berkeley Lab.
NERSC 10 is the tenth-generation computing system at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility. NERSC's HPC facilities enable research in physics, climate, energy, biology, and materials science.
Overview of NERSC 10
NERSC 10 will replace Perlmutter, the centre's flagship supercomputer since 2021.
Deployment of NERSC 10 is expected in 2026 or later.
Designed to meet the DOE Office of Science's growing data and computational demands through 2030.
Like its predecessors, NERSC 10 will be sited at Lawrence Berkeley National Laboratory (LBNL) in California.
Main features
Extreme Performance (the central goal of NERSC 10):
Near-exascale or exascale computing power.
Support for increasingly complex workloads.
Balanced Architecture:
High memory bandwidth and capacity.
I/O for rapid storage.
Complex interconnects for low-latency communication.
Integration of HPC and AI:
Support for AI and ML workloads in addition to simulations.
Built for hybrid simulation-and-AI workflows.
Energy Efficiency:
Improved floating-point performance per watt.
Green technologies such as liquid cooling may be investigated.
User-centred design:
Maintaining the NERSC user experience and software stack.
A focus on usability and productivity for many scientists.
Purchase and Development
DOE normally issues an RFP years before system delivery.
Community Engagement: NERSC solicits scientific user community comments throughout system design to ensure practical research needs are met.
Strategic Importance
Supports hundreds of research projects and over 9,000 users throughout DOE mission areas.
Leadership Role: Unlike specialised experimental exascale systems, NERSC systems are designed to be usable by a broad community of scientists.
The new system, expected in 2026, will be named “Doudna” after Berkeley Lab biochemist Jennifer Doudna, who shared the 2020 Nobel Prize in Chemistry for developing CRISPR. Secretary Wright, who said he was surprised and pleased by the choice of name, praised her contributions to biology and the potential for computational power to speed cures for illnesses and tumours.
The Dell Technologies-built Doudna supercomputer will run on NVIDIA's next-generation Vera Rubin platform. It is aimed at large-scale HPC workloads, including high-energy physics, molecular dynamics, and AI training and inference. Bringing simulation, data, and AI together on one platform streamlines cutting-edge science workflows and speeds up discoveries.
The system “represents DOE’s commitment to advancing American leadership in science, AI, and high-performance computing,” Secretary Wright said. He called Doudna a “powerhouse for rapid innovation” that will help revolutionise quantum computing and supply cheap, abundant energy. Wright called AI “the Manhattan Project of our time,” emphasising that Doudna will help American scientists compete globally in AI.
The Doudna supercomputer will speed up numerous scientific workflows, said NERSC Director Sudip Dosanjh. NERSC is collaborating with NVIDIA and Dell to prepare its roughly 11,000 users for the system's new capabilities. Doudna will be connected to DOE observational and experimental facilities via the Energy Sciences Network (ESnet), allowing scientists to stream and analyse data in real time. This integration makes the supercomputer an active participant in experimental workflows rather than a passive endpoint.
Doudna could boost innovation in several areas. Because NERSC supports fusion research, Doudna could accelerate the pursuit of plentiful, usable energy. Its powerful GPUs will let DOE-funded researchers quickly integrate large-scale AI into their workflows, speeding up research in basic physics, biomolecular modelling, and advanced materials design. The system will also support modern quantum simulation tools, including NVIDIA's CUDA-Q platform, for co-designing next-generation integrated quantum-HPC systems and scalable quantum algorithms.
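For readers curious what a CUDA-Q quantum kernel looks like, here is a minimal sketch in CUDA-Q's Python interface that prepares and samples a Bell pair. It is illustrative only, is not taken from the Doudna announcement, and the exact API details may vary between cudaq releases.
```python
# Minimal CUDA-Q (Python) sketch: prepare and sample a Bell pair.
# Illustrative only -- not from the Doudna announcement; API details
# may differ across cudaq versions.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)      # allocate two qubits
    h(qubits[0])                   # put the first qubit in superposition
    x.ctrl(qubits[0], qubits[1])   # entangle via a controlled-X
    mz(qubits)                     # measure both qubits

# Sampling should return roughly equal counts of '00' and '11'.
print(cudaq.sample(bell))
```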
According to NVIDIA, the system uses the Vera Rubin CPU-GPU platform and Dell's latest ORv3 direct liquid-cooled server technologies. It will combine Dell Integrated Rack Scalable Systems and PowerEdge servers with NVIDIA accelerators, NVIDIA Quantum-X800 InfiniBand networking, and high-performance data management and storage.
Doudna is expected to deliver more than ten times the scientific output of NERSC's current flagship supercomputer, Perlmutter, while drawing only two to three times the power, a three-to-fivefold improvement in performance per watt. The goal is to substantially reduce the time needed for major scientific breakthroughs.
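Those headline numbers hang together arithmetically. A quick sketch, using only the ratios quoted above (no official power figures are given in the post), shows how a 10x output gain at 2–3x the power implies the quoted 3–5x gain per watt:
```python
# Back-of-the-envelope check of the quoted Doudna-vs-Perlmutter ratios.
# Uses only the ratios in the text above; absolute power draws are not given.
output_gain = 10.0             # "more than ten times" the scientific output
power_gain_range = (2.0, 3.0)  # "two to three times" the power of Perlmutter

for power_gain in power_gain_range:
    per_watt = output_gain / power_gain
    print(f"{power_gain:.0f}x power -> {per_watt:.1f}x output per watt")
# Prints roughly 5.0x and 3.3x, matching the "three to five times per watt" claim.
```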
“We’re not just building a faster computer,” said Nick Wright, Doudna principal architect and advanced technologies group lead at NERSC. “We’re designing a system to help researchers think bigger and discover faster.” He added that the system is built to address pressing global challenges quickly and to enable research in physics, chemistry, and fields not yet imagined.
0 notes
spacetimewithstuartgary · 9 months ago
Text
Tumblr media
Magnifying deep space through the “carousel lens”
A newly discovered cluster-scale strong gravitational lens, with a rare alignment of seven background lensed galaxies, provides a unique opportunity to study cosmology.
In a rare and extraordinary discovery, researchers have identified a unique configuration of galaxies that form the most exquisitely aligned gravitational lens found to date. The Carousel Lens is a massive cluster-scale gravitational lens system that will enable researchers to delve deeper into the mysteries of the cosmos, including dark matter and dark energy.
“This is an amazingly lucky ‘galactic line-up’ – a chance alignment of multiple galaxies across a line-of-sight spanning most of the observable universe,” said David Schlegel, a co-author of the study and a senior scientist in Berkeley Lab’s Physics Division. "Finding one such alignment is a needle in the haystack. Finding all of these is like eight needles precisely lined up inside that haystack."
The Carousel Lens is an alignment consisting of one foreground galaxy cluster (the ‘lens’) and seven background galaxies spanning immense cosmic distances and seen through the gravitationally distorted space-time around the lens. In the dramatic image below:
The lensing cluster, located 5 billion light years away from Earth, is shown by its four brightest and most massive galaxies (indicated by La, Lb, Lc, and Ld), and these constitute the foreground of the image.  
Seven unique galaxies (numbered 1 through 7), appear through the lens. These are located far beyond, at distances from 7.6 to 12 billion light years away from Earth, approaching the limit of the observable universe.
Each galaxy’s repeated appearances (indicated by each number’s letter index, e.g., a through d) show differences in shape that are curved and stretched into multiple “fun house mirror” iterations caused by the warped space-time around the lens.
Of particular interest is the discovery of an Einstein Cross – the largest known to date – shown in galaxy number 4’s multiple appearances (indicated by 4a, 4b, 4c, and 4d). This rare configuration of multiple images around the center of the lens is an indication of the symmetrical distribution of the lens’ mass (dominated by invisible dark matter) and plays a key role in the lens-modeling process. 
Light traveling from far-distant space can be magnified and curved as it passes through the gravitationally distorted space-time of nearer galaxies or clusters of galaxies. In rare instances, a configuration of objects aligns nearly perfectly to form a strong gravitational lens. Using an abundance of new data from the Dark Energy Spectroscopic Instrument (DESI) Legacy Imaging Surveys, recent observations from NASA’s Hubble Space Telescope, and the Perlmutter supercomputer at the National Energy Research Scientific Computing Center (NERSC), the research team built on their earlier studies (in May 2020 and Feb 2021) to identify likely strong lens candidates, laying the groundwork for the current discovery. 
“Our team has been searching for strong lenses and modeling the most valuable systems,” explains Xiaosheng Huang, a study co-author and member of Berkeley Lab’s Supernova Cosmology Project, and a professor of physics and astronomy at the University of San Francisco. “The Carousel Lens is an incredible alignment of seven galaxies in five groupings that line up nearly perfectly behind the foreground cluster lens. As they appear through the lens, the multiple images of each of the background galaxies form approximately concentric circular patterns around the foreground lens, as in a carousel. It’s an unprecedented discovery, and the computational model generated shows a highly promising prospect for measuring the properties of the cosmos, including those of dark matter and dark energy.” 
The study also involved several Berkeley Lab student researchers, including the lead author, William Sheu, an undergraduate student intern with DESI at the beginning of this study, now a PhD student at UCLA and a DESI collaborator. 
The Carousel Lens will enable researchers to study dark energy and dark matter in entirely new ways based on the strength of the observational data and its computational model.
“This is an extremely unusual alignment, which by itself will provide a testbed for cosmological studies,” observes Nathalie Palanque-Delabrouille, director of Berkeley Lab’s Physics Division. “It also shows how the imaging done for DESI can be leveraged for other scientific applications,” such as investigating the mysteries of dark matter and the accelerating expansion of the universe, which is driven by dark energy.
IMAGE: Hubble Space Telescope image of the Carousel Lens, taken in two 10-minute exposures, one using an optical filter and another using an infrared filter. The “L” indicators near the center (La, Lb, Lc, and Ld) show the most massive galaxies in the lensing cluster, located 5 billion light years away. Seven unique galaxies (numbered 1 through 7) – located an additional 2.6 to 7 billion light years beyond the lens – appear in multiple, distorted “fun-house mirror” iterations (indicated by each number’s letter index, e.g., a through d), as seen through the lens. Credit William Sheu (UCLA) using Hubble Space Telescope data.
9 notes · View notes
weetechsolution · 8 months ago
Text
The World's Most Powerful Supercomputers: A Race for Exascale
Tumblr media
Supercomputers, once reserved for a handful of research institutes, are now far more widespread and more powerful than ever. These computing behemoths have enabled innovations ranging from climate modeling to artificial intelligence. Here is a look at the world's most powerful supercomputers:
The World's Most Powerful Supercomputers
1. Frontier
Location: Oak Ridge National Laboratory, Tennessee, USA
Performance: Exascale computing — systems that can perform a quintillion (10^18) calculations per second (a short unit-conversion sketch follows these three entries).
Applications: Climate modeling, materials research, and artificial intelligence.
2. Fugaku
Location: RIKEN Center for Computational Science, Kobe, Japan
Performance: Pre-exascale computing; often cited as a leader in energy efficiency.
Applications: Weather forecasting, drug discovery, and materials science.
3. Perlmutter
Location: National Energy Research Scientific Computing Center (NERSC), Berkeley Lab, California, USA
Performance: Pre-exascale system delivering roughly four exaflops of mixed-precision AI performance.
Applications: Artificial intelligence, climate modeling, and materials science.
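As promised above, a small sketch converting between the units used in these rankings. The ~1.1-exaflop figure for Frontier is its widely reported HPL (Linpack) result, included here only for illustration and not taken from the post itself:
```python
# Units used in supercomputer rankings, as a quick sanity check.
PETAFLOP = 1e15  # a quadrillion floating-point operations per second
EXAFLOP = 1e18   # a quintillion floating-point operations per second

frontier_hpl_exaflops = 1.1  # widely reported HPL result, illustration only
print(f"1 exaflop = {EXAFLOP / PETAFLOP:.0f} petaflops")
print(f"Frontier at ~{frontier_hpl_exaflops} EF "
      f"≈ {frontier_hpl_exaflops * EXAFLOP / PETAFLOP:.0f} PF")
```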
Other Notable Supercomputers
Tianhe-3: A Chinese supercomputer designed for high performance and low energy consumption.
Sierra: A US system at Lawrence Livermore National Laboratory built for nuclear weapons simulation and scientific research.
Summit: An older system at Oak Ridge National Laboratory that remains widely applicable and has supported studies around the world.
These supercomputers are vital not only for solving complex problems but also for advancing technology. As the field continues to develop, expect ever more powerful machines that push the limits of what is currently achievable.
0 notes
peterchiublack · 1 year ago
Text
Supercomputers embargoed against (Communist) China, March 2017. Old news, but old news worth revisiting with today's perspective — still worth a read.
The 2016 international TOP500 supercomputer rankings put the Sunway TaihuLight, developed by China's National Supercomputing Center in Wuxi, at the top of the list, pushing the previous number one, China's Tianhe-2, into second place. China placed 167 supercomputers on the list, against 165 for the United States and 168 for all other countries combined — a double first for China in the global TOP500.
Compared with earlier supercomputers, the biggest highlight of the Sunway TaihuLight is that it runs entirely on home-grown Chinese silicon: the Sunway SW26010 many-core processor. Peak performance is 3 TFLOPS double precision per chip (3x10^12 double-precision floating-point operations per second per chip), putting it in the same class as Intel's Knights Landing many-core processors. (A back-of-the-envelope check of these figures follows the ranking list below.)
Three full-system applications built on the Sunway TaihuLight, then the world's fastest supercomputer, were shortlisted for the Gordon Bell Prize, the highest award for high-performance computing applications. In the 30 years since the prize was established, no Chinese team had ever been a finalist; this time, three projects developed on the TaihuLight system made the shortlist, accounting for half of all finalists.
1. Sunway TaihuLight, Sunway MPP, SW26010, National Supercomputing Center, Wuxi, China, 10.6 million cores, 93.01 PF (petaflops: quadrillions of operations per second)
2. Tianhe-2, TH-IVB-FEP Cluster, National Supercomputer Center, Guangzhou, China, 3.12 million cores, 33.86 PF
3. Titan, Cray XK7, US Department of Energy, Oak Ridge National Laboratory, 17.59 PF
4. Sequoia, IBM BlueGene/Q, US Department of Energy, Lawrence Livermore National Laboratory, California, 1.57 million cores, 16.32 PF
5. Cori, Cray XC40, Berkeley Lab, National Energy Research Scientific Computing Center (NERSC), USA, 14 PF
6. Oakforest-PACS, Fujitsu PRIMERGY CX1640 M1 cluster, Joint Center for Advanced High Performance Computing, Japan, 13.6 PF
7. K Computer, SPARC64, 705,000 cores, RIKEN Advanced Institute for Computational Science, Japan, 10.5 PF
8. Piz Daint, Cray XC30, 116,000 Xeon and NVIDIA cores, Swiss National Supercomputing Centre, Switzerland, 9.8 PF
9. Mira, IBM BlueGene/Q, US DOE/SC/Argonne National Laboratory, 786,000 cores, 8.6 PF
10. Trinity, Cray XC40, US DOE/NNSA/LANL/SNL, 301,000 cores, 8.1 PF
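As promised above, a back-of-the-envelope check of the TaihuLight figures. It assumes the commonly cited configuration of 40,960 SW26010 chips with 260 cores each (an assumption not stated in the post itself) and uses the 3-TFLOPS-per-chip and 93.01 PF figures quoted above:
```python
# Rough consistency check of the Sunway TaihuLight numbers quoted above.
# Assumption (not in the post): 40,960 SW26010 chips, 260 cores per chip.
chips = 40_960
cores_per_chip = 260
tflops_per_chip = 3.0  # quoted double-precision peak per chip

total_cores = chips * cores_per_chip
peak_pflops = chips * tflops_per_chip / 1_000  # TFLOPS -> PFLOPS

print(f"total cores ≈ {total_cores:,}")            # ≈ 10,649,600, i.e. ~10.6 million
print(f"theoretical peak ≈ {peak_pflops:.0f} PF")  # ≈ 123 PF, consistent with 93.01 PF on Linpack
```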
The benchmarking and the rankings are both produced by the international TOP500 organisation. In its statement, TOP500 noted that "beyond the head-to-head count of supercomputing systems, China and the United States are also running neck and neck in Linpack performance." The United States accounted for 33.9% of the world's aggregate supercomputing capability, ranking first; China followed closely with a 33.3% share. The combined performance of the world's 500 largest supercomputers reached 672 PF, up 60% from the year before.
IBM is the largest maker of supercomputer CPUs, with AMD second at only a 7% market share. Ninety-six of the systems on the list use accelerators, with NVIDIA GPUs taking a 60% share of these, followed by Xeon Phi at 21%. A total of 206 systems use Gigabit Ethernet and 187 use InfiniBand, while the number of systems using Intel's latest Omni-Path technology rose from 20 last year to 28. Unable to stomach this success, the United States worked against China behind the scenes: leveraging its technological head start, it decided to ban sales of Xeon Phi compute cards to four of China's national supercomputing centres, and at the time voices predicting the demise of Chinese supercomputing were everywhere. Even though Tianhe-2 won its fifth consecutive title at the 2015 International Supercomputing Conference, it was derided by some ill-intentioned critics for using CPUs from the American company Intel, with many claiming that without American CPUs Tianhe-2 was finished.
Netizens joked that in supercomputing China has overtaken the United States but has yet to overtake one other country — because in Chinese eyes there are really only two countries in the world: China and "abroad". Remember that this time TaihuLight runs on domestically made chips. China's Tianhe-2 had sat atop the global TOP500 list ever since its debut in 2013, which left the United States embarrassed and unsettled; in 2015 the US lashed out by banning Intel from selling high-end server chips to China (an embargo that failed to keep China from reigning over supercomputing). Unexpectedly, the very next year China fielded its own chip and took first place again — a testament to independence and self-reliance: being able to buy must be matched by being able to build, which is also what the Chinese public demands of its aero-engine industry.
It is fair to say that as China's economy and technology develop, China will break Western monopolies in more and more areas. This is a road China must take to raise per-capita GDP and reach the level of a moderately developed country. At the same time, China will not be content to follow in the West's footsteps; it aims to lead the world in selected fields. In quantum communication, for example, China already leads the world, having launched the first quantum communication satellite. Thanks are due to the scientists quietly contributing to the rejuvenation of the Chinese nation — it is their hard work and dedication that secures the well-being of our families.
[Tech commentary] Advanced equipment and technologies embargoed against China by the West. Source: 中國鋼研戰略所 (Strategy Institute of the China Iron & Steel Research Institute Group), 2017/03/21.
https://read01.com/Jgna2R.html
0 notes
yo-sostenible · 1 year ago
Text
That is the main conclusion of a new study by the CSIC's Institut de Ciències del Mar, which demonstrates the effectiveness of remote sensing for monitoring key oceanographic variables, including salinity. The study's conclusions have possible implications for the global circulation system that regulates Earth's climate. (Image: Pixabay)
As the effects of climate change continue to reshape the Arctic landscape, a new study led by the Institut de Ciències del Mar (ICM-CSIC) in Barcelona, in collaboration with Norway's Nansen Environmental and Remote Sensing Center (NERSC), has revealed that satellite data help reduce by up to 70% the bias in meltwater estimates made with numerical models. Specifically, the work draws on salinity measurements from the SMOS (Soil Moisture and Ocean Salinity) satellite, which since 2009 has been sending the European Space Agency (ESA) data used to calculate ocean salinity and soil moisture — data that are important for better understanding ocean circulation and the water cycle, for improving the prevention of hurricanes and wildfires, and for refining estimates of ice melt.
“By integrating satellite-derived surface salinity measurements with data from the TOPAZ Arctic numerical model, we were able to significantly improve our estimate of freshwater content and better monitor changes in the Beaufort Sea, a critical area of the Arctic affected by rapid environmental change,” explains ICM-CSIC researcher and lead author Marta Umbert. This highlights the ability of satellite data to contribute to monitoring freshwater dynamics in cold regions such as the Arctic, with significant implications for understanding global climate systems.
An expanding phenomenon
Arctic hydrography has undergone major transformations over the past two decades, with a marked decline in sea-ice extent and an increase in liquid freshwater content, driven mainly by melting glaciers and sea ice. The new study, recently published in the journal Ocean Science, integrated surface salinity measurements from the SMOS satellite to assess the freshwater content of the Beaufort Sea between 2011 and 2019 and compare it with estimates based on in-situ data. The results show that numerical-model data alone underestimate the freshwater content, and that incorporating satellite measurements reduces the bias by up to 70%.
“Our research demonstrates the fundamental role that remote sensing of salinity plays in improving our ability to monitor the Arctic's freshwater content and to understand key processes that influence global climate systems,” notes Eva de Andrés of the Universidad Politécnica de Madrid (UPM). The implications of the study extend beyond the Arctic, with potential repercussions for the global circulation system that regulates Earth's climate: a better understanding of salinity variations and their relationship to freshwater content will make it possible to better predict and mitigate the effects of climate change at both regional and global scales, the research team concludes.
Reference: Marta Umbert et al., “Contribution of satellite sea surface salinity to the estimation of liquid freshwater content in the Beaufort Sea,” Ocean Science. Source: Instituto de Ciencias del Mar del CSIC
Tumblr media
View On WordPress
0 notes
blanchektechnologyhub · 4 years ago
Text
Nvidia, NERSC Claim Perlmutter Is the World's Fastest AI Supercomputer
Nvidia and the National Energy Research Scientific Computing Center (NERSC) on Thursday flipped the “on” switch for Perlmutter, billed as the world's fastest supercomputer for AI jobs. Named for astrophysicist Saul Perlmutter, the new supercomputer boasts 6,144 NVIDIA A100 Tensor Core GPUs and will be tasked with stitching together the largest-ever 3D map of the visible universe, among other undertakings.
Tumblr media
Perlmutter is “the fastest system on the planet” at processing workloads with the 16-bit and 32-bit mixed-precision math used in artificial intelligence (AI) applications, said Nvidia global HPC/AI product marketing lead Dion Harris during a press briefing this week. Later this year, a second phase will add even more AI supercomputing capacity to Perlmutter, which is housed at NERSC at the Lawrence Berkeley National Laboratory.
“In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure,” Harris wrote in a blog post announcing the news.
“Researchers need the speed of Perlmutter's GPUs to capture dozens of exposures from one night in time to know where to point DESI the next evening. Preparing a year of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days,” he wrote.
Supercharging HPC with AI and machine learning
Powering up an AI-optimized supercomputer “represents a real milestone,” said Wahid Bhimji, acting lead for NERSC's data and analytics services group.
“Perlmutter, with its A100 GPUs, all-flash file system, and streaming data capabilities, is well timed to meet this need for AI,” Bhimji added.
Perlmutter will give NERSC's roughly 7,000 supported researchers access to four exaflops of mixed-precision computing performance for AI-assisted scientific projects. In addition to the DESI mapping project, researchers are teeing up time on the supercomputer for work in fields such as environmental science, where Perlmutter will help analyze subatomic interactions in the search for green energy sources.
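The "four exaflops" figure is consistent with the GPU count given earlier in the post. A rough sketch, assuming the commonly published per-GPU A100 figure of about 624 TFLOPS for FP16 tensor math with structured sparsity (an assumption, not a number from the post):
```python
# Rough check of Perlmutter's quoted ~4 exaflops of mixed-precision AI performance.
# Assumption (not from the post): ~624 TFLOPS per A100 for FP16 tensor math with sparsity.
gpus = 6_144
tflops_per_gpu = 624.0

total_exaflops = gpus * tflops_per_gpu * 1e12 / 1e18
print(f"≈ {total_exaflops:.1f} exaflops mixed-precision")  # ≈ 3.8, i.e. roughly "four exaflops"
```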
That project, which will generate simulations of atoms interacting, requires the unique blend of AI and high-performance computing (HPC) that Perlmutter delivers, Harris said. “Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs like Quantum Espresso. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time,” he said.
The ability to harness AI in supercomputing also has researchers optimistic about the DESI project. As well as mapping the known universe, the project “aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe,” NERSC data architect Rollin Thomas said. System namesake Saul Perlmutter, who remains an active astrophysicist at Berkeley Lab, was awarded the 2011 Nobel Prize in Physics for his contributions to the discovery of dark energy.
He added that in early work with researchers to prepare code for Perlmutter workloads, NERSC was already seeing GPU processing performance 20x faster than on the previously available system.
Source: https://blanchektechnologyhub.wordpress.com/2021/07/02/nvidia-nersc-claims-perlmutter-is-the-worlds-quickest-ai-supercomputer/
0 notes
edchicago · 5 years ago
Photo
Tumblr media
The City, Alcatraz, the BB, the GG, Berkeley, Sather Tower, and Wang Hall #berkeleylab #nersc #hpc #ucberkeley #turkeys (at Lawrence Berkeley National Laboratory) https://www.instagram.com/p/CFNU-AIhi_D/?igshid=1rl0uko114629
0 notes
noosphe-re · 2 years ago
Photo
Tumblr media
Plasma density fluctuation in a tokamak plasma turbulence driven by ion temperature gradient. The green line shows the magnetic separatrix surface that contains the edge plasma pedestal within a few centimeters from it. Image: C.S. Chang, Princeton Plasma Physics Laboratory (NERSC Sims Show How Recycled Atoms Boost Plasma Turbulence, https://cs.lbl.gov/news-media/news/2017/simulations-show-how-recycled-atoms-boost-plasma-turbulence/)
43 notes · View notes
Photo
Tumblr media
Buzz about thermoelectrics heats up with promising new magnesium-based materials
The landing of NASA's Perseverance rover was another leap forward not only for space exploration but also for the technology that's powering the craft on its years-long mission on Mars—a thermoelectric generator that turns heat into electricity.
Looking for the next leap in thermoelectric technologies, researchers at Duke University and Michigan State University gained new fundamental insights into two magnesium-based materials (Mg3Sb2 and Mg3Bi2) that have the potential to significantly outperform traditional thermoelectric designs and would also be more environmentally friendly and less expensive to manufacture. Contrary to prevailing scientific wisdom regarding the use of heavy elements, the researchers showed that replacing atoms of heavier elements such as calcium and ytterbium with lighter magnesium atoms actually led to a threefold increase in the magnesium-based materials' performance.
In their research, published in the journal Science Advances, the team used neutron and X-ray scattering experiments at the Department of Energy's (DOE) Oak Ridge (ORNL) and Argonne national laboratories, as well as supercomputer simulations at the National Energy Research Scientific Computing Center (NERSC). Investigations at the atomic scale revealed the origin and mechanism behind the materials' ability to convert thermal energy at room temperature into electricity. The findings indicate possible new pathways for improving thermoelectric applications such as those in the Perseverance rover and myriad other devices and energy-generation technologies.  
Read more.
25 notes · View notes
scienceisbeauty · 5 years ago
Photo
Tumblr media
The paper, “Accelerating Large-Scale Excited-State GW Calculations on Leadership HPC Systems,” is a finalist for the ACM Gordon Bell Prize (the Oscars of supercomputing).
[It] demonstrates for the first time the possibility of performing high-fidelity excited state calculations of complex materials at very large scales within minutes on current HPC systems, paving the way for future efficient HPC software development in materials, physical, chemical, and engineering sciences.
Source:
Berkeley Lab Computing Sciences: CRD/NERSC-led Paper a Gordon Bell Finalist at SC20
93 notes · View notes
andromeda1023 · 4 years ago
Photo
Tumblr media
Giant Map of the Sky Sets Stage for Ambitious DESI Survey
For millennia humans have used maps to understand and navigate our world and put ourselves in context: we rely on maps to show us where we are, where we came from, and where we’re going. Astronomical maps continue this tradition on a vast scale. They locate us within the cosmos and tell the story of the history and fate of the Universe: it will expand forever, the expansion currently accelerating because of an unknown quantity called dark energy. Astronomical maps may help explain what this dark energy is and why it exists.
Capitalizing on that possibility requires an unprecedented map — one that charts faint galaxies more uniformly and over a larger area of sky than ever before. To meet that challenge, astronomers have now created a new two-dimensional map of the sky that is the largest ever made in terms of sky coverage, sensitivity, and the total number of galaxies mapped.
From among the more than 1 billion galaxies in the map, astronomers will select tens of millions of galaxies for further study with the Dark Energy Spectroscopic Instrument (DESI), in order to construct the largest 3D map ever attempted. The results from the DESI survey, which will be carried out at Kitt Peak National Observatory (KPNO), a Program of National Science Foundation’s NOIRLab, will ultimately provide new insights into the nature of dark energy.
The new map is the result of the DESI Legacy Imaging Surveys, an ambitious 6-year effort involving 1405 observing nights at three telescopes, years of data from a space telescope, 150 observers and 50 other researchers from around the world, 1 petabyte of data (1000 trillion bytes), and 100 million CPU hours on one of the world’s most powerful computers. The images were taken at KPNO and Cerro Tololo Inter-American Observatory (CTIO), also a Program of NOIRLab, and supplemented by images from NASA’s Wide-field Infrared Survey Explorer (WISE) mission. The data were reduced at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC).
The map covers half of the sky, digitally sprawling over 10 trillion pixels, which is equivalent to a mosaic of 833,000 high-resolution smartphone photos, and is one of the most uniform, deep surveys of the sky ever undertaken. “This is the biggest map by almost any measure,” said David Schlegel, co-project scientist for DESI who also co-led the imaging project. Schlegel is an astrophysicist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), the lead institution for the international DESI collaboration.
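The "10 trillion pixels ≈ 833,000 smartphone photos" comparison checks out arithmetically, as the short sketch below shows; the implied per-photo resolution is an inference, not a figure from the release.
```python
# Sanity check of the "10 trillion pixels = 833,000 smartphone photos" comparison.
total_pixels = 10e12
photos = 833_000

megapixels_per_photo = total_pixels / photos / 1e6
print(f"≈ {megapixels_per_photo:.0f} MP per photo")  # ≈ 12 MP, a typical high-resolution phone camera
```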
Continue reading: https://noirlab.edu/public/news/noirlab2103/?fbclid=IwAR3KGQqgekDWGGthF3fH5uqtnyxAycQqtGRQOJXFl6MRPbMwcPYN-TdGJYc
4 notes · View notes
spacetimewithstuartgary · 7 months ago
Text
Tumblr media
Latest findings from the South Pole Telescope bolster standard cosmological model
Roughly 400,000 years after the Big Bang, the universe cooled just enough to allow photons to escape from the primordial cosmological soup. Over the next 14 billion years, these ancient photons—the universe's first light—continued traveling. This relic light is known as the Cosmic Microwave Background.
In a new study, scientists used observational data of this first light—collected from the South Pole Telescope located at the National Science Foundation's Amundsen-Scott South Pole Station in Antarctica—to explore the theoretical underpinnings of the standard cosmological model that describes the history of the universe over the past 14 billion years.
The study was conducted by UC Davis researchers and colleagues in the South Pole Telescope collaboration, which is led by the University of Chicago, and has been submitted to the journal Physical Review D. It is currently available on the arXiv preprint server.
The study, based on high-precision measurements of the cosmic microwave background and its polarized light, adds further support to the veracity of the standard cosmological model. It also makes a calculation of the Hubble constant—how fast the universe is expanding—with a new method, offering new insight on an ongoing scientific puzzle known as "the Hubble tension."
"We have a largely coherent, detailed, and successful model describing these 14 billion years of evolution," said Lloyd Knox, the Michael and Ester Vaida Endowed Chair in Cosmology and Astrophysics at UC Davis and one of the study's co-authors. "But we don't know what actually generated the initial departures from complete homogeneity that eventually led to all the structures in the universe including ourselves."
"This result is especially exciting, because it represents the first competitive constraints on cosmology using only the polarization of the CMB, making it almost 100% independent of previous results that relied mostly on the total intensity," said study co-author and University of Chicago research professor Tom Crawford.
A polarizing and winding journey across the universe
In the study, the researchers analyzed two years of polarized light data collected by the South Pole Telescope in 2019 and 2020. The study's observations cover 1,500 square degrees of sky and the collected data enabled the researchers to create a large-scale map of the mass in the universe.
Most natural light is unpolarized, composed of a random collection of light waves, each oscillating (waving up and down) with no preferred direction. But when light is reflected it can become polarized—meaning the light oscillates in a preferred direction.
This happens when sunlight reflects off water, or the ground, and is the reason polarized sunglasses can be so helpful for reducing glare. It also happened as the cosmic microwave background photons underwent their final scattering events in the primordial plasma as it began to disappear 14 billion years ago.
"The light from the cosmic microwave background is partially polarized," Knox said. "We're measuring at each location in our sky map the degree to which it's polarized and the orientation of the polarization."
After that last scattering, the slightly polarized light streamed across open space. Gravitational forces distort the paths of these light rays. Light from different regions is also distorted differently, resulting in a warped image—an effect called gravitational lensing.
To discover both what the polarized image would look like in the absence of gravitational lensing and also the map of the mass causing the gravitational lensing, the team used computers at the National Energy Research Scientific Computing Center (NERSC) in Berkeley.
"What we essentially do at a really high level is we have this data and we send it over to this supercomputer at NERSC," said Marius Millea, a project scientist with Knox's research group and the study's second author. "And the computers are testing this idea, "If this were how the real universe looked, would it produce a map that looks like what we saw?'"
"We have the data, but we also need to have a model that produces or predicts these kind of observables," added Fei Ge, a graduate student with Knox's research group and the study's first author.
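The "would this universe produce a map like the one we saw?" test that Millea and Ge describe is, at heart, a comparison between simulated and observed maps under candidate model parameters. The toy sketch below illustrates that idea on fake data with a single free amplitude parameter; it is purely illustrative and bears no relation to the actual South Pole Telescope pipeline or its codes.
```python
# Toy illustration of "simulate a candidate universe, compare to the observed map".
# Purely illustrative -- not the actual South Pole Telescope analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)
NPIX, NOISE = 64, 0.05

def predicted_map(amplitude):
    """Stand-in for a model's predicted polarization pattern at a given amplitude."""
    wave = np.sin(np.linspace(0.0, 8.0 * np.pi, NPIX))
    return amplitude * np.outer(wave, wave)

# Fake "observation": the true amplitude is 1.0, plus instrument-like noise.
observed = predicted_map(1.0) + NOISE * rng.standard_normal((NPIX, NPIX))

def log_likelihood(amplitude):
    """Gaussian log-likelihood: how well does this candidate match the observed map?"""
    residual = observed - predicted_map(amplitude)
    return -0.5 * np.sum((residual / NOISE) ** 2)

# Scan candidate amplitudes and keep the best fit.
candidates = np.linspace(0.5, 1.5, 101)
best = max(candidates, key=log_likelihood)
print(f"best-fit amplitude ≈ {best:.2f}")  # recovers ~1.0
```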
TOP IMAGE: A new study uses data collected by the South Pole Telescope, above, to deepen our understanding of how the universe was born and evolved. Credit: Aman Chokshi
LOWER IMAGE: Researchers used the telescope data to create maps of the polarization of the cosmic microwave background light—which can be used to tease out clues about what happened in the very earliest years of the universe. The colors represent the amount of polarization in the cosmic microwave background. Credit: Ge et al
Tumblr media
0 notes
hackernewsrobot · 4 years ago
Text
Berkeley Lab Debuts Perlmutter, World’s Fastest AI Supercomputer
https://www.hpcwire.com/2021/05/27/nersc-debuts-perlmutter-worlds-fastest-ai-supercomputer/ Comments
1 note · View note
dailytechnologynews · 4 years ago
Photo
Tumblr media
6,000 GPUs: NERSC Says Perlmutter Delivers 4 Exaflops, Claims Top Spot in AI Supercomputing - insideHPC https://ift.tt/3vtTfKq
1 note · View note
bm2ab · 5 years ago
Photo
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Arrivals & Departures 09 December 1906 – 01 January 1992 Grace Brewster Murray Hopper
Grace Brewster Murray Hopper (née Murray) was an American computer scientist and United States Navy rear admiral. One of the first programmers of the Harvard Mark I computer, she was a pioneer of computer programming who invented one of the first linkers. Hopper was the first to devise the theory of machine-independent programming languages, and the FLOW-MATIC programming language she created using this theory was later extended to create COBOL, an early high-level programming language still in use today.
Prior to joining the Navy, Hopper earned a Ph.D. in mathematics from Yale University and was a professor of mathematics at Vassar College. Hopper attempted to enlist in the Navy during World War II but was rejected because she was 34 years old. She instead joined the Navy Reserves. Hopper began her computing career in 1944 when she worked on the Harvard Mark I team led by Howard H. Aiken. In 1949, she joined the Eckert–Mauchly Computer Corporation and was part of the team that developed the UNIVAC I computer. At Eckert–Mauchly she began developing the linker. She believed that a programming language based on English was possible. Her linker converted English terms into machine code understood by computers. By 1952, Hopper had finished her program linker (originally called a compiler), which was written for the A-0 System. During her wartime service, she co-authored three papers based on her work on the Harvard Mark I.
In 1954, Eckert–Mauchly chose Hopper to lead their department for automatic programming, and she led the release of some of the first compiled languages like FLOW-MATIC. In 1959, she participated in the CODASYL consortium, which consulted Hopper to guide them in creating a machine-independent programming language. This led to the COBOL language, which was inspired by her idea of a language being based on English words. In 1966, she retired from the Naval Reserve, but in 1967 the Navy recalled her to active duty. She retired from the Navy in 1986 and found work as a consultant for the Digital Equipment Corporation, sharing her computing experiences.
The U.S. Navy Arleigh Burke-class guided-missile destroyer USS Hopper was named for her, as was the Cray XE6 "Hopper" supercomputer at NERSC. During her lifetime, Hopper was awarded 40 honorary degrees from universities across the world. A college at Yale University was renamed in her honor. In 1991, she received the National Medal of Technology. On November 22, 2016, she was posthumously awarded the Presidential Medal of Freedom by President Barack Obama.
2 notes · View notes