Intel And AWS Deepen Chip Manufacturing Partnership In U.S.
US-Based Chip Manufacturing Advances as Intel and AWS Deepen Their Strategic Partnership
Intel will produce custom AI fabric chips on Intel 18A and custom Xeon 6 processors on Intel 3 for AWS under a multi-year, multi-billion-dollar deal that accelerates Ohio-based chip manufacturing.
AWS and Intel
Intel and Amazon Web Services (AWS) announced a co-investment in custom chip designs. The multi-year, multi-billion-dollar deal covers Intel wafers and products. The agreement extends the two companies' long-standing strategic collaboration, helping customers power practically any workload and accelerate AI applications.
As part of the expanded partnership, AWS will receive an AI fabric chip from Intel built on the company's most advanced process node, Intel 18A. Building on the existing collaboration in which Intel manufactures Xeon Scalable processors for AWS, Intel will also produce a customized Xeon 6 chip on Intel 3.
Matt Garman, CEO of AWS, said the company is dedicated to providing its customers with the most advanced and powerful cloud infrastructure available: "Our relationship dates back to 2006, when we launched the first Amazon EC2 instance with their chips. Now we are working together to co-develop next-generation AI fabric chips on Intel 18A. Our ongoing partnership lets us enable our joint customers to handle any workload and unlock new AI capabilities."
Through this expanded collaboration, Intel and AWS reaffirm their commitment to growing Ohio's AI ecosystem and advancing semiconductor manufacturing in the United States. Intel remains committed to the New Albany region, where it plans to establish state-of-the-art semiconductor production. AWS has invested $10.3 billion in Ohio since 2015 and plans to invest an additional $7.8 billion to expand its data center operations in Central Ohio.
Intel and AWS have collaborated for more than 18 years to help organizations develop, build, and deploy mission-critical workloads in the cloud, supporting businesses of all sizes in reducing costs and complexity, strengthening security, accelerating business outcomes, and scaling to meet present and future computing needs. The companies also plan to explore additional designs based on Intel 18A and upcoming process nodes, such as Intel 18AP and Intel 14A, which are expected to be produced in Intel's Ohio facilities, as well as migrating existing Intel designs to these platforms.
Forward-Looking Statements
This correspondence includes predictions about what Intel anticipates from the parties' co-investment framework, including statements about the framework's timing, benefits, and effects on the parties' business and strategy. These forward-looking statements are identified by terms such as "expect," "plan," "intend," and "will," as well as similar words and variations of them.
These statements are based on management's estimates as of the date they were originally made and involve risks and uncertainties, many of which are outside of Intel's control, that could cause actual results to differ materially from those stated or implied in the forward-looking statements.
Among these risks and uncertainties are the possibility that the transactions covered by the framework won’t be executed at all or in a timely manner;
Failure to successfully develop, produce, or market goods under the framework;
Failure to reap anticipated benefits of the framework, notably financial ones;
Delays, disruptions, difficulties, or increased costs in building or expanding Intel's fabs, whether due to events within or outside of Intel's control;
The complexities and uncertainties in developing and implementing new semiconductor products and manufacturing process technologies;
Implementing new business strategies and investing in new businesses and technologies;
Litigation or disputes related to the framework or otherwise;
Unanticipated costs;
Potential adverse reactions or changes to commercial relationships including those with suppliers and customers resulting from the transaction’s announcement;
Macroeconomic factors, such as the overall state of the semiconductor industry’s economy;
Regulatory restrictions, and the effect of competing products and pricing;
International conflict and other risks and uncertainties described in Intel’s Form 10-K and other filings with the SEC.
Intel warns readers not to rely unduly on these forward-looking statements because of these risks and uncertainties. Readers are also directed to the disclosures in the documents Intel files with the SEC from time to time, which identify risks and uncertainties that could affect its business, and are advised to review and weigh them carefully.
Read more on Govindhtech.com
PCS AWS: AWS Parallel Computing Service For HPC workloads
PCS AWS
AWS is launching AWS Parallel Computing Service (AWS PCS), a managed service that lets customers set up and maintain HPC clusters to run simulations at virtually any scale on AWS. The Slurm scheduler lets them work in a familiar HPC environment without worrying about infrastructure, accelerating time to results.
AWS Parallel Computing
Run HPC workloads effortlessly at any scale.
Why AWS PCS?
AWS Parallel Computing Service (AWS PCS) is a managed service that simplifies running HPC workloads and developing Slurm-based scientific and engineering models on AWS. AWS PCS lets you create elastic computing, storage, networking, and visualization environments. Managed updates and built-in observability features make cluster management easier. You can focus on research and innovation in a familiar environment without worrying about infrastructure.
Benefits
Focus on your work, not infrastructure
Give users complete HPC environments that scale to run simulations and scientific and engineering models without code or script changes, boosting productivity.
Manage, secure, and scale HPC clusters
Build and deploy scalable, dependable, and secure HPC clusters via the AWS Management Console, CLI, or SDK.
HPC solutions using flexible building blocks
Build and maintain end-to-end HPC applications on AWS using highly available cluster APIs and infrastructure as code.
Use cases
Tightly coupled workloads
At almost any scale, run concurrent MPI applications like CAE, weather and climate modeling, and seismic and reservoir simulation efficiently.
Accelerated computing
GPUs, FPGAs, and Amazon custom silicon such as AWS Trainium and AWS Inferentia can accelerate diverse workloads, including scientific and engineering modeling, protein structure prediction, and cryo-EM.
High-throughput and loosely coupled workloads
Distributed applications like Monte Carlo simulations, image processing, and genomics research can run on AWS at any scale.
Interactive workflows
Use human-in-the-loop operations to prepare inputs, run simulations, visualize and evaluate results in real time, and adjust subsequent experiments.
AWS ParallelCluster
In November 2018, AWS launched AWS ParallelCluster, an AWS-supported open-source cluster management tool for deploying and maintaining HPC clusters in the AWS Cloud. With AWS ParallelCluster, customers can quickly design and deploy proof-of-concept and production HPC compute environments. The open-source tool provides a command-line interface, API, Python library, and user interface. Some updates require deleting and re-creating the cluster. To eliminate the chores of building and operating HPC environments, many customers have asked for a fully managed AWS solution.
AWS Parallel Computing Service (AWS PCS)
AWS PCS provides fully managed HPC environments through the AWS Management Console, SDK, and CLI. Your system administrators can create managed Slurm clusters with their preferred compute, storage, identity, and job allocation settings. AWS PCS schedules and orchestrates simulations using Slurm, a scalable, fault-tolerant job scheduler used by many HPC customers. Scientists, researchers, and engineers can log in to AWS PCS clusters to run HPC jobs, use interactive software on virtual desktops, and access data. Existing workloads can be moved to AWS PCS quickly, without code porting.
Fully managed NICE DCV remote desktops let specialists manage HPC operations in one place, with access to job telemetry, application logs, and remote visualization.
AWS PCS uses familiar methods for preparing, executing, and analyzing simulations and computations across a wide range of traditional and emerging compute- and data-intensive engineering and scientific workloads, including computational reservoir simulation, electronic design automation, finite element analysis, fluid dynamics, and weather modeling.
Starting AWS Parallel Computing Service
The AWS documentation article on building a basic cluster is a good way to try AWS PCS. First, use an AWS CloudFormation template to create a VPC and shared Amazon EFS storage in your account, in the AWS Region where you will try AWS PCS. The AWS documentation explains how to create the VPC and shared storage.
Cluster
In the AWS PCS console, select Create cluster to create a cluster, which manages resources and runs workloads.
Name your cluster and select the size of your Slurm scheduler controller. Cluster limits are Small (32 nodes, 256 jobs), Medium (512 nodes, 8,192 jobs), and Large (2,048 nodes, 16,384 jobs). Under Networking, select your VPC, the subnet in which to launch the cluster, and the cluster security group.
Optional Slurm configuration includes a resource selection algorithm parameter, an idle time before compute nodes scale down, and a directory of Prolog and Epilog scripts on launched compute nodes.
Choose Create cluster. Provisioning the cluster takes some time.
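If you prefer to script this step instead of using the console, the cluster can also be created with the AWS SDK. The sketch below is a minimal, unverified example using the boto3 pcs client: the parameter names and enum values mirror the console fields described above but are assumptions, and the subnet and security group IDs are placeholders, so check the current AWS PCS API reference before using it.

```python
import boto3

pcs = boto3.client("pcs", region_name="us-east-1")

# Create a small Slurm cluster (the Small tier allows up to 32 nodes / 256 jobs).
# Subnet and security group IDs are placeholders for resources created by the
# getting-started CloudFormation template; parameter shapes are assumed.
response = pcs.create_cluster(
    clusterName="demo-cluster",
    scheduler={"type": "SLURM", "version": "23.11"},
    size="SMALL",
    networking={
        "subnetIds": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)
print(response)
```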
Create compute node groups
After creating your cluster, you can create compute node groups: virtual collections of Amazon EC2 instances that AWS PCS uses to provide interactive access to a cluster or to run jobs in it. When you define a compute node group, you specify EC2 instance types, minimum and maximum instance counts, target VPC subnets, the Amazon Machine Image (AMI), the purchase option, and custom launch settings. Compute node groups require an IAM instance profile to pass an AWS IAM role to EC2 instances, and an EC2 launch template that AWS PCS uses to configure the instances.
To create a compute node group in the console, select the Compute node groups tab in your cluster and choose Create.
End users log in through a login compute node group, and HPC jobs run on a job compute node group.
Give the compute node group a name and use a previously created EC2 launch template, IAM instance profile, and subnets to launch compute nodes in your cluster's VPC for HPC jobs.
Next, select your chosen EC2 instance types for compute node launches and the scaling minimum and maximum instance count.
Select Create. Provisioning the compute node group takes some time.
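The node group can also be defined programmatically. Again a hedged sketch: the create_compute_node_group parameter names below mirror the console fields (launch template, instance profile, subnets, scaling limits, instance types) but are assumptions, and every identifier is a placeholder.

```python
import boto3

pcs = boto3.client("pcs", region_name="us-east-1")

# Define a job compute node group that scales between 0 and 4 instances.
# All IDs and ARNs are placeholders; verify field names against the PCS API docs.
response = pcs.create_compute_node_group(
    clusterIdentifier="demo-cluster",
    computeNodeGroupName="compute-1",
    subnetIds=["subnet-0123456789abcdef0"],
    customLaunchTemplate={"id": "lt-0123456789abcdef0", "version": "1"},
    iamInstanceProfileArn="arn:aws:iam::111122223333:instance-profile/PCSInstanceProfile",
    scalingConfiguration={"minInstanceCount": 0, "maxInstanceCount": 4},
    instanceConfigs=[{"instanceType": "hpc7g.16xlarge"}],
)
print(response)
```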
Build and run HPC jobs
After creating compute node groups, you can submit a job to a queue. The job remains queued until AWS PCS schedules it on a compute node group, based on provisioned capacity. Each queue is associated with one or more compute node groups, which supply the EC2 instances needed for processing.
Visit your cluster, select Queues, and click Create queue to create a queue in the console.
Select Create and wait for queue creation.
When the login compute node group is active, you can use AWS Systems Manager to connect to the EC2 instance it created. Select the login compute node group's EC2 instance in the Amazon EC2 console. The AWS documentation describes how to create a queue to submit and manage jobs, and how to connect to your cluster.
To run a Slurm job, create a submission script describing the job's requirements and submit it to a queue with the sbatch command. This is usually done from a shared directory so that the login and compute nodes can access the same files.
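As a concrete illustration, the sketch below writes a minimal submission script to a shared directory and submits it with sbatch from the login node. The shared path and queue name are placeholders for this walkthrough; AWS PCS queues surface as Slurm partitions, so the partition name should match the queue you created.

```python
import subprocess
from pathlib import Path

# Write a minimal Slurm batch script to shared storage (placeholder path) so
# both the login node and the compute nodes can read it.
shared_dir = Path("/shared/jobs")
shared_dir.mkdir(parents=True, exist_ok=True)

script = shared_dir / "hello.sbatch"
script.write_text(
    "#!/bin/bash\n"
    "#SBATCH --job-name=hello\n"
    "#SBATCH --partition=demo-queue\n"  # placeholder queue/partition name
    "#SBATCH --nodes=1\n"
    "#SBATCH --output=/shared/jobs/hello-%j.out\n"
    "srun hostname\n"
)

# Submit the job from the login node; sbatch prints the assigned job ID.
subprocess.run(["sbatch", str(script)], check=True)
```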
Slurm can run MPI jobs in AWS PCS. See the AWS documentation topics Run a single-node job with Slurm and Run a multi-node MPI job with Slurm for details.
For visualization, use a fully managed NICE DCV remote desktop. To get started, use the HPC Recipes for AWS CloudFormation template on GitHub.
When you are done running HPC jobs with your cluster and node groups, delete your resources to avoid unnecessary charges. See the AWS documentation topic Delete your AWS resources for details.
Things to know
Some things to know about this feature:
Slurm versions – AWS PCS initially supports Slurm 23.11 and provides tools to upgrade to new major versions as they are added. AWS PCS also automatically patches the Slurm controller.
Capacity reservations – On-Demand Capacity Reservations let you reserve EC2 capacity in a specific Availability Zone for a specific duration, so compute capacity is available when you need it (a programmatic sketch follows after this list).
Network file systems – Amazon FSx for NetApp ONTAP, Amazon FSx for OpenZFS, Amazon File Cache, Amazon EFS, and Amazon FSx for Lustre can be attached for writing and accessing data and files. Self-managed volumes such as NFS servers are also supported.
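As referenced in the capacity reservations note above, here is a minimal sketch of reserving On-Demand capacity with the standard EC2 API before scaling a node group up, so the instances are guaranteed to be available. The instance type, Availability Zone, and count are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Reserve capacity for 4 instances in a single Availability Zone so a compute
# node group can always scale up to that size. Values are placeholders.
reservation = ec2.create_capacity_reservation(
    InstanceType="hpc7g.16xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=4,
    EndDateType="unlimited",  # keep the reservation until you cancel it
)
print(reservation["CapacityReservation"]["CapacityReservationId"])
```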
Now available
AWS Parallel Computing Service is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
Read more on govindhtech.com
New myApplications in AWS Management Console
AWS is pleased to announce the general availability of myApplications, a new set of capabilities that makes it easier to launch, manage, and scale your applications on AWS. With myApplications in the AWS Management Console, you can more easily manage and monitor the cost, performance, security posture, and overall health of your AWS applications.
From Console Home, you can reach the myApplications experience through an Applications widget that lists the applications in an account. With the new create application wizard, connecting resources in your AWS account from a single console view is easier than ever. Applications you create automatically appear in myApplications, ready for you to work with.
When you select your application from the Applications widget in the console, the applications dashboard displays key application metrics widgets in an overview format. From here you can search, troubleshoot, and optimize your applications.
With one click on the applications dashboard, you can act on resources in the related services, including Amazon CloudWatch for application performance, AWS Cost Explorer for cost and usage, and AWS Security Hub for security findings.
How to use myApplications
To begin, select Create application from the Applications widget on the AWS Management Console Home. In the first step, enter the name and description of your application.
In the next step, you add your resources. AWS Resource Explorer is a managed capability that simplifies searching for and discovering your AWS resources across AWS Regions. Turn it on and set it up before you search for and add resources.
To add resources to your application, choose Add resources and select them. You can also search by tag, keyword, or AWS CloudFormation stack to add resource groups and manage your application's entire lifecycle.
After confirmation, the myApplications dashboard is created immediately, your resources are onboarded, and new awsApplication tags are applied.
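Because onboarded resources carry the awsApplication tag, you can also list an application's resources programmatically through the Resource Groups Tagging API. A small sketch; the tag value below is a placeholder, and in practice you would copy the ARN value that myApplications assigns to your application.

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi", region_name="us-east-1")

# List every resource carrying the awsApplication tag that myApplications applied.
# The tag value is a placeholder; use the real value from your application.
paginator = tagging.get_paginator("get_resources")
pages = paginator.paginate(
    TagFilters=[
        {
            "Key": "awsApplication",
            "Values": ["arn:aws:resource-groups:us-east-1:111122223333:group/my-app/abc123"],
        }
    ]
)
for page in pages:
    for resource in page["ResourceTagMappingList"]:
        print(resource["ResourceARN"])
```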
Now let's examine the widgets you may find most helpful.
The Application summary widget shows the name, description, and tag of the application you are working on. The Cost and usage widget displays your AWS resource costs and usage from AWS Cost Explorer, including the application's current and forecasted month-end costs, the top five billed services, and a monthly application resource cost trend chart. You can monitor spending, look for anomalies, and click to act when necessary.
The Compute widget displays basic metrics such as Amazon EC2 instance CPU utilization and AWS Lambda invocations, together with CloudWatch trend charts that aggregate application compute resources and show which are in alarm. You can examine how the application is performing, look for anomalies, and take appropriate action.
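The Compute widget charts standard CloudWatch metrics that you can also query directly. A minimal sketch, assuming a single EC2 instance belongs to the application (the instance ID is a placeholder):

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Fetch the last three hours of average CPU utilization for one application
# instance, the same metric the Compute widget charts.
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(hours=3),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```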
The Monitoring and Operations widget shows alarms and alerts for resources related to your application, service level objectives (SLOs), and standardized application performance metrics from CloudWatch Application Signals. You can monitor ongoing issues, evaluate trends, and quickly locate and investigate any problems that could affect your application.
The Security widget displays the highest-priority security findings identified by AWS Security Hub. Findings are organized by severity and service, so you can monitor your security posture and click to intervene as necessary.
The DevOps widget compiles operational insights from AWS Systems Manager Application Manager, including fleet management, state management, patch management, and configuration management status, to help you assess compliance and take action.
You can also use the Tagging widget to review and add tags to your application.
Now available
With the new myApplications feature, you can easily manage and monitor applications on AWS through a new application-centric interface.
myApplications is available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), South America (São Paulo), Asia Pacific (Hyderabad, Jakarta, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Paris, Stockholm), and Middle East (Bahrain).
Read more on Govindhtech.com
Graviton4-based Amazon EC2 R8g Instances: Best Price Performance
AWS Graviton4
After being in preview since re:Invent 2023, the new AWS Graviton4-based Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are now generally available. Having built over 2 million Graviton processors, AWS offers more than 150 different AWS Graviton-powered Amazon EC2 instance types globally at scale, and over 50,000 customers use AWS Graviton-based instances to get the best price performance for their applications.
Amazon EC2 R8g Instances
AWS Graviton4, Amazon's most powerful and energy-efficient processor to date, powers many Amazon EC2 workloads. Like other Graviton CPUs, AWS Graviton4 uses the 64-bit Arm instruction set architecture. Graviton4-based Amazon EC2 R8g instances outperform R7g instances by up to 30%, speeding up your most demanding workloads such as real-time big data analytics, in-memory caches, and high-performance databases.
Amazon EC2 R8g instances with the latest AWS Graviton4 processors offer the best price performance for memory-optimized workloads. Databases, in-memory caches, and real-time big data analytics run well on R8g instances. Compared with seventh-generation AWS Graviton3-based R7g instances, R8g instances offer up to 30% better performance and larger instance sizes with up to three times more vCPUs and memory.
Advantages
Amazon EC2's best price performance for memory-optimized workloads
Compared to R7g instances built on Graviton3, R8g instances perform up to 30% better. These instances are perfect for many applications, including databases, in-memory caches, and real-time big data analytics, and they come with DDR5-5600 memory.
Improved resource efficiency
R8g instances are built on the AWS Nitro System, which combines specialized hardware with a lightweight hypervisor to provide fast local storage, private networking, and isolated multitenancy.
Broad software support
AWS Graviton-based instances are compatible with most widely used Linux operating systems. They are also supported by many popular security, monitoring and management, container, and continuous integration and delivery (CI/CD) applications and services from AWS and software partners. The AWS Graviton Ready program provides certified software solutions from AWS Partner vendors for Graviton-based instances.
Features
Powered by AWS Graviton4 processors
AWS Graviton4, the latest generation of AWS server processors, delivers the best performance and energy efficiency for workloads in Amazon EC2. AWS Graviton4 processors offer up to 30% higher compute performance than Graviton3 processors.
Enhanced security
AWS Graviton4 processors improve security with separate caches for each virtual CPU and support for pointer authentication. EC2 R8g instances also support Amazon Elastic Block Store (Amazon EBS) encryption.
Built on the AWS Nitro System
The AWS Nitro System is a comprehensive collection of building blocks that offloads many traditional virtualization functions to dedicated hardware and software. It reduces virtualization overhead while delivering high performance, high availability, and high security.
EC2 R8g instances are powered by Arm-based AWS Graviton4 processors. They deliver the best price performance in Amazon EC2 for memory-intensive applications.
EC2 R8g
More than 100 customers, including Epic Games, SmugMug, Honeycomb, SAP, and ClickHouse, have tested their workloads on AWS Graviton4-based EC2 R8g instances since the preview announcement at re:Invent 2023 and have seen notable performance gains over comparable instances. SmugMug found that Graviton4-based instances outperformed Graviton3-based instances by 20-40% for its image and data compression workloads. Epic Games found AWS Graviton4 instances to be the fastest EC2 instances it has ever tested, compared with the non-Graviton-based instances it used four years ago, and Honeycomb.io achieved more than twice the throughput per vCPU.
Now let's look at some of the enhancements in the new instances. Compared with R7g instances, EC2 R8g instances offer larger instance sizes with up to 3x more vCPUs (up to 48xlarge), 3x more memory (up to 1.5 TB), 75% more memory bandwidth, and 2x more L2 cache. This helps you process larger data sets, scale up workloads, get results faster, and lower total cost of ownership. R8g instances offer up to 50 Gbps network bandwidth and up to 40 Gbps Amazon EBS bandwidth, compared with 30 Gbps network bandwidth and 20 Gbps EBS bandwidth on Graviton3-based instances.
R8g instances are the first Graviton instances to offer two bare metal sizes (metal-24xl and metal-48xl), so you can right-size your instances and deploy workloads that benefit from direct access to physical resources. You can look up the full specifications for each R8g size from the EC2 API, as in the sketch below.
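Rather than relying on a static table, this minimal sketch queries the published specifications (vCPUs, memory, network performance) for a few R8g sizes directly from the EC2 API; the size names follow the ones mentioned above.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up vCPU count, memory, and network performance for a few R8g sizes,
# including the two bare metal sizes.
sizes = ["r8g.xlarge", "r8g.12xlarge", "r8g.48xlarge", "r8g.metal-24xl", "r8g.metal-48xl"]
response = ec2.describe_instance_types(InstanceTypes=sizes)

for itype in sorted(response["InstanceTypes"], key=lambda t: t["VCpuInfo"]["DefaultVCpus"]):
    print(
        itype["InstanceType"],
        itype["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        itype["MemoryInfo"]["SizeInMiB"] // 1024, "GiB,",
        itype["NetworkInfo"]["NetworkPerformance"],
    )
```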
If you are looking for more energy-efficient compute options to help you meet your sustainability goals and reduce your carbon footprint, EC2 R8g instances offer the best energy efficiency for memory-intensive workloads in EC2. These instances are built on the AWS Nitro System, which improves workload performance and security by offloading networking, storage, and CPU virtualization to dedicated hardware and software. Graviton4 processors also securely encrypt all high-speed physical hardware interfaces for added security.
EC2 R8g instances are well suited for Linux-based workloads such as Docker and Kubernetes and applications written in popular programming languages such as C/C++, Rust, Go, Java, Python, .NET Core, Node.js, Ruby, and PHP. AWS Graviton4 processors run web applications up to 30% faster, databases up to 40% faster, and large Java programs up to 45% faster than AWS Graviton3 processors. Check out the AWS Graviton Technical Guide for more information.
To begin moving your applications to Graviton instance types, have a look at the collection of Graviton resources. You can also join the AWS Graviton Fast Start program to get started with Graviton adoption.
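To try one out, the sketch below launches a single r8g.xlarge instance with an Arm64 Amazon Linux 2023 AMI resolved from the public SSM parameter, so no AMI ID is hard-coded. The key pair and subnet are placeholders, and the parameter path should be verified for your Region.

```python
import boto3

region = "us-east-1"
ssm = boto3.client("ssm", region_name=region)
ec2 = boto3.client("ec2", region_name=region)

# Resolve the latest Arm64 Amazon Linux 2023 AMI (verify this parameter path).
ami_id = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64"
)["Parameter"]["Value"]

# Launch one Graviton4 memory-optimized instance; key pair and subnet are placeholders.
result = ec2.run_instances(
    ImageId=ami_id,
    InstanceType="r8g.xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
    SubnetId="subnet-0123456789abcdef0",
)
print(result["Instances"][0]["InstanceId"])
```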
R8g EC2
Availability
The US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Frankfurt) AWS Regions currently offer R8g instances.
Pricing
EC2 R8g instances can be purchased as On-Demand Instances, Spot Instances, or Reserved Instances, or through Savings Plans. Visit the Amazon EC2 pricing page for further details.
Read more on Govindhtech.com