#cloudshell
Text
Aurora DSQL: Amazon’s Fastest Serverless SQL Solution

Amazon Aurora DSQL
Amazon Aurora DSQL is now generally available. The fastest serverless distributed SQL database, it offers high availability, virtually unlimited scalability, and minimal infrastructure administration for always-available applications. Patching, upgrades, and maintenance downtime no longer need to be an operational burden. Customers who previewed the service at AWS re:Invent 2024 were excited by its promise to simplify longstanding relational database challenges.
According to Amazon.com CTO Dr. Werner Vogels, Aurora DSQL's architecture controls complexity up front. Unlike other databases, it is built from a query processor, adjudicator, journal, and crossbar. These components scale independently to your needs and interact through well-defined APIs. This architecture supports multi-Region strong consistency, low latency, and globally synchronized time.
Your application can scale to meet any workload and get the fastest distributed SQL reads and writes without database sharding or instance upgrades. Aurora DSQL's active-active distributed architecture provides 99.999 percent availability across multiple Regions and 99.99 percent within a single Region. An application can read and write data with strong consistency without depending on a single Region's cluster endpoint.
Aurora DSQL commits write transactions to a distributed transaction log in a single Region and synchronously replicates them to storage replicas in three Availability Zones. Cluster storage replicas are distributed across a storage fleet and scale automatically for the best read performance. A multi-Region cluster provides one endpoint per peered cluster Region, boosting availability while retaining resilience and connectivity.
A peered cluster's two endpoints support concurrent read/write operations with strong data consistency and present a single logical database. A third Region serves as a log-only witness, with no cluster resources or endpoint of its own. This lets you balance connections and applications for speed, resilience, or geography while ensuring readers always see the same data.
Aurora DSQL suits event-driven and microservice applications. It can underpin massively scalable retail, e-commerce, financial, and travel systems, as well as data-driven social networking, gaming, and multi-tenant SaaS applications that need multi-Region scalability and reliability.
Getting started with Amazon Aurora DSQL
Aurora DSQL is easy to pick up from the console. You can also connect programmatically, using the database endpoint and an authentication token as the password, or with tools such as JetBrains DataGrip, DBeaver, or the PostgreSQL interactive terminal (psql).
To create an Aurora DSQL cluster, choose “Create cluster” in the console. Single-Region and Multi-Region configurations are offered.
For a single-Region cluster, simply choose “Create cluster”; it is ready in minutes. Generate an authentication token, copy the endpoint, and connect with a SQL client. You can connect from CloudShell or from Python, Java, JavaScript, C++, Ruby, .NET, Rust, and Golang. You can also build sample applications with AWS Lambda, or with Django and Ruby on Rails.
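As an illustrative sketch, a CloudShell session that connects to a single-Region cluster might look like the following. The endpoint is a placeholder, the `dsql` token command reflects the AWS CLI's Aurora DSQL support, and the fallbacks only keep the sketch runnable on machines without AWS access:

```shell
# Hypothetical cluster endpoint; copy the real one from the console.
ENDPOINT="0example0example0example0.dsql.us-east-1.on.aws"

# Generate a short-lived IAM auth token to use as the password. The fallback
# placeholder keeps this sketch runnable without AWS access configured.
TOKEN=$(aws dsql generate-db-connect-admin-auth-token \
  --hostname "$ENDPOINT" --region us-east-1 2>/dev/null || echo "placeholder-token")

# Connect with psql over TLS; Aurora DSQL is PostgreSQL-compatible.
command -v psql >/dev/null 2>&1 && \
  PGPASSWORD="$TOKEN" PGSSLMODE=require PGCONNECT_TIMEOUT=5 \
  psql --host "$ENDPOINT" --username admin --dbname postgres --command "SELECT 1" \
  || echo "psql not run (client missing or endpoint is the placeholder)"

echo "token length: ${#TOKEN}"
```

The token is short-lived and is passed to psql as the password; any PostgreSQL-compatible client can connect the same way.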
Multi-Region clusters need ARNs to peer. Open the Multi-Region option, select a witness Region, and choose “Create cluster” for the first cluster. Use the first cluster's ARN to create a second cluster in another Region. Finally, choose “Peer” on the first cluster's page to peer the clusters; the “Peers” tab shows peer information. The AWS SDKs, the AWS CLI, and the Aurora DSQL APIs allow programmatic cluster creation and management.
New features were added in response to preview feedback, including easier AWS CloudShell connections and a better console experience for creating and peering multi-Region clusters. On the PostgreSQL side, views, auto-analyze, and unique secondary indexes on tables with existing data were added. Integrations with AWS CloudTrail for logging, AWS Backup, AWS PrivateLink, and AWS CloudFormation were also included.
To boost developer productivity, Aurora DSQL now supports natural-language interaction between the database and generative AI models via a Model Context Protocol (MCP) server. Installing the Amazon Q Developer CLI and the MCP server gives the CLI access to the cluster, letting it explore the schema, understand table structure, and run complex SQL queries without integration code.
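As a sketch of what that setup can look like: MCP clients such as the Amazon Q Developer CLI are usually configured with a JSON entry naming the server command. The package name, flags, and values below are assumptions for illustration only; check the Aurora DSQL documentation for the exact configuration:

```json
{
  "mcpServers": {
    "awslabs.aurora-dsql-mcp-server": {
      "command": "uvx",
      "args": [
        "awslabs.aurora-dsql-mcp-server@latest",
        "--cluster_endpoint", "your_cluster_endpoint",
        "--region", "us-east-1",
        "--database_name", "postgres"
      ]
    }
  }
}
```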
Availability
As of this writing, Amazon Aurora DSQL is available for single- and multi-Region clusters (two peers and one witness Region) in the AWS US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and for single-Region clusters in Ireland, London, Paris, Osaka, and Tokyo.
Aurora DSQL bills all request-based operations, such as reads and writes, monthly in a single normalized billing unit, the Distributed Processing Unit (DPU). Storage costs are based on total database size in gigabytes per month. You pay for only one logical copy of your data in a single-Region or multi-Region peered cluster. With the AWS Free Tier, your first 100,000 DPUs and 1 GB of storage per month are free. Find pricing here.
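The billing model above can be sketched as a small calculator. Only the Free Tier figures (100,000 DPUs, 1 GB of storage) come from the text; the unit prices below are hypothetical placeholders, not published AWS prices:

```shell
# Billing sketch. HYPOTHETICAL prices; only the free-tier allowances
# (100,000 DPUs, 1 GB) come from the article above.
estimate_monthly_cost() {  # usage: estimate_monthly_cost DPUS STORAGE_GB
  awk -v dpus="$1" -v gb="$2" 'BEGIN {
    free_dpus = 100000; free_gb = 1.0
    price_per_million_dpus = 8.0    # placeholder price
    price_per_gb_month = 0.33       # placeholder price
    bd = dpus - free_dpus;  if (bd < 0) bd = 0   # billable DPUs
    bg = gb - free_gb;      if (bg < 0) bg = 0   # billable GB-months
    printf "%.2f\n", bd / 1000000 * price_per_million_dpus + bg * price_per_gb_month
  }'
}

estimate_monthly_cost 90000 0.5      # entirely inside the Free Tier -> 0.00
estimate_monthly_cost 1100000 11     # 1.0M billable DPUs + 10 billable GB -> 11.30
```

The point of the sketch is the shape of the model: one normalized unit for all requests, a separate GB-month charge for storage, and the Free Tier subtracted before anything is billed.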
You can try Aurora DSQL for free in the console. The Aurora DSQL User Guide has more information, and you can share feedback via AWS re:Post or other channels.
#AuroraDSQL #AmazonAuroraDSQL #AuroraDSQLcluster #DistributedProcessingUnit #AWSservices #ModelContextProtocol #technology #technews #technologynews #news #govindhtech
Text
Service Virtualization Market Size, Share, Scope, Analysis, Forecast, Growth and Industry Report 2032 – SWOT and PESTLE Analysis
The Service Virtualization Market was valued at USD 745.8 million in 2023 and is expected to reach USD 2,853.1 million by 2032, growing at a CAGR of 16.1% over the forecast period 2024-2032.
The Service Virtualization Market is witnessing rapid adoption across various sectors. It is enabling faster software development and better testing environments. Organizations are increasingly using it to simulate service behavior in complex systems.
The Service Virtualization Market continues to grow as businesses demand more agile and cost-effective development processes. With the rising pressure to deliver high-quality applications at speed, service virtualization is becoming essential to support continuous integration, DevOps, and automated testing workflows.
Get Sample Copy of This Report: https://www.snsinsider.com/sample-request/4624
Market Keyplayers:
CA Technologies (Broadcom) – (CA Service Virtualization, CA DevTest)
IBM Corporation – (Rational Test Virtualization Server, IBM Rational Integration Tester)
Micro Focus – (Service Virtualization, LoadRunner Professional)
Parasoft – (Parasoft Virtualize, Parasoft SOAtest)
SmartBear Software – (ReadyAPI Virtualization, TestComplete)
Cavisson Systems – (Cavisson Service Virtualization, NetStorm)
Tricentis – (Tosca, Tricentis Virtualize)
Broadcom Inc. – (Broadcom DevTest, Broadcom Service Virtualization)
Maveric Systems – (Maveric Service Virtualization, Maveric Continuous Testing)
Wipro Limited – (Wipro HOLMES™, Wipro Virtualization Solutions)
Cognizant Technology Solutions – (Cognizant Testing Services, Cognizant Virtualization)
Sogeti (Capgemini) – (Sogeti Testing Services, Virtualization Platform)
Infosys Limited – (Infosys Virtualization Service, Infosys Test Automation)
Accenture – (Accenture Cloud Virtualization, Accenture Service Testing)
Tata Consultancy Services (TCS) – (TCS Service Virtualization, TCS Testing Services)
Delphix – (Delphix Data Platform, Delphix Virtualization)
Quali Systems – (CloudShell, Quali Service Virtualization)
QASymphony – (qTest, Service Virtualization)
Vector Software – (VectorCAST Virtualization, VectorCAST Test)
Trends in the Service Virtualization Market
Increased Adoption in DevOps: Companies are integrating service virtualization into DevOps pipelines to accelerate development and testing cycles.
Cloud-Based Solutions: There is a rising demand for cloud-native virtualization tools, offering flexibility and scalability across distributed teams.
AI and Automation Integration: Vendors are embedding AI-driven analytics and automation features to enhance test coverage and efficiency.
Focus on API Testing: With APIs becoming central to modern applications, service virtualization tools are now tailored to mimic complex API interactions.
Enquiry of This Report: https://www.snsinsider.com/enquiry/4624
Market Segmentation:
By Component
Software
Service
By Enterprise Size
Large Enterprise
SMEs
By Deployment
Cloud
On-premise
By End Use
BFSI
Healthcare
IT & Telecommunication
Automotive
Retail & E-Commerce
Market Analysis
Growing Demand Across Industries: BFSI, healthcare, retail, and telecom sectors are adopting service virtualization to reduce time-to-market and improve software quality.
Cost and Resource Efficiency: It minimizes the need for setting up complex test environments, saving costs and development time.
Support for Agile & Continuous Testing: Service virtualization plays a crucial role in enabling agile methodologies by providing early and continuous testing capabilities.
Rising Competition Among Vendors: Key players like Broadcom, IBM, Micro Focus, and SmartBear are enhancing their offerings with next-gen capabilities such as cloud compatibility and AI integration.
Future Prospects
The future of the Service Virtualization Market looks promising as digital transformation initiatives continue to gain momentum. As companies adopt microservices and cloud-native architectures, the need for simulating complex, distributed systems will increase. Service virtualization will become even more vital in testing environments where real services are either unavailable or costly to access.
Advancements in AI, machine learning, and automation will further enhance service virtualization tools, enabling intelligent test data generation, dynamic behavior simulation, and real-time analytics. Additionally, integration with CI/CD pipelines and containerized environments like Kubernetes will expand the scope and flexibility of service virtualization across development ecosystems.
With organizations aiming for faster releases and higher-quality software, service virtualization will continue to evolve as a foundational technology in modern application development and delivery.
Access Complete Report: https://www.snsinsider.com/reports/service-virtualization-market-4624
Conclusion
The Service Virtualization Market is set for steady growth, backed by digital transformation, DevOps adoption, and the demand for rapid, reliable software delivery. As development environments become more complex and interconnected, service virtualization offers the scalability, flexibility, and cost-efficiency that modern enterprises require.
Going forward, businesses that leverage advanced service virtualization tools will not only reduce development costs but also gain a competitive edge by accelerating innovation and improving software quality. The market is expected to thrive, playing a key role in shaping the future of agile and efficient software development.
About Us:
SNS Insider is one of the leading market research and consulting agencies that dominates the market research industry globally. Our company's aim is to give clients the knowledge they require in order to function in changing circumstances. In order to give you current, accurate market data, consumer insights, and opinions so that you can make decisions with confidence, we employ a variety of techniques, including surveys, video talks, and focus groups around the world.
Contact Us:
Jagney Dave - Vice President of Client Engagement
Phone: +1-315 636 4242 (US) | +44- 20 3290 5010 (UK)
#Service Virtualization Market#Service Virtualization Market Scope#Service Virtualization Market Trends
Text
Virtual It Labs Software Market Analysis and Growth Projections, 2025-2033
Virtual It Labs Software Market
The latest study on the Global Virtual It Labs Software Market, released by Market Strides, evaluates market size, trends, and the forecast to 2033. The study covers significant research data and evidence, making it a handy resource for managers, analysts, industry experts, and other key people who need a ready-to-access, self-analyzed study to help them understand market trends, growth drivers, opportunities, upcoming challenges, and the competitive landscape.
Some of the key players profiled in the study are:
Azure
Cloud Customer Certification Lab (Cloud CCL)
Oracle (Ravello)
Appsembler
HPE vLabs
AWS
Skytap Agile Development
CBT Nuggets
CloudShare
MeasureUp
Strigo
CloudShell
Get Free Sample Report PDF @ https://marketstrides.com/request-sample/virtual-it-labs-software-market
Scope of the Virtual It Labs Software Market Report:
The report also covers several important factors including strategic developments, government regulations, market analysis, and the profiles of end users and target audiences. Additionally, it examines the distribution network, branding strategies, product portfolios, market share, potential threats and barriers, growth drivers, and the latest industry trends.
Keep yourself up to date with the latest market trends and the changing dynamics caused by the COVID impact and the global economic slowdown. Maintain a competitive edge by sizing up the business opportunities available in the Virtual It Labs Software Market's various segments and emerging territories.
The market's segments and sub-sections are outlined below:
By Type
Cloud Based
Web Base
By Application
Large Enterprises
SMEs
Get Detailed@ https://marketstrides.com/report/virtual-it-labs-software-market
Geographically, the detailed analysis of consumption, revenue, market share, and growth rate of the following regions:
• The Middle East and Africa (South Africa, Saudi Arabia, UAE, Israel, Egypt, etc.)
• North America (United States, Mexico & Canada)
• South America (Brazil, Venezuela, Argentina, Ecuador, Peru, Colombia, etc.)
• Europe (Spain, Turkey, Netherlands, Denmark, Belgium, Switzerland, Germany, Russia, UK, Italy, France, etc.)
• Asia-Pacific (Taiwan, Hong Kong, Singapore, Vietnam, China, Malaysia, Japan, Philippines, Korea, Thailand, India, Indonesia, and Australia).
Objectives of the Report:
•To carefully analyze and forecast the size of the Virtual It Labs Software Market by value and volume.
• To estimate the market shares of major segments of the Virtual It Labs Software Market
• To showcase the development of the Virtual It Labs Software Market in different parts of the world.
• To analyze and study micro-markets in terms of their contributions to the Virtual It Labs Software Market, their prospects, and individual growth trends.
• To offer precise and useful details about factors affecting the growth of the Virtual It Labs Software Market
• To provide a meticulous assessment of crucial business strategies used by leading companies operating in the Virtual It Labs Software Market, which include research and development, collaborations, agreements, partnerships, acquisitions, mergers, new developments, and product launches.
Key questions answered:
• How feasible is the Virtual It Labs Software Market for long-term investment?
• What factors are driving demand for the Virtual It Labs Software Market in the near future?
• What is the impact of various factors on the growth of the Global Virtual It Labs Software Market?
• What are the recent trends in the regional markets, and how successful are they?
Buy Virtual It Labs Software Market Research Report @ https://marketstrides.com/buyNow/virtual-it-labs-software-market
The market research report on the Global Virtual It Labs Software Market has been thoughtfully compiled by examining a range of factors that influence its growth, including environmental, economic, social, technological, and political conditions across different regions. A detailed analysis of data related to revenue, production, and manufacturers provides a comprehensive view of the global landscape of the Virtual It Labs Software Market. This information will be valuable for both established companies and newcomers, helping them assess the investment opportunities in this growing market.
Region Included are: Global, North America, Europe, APAC, South America, Middle East & Africa, LATAM.
Country Level Break-Up: United States, Canada, Mexico, Brazil, Argentina, Colombia, Chile, South Africa, Nigeria, Tunisia, Morocco, Germany, United Kingdom (UK), the Netherlands, Spain, Italy, Belgium, Austria, Turkey, Russia, France, Poland, Israel, United Arab Emirates, Qatar, Saudi Arabia, China, Japan, Taiwan, South Korea, Singapore, India, Australia and New Zealand etc.
Finally, the Virtual It Labs Software Market report is a valuable source of guidance for individuals and companies.
Thanks for reading this article; you can also get region-wise report versions, such as Global, North America, Europe, APAC, South America, Middle East & Africa, and LAMEA, with forecasts for 2025-2033.
About Us:
Market Strides, a leading strategic market research firm, helps businesses confidently navigate their strategic challenges, supporting informed decisions for sustainable growth. We provide comprehensive syndicated reports and customized consulting services. Our insights give a clear understanding of the ever-changing dynamics of the global demand-supply gap across various markets.
Contact Us:
Email: [email protected]
#Virtual It Labs Software Market Size#Virtual It Labs Software Market Share#Virtual It Labs Software Market Growth#Virtual It Labs Software Market Trends#Virtual It Labs Software Market Players
Video
youtube
Prof. J.A. Illik: Vid-4-AWS-EC2-CloudShell. CloudShell is easy to access.
Text
AWS Launches CloudShell
AWS has launched CloudShell, which lets users operate AWS resources from a command line in the browser while inheriting their IAM permissions: “AWS CloudShell – Command-Line Access to AWS Resources”. Using it is simple: click the icon at the top of the web console. The first time, you will see a message that the environment is being created and the wait is longer. After connecting and poking around, it appears to run a container with a 30 GB disk and 4 GB of RAM; /proc/cpuinfo shows an Intel E5-2676 v3, which suggests the underlying host may be an m4-family machine. As for networking, outbound TCP and UDP to the internet generally work, but ping and mtr need raw sockets to send ICMP…
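The environment inspection described above can be reproduced with ordinary Linux commands once inside CloudShell (errors are suppressed here so the sketch also runs outside a Linux container):

```shell
# CPU model of the underlying host (the post saw an Intel E5-2676 v3 here).
head -n 5 /proc/cpuinfo 2>/dev/null || true

# Disk and memory attached to the environment (~30 GB disk, 4 GB RAM above).
df -h /
free -m 2>/dev/null || true    # `free` may be absent outside Linux

# Tools that need raw sockets to send ICMP may be blocked:
ping -c 1 example.com >/dev/null 2>&1 && echo "ICMP works" || echo "ICMP blocked or unavailable"
```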
View On WordPress
#amazon #aws #browser #cloud #cloudshell #command #console #container #interface #line #linux #service #shell #web
Photo
Reading Notes #379 http://bit.ly/2WWp510
Text
[Media] AzureAttackKit
AzureAttackKit Collection of Azure Tools to Pull down for Attacking an Env from a windows machine or Cloudshell. https://github.com/ZephrFish/AzureAttackKit t.me/hackgit

Text
What you'll learn
In this course we explore various cloud services available on Microsoft Azure:
• Cloud storage services: Blob, File Storage, and Queue
• Database services: Cosmos DB and SQL DB
• Azure Command Line Interface (AZ CLI), CloudShell, and Storage Explorer
• Deployment projects: Blob and File storage, hosting static and dynamic websites, hosting a WordPress site, and integrating Cosmos DB and SQL DB into an application
• Networking tools: DNS, Traffic Manager, Firewall, ExpressRoute, Virtual WAN, VNet, and NSG
• Containers: Docker, Kubernetes, Service Fabric Cluster, AKS, Container Instances, and Container Registry
• DevOps features: Boards, Artifacts, Repos, Pipelines, Test Plans, tool integration, and DevTest Labs
Microsoft Azure is the fastest-growing cloud computing platform and currently the second most popular, just after Amazon Web Services. Whether you are familiar with AWS or another cloud platform, or are simply interested in building professional-level Azure skills, this comprehensive tutorial provides sufficient knowledge through hands-on demonstrations of a wide range of Azure cloud services. The curriculum is designed to help with the Azure Solutions Architect (AZ-300 or AZ-303) certification exam. Before earning this certification you must have subject matter expertise in designing and implementing solutions that run on Microsoft Azure, including compute, network, storage, and security. A cloud solutions architect is expected to perform various roles, including advising stakeholders and translating business requirements into secure, scalable, and reliable cloud solutions.
An Azure Solutions Architect may partner with cloud administrators, cloud DBAs, and clients to implement solutions. In this course you will learn various concepts and skills on Microsoft Azure, including:
• Cloud storage services: Blob, File Storage, and Queue
• Database services: Cosmos DB and SQL DB
• Azure Command Line Interface (AZ CLI), CloudShell, and Storage Explorer
• Projects: storage, hosting a website in the cloud, WordPress, and more
• Networking tools: DNS, Traffic Manager, Firewall, ExpressRoute, Virtual WAN, VNet, and NSG
• Containers: Docker, Kubernetes, Service Fabric Cluster, AKS, Container Instances, and Container Registry
• DevOps features: Boards, Artifacts, Repos, Pipelines, Test Plans, tool integration, and DevTest Labs
You will also complete projects on Cosmos DB and SQL Database. Moreover, if you use your study time well, you can prepare for the Azure Solutions Architect exam and learn Azure in 30 days.
Who this course is for:
• Anyone interested in learning cloud computing with Microsoft Azure
• Anyone who wants to learn about the various cloud services and options available on Microsoft Azure
• Anyone who wants to build a career in cloud computing
• Anyone interested in the Microsoft Azure Solutions Architect certification
Text
GDSC is glad to announce its Cloud Study Jam on Google Cloud Platform. This workshop will give you details about “Kubernetes Engine in Google Cloud Platform”.
Kubernetes Engine provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. Kubernetes provides the mechanisms through which you interact with your container cluster.
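As a sketch of the kind of hands-on steps such a lab covers, creating a small cluster and deploying a container typically looks like the commands below. The cluster name, zone, and sample image are placeholders, and the sketch only prints the commands, since running them requires a GCP project and the Cloud SDK:

```shell
# Typical lab steps, captured here as text; cluster name, zone, and sample
# image are placeholders, and nothing is executed against a real project.
CMDS=$(cat <<'EOF'
gcloud container clusters create study-jam-cluster --zone us-central1-a
gcloud container clusters get-credentials study-jam-cluster --zone us-central1-a
kubectl create deployment hello-server \
  --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
kubectl expose deployment hello-server --type=LoadBalancer --port=80 --target-port=8080
kubectl get service hello-server
EOF
)
echo "$CMDS"
```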
We are looking forward to seeing you at the Cloud Study Jam session 02. We will use a tool called Qwiklabs, a Google training product. For the workshop, we’ll give you a link and token for free Qwiklabs access.
Don’t miss the chance of gaining hands-on experience on Kubernetes Engine as well as badge for your online profile.
Happy study jamming!
🕘 Thursday, December 2, 2021, from 10:00 AM – 11:00 PM GMT+0500.
Venue: Lab 11 (Main Campus - FAST NUCES, Karachi)
Register here: https://tinyurl.com/GDSCCL02
Stream the session live at: http://meet.google.com/rrq-qqjv-oyk
#cloud #cloudcomputing #cloudshell #gcloud #googlecloudshell #Qwilabs #GoogleCloud #googledevelopersstudentclub #gdsc #googledevelopers #googledeveloper #GoogleDevelopersGroup #FASTNUCES #FASTNU #NationalUniversity #DSCNUCES #DSCNUCESKHI #community

Text
A Look at AWS's Free Services
The latest AWS exam-prep article is available at https://cloudemind.com/aws-free-services/ - Cloudemind.com
A Look at AWS's Free Services
As you know, the pricing model of modern cloud services in general, and of AWS in particular, is pay as you go. For the cloud providers to stay in business and grow their teams, you pay them when you use their services, for the most part. But alongside the paid services there are also free ones that make working in the cloud more convenient. In this post Kevin shares AWS's free services.
This is a living post that Kevin will update over time, since at AWS things change daily, and mostly in ways that benefit users. If anything is missing, he welcomes your feedback.
Now let's go through these services:
1. AWS Computing Services
EC2 Auto Scaling: Provides elasticity by intelligently and economically adding and removing EC2 resources to match your application's workload.
AWS Batch: A utility for managing batch computing jobs. AWS Batch automatically provisions resources appropriate to the jobs being run. AWS Batch itself is free, but the resources it provisions (EC2, Lambda, or Fargate) are billed pay as you go.
AWS Elastic Beanstalk: An easy-to-use service for deploying web applications on AWS even if you don't yet know much about AWS. Elastic Beanstalk supports application platforms in many languages, including Python, Go, Ruby, Node.js, Java, .NET, and even Docker, and popular servers such as nginx, Apache, Passenger, and IIS. Elastic Beanstalk is free, but you still pay for the resources your application uses, such as EC2, RDS, and S3.
AWS Serverless Application Repository: A service for managing serverless applications, letting teams manage and share them.
Developer Tools
CloudShell: Essentially a CLI (Command Line Interface) for accessing AWS services from the browser instead of from the terminal on your own machine.
AWS CodeStar: Lets you quickly develop, build, and deploy applications on AWS (manage software development activities in one place). CodeStar integrates with JIRA to track issues during development and operations. It also provides a dashboard to follow your software delivery on AWS, from the backlog to your most recent deployments. A genuinely useful service that is nonetheless free. Marvelous, isn't it?
Management & Governance
AWS Auto Scaling – A service that adds elasticity to a cloud architecture, providing resources exactly when needed and automatically removing them when the workload drops. AWS Auto Scaling supports scaling across multiple services, while EC2 Auto Scaling focuses on Amazon EC2.
AWS Compute Optimizer – Collects your account's actual usage of Amazon EC2, Lambda, and EBS, then uses machine learning to produce recommendations for optimizing compute. A very useful service worth trying, and since it is free there is little to hesitate about.
AWS Control Tower (CT) – A service typically used by large organizations with many AWS accounts that want to quickly apply cloud best practices, saving time and optimizing usage. CT is free; you pay only for the related resources the solution provisions, such as AWS Service Catalog, AWS CloudTrail, Amazon CloudWatch, SNS, and S3. Its predecessor, AWS Landing Zone (LZ), no longer receives new features; new AWS customers who want multi-account governance are advised to use AWS Control Tower instead. (AWS Landing Zone is currently in Long-term Support and will not receive any additional features. Customers interested in setting up a new landing zone should check out AWS Control Tower and Customizations for AWS Control Tower.)
AWS License Manager – Helps you manage licenses from vendors such as Microsoft, SAP, Oracle, and IBM, both on AWS and on your on-premises infrastructure. ISV companies can also use AWS License Manager to manage and track licenses for the software they produce.
AWS Organizations – A service for managing groups of AWS accounts and applying policies to them (SCPs), with the added benefit of consolidated billing: member accounts' usage is consolidated into a single bill at the payer (master) account. Remarkably, a service this useful is also free.
AWS Well-Architected Framework – A tool that helps you review your architecture for conformance with AWS Cloud best practices.
Security, Identity & Compliance
AWS IAM (Identity and Access Management) – The main gateway and key for managing users and their permissions to access AWS resources. It is also a global service.
AWS Artifact – A portal for accessing reports on AWS compliance.
AWS SSO (Single Sign-On) – A service that centralizes account management across multiple AWS accounts and business applications in one place. AWS SSO has built-in integrations for Salesforce, Box, and Microsoft 365, and connects easily to identity sources such as Microsoft AD, Okta Universal Directory, and Azure AD.
Analytics Services
AWS Lake Formation – A service for quickly deploying a data lake on AWS. It builds on AWS Glue and integrates with services such as CloudTrail, AWS IAM, Amazon CloudWatch, Athena, EMR, and Redshift.
There are many other free services and products besides these; the above are some common ones Kevin encounters. Happy studying and working!
Have fun!
Read more: https://cloudemind.com/aws-free-services/
Text
Use AWS Console-to-Code To Turn Console Actions Into Code

AWS Console to Code
AWS Console-to-Code, which makes it simple to turn AWS console actions into reusable code, is now generally available (GA). When you launch an Amazon Elastic Compute Cloud (Amazon EC2) instance, for example, you can record your actions and workflows in the console using AWS Console-to-Code. You can then evaluate the AWS Command Line Interface (AWS CLI) commands for your console actions.
Using the infrastructure-as-code (IaC) format of your choosing, such as the AWS CloudFormation template (YAML or JSON) or the AWS Cloud Development Kit (AWS CDK) (TypeScript, Python, or Java), Amazon Q can generate code for you with a few clicks. This may be included into pipelines, further tailored for your production workloads, and utilized as a foundation for infrastructure automation.
Customers have responded well to AWS Console-to-Code since its preview last year. Working backwards from customer feedback, AWS has significantly enhanced the service for this GA release.
GA’s new features
Support for additional services: Amazon EC2 was the only supported service during the preview. At GA, AWS Console-to-Code also supports Amazon Relational Database Service (RDS) and Amazon Virtual Private Cloud (Amazon VPC).
Simplified experience: The updated user interface makes it easier for customers to manage the prototyping, recording, and code generation workflows.
Preview code: Updates to the launch wizards let customers generate code for EC2 instances and Auto Scaling groups without actually creating them.
Advanced code generation: Amazon Q machine learning models power the generation of AWS CDK and CloudFormation code.
How to begin using AWS Console-to-Code
Let’s start with a straightforward example of starting an Amazon EC2 instance. Open the Amazon EC2 console first. To begin recording, find the AWS Console-to-Code widget on the right and select Start recording.
Next, use the Amazon EC2 console’s launch instance wizard to start an Amazon EC2 instance. To finish the recording, select Stop once the instance has been launched.
Examine the recorded actions in the Recorded actions table. Use the Type dropdown list to filter for write actions (Write). Select the RunInstances action, then choose Copy CLI to copy the corresponding AWS CLI command.
This command is easy to modify. In this example, you can change it to launch two instances (--count 2) of type t3.micro (--instance-type). Although this is a simplified example, the same approach works for other workflows.
Running the command in AWS CloudShell launches two t3.micro EC2 instances as planned:
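In sketch form, the edited command reads as follows. The AMI ID is a placeholder and the sketch prints the command rather than executing it; adding --dry-run would let you validate permissions without launching anything:

```shell
# The recorded RunInstances call, edited as described: two t3.micro instances.
# The AMI ID is a placeholder; the command is printed, not executed.
CMD='aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --count 2 \
  --instance-type t3.micro \
  --dry-run'
echo "$CMD"
```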
The single-click CLI code generation is based on the API calls made while the actions were performed (during the EC2 instance launch). Interestingly, the companion panel displays the recorded actions as you complete operations in the console, and the interactive start and stop controls make it easy to scope recordings cleanly for prototyping.
Creating IaC with AWS CDK
AWS CDK is an open-source framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. You can generate AWS CDK code for your infrastructure workflows with AWS Console-to-Code; Java, Python, and TypeScript are currently supported.
Let’s continue with the EC2 launch instance use case. Locate the AWS Console-to-Code widget on the right side of the Amazon EC2 console, select Start recording, and then launch an EC2 instance if you haven’t already. Once the instance has started, select Stop to finish the recording, then select the RunInstances action from the Recorded actions table.
Select Generate CDK Python from the dropdown menu to begin generating AWS CDK Python code.
You can use the generated code as a starting point and modify it to make it production-ready for your particular use case.
With the AWS CDK already installed, create a new Python CDK project:
Then paste the generated code into the Python CDK project. For this example, the EC2 instance type was modified, the code was refactored into an AWS CDK Stack, and a few other small adjustments were made to ensure the code was correct. It then deployed successfully with cdk deploy.
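The end-to-end flow on the command line looks roughly like this (a sketch assuming the AWS CDK Toolkit is installed; the project name is arbitrary):

```shell
# Create and enter a new CDK Python project (project name is arbitrary)
mkdir ec2-from-console && cd ec2-from-console
cdk init app --language python

# cdk init also creates a virtualenv under .venv; activate it and
# install the project's Python dependencies
source .venv/bin/activate
pip install -r requirements.txt

# After pasting the generated code into the stack file and adjusting it,
# synthesize and deploy the stack
cdk deploy
```
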
You can reproduce the same outcome yourself: start from an EC2 instance launch in the console and go all the way to AWS CDK code.
Additionally, you can create CloudFormation templates in JSON or YAML format:
Preview code
You can also access AWS Console-to-Code directly through the Preview code capability in the Amazon EC2 and Amazon EC2 Auto Scaling group launch experiences. This means you can obtain the infrastructure code without actually creating the resource.
Try this out by following the instructions to create an Auto Scaling group from a launch template, but click Preview code rather than Create Auto Scaling group. You should then see the options to copy the AWS CLI command or generate infrastructure code.
Things to be aware of
When using AWS Console-to-Code, keep the following points in mind:
Anyone can use AWS Console-to-Code to generate AWS CLI commands for their infrastructure operations. The code generation feature for AWS CDK and CloudFormation formats has a free monthly limit of 25 generations; after that, an Amazon Q Developer subscription is required.
It is recommended that you test and validate the generated IaC code prior to deployment.
At GA, AWS Console-to-Code records only actions in the Amazon EC2, Amazon VPC, and Amazon RDS consoles.
The Recorded actions table in AWS Console-to-Code shows only actions performed in the current browser tab; it does not retain actions from past sessions or other tabs. Keep in mind that all recorded actions are lost once the browser tab is refreshed.
Now available
AWS Console-to-Code is available in all commercial AWS Regions. The Amazon Q Developer user guide contains additional information. Try it out in the Amazon EC2 console and share your feedback through AWS re:Post for Amazon EC2 or your usual AWS Support contacts.
Text
SCRAM Authentication in RDS for PostgreSQL 13
The Salted Challenge Response Authentication Mechanism (SCRAM) greatly improves the security of password-based user authentication by adding several key security features that prevent rainbow-table attacks, man-in-the-middle attacks, and stored password attacks, while also adding support for multiple hashing algorithms and passwords that contain non-ASCII characters. PostgreSQL 10 added support for SCRAM authentication, and AWS customers have been able to use SCRAM authentication since Amazon RDS for PostgreSQL 10 was launched. However, the use of SCRAM authentication was optional and could not be required for all users at a server level. With the launch of Amazon RDS for PostgreSQL 13, you now control whether or not SCRAM authentication is required for all database users.

In this post we explain how to require SCRAM authentication on your RDS for PostgreSQL database instances. We use the AWS Command Line Interface (AWS CLI), which allows you to interact with the AWS control plane from a Linux, UNIX, or Macintosh shell, or from AWS CloudShell. You can also implement this solution on the AWS Management Console, if you prefer.

Upgrading your client libraries

All users and applications that access the database need to use client libraries that support SCRAM. The PostgreSQL Wiki has a list of client library versions and which ones support SCRAM. Applications and users who use client libraries that don’t support SCRAM can’t connect to the database once you require SCRAM authentication.

Creating a database parameter group

Database parameter groups are collections of database server settings that control how your RDS instance behaves. Each parameter group has a number of database settings, some of which are changeable and some of which are not. Database parameter groups allow you to have a single standard configuration for all your databases so that they behave in a uniform way. By default, RDS instances use a default parameter group, whose settings can’t be modified.
Therefore, you can’t use the default parameter group for this procedure. Although sharing parameter groups across multiple database instances is helpful, there is also one drawback: any change made to the parameter group is applied to all database instances that use that parameter group. If you modify your shared parameter group to require SCRAM authentication, then SCRAM authentication is required on all database instances that use that parameter group. Therefore, you must create a separate parameter group for use during this procedure.

PostgreSQL 10 introduced a parameter called password_encryption, which tells the PostgreSQL database which password encryption mechanism to use by default. The default is md5, but you can also choose scram-sha-256 to tell PostgreSQL that you want to use the newer SCRAM password hashing algorithm by default. To create a database parameter group and configure it to default to SCRAM password encryption, use the following commands:

aws rds create-db-parameter-group --db-parameter-group-name 'scram-passwords' --db-parameter-group-family postgres13 --description 'Implements SCRAM passwords'

aws rds modify-db-parameter-group --db-parameter-group-name 'scram-passwords' --parameters 'ParameterName=password_encryption,ParameterValue=scram-sha-256,ApplyMethod=immediate'

Configuring the server password encryption setting

Let’s assume that your database instance identifier is PG13DB. To attach the new parameter group to your database instance, use the following commands:

aws rds modify-db-instance --db-instance-identifier 'PG13DB' --db-parameter-group-name 'scram-passwords'

aws rds wait db-instance-available --db-instance-identifier 'PG13DB'

Changing to the new parameter group doesn’t re-encrypt any existing passwords, nor does it require that users authenticate using SCRAM. Instead, it instructs the server to use the SCRAM method when users change their passwords from this point on.
You can run a server with a mix of MD5 and SCRAM passwords, and changing this parameter doesn’t affect any users who currently have MD5 passwords.

Switching a database instance to a different parameter group requires a reboot. If you’re working with a production database, I recommend waiting until the next database maintenance window for the instance to be rebooted and for these changes to be applied. If you’re working with a development or test database, you can reboot the instance immediately using the following command:

aws rds reboot-db-instance --db-instance-identifier 'PG13DB'

Checking for users with non-SCRAM passwords

Before you begin enforcing SCRAM authentication, you should determine which users (if any) currently have MD5 passwords by running the following commands in a PostgreSQL session:

=> CREATE EXTENSION rds_tools;
=> SELECT * FROM rds_tools.role_password_encryption_type();

 Role Name | Encryption Method
-----------+-------------------
 mydbuser1 | md5
 mydbuser2 |
 mydbuser3 | scram
 mydbuser4 |

Users that show md5 in the Encryption Method column can’t authenticate after you modify the following parameter. Therefore, it’s important to update the password for each user that has an MD5 password before proceeding to the next step. You can change an individual user’s password using the following command while connected to the server using the psql tool:

\password mydbuser1

This method is preferred to using ALTER ROLE, because ALTER ROLE statements might get logged or transmitted in plaintext, potentially exposing the user’s new plaintext password to anyone with access to those logs or network paths. According to the PostgreSQL documentation:

"Caution must be exercised when specifying an unencrypted password with this command. The password will be transmitted to the server in cleartext, and it might also be logged in the client’s command history or the server log."
psql contains a \password command that can be used to change a role’s password without exposing the cleartext password.

Modifying the database parameter group to require SCRAM

After all your users and applications are upgraded to use client libraries that support SCRAM, and all your passwords are updated to the SCRAM format, you can configure RDS for PostgreSQL to require SCRAM authentication on your database. The rds.accepted_password_auth_method parameter tells Amazon RDS for PostgreSQL to allow both MD5 and SCRAM passwords, or to only allow SCRAM passwords. The default setting of md5+scram lets users with either MD5 or SCRAM passwords authenticate. Setting this parameter to scram forces the PostgreSQL server to only permit SCRAM passwords. You can make this change by updating the parameter group using this command:

aws rds modify-db-parameter-group --db-parameter-group-name 'scram-passwords' --parameters 'ParameterName=rds.accepted_password_auth_method,ParameterValue=scram,ApplyMethod=immediate'

Because you’re modifying a dynamic parameter on a parameter group that’s already attached to the database instance, rather than attaching a different parameter group to the database instance, you don’t have to reboot for this change to take effect. After you update this parameter, new connections to the Amazon RDS for PostgreSQL server are required to use SCRAM authentication. New connections that attempt to use MD5-based authentication fail, even if the database user still has an MD5 password. Connections that are currently in flight aren’t affected by this change.

Summary

In this post, you learned how to improve the security posture of your Amazon RDS for PostgreSQL 13 server by requiring that all database users use SCRAM-based password hashing and authentication. When you implement this change on your RDS for PostgreSQL 13 servers, we would love to hear about how the transition went in the comments section.
About the Authors

Tim Gustafson is a Senior Database Specialist Solutions Architect working primarily with open-source database engines.

Arun Bhati is a Software Development Engineer for Amazon RDS. Prior to joining AWS, he worked on building a document generation platform for Amazon retail. Outside of work, he enjoys chess, the outdoors, and spending time with family and friends.

https://aws.amazon.com/blogs/database/scram-authentication-in-rds-for-postgresql-13/
Link
Link to the original article (posted 2020/12/20)

AWS's annual re:Invent conference was held this year as a free virtual event spread over three weeks. Across several keynotes and sessions, AWS announced new features, improvements, and cloud services. Below is a review of the major announcements related to compute, databases, storage, networking, machine learning, and development.

Compute

On the first day of the conference, Amazon announced EC2 Mac instances for macOS, the first new operating system added to EC2 in a long time. These mainly target processes that can only run on macOS, such as developing and testing applications for iOS, macOS, tvOS, and Safari. The opening of Andy Jassy's keynote emphasized announcements around compute options and serverless technology. AWS introduced new instance types and EC2 families on a variety of processors, including Intel Xeon M5zn instances, Graviton2-based C6gn instances, Intel D3/D3en instances, memory-optimized R5b instances, and AMD G4ad GPU instances. A related InfoQ article is available here.

There were also announcements around Lambda and serverless deployment. The billing granularity was refined from 100 ms to 1 ms, automatically reducing the cost of every Lambda function, and functions gained up to 10 GB of memory and 6 vCPUs. Another new capability is support for container images as a packaging format, which simplifies migrating existing container-based workloads to serverless functions. InfoQ has also covered the AWS Lambda updates.

With Amazon ECS Anywhere and Amazon EKS Anywhere, AWS plans to make the container orchestration software used by ECS and EKS available free of charge for non-AWS deployments, including other cloud providers. This promises broader integration and lower latency, and follows the same path as Microsoft and Google, which already offer Azure AKS and Google Anthos for free.

The opening keynote announced the public preview of AWS Proton, a new managed deployment service for containers and serverless applications. With AWS Proton, infrastructure provisioning and code deployment for serverless and container-based applications can be automated and managed. A related article is available here. ECR Public Repositories is a public container registry for storing, managing, sharing, and deploying container images worldwide.

Storage

There were three main announcements about EBS, the block storage service designed for use with EC2. One is the new gp3 volume type, which is 20 percent cheaper than the previous gp2 type. gp3 also improves baseline performance and is the first general-purpose volume type to allow IOPS to be configured independently of disk size. Given the improvements and the ease of upgrading, Corey Quinn, cloud economist at The Duckbill Group, recommends switching to the new volume type immediately:

"EBS gp3 is a game changer, full stop. It costs 80 percent of gp2, you can convert in place, and there is no downside. Switch now."

The new io2 Block Express volume type entered preview. It supports small, high-IOPS workloads, and tiered IOPS-based pricing is being introduced for the io2 volume type.
The notable object storage update is that S3 now automatically provides strong read-after-write consistency for all applications. Other announced S3 improvements include replication to multiple destination buckets, improved support for multi-master and multi-Region applications through two-way cross-Region replication, and new bucket keys.

Databases

There were important new launches in databases as well. Announced in preview, Babelfish for Aurora is a translation layer for Amazon Aurora PostgreSQL that enables Aurora to understand commands from applications written for Microsoft SQL Server. Aurora Serverless v2 is a new MySQL-compatible serverless relational database. And AWS Glue Elastic Views builds materialized views that combine and replicate data across multiple data stores. Aurora Serverless v2 and Babelfish for Aurora are each covered in separate articles. Amazon Aurora PostgreSQL is now integrated with AWS Lambda.

The Amazon Redshift data warehouse gained various improvements and new features, including moving clusters between Availability Zones, automatic table optimization, and previews of data sharing and native JSON data processing support.

Networking and IoT

AWS Local Zones, introduced in 2019, deliver lower latency by extending a Region with single zones located near densely populated areas. During the conference, AWS announced general availability of new Local Zones in three locations (Boston, Houston, and Miami), with twelve more, including New York City and Chicago, to be added in 2021.

Smaller Outposts options will be offered starting next year, making it possible to deploy AWS hardware in small offices, factories, and sites with limited space that need access to low-latency compute.

For IoT, AWS IoT Greengrass 2.0 provides an open-source edge runtime and tools for local software development and for managing software across large fleets of devices.

Machine learning

The keynote by Swami Sivasubramanian, the AWS vice president in charge of AI and machine learning, discussed many machine learning features and products, with SageMaker at the center. The new Amazon SageMaker Feature Store is a fully managed, purpose-built repository for storing, updating, retrieving, and sharing machine learning features. A related article is available here.

Other new services and features announced include SageMaker Clarify, SageMaker Debugger, SageMaker Managed Data Parallelism, and SageMaker Model Parallelism. Amid much positive feedback, MinOps CEO Jeremy Edberg emphasized the benefits of Amazon SageMaker Clarify, a service focused on bias detection and explainability:

"It helps you detect bias in your datasets. Just surfacing the existence of this problem is wonderful in itself; many people are not even aware that this is a problem at all. Bravo!"
Corey Quinn, on the other hand, finds the new approach hard to follow:

"At launch, Amazon SageMaker was positioned as an introduction to machine learning for people without formal training in data science. Today's SageMaker Autopilot, SageMaker Studio, SageMaker Feature Store, SageMaker DataWrangler, SageMaker Ground Truth, SageMaker Notebook, SageMaker Neo, SageMaker RL, SageMaker Marketplace, SageMaker Experiments, SageMaker Debugger, SageMaker Model Monitor, and everything else released between the time I write this and the time it is published, have become the kind of thing that makes a newcomer who pulls up the service page shut their laptop in alarm and walk away."

At the start of the conference, AWS Trainium, an AWS-designed chip for machine learning training, and Habana Gaudi-based EC2 instances built for machine learning were announced, together with Amazon Monitron, an end-to-end system for detecting anomalous behavior in industrial machinery, and AWS Panorama, a machine learning appliance and SDK.

In the anomaly detection space, Amazon Lookout for Metrics, a flexible service for time-series analysis, was added alongside Amazon Lookout for Equipment and Amazon Lookout for Vision.

Monitoring, architecture, and coding

Werner Vogels's keynote devoted much of its time to explaining improvements in logging, monitoring, and deployment. Several new services and improvements arrived in this area. CloudTrail now allows more fine-grained control over data event logging. In addition, Amazon Managed Service for Prometheus (AMP) and Amazon Managed Service for Grafana (AMG) are available in preview.

AWS Fault Injection Simulator is a managed chaos engineering service planned for 2021 that runs tests introducing disruptive events against a wide range of AWS services, including EC2, EKS, ECS, and RDS.

CloudShell, a browser-based shell for interacting with AWS resources, is already available, enabling CLI operations without launching an instance or handling credentials. Amazon CodeGuru, a managed service that automatically performs code reviews and application performance recommendations, now supports Python. The preview of Amazon Location, a service for integrating location-based capabilities such as maps and location awareness into web and mobile applications, was announced at the end of the conference.

AWS is also adding further re:Invent sessions; topics such as S3, for example, will run January 12-14, extending the conference into the new year.
Text
2021/02/08-21
# Florida water utility system hacked; attacker attempted to dose drinking water with 100x the normal amount of a chemical
https://japanese.engadget.com/hackers-contaminate-florida-water-supply-033022797.html
>The water utility's computers were set up to accept remote
>operation via remote desktop software called TeamViewer,
>for maintenance purposes,
# Looked into the usual things to do when writing shell scripts
https://please-sleep.cou929.nu/bash-strict-mode.html
>#!/bin/bash
>set -euxo pipefail
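As a minimal sketch of what `set -euo pipefail` from the linked post buys you (the `-x` flag just adds command tracing and is omitted here for readable output):

```shell
#!/bin/bash
# -e: exit immediately if a command fails
# -u: referencing an unset variable is an error
# -o pipefail: a pipeline fails if any command in it fails,
#              not just the last one
set -euo pipefail

echo "step 1"
# Without pipefail, `false | cat` would count as success because only
# cat's exit status would be checked; with pipefail the pipeline fails.
false | cat && echo "pipeline succeeded" || echo "pipeline failed"
echo "step 2"
```

Running this prints "step 1", "pipeline failed", "step 2"; dropping `-o pipefail` changes the middle line to "pipeline succeeded".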
# AWS outage mostly resolved in five hours; Japan Meteorological Agency website and other services affected [updated with per-service recovery status]
https://www.itmedia.co.jp/news/articles/2102/20/news021.html
>The outage began around 0:00 a.m. on February 20. About five hours
>after it started, at 5:09 a.m., the company announced that most of
>the outage had been resolved.
>The outage occurred in one of AWS's data centers near Tokyo
>(apne1-az1). Power was not supplied correctly to the server cooling
>system, and the temperature rose in one section of the server room,
>causing some instances of the Amazon Elastic Compute Cloud (EC2)
>compute environment to lose power. Along with this, some of the EBS
>volumes available as storage for EC2 also experienced degraded
>performance.
>The company worked on restoring power and cooling the server room;
>much of the power was back by 3:42 a.m., and the room temperature
>returned to normal levels by 4:26 a.m. At 5:09 a.m. most EC2
>instances had recovered, and EBS volumes had also recovered apart
>from a portion. The company is continuing to work on restoring the
>remaining instances and volumes.
# [Resolved] About the AWS outage that began around 23:50 on February 19, 2021 (updated 7:00 a.m., February 20)
https://classmethod.jp/news/info_20210220_failure/
■ Region and Availability Zone
・ap-northeast-1 (Tokyo Region)
・Subnets whose Availability Zone ID is apne1-az1
# Checking your own account's AZ name to AZ ID mapping with the AWS CLI
https://dev.classmethod.jp/articles/confirm-your-az-name-mapping-using-cli/
>In short, identifiers like apne1-az1 are AZ IDs, which uniquely
>identify an AZ. On the other hand, the familiar names like
>ap-northeast-1a are AZ names, and which AZ they map to can differ
>from one AWS account to another:
>$ aws ec2 describe-availability-zones --region ap-northeast-1
Can be run in AWS CloudShell
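To pull out just the name-to-ID pairs, a `--query` filter can be added to the same call (a sketch; the returned values vary by account):

```shell
# List AZ name and AZ ID pairs for the Tokyo Region
aws ec2 describe-availability-zones \
  --region ap-northeast-1 \
  --query 'AvailabilityZones[].[ZoneName,ZoneId]' \
  --output text
```
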
# How to fix RDS MySQL audit logs not being exported to CloudWatch Logs
https://dev.classmethod.jp/articles/tsnote-rds-cloudwatch-logs-audit-001/
>Two pieces of work are required:
> 1. Create or modify an option group (only when the default option
>    group is in use)
> 2. Add the MariaDB audit plugin
>The MariaDB audit plugin is not supported on RDS MySQL version 8.0.
>It is supported on MySQL versions 5.5, 5.6, and 5.7.
RDS Aurora (MySQL) is out of scope because it cannot use option groups
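The two steps above might look like this in the AWS CLI (a sketch; the option group name is hypothetical, and note that version 8.0 is unsupported per the quote above):

```shell
# 1. Create a custom option group (the default group cannot be modified)
aws rds create-option-group \
  --option-group-name my-mysql57-audit \
  --engine-name mysql \
  --major-engine-version 5.7 \
  --option-group-description "MySQL 5.7 with audit log"

# 2. Add the MariaDB audit plugin option to it
aws rds add-option-to-option-group \
  --option-group-name my-mysql57-audit \
  --options OptionName=MARIADB_AUDIT_PLUGIN \
  --apply-immediately
```

The instance then needs the new option group attached, e.g. with `aws rds modify-db-instance --option-group-name my-mysql57-audit`.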