#EC2Instance
Text
How To Create EMR Notebook In Amazon EMR Studio

How to Create an EMR Notebook
Amazon Web Services (AWS) has folded Amazon EMR Notebooks into Amazon EMR Studio Workspaces on the new Amazon EMR console. The integration aims to provide a single environment for notebook creation and large-scale data processing, and in the new console notebooks are now created with the “Create Workspace” button.
To create an EMR notebook from the previous console, users must visit the Amazon EMR console at its usual web URL and follow the old console's procedure: select “Notebooks” and then “Create notebook”.
When creating a Notebook, users choose a name and a description. The next critical step is connecting the notebook to an Amazon EMR cluster to run the code.
There are two basic ways users associate clusters:
Select an existing cluster
If an appropriate EMR cluster is already running, users can click “Choose,” select it from the list, and click “Choose cluster” to confirm. Per the documentation, EMR Notebooks have cluster prerequisites; these prerequisites, supported EMR release versions, and security considerations are detailed in dedicated sections.
Create a cluster
Users can also select “Create a cluster” to have Amazon EMR create a cluster dedicated to the notebook. This method lets users name the cluster. The workflow defaults to the latest supported EMR release version and essential applications such as Hadoop, Spark, and Livy; some settings, such as the release version and the pre-selected applications, may not be modifiable.
Users can customise instance settings by selecting an EC2 instance type and entering the desired number of instances; a primary node and core nodes are identified. The instance type determines the maximum number of notebooks that can connect to the cluster, subject to documented limits.
Cluster setup also defines the EC2 instance profile and the EMR role, for which users can choose custom or default roles. Links to more information about these service roles are supplied. An EC2 key pair for SSH connections to cluster instances can also be chosen.
Amazon EMR versions 5.30.0 and 6.1.0 and later support optional but helpful auto-termination: users can select the checkbox to shut the cluster down automatically after a period of inactivity. Users can specify security groups for the primary instance and the notebook client instance, using either the default security groups or custom ones from the cluster's VPC.
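For teams that script their clusters, a rough boto3 sketch of the same kind of cluster follows; the cluster name, instance types and counts, key pair, and idle timeout are illustrative placeholders, and the default EMR service roles are assumed (the notebook itself still has to be created in the console, per the steps above):

import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="notebook-cluster",                      # illustrative name
    ReleaseLabel="emr-6.1.0",                     # a release that supports auto-termination
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}, {"Name": "Livy"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",        # primary node
        "SlaveInstanceType": "m5.xlarge",         # core nodes
        "InstanceCount": 3,                       # 1 primary + 2 core
        "Ec2KeyName": "my-key-pair",              # hypothetical EC2 key pair
        "KeepJobFlowAliveWhenNoSteps": True,      # keep the cluster up for the notebook
    },
    JobFlowRole="EMR_EC2_DefaultRole",            # EC2 instance profile
    ServiceRole="EMR_DefaultRole",                # EMR service role
    AutoTerminationPolicy={"IdleTimeout": 3600},  # shut down after one idle hour
)
print(response["JobFlowId"])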
Notebook creation involves both cluster settings and notebook-specific configuration. Users choose a custom or default AWS service role for the notebook client instance, and an Amazon S3 notebook location where the notebook file will be stored. If no bucket or folder exists, Amazon EMR can create one, or users can choose their own. In the S3 location, a folder named with the notebook ID is created, and the notebook file is saved there as NotebookName.ipynb.
If an encrypted Amazon S3 location is used, the Service role for EMR Notebooks (EMR_Notebooks_DefaultRole) must be set up as a key user for the AWS KMS key used for encryption. To add key users to key policies, see AWS KMS documentation and support pages.
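As a minimal sketch of that key-policy step (boto3; the key ID and account ID are placeholders, and the actions mirror the standard key-user permission set), the role can be appended to the key policy like this:

import json
import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"   # placeholder KMS key ID
account_id = "111122223333"                        # placeholder account ID

# Fetch the current key policy ("default" is the only supported policy name).
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])

# Grant the EMR Notebooks service role the standard key-user permissions.
policy["Statement"].append({
    "Sid": "AllowEMRNotebooksRoleAsKeyUser",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/EMR_Notebooks_DefaultRole"},
    "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
               "kms:GenerateDataKey*", "kms:DescribeKey"],
    "Resource": "*",
})

kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))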
Users can link a Git-based repository to a notebook in Amazon EMR. After selecting “Git repository” and “Choose repository”, pick from the list.
Finally, notebook users can add Tags as key-value pairs. The documentation includes an Important Note about a default tag with the key creatorUserID and the value set to the user's IAM user ID. Users should not change or delete this tag, which is automatically applied for access control, because IAM policies can use it. After configuring all options, clicking “Create Notebook” finishes notebook creation.
Users should note that these instructions apply to the old console; the new console now surfaces EMR Notebooks as EMR Studio Workspaces. To access existing notebooks as Workspaces or create new ones with the “Create Workspace” option in the new UI, EMR Notebooks users need additional IAM role permissions. As noted above, the default access-control tag containing the creator's user ID should not be changed or deleted. Notebooks cannot be created with the Amazon EMR API or CLI.
Although the detailed creation instructions in some current documentation still match the old console interface, this transition reflects AWS's intention to centralise notebook creation in EMR Studio.
#EMRNotebook#AmazonEMRconsole#AmazonEMR#AmazonS3#EMRStudio#AmazonEMRAPI#EC2Instance#technology#technews#technologynews#news#govindhtech
Photo

(via Fix Elastic IP Address Could not be Associated)
Text
Unlock the full potential of #AWS with comprehensive courses at your fingertips. Stay updated on cloud advancements and harness the power of Amazon Web Services for your projects. https://www.dclessons.com/amazon-virtual-private-cloud
#aws#amazonwebservices#amazonvpc#virtualprivatecloud#cloudcomputing#networking#security#ec2#elasticcomputecloud#ec2instances#virtualmachines#linux#windows#ubuntu#centos#subnets#routetables#securitygroups#acls#natgateway#igw#vpn#directconnect#vpcpeering#dhcpoptionssets#dns#vpcflowlogs#availabilityzones#region#learnaws
Link
Services Related to Elastic Compute Cloud (EC2): in this article you will learn about different EC2-related services such as AWS Systems Manager, Placement Groups, AWS Elastic Beanstalk, Amazon Elastic Container Service, and AWS Fargate.
Read More : https://www.info-savvy.com/services-related-elastic-compute-cloud-ec2/
#AmazonElasticContainerService#application#AutoScaling#aws#AWSElasticBeanstalk#EC2instances#awscertification#onlinetraining&certification#AWSOnlinetraining#AWSTraining
Video
youtube
AWS EC2 VM Setup | Run Springboot Microservice and Postgres DB in EC2 Se...
Hello friends, a new #video on #aws #cloud #ec2 #server setup #springboot #microservice setup in #ec2server #postgres setup in #ec2instance is published on #codeonedigest #youtube channel. Learn #awsec2 #postgressetup #java #programming #coding with codeonedigest.
@java #java #awscloud @awscloud @AWSCloudIndia #Cloud #CloudComputing @YouTube #youtube #springbootmicroservices #springbootmicroservicesproject #springbootmicroservicestutorial #springbootmicroservicesfullcourse #springbootmicroservicesexample #springbootmicroservicesarchitecture #aws #awscloud #cloud #createawsec2server #createawsec2instance #createawsec2 #awsmanagementconsole #createec2instanceinaws #createec2 #createec2instanceandconnect #createec2instanceinawslinux #awsec2 #awsec2instance #awsec2interviewquestionsandanswers #awsec2instancecreation #awsec2deploymenttutorial #installpostgresec2install #installpostgresec2linux #awsec2connect #awsec2statuschecks #awsec2project #awsec2full #awsec2createinstance #awsec2interviewquestionsandanswersforfreshers #awsec2instancedeployment #awsec2serialconsole #awsec2consolewindows #awsec2serverrefusedourkey #awsec2serialconsolepassword #awsec2serviceinterviewquestions #awsec2serialconsoleaccess #awsec2serialrefusedourkeyputty #awsec2serverconfiguration #awsec2serialconnect #awsec2instanceconnect #awsec2instancelinux #awsec2instancelaunch #awsec2instanceconnectnotworking #awsec2instanceinterviewquestions #awsec2instancecreationubuntu #awstutorial #awsec2tutorial #ec2tutorial #postgresandpgadmininstall #postgresandpgadmininstallwindows #postgresandpgadmininstallubuntu #postgresandpgadmininstallwindows11 #postgresandpgadmininstallmacos #postgresandpgadmininstallwindows10 #postgrespasswordreset #postgrestutorial #postgresdocker #postgresinstallationerror #postgres #postgresdatabase #rdbms #postgresdatabasesetup #postgresdatabaseconfiguration #database #relationaldatabase #postgresconfiguration #postgresconfigurationfile #postgresconfigurationparameters #postgresconfigfilelocation #postgresconfigurationinspringboot #postgresconfigfilewindows #postgresconfigfilemax #postgresconfigfileubuntu #postgresconfigurereplication #postgresconfigurationsettings #postgresconnectiontoserver #postgresconnectioninjava #postgresconnectioncommandline #postgresconnectioninnodejs
Hello Friend, Thanks for following us here.
#youtube#aws#ec2#aws ec2#aws cloud#cloud#s3 bucket#aws s3 bucket#postgres db#postgresdb#postgres#rdbms#spring#springboot#microservice#springboot microservice#spring boot#microservices
Video
youtube
🔶AWS EC2 INSTANCES TYPES
https://youtu.be/R4492RA5lV4
Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications, which is why AWS offers EC2 instances of various types. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload. #awsec2instancestypes #typesofawsinstances #awsec2instance #ec2instance #aws
Text
Vantage Has Acquired Ec2instances.info
https://www.vantage.sh/blog/vantage-has-acquired-ec2instances-info
Photo
Lecture 019- AWS EC2 Instance Types | AWS Cloud Practitioner Course I hope everyone will be fine and... #amazon #amazonwebservices #aws #awscertification #awscertifiedcloudpractitioner #awscertifieddeveloper #awscertifiedsolutionsarchitect #awscertifiedsysopsadministrator #awscloudpractitionercourse #awsec2 #awsec2instancetypes #awsinstancetypes #awstraininginenglish #awstutorials #ciscoccna #cloud #cloudcomputing #comptiaa #comptianetwork #comptiasecurity #console #cybersecurity #ec2 #ec2instance #ec2instancetypes #ethicalhacking #instancetypes #it #kubernetes #linux #microsoftaz-900 #microsoftazure #networksecurity #software #theworldofit #windowsserver #worldofitech.com #www.worldofitech.com
Photo
Setting up an EC2 instance environment with the AWS::CloudFormation::Init type in the AWS CDK https://ift.tt/2lkBYUK
Last time, I launched an EC2 instance using the AWS Cloud Development Kit (AWS CDK), so this time I checked whether the AWS::CloudFormation::Init type can also be used with the AWS CDK.
Launching an EC2 instance with the AWS Cloud Development Kit (AWS CDK) – Qiita https://cloudpack.media/48912
For details on the AWS::CloudFormation::Init type, see the following.
Setting up an EC2 instance environment with the AWS::CloudFormation::Init type – Qiita https://cloudpack.media/48540
Prerequisites
You have an AWS account
The AWS CLI is available
Node.js is installed
Implementation
Based on the implementation from the previous article, I added an AWS::CloudFormation::Init type definition.
Launching an EC2 instance with the AWS Cloud Development Kit (AWS CDK) – Qiita https://cloudpack.media/48912
import cdk = require('@aws-cdk/core');
import ec2 = require('@aws-cdk/aws-ec2/lib');

export class UseCdkEc2Stack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    let vpc = ec2.Vpc.fromLookup(this, 'VPC', {
      vpcId: this.node.tryGetContext('vpc_id')
    });

    const cidrIp = this.node.tryGetContext('cidr_ip');
    const securityGroup = new ec2.SecurityGroup(this, 'SecurityGroup', { vpc });
    securityGroup.addEgressRule(ec2.Peer.anyIpv4(), ec2.Port.allTraffic());
    securityGroup.addIngressRule(ec2.Peer.ipv4(cidrIp), ec2.Port.tcp(22));

    let ec2Instance = new ec2.CfnInstance(this, 'myInstance', {
      imageId: new ec2.AmazonLinuxImage().getImage(this).imageId,
      instanceType: new ec2.InstanceType('t3.small').toString(),
      networkInterfaces: [{
        associatePublicIpAddress: true,
        deviceIndex: '0',
        groupSet: [securityGroup.securityGroupId],
        subnetId: vpc.publicSubnets[0].subnetId
      }],
      keyName: this.node.tryGetContext('key_pair')
    });

    ec2Instance.addOverride('Metadata', {
      'AWS::CloudFormation::Init': {
        'config': {
          'commands': {
            'test': {
              'command': "echo $STACK_NAME test",
              'env': {
                'STACK_NAME': this.stackName
              }
            }
          },
        }
      }
    });

    let userData = ec2.UserData.forLinux();
    userData.addCommands(
      '/opt/aws/bin/cfn-init',
      `--region ${this.region}`,
      `--stack ${this.stackName}`,
      `--resource ${ec2Instance.logicalId}`
    );
    userData.addCommands('echo', 'hoge!');
    ec2Instance.userData = cdk.Fn.base64(userData.render());

    new cdk.CfnOutput(this, 'Id', { value: ec2Instance.ref });
    new cdk.CfnOutput(this, 'PublicIp', { value: ec2Instance.attrPublicIp });
  }
}
I dug through the official documentation but could not find anything useful, so I referred to the following issues.
Add support for AWS::CloudFormation::Init · Issue #777 · aws/aws-cdk https://github.com/aws/aws-cdk/issues/777
ec2: cfn-init support in ASGs · Issue #1413 · aws/aws-cdk https://github.com/aws/aws-cdk/issues/1413
feat(aws-ec2): add support for CloudFormation::Init by rix0rrr · Pull Request #792 · aws/aws-cdk https://github.com/aws/aws-cdk/pull/792
The added implementation is shown below. The key point is that ec2Instance.addOverride() adds metadata containing the AWS::CloudFormation::Init type definition. Because the --resource option of /opt/aws/bin/cfn-init needs the resource name, the user data is set after ec2Instance has been created so that ec2Instance.logicalId can be used. Hard-coding the value would also work.
ec2Instance.addOverride('Metadata', {
  'AWS::CloudFormation::Init': {
    'config': {
      'commands': {
        'test': {
          'command': "echo $STACK_NAME test",
          'env': {
            'STACK_NAME': this.stackName
          }
        }
      },
    }
  }
});

let userData = ec2.UserData.forLinux();
userData.addCommands(
  '/opt/aws/bin/cfn-init',
  `--region ${this.region}`,
  `--stack ${this.stackName}`,
  `--resource ${ec2Instance.logicalId}`
);
userData.addCommands('echo', 'hoge!');
ec2Instance.userData = cdk.Fn.base64(userData.render());

(snip)
Deploying
> cdk deploy \
    -c vpc_id=vpc-xxxxxxxx \
    -c key_pair=cdk-test-ec2-key \
    -c cidr_ip=xxx.xxx.xxx.xxx/32
This deployment will make potentially sensitive changes according to your current security approval level (--require-approval broadening).
Please confirm you intend to make the following modifications:

Security Group Changes
┌───┬──────────────────────────┬─────┬────────────┬────────────────────┐
│   │ Group                    │ Dir │ Protocol   │ Peer               │
├───┼──────────────────────────┼─────┼────────────┼────────────────────┤
│ + │ ${SecurityGroup.GroupId} │ In  │ TCP 22     │ xxx.xxx.xxx.xxx/32 │
│ + │ ${SecurityGroup.GroupId} │ Out │ Everything │ Everyone (IPv4)    │
└───┴──────────────────────────┴─────┴────────────┴────────────────────┘
(NOTE: There may be security-related changes not in this list. See http://bit.ly/cdk-2EhF7Np)

Do you wish to deploy these changes (y/n)? y
UseCdkEc2Stack: deploying...
useCdkEc2Stack: creating CloudFormation changeset...
 0/4 | 14:30:29 | CREATE_IN_PROGRESS | AWS::CDK::Metadata         | CDKMetadata
 0/4 | 14:30:30 | CREATE_IN_PROGRESS | AWS::EC2::SecurityGroup    | SecurityGroup (SecurityGroupDD263621)
 0/4 | 14:30:32 | CREATE_IN_PROGRESS | AWS::CDK::Metadata         | CDKMetadata Resource creation Initiated
 1/4 | 14:30:32 | CREATE_COMPLETE    | AWS::CDK::Metadata         | CDKMetadata
 1/4 | 14:30:35 | CREATE_IN_PROGRESS | AWS::EC2::SecurityGroup    | SecurityGroup (SecurityGroupDD263621) Resource creation Initiated
 2/4 | 14:30:37 | CREATE_COMPLETE    | AWS::EC2::SecurityGroup    | SecurityGroup (SecurityGroupDD263621)
 2/4 | 14:30:39 | CREATE_IN_PROGRESS | AWS::EC2::Instance         | myInstance
 2/4 | 14:30:40 | CREATE_IN_PROGRESS | AWS::EC2::Instance         | myInstance Resource creation Initiated
 3/4 | 14:30:56 | CREATE_COMPLETE    | AWS::EC2::Instance         | myInstance
 4/4 | 14:30:59 | CREATE_COMPLETE    | AWS::CloudFormation::Stack | UseCdkEc2Stack

UseCdkEc2Stack

Outputs:
UseCdkEc2Stack.PublicIp = xxx.xxx.xxx.xxx
UseCdkEc2Stack.Id = i-xxxxxxxxxxxxxxxxx

Stack ARN:
arn:aws:cloudformation:us-east-1:xxxxxxxxxxxx:stack/UseCdkEc2Stack/72304c90-b41d-11e9-b604-129cd46a326a
Once the stack is deployed, SSH in and check the execution logs.
> ssh -i cdk-test-ec2-key \
    [email protected]

$ cat /var/log/cfn-init.log
2019-08-01 05:31:11,740 [INFO] -----------------------Starting build-----------------------
2019-08-01 05:31:11,740 [INFO] Running configSets: default
2019-08-01 05:31:11,741 [INFO] Running configSet default
2019-08-01 05:31:11,742 [INFO] Running config config
2019-08-01 05:31:11,746 [INFO] Command test succeeded
2019-08-01 05:31:11,746 [INFO] ConfigSets completed
2019-08-01 05:31:11,746 [INFO] -----------------------Build complete-----------------------

$ cat /var/log/cfn-init-cmd.log
2019-08-01 05:31:11,742 P2090 [INFO] ************************************************************
2019-08-01 05:31:11,742 P2090 [INFO] ConfigSet default
2019-08-01 05:31:11,743 P2090 [INFO] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2019-08-01 05:31:11,743 P2090 [INFO] Config config
2019-08-01 05:31:11,743 P2090 [INFO] ============================================================
2019-08-01 05:31:11,743 P2090 [INFO] Command test
2019-08-01 05:31:11,746 P2090 [INFO] -----------------------Command Output-----------------------
2019-08-01 05:31:11,746 P2090 [INFO] UseCdkEc2Stack test
2019-08-01 05:31:11,746 P2090 [INFO] ------------------------------------------------------------
2019-08-01 05:31:11,746 P2090 [INFO] Completed successfully.

$ cat /var/log/cloud-init-output.log
(snip)
Updated:
  bind-libs.x86_64 32:9.8.2-0.68.rc1.60.amzn1
  bind-utils.x86_64 32:9.8.2-0.68.rc1.60.amzn1
  kernel-tools.x86_64 0:4.14.133-88.105.amzn1
  python27-jinja2.noarch 0:2.7.2-3.16.amzn1
  vim-common.x86_64 2:8.0.0503-1.46.amzn1
  vim-enhanced.x86_64 2:8.0.0503-1.46.amzn1
  vim-filesystem.x86_64 2:8.0.0503-1.46.amzn1
  vim-minimal.x86_64 2:8.0.0503-1.46.amzn1

Complete!
Cloud-init v. 0.7.6 running 'modules:final' at Thu, 01 Aug 2019 05:31:11 +0000. Up 18.18 seconds.
hoge!
Cloud-init v. 0.7.6 finished at Thu, 01 Aug 2019 05:31:11 +0000. Datasource DataSourceEc2. Up 18.77 seconds
Running the /opt/aws/bin/cfn-init command from the user data executed the command specified in the metadata under the AWS::CloudFormation::Init type. It worked!
Summary
There is probably a cleaner way to specify the metadata, but for now I am satisfied that I confirmed the AWS::CloudFormation::Init type can also be used with the AWS CDK.
References
Launching an EC2 instance with the AWS Cloud Development Kit (AWS CDK) – Qiita https://cloudpack.media/48912
Setting up an EC2 instance environment with the AWS::CloudFormation::Init type – Qiita https://cloudpack.media/48540
Add support for AWS::CloudFormation::Init · Issue #777 · aws/aws-cdk https://github.com/aws/aws-cdk/issues/777
ec2: cfn-init support in ASGs · Issue #1413 · aws/aws-cdk https://github.com/aws/aws-cdk/issues/1413
feat(aws-ec2): add support for CloudFormation::Init by rix0rrr · Pull Request #792 · aws/aws-cdk https://github.com/aws/aws-cdk/pull/792
The original article is here:
“Setting up an EC2 instance environment with the AWS::CloudFormation::Init type in the AWS CDK”
September 02, 2019 at 04:00PM
Text
Migrating a Neo4j graph database to Amazon Neptune with a fully automated utility
Amazon Neptune is a fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. You can benefit from the service's purpose-built, high-performance, fast, scalable, and reliable graph database engine when you migrate data from your existing self-managed graph databases, such as Neo4j. This post shows you how to migrate from Neo4j to Amazon Neptune by using an example AWS CDK app that utilizes the neo4j-to-neptune command-line utility from the Neptune tools GitHub repo. The example app completes the following tasks:

Sets up and configures Neo4j and Amazon Neptune databases
Exports the movies graph from the example project on the Neo4j website as a CSV file
Converts the exported data to the Amazon Neptune bulk load CSV format by using the neo4j-to-neptune utility
Imports the converted data into Amazon Neptune

Architecture

The following architecture shows the building blocks that you need to build a loosely coupled app for the migration. The app automates the creation of the following resources:

An Amazon EC2 instance to download and install a Neo4j graph database, and the Apache TinkerPop Gremlin console for querying Amazon Neptune. This instance acts both as the migration source and as a client to run AWS CLI commands, such as copying exported files to an Amazon S3 bucket and loading data into Amazon Neptune.
An Amazon S3 bucket from which to load data into Neptune.
An Amazon Neptune DB cluster with one graph database instance.

Running the migration

Git clone the AWS CDK app from the GitHub repo. After ensuring you meet the prerequisites, follow the instructions there to run the migration. The app automates the migration of the Neo4j movies graph database to Amazon Neptune. After you run the app successfully, you see an output similar to the following screenshot in your terminal. Record the values, such as NeptuneEndpoint, to use in later steps.

The app provisions the Neo4j and Amazon Neptune databases and performs the migration. The following sections explain how the app provisions and runs the migration, and show you how to use the Gremlin console on the EC2 instance to query Neptune to validate the migration.

Migration overview

The AWS CDK app automates three essential phases of the migration:

Provision AWS infrastructure
Prepare for the migration
Perform the migration

Provisioning AWS infrastructure

When you run the app, it creates the following resources in your AWS account.

Amazon VPC and subnets
The app creates an Amazon VPC denoted by VPCID. You must create Neptune clusters in a VPC, and you can only access their endpoints within that VPC. To access your Neptune database, the app uses an EC2 instance that runs in the same VPC to load data and run queries. You create two /24 public subnets, one in each of two Availability Zones.

EC2 instance
A single EC2 instance denoted by EC2Instance performs the following functions:
Downloads and installs a Neo4j community edition graph database (version 4.0.0)
Runs AWS CLI commands to copy local files to Amazon S3
Runs AWS CLI commands to load data into Neptune
Runs Apache TinkerPop Gremlin commands to query and verify the data migration to Neptune

S3 bucket
The app creates a single S3 bucket, denoted by S3BucketName, to hold data exported from Neo4j. The app triggers a bulk load of this data from the bucket into Neptune.

Amazon S3 gateway VPC endpoint
The app creates a Neptune database cluster in a public subnet inside the VPC.
To make sure that Neptune can access and download data from Amazon S3, the app also creates a gateway type VPC endpoint for Amazon S3. For more information, see Gateway VPC Endpoints.

A single-node Neptune cluster
This is the destination in this migration—the target Neptune graph database denoted by NeptuneEndpoint. The app loads the exported data into this database. You can use the Gremlin console on the EC2 instance to query the data.

Required AWS IAM roles and policies
To allow access to AWS resources, the app creates all the required roles and policies necessary to perform the migration.

Preparing for the migration

After provisioning the infrastructure, the app automates the steps shown in the diagram below:

Create a movie graph in Neo4j
The app uses bootstrapping shell scripts to install and configure Neo4j community edition 4.0 on the EC2 instance. The scripts then load the Neo4j movies graph into this database.

Export the graph data to a CSV file
The app uses the following Neo4j Cypher script to export all nodes and relationships into a comma-delimited file:

CALL apoc.export.csv.all('neo4j-export.csv', {d:','});

The following code shows the location of the saved exported file:

/var/lib/neo4j/import/neo4j-export.csv

As part of automating the Neo4j configuration, the app installs the APOC library, which contains procedures for exporting data from Neo4j, and edits the neo4j.conf file with the following code so that it can write to a file on disk:

apoc.export.file.enabled=true

The app also whitelists Neo4j's APOC APIs in the neo4j.conf file to use them. See the following code:

dbms.security.procedures.unrestricted=apoc.*

Performing the migration

In this phase, the app migrates the data to Neptune. This includes the following automated steps.

Transform Neo4j exported data to Gremlin load data format
The app uses the neo4j-to-neptune command-line utility to transform the exported data to a Gremlin load data format with a single command. See the following code:

$ java -jar neo4j-to-neptune.jar convert-csv -i /var/lib/neo4j/import/neo4j-export.csv -d output --infer-types

The neo4j-to-neptune utility creates an output folder and copies the results to separate files: one each for vertices and edges. The utility has two required parameters: the path to the Neo4j export file (/var/lib/neo4j/import/neo4j-export.csv) and the name of a directory (output) where the converted CSV files are written. There are also optional parameters that allow you to specify node and relationship multi-valued property policies and turn on data type inferencing. For example, the --infer-types flag tells the utility to infer the narrowest supported type for each column in the output CSV as an alternative to specifying the data type for each property. For more information, see Gremlin Load Data Format.

The neo4j-to-neptune utility addresses differences in the Neo4j and Neptune property graph data models. Neptune's property graph is very similar to Neo4j's, including support for multiple labels on vertices, and multi-valued properties (sets but not lists). Neo4j allows homogeneous lists of simple types that contain duplicate values to store as properties on both nodes and edges. Neptune, on the other hand, provides for set and single cardinality for vertex properties, and single cardinality for edge properties.
The neo4j-to-neptune utility provides policies to migrate Neo4j node list properties that contain duplicate values into Neptune vertex properties, and Neo4j relationship list properties into Neptune edge properties. For more information, see the GitHub repo.

Copy the output data to Amazon S3
The export creates two files: edges.csv and vertices.csv. These files are located in the output folder. The app copies these files to the S3 bucket created specifically for this purpose. See the following code:

$ aws s3 cp /output/ s3:///neo4j-data --recursive

Load data into Neptune
The final step of the automated migration uses the Neptune bulk load AWS CLI command to load edges and vertices into Neptune. See the following code:

curl -X POST -H 'Content-Type: application/json' -d '
{
  "source": "s3:///neo4j-data",
  "format": "csv",
  "iamRoleArn": "arn:aws:iam:::role/",
  "region": "",
  "failOnError": "FALSE"
}'

For more information, see Loading Data into Amazon Neptune.

Verifying the migration

After the automated steps are complete, you are ready to verify that the migration was successful. Amazon Neptune is compatible with Apache TinkerPop3 and Gremlin 3.4.5. This means that you can connect to a Neptune DB instance and use the Gremlin traversal language to query the graph. To verify the migration, complete the following steps:

Connect to the EC2 instance after it passes both status checks. For more information, see Types of Status Checks.
Use the value of NeptuneEndpoint to execute the following command:
$ docker run -it -e NEPTUNE_HOST= sanjeets/neptune-gremlinc-345:latest
At the prompt, execute the following command to send all your queries to Amazon Neptune:
:remote console
Execute the following command to see the number of vertices migrated:
g.V().count()
The following screenshot shows the output of the command g.V().count(). You can now, for example, run a simple query that gives you all the movies in which Tom Cruise acted. The following screenshot shows the intended output.

Cleaning up

After you run the migration, clean up all the resources the app created with the following code:

npm run destroy

Conclusion

Neptune is a fully managed graph database service that makes it easy to focus on building great applications for your customers instead of worrying about database management tasks like hardware provisioning, software patching, setup, configuration, or backups. This post demonstrated how to migrate Neo4j data to Neptune in a few simple steps.

About the Author
Sanjeet Sahay is a Sr. Partner Solutions Architect with Amazon Web Services.

https://probdm.com/site/MTg5MzY
Text
Introducing Gen 2 AWS Outpost Racks with Improved Speed

Outpost Racks
Amazon's latest edge computing innovation, second-generation Outpost racks, is now available. This new version supports the latest x86-powered Amazon Elastic Compute Cloud (Amazon EC2) instances, features faster-networking instances for ultra-low-latency and high-throughput applications, and offers simpler network scalability and deployment. These enhancements boost on-premises workloads such as telecom 5G Core and financial services core trading platforms.
For on-premises workloads, the second-generation Outpost racks process data locally and provide low latency for multiplayer online gaming servers, consumer transaction data, medical records, industrial and manufacturing control systems, telecom BSS, edge inference for diverse applications, and machine learning (ML) models. Customers can now choose from the latest processor generation and Outposts rack configurations with faster processing, more memory, and more network bandwidth.
The latest EC2 instances
The new racks offer compute-optimized C7i, general-purpose M7i, and memory-optimized R7i x86 instances. Compared with the C5, M5, and R5 instances on older Outpost racks, they deliver up to 40% better performance with double the vCPUs, memory, and network bandwidth. Larger databases, real-time analytics, memory-intensive applications, other on-premises workloads, and CPU-based edge inference with complex machine learning models benefit tremendously from the 4th Gen Intel Xeon Scalable CPUs. Newer EC2 instances, including GPU-enabled ones, will be supported.
Easy network scalability and configuration
Amazon has overhauled networking for its latest Outposts generation, making it easier and more scalable. This update centres on its new Outposts network rack, which centralises compute and storage traffic.
The new design has three key benefits. First, you can now grow compute capacity separately from networking infrastructure as workloads rise, increasing flexibility and lowering costs. Second, network resiliency is built in to keep your systems running smoothly; network racks handle device failures automatically. Third, connecting to on-premises networks and AWS Regions is simple: you can configure IP addresses, VLANs, and BGP using a revamped console interface or simple APIs.
Amazon EC2 instances with faster networking
Enhanced Amazon EC2 instances with faster networking are being launched on Outpost racks. These instances are designed for mission-critical on-premises workloads with demanding throughput, compute, and latency requirements. For best performance, a supplemental physical network with network accelerator cards attached to top-of-rack (TOR) switches is added alongside the Outpost logical network.
Bmn-sf2e instances, designed for ultra-low latency and predictable performance, are the first. The new instances use Intel's latest Sapphire Rapids processors (4th Gen Xeon Scalable) and 8GB of RAM per CPU core to sustain 3.9 GHz across all cores. Bmn-sf2e instances feature AMD Solarflare X2522 network cards that link to top-of-rack switches.
These instances provide deterministic networking for financial services customers, notably capital markets companies, using equal cable lengths, native Layer 2 (L2) multicast, and precision time protocol. Customers can connect them directly to their trading infrastructure to meet fair-trading and equitable-access regulations.
The second instance type, Bmn-cx2, targets low latency and high throughput. Its NVIDIA ConnectX-7 400G NICs are physically coupled to fast top-of-rack switches, giving 800 Gbps of bare-metal network bandwidth at near line rate. This instance supports hardware PTP and native Layer 2 (L2) multicast, making it ideal for high-throughput workloads including risk analytics, real-time market data dissemination, and telecom 5G core network applications.
Overall, the next Outpost racks generation improves performance, scalability, and resilience for on-premises applications, particularly mission-critical workloads with rigorous throughput and latency constraints. You can select and order them from the AWS Management Console. The new instances preserve deployment consistency with the Regions by supporting the same APIs, AWS Management Console, automation, governance policies, and security controls on premises and in the cloud, improving IT and developer productivity.
Things to know
Second-generation Outpost racks can be parented to six AWS Regions: Asia Pacific (Singapore), US West (Oregon), US East (N. Virginia), US East (Ohio), Europe (London), and Europe (Paris). Support for more countries, territories, and AWS Regions is coming. At launch, second-generation Outpost racks support several of the AWS services available on first-generation racks; support for more AWS services and EC2 instance types is coming.
#AmazonElasticComputeCloud#machinelearning#C7iinstances#AmazonEC2#EC2instance#AWSRegions#News#Technews#Technology#Technologynews#govindhtech
Text
Amazon EC2 Expert Course (with Auto Scaling and Load Balancers)
Course overview
Become an AWS EC2 expert. Learn about Auto Scaling, AWS load balancing, EBS volumes, networking and security groups, and EC2 instance types
What you will learn from this 5-hour course
You will be able to fully deploy applications on EC2 at the best possible cost
You will be able to choose the perfect EC2 instance for your application
You will be able to fully deploy applications on EC2 using a Load Balancer
You will be able to fully deploy Auto Scaling applications on EC2
You will understand all the moving parts of EC2 and become an EC2 expert
You will learn about some of the latest EC2 features!
Requirements
! This course only covers EC2 running Linux, not Windows!
Basic knowledge of AWS is helpful but not required
Basic knowledge of computers/applications is required
Basic knowledge of Linux is required (we will run some Linux commands)
Any Mac /…
View On WordPress
Photo

(via Convert a PEM Key to a PPK Key on a Linux and Windows)
Text
Amazon CloudWatch: The Solution For Real-Time Monitoring

What is Amazon CloudWatch?
Amazon CloudWatch allows you to monitor your Amazon Web Services (AWS) apps and resources in real time. CloudWatch may be used to collect and track metrics, which are characteristics you can measure for your resources and apps.
Every AWS service you use has metrics automatically displayed on the CloudWatch home page. Additionally, you may design your own dashboards to show analytics about your own apps as well as bespoke sets of metrics of your choosing.
When a threshold is crossed, you may set up alerts that monitor metrics and send messages or that automatically modify the resources you are keeping an eye on. For instance, you may keep an eye on your Amazon EC2 instances' CPU utilization and disk reads and writes, and then use that information to decide whether to start more instances to accommodate the increasing strain. To save money, you may also utilize this data to halt instances that aren't being used.
CloudWatch gives you system-wide insight into operational health, application performance, and resource usage.
How Amazon CloudWatch works
In essence, Amazon CloudWatch is a storehouse for measurements. Metrics are entered into the repository by an AWS service like Amazon EC2, and statistics are retrieved using those metrics. Statistics on your own custom metrics may also be retrieved if you add them to the repository.
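As a rough illustration of that flow, the following sketch (using the boto3 SDK; the namespace, metric name, and dimension values are hypothetical) publishes a single custom metric data point into the repository:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one data point for a hypothetical custom application metric.
cloudwatch.put_metric_data(
    Namespace="MyApp",  # custom namespace, not an AWS/* service namespace
    MetricData=[
        {
            "MetricName": "PageLoadTime",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
            "Value": 0.42,
            "Unit": "Seconds",
        }
    ],
)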
Metrics may be used to compute statistics, and the CloudWatch interface can then display the data graphically.
When specific conditions are fulfilled, you may set up alert actions to stop, start, or terminate an Amazon EC2 instance. Additionally, you may set up alerts to start Amazon Simple Notification Service (Amazon SNS) and Amazon EC2 Auto Scaling on your behalf. See Alarms for further details on setting up CloudWatch alarms.
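For example, this sketch (boto3; the instance ID, thresholds, and Region are placeholders) creates an alarm that stops an underutilized EC2 instance after an hour of consistently low CPU:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

instance_id = "i-0123456789abcdef0"  # placeholder instance ID

cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    Statistic="Average",
    Period=300,                 # 5-minute data points
    EvaluationPeriods=12,       # one hour of consecutive breaches
    Threshold=2.0,              # percent CPU
    ComparisonOperator="LessThanOrEqualToThreshold",
    # Built-in EC2 action that stops the instance when the alarm fires.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],
)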
Resources for AWS cloud computing are housed in highly available data center facilities. To offer extra scalability and dependability, each data center facility is situated in a particular geographic area, referred to as a Region. Each Region is designed to be totally isolated from the others to achieve the highest level of failure isolation and stability. Although metrics are kept independently in Regions, you may combine information from many Regions using CloudWatch's cross-Region feature.
Why Use CloudWatch?
A service called Amazon CloudWatch keeps an eye on apps, reacts to changes in performance, maximizes resource use, and offers information on the state of operations. CloudWatch provides a consistent picture of operational health, enables users to create alarms, and automatically responds to changes by gathering data from various AWS services.
Advantages Of Amazon CloudWatch
Visualize and analyze your data with end-to-end observability
Utilize robust visualization tools to gather, retrieve, and examine your resource and application data.
Operate efficiently with automation
Utilize automated actions and alerts that are programmed to trigger at preset thresholds to enhance operational effectiveness.
Quickly obtain an integrated view of your AWS or other resources
Connect with over 70 AWS services with ease for streamlined scalability and monitoring.
Proactively monitor and get actional insights to enhance end user experiences
Use relevant information from your CloudWatch dashboards’ logs and analytics to troubleshoot operational issues.
Amazon CloudWatch Use cases
Monitor application performance
To identify and fix the underlying cause of performance problems with your AWS resources, visualize performance statistics, generate alarms, and correlate data.
Perform root cause analysis
To expedite debugging and lower the total mean time to resolution, examine metrics, logs, log analytics, and user requests.
Optimize resources proactively
By establishing actions that take place when thresholds are reached according to your requirements or machine learning models, you may automate resource planning and save expenses.
Test website impacts
By looking at images, logs, and web requests at any moment, you can determine precisely when and how long your website is affected.
Read more on Govindhtech.com
#AmazonCloudWatch#CloudWatch#AWSservice#AmazonEC2#EC2instance#News#Technews#Technology#Technologynews#Technologytrends#Govindhtech
Text
AWS Elastic Fabric Adapter Basics, Limitations And Use Cases

Run ML and HPC applications at scale with the Elastic Fabric Adapter.
What is Elastic Fabric Adapter?
Customers can execute applications that need high volumes of inter-node communications at scale on AWS by using the Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances. The performance of inter-instance communications is improved by its specially designed operating system (OS) bypass hardware interface, which is essential for scaling these applications. EFA enables machine learning (ML) applications using the NVIDIA Collective Communications Library (NCCL) and high performance computing (HPC) applications utilizing the Message Passing Interface (MPI) to grow to thousands of CPUs or GPUs. This gives you the on-demand elasticity and flexibility of the AWS cloud along with the application performance of on-premises HPC clusters.
Any compatible EC2 instance can have EFA enabled as an optional EC2 networking feature at no extra cost. Additionally, you may move your HPC apps to AWS with minimal changes because it integrates with the most widely used interfaces, APIs, and libraries for inter-node interactions.
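As a rough illustration of enabling it at launch (boto3; the AMI, key pair, subnet, security group, and placement group are placeholders, and the instance type must be one that supports EFA), the EFA is simply requested as the interface type:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an EFA-capable instance with an EFA network interface attached.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder AMI ID
    InstanceType="c5n.18xlarge",              # an EFA-supported instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                    # placeholder key pair
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "InterfaceType": "efa",           # request an EFA instead of a plain ENA
            "SubnetId": "subnet-0123456789abcdef0",
            "Groups": ["sg-0123456789abcdef0"],
        }
    ],
    # Optional: a pre-created cluster placement group keeps instances close
    # together for low-latency MPI/NCCL traffic.
    Placement={"GroupName": "my-efa-cluster-pg"},
)
print(response["Instances"][0]["InstanceId"])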
EFA supports Nvidia Collective Communications Library (NCCL) for AI and ML applications, Open MPI 4 and later, and Intel MPI 2019 Update 5 and later for HPC applications. It also interfaces with Libfabric 1.7.0 and later.
AWS Elastic Fabric Adapter
EFA fundamentals
There are two methods for connecting an EFA device to an EC2 instance:
Creating both an EFA device and an ENA device by using a conventional EFA interface, also known as EFA plus ENA.
By using an EFA-only interface, only the Elastic Fabric Adapter device is created.
Features like built-in OS-bypass and congestion control via the Scalable Reliable Datagram (SRD) protocol are offered by the Elastic Fabric Adapter device. The low-latency, dependable transport functionality made possible by the EFA device features enables the EFA interface to improve application performance for HPC and ML workloads running on Amazon EC2. On the other hand, the ENA device provides conventional IP networking.
Traditionally, HPC applications use the Message Passing Interface (MPI) to interact with the system's network transport, while AI/ML applications employ NCCL. In the AWS cloud, applications have had to interact with MPI or NCCL, which in turn uses the operating system's TCP/IP stack and the ENA device driver to enable network connections between instances.
AI/ML applications use NCCL and HPC programs use MPI to connect directly with the Libfabric API using a typical EFA (EFA with ENA) or EFA-only interface. To send packets to the network, the Libfabric API talks directly with the EFA device, avoiding the operating system kernel. This lowers overhead and improves the performance of HPC and AI/ML applications.
Amazon Elastic Fabric Adapter
ENA, EFA, and EFA-only network interface differences
There are two kinds of network interfaces offered by Amazon EC2:
All of the conventional IP networking and routing functionalities needed to provide IP networking for a VPC are offered via ENA interfaces.
Both the EFA device for low-latency, high-throughput communication and the ENA device for IP networking are provided by EFA (EFA with ENA) interfaces.
EFA-only interfaces do not support the ENA device for conventional IP networking; they only support the EFA device’s features.
Interfaces and libraries that are supported
The following libraries and interfaces are supported by Elastic Fabric Adapters:
Open MPI 4 and later.
Note: For Graviton-based instances, Open MPI 4.0 or later is recommended.
Intel MPI 2019 Update 5 and later.
NVIDIA Collective Communications Library (NCCL) 2.4.2 or later.
AWS Neuron SDK version 2.3 or later.
Types of instances that are supported
To see the available instance types that support EFAs in a specific Region
The available instance types vary by Region. To see the available instance types that support Elastic Fabric Adapters in a Region, use the describe-instance-types command with the --region parameter. Include the --filters parameter to scope the results to the instance types that support EFA and the --query parameter to scope the output to the value of InstanceType.
aws ec2 describe-instance-types --region us-east-1 --filters Name=network-info.efa-supported,Values=true --query "InstanceTypes[*].[InstanceType]" --output text | sort
Operating systems that are supported
Depending on the CPU type, different operating systems are supported. The supported operating systems are displayed in the following table.
Note: Ubuntu 20.04 provides peer direct support when used with dl1.24xlarge instances.
Limitations of Elastic Fabric Adapter
The following are the limits of EFAs:
Note: Traffic sent through an EFA (EFA plus ENA) or EFA-only interface’s EFA device is referred to as EFA traffic.
There is currently no support for EFA traffic between P4d/P4de/DL1 instances and other instance types.
One EFA per network card can be set up for instance types that allow multiple network cards. Only one Elastic Fabric Adapter per instance is supported by the other supported instance types.
Dedicated instances and dedicated hosts for c7g.16xlarge, m7g.16xlarge, and r7g.16xlarge are not supported when an EFA is attached.
Availability Zones and VPCs cannot be traversed by EFA traffic. Normal IP traffic from an EFA interface’s ENA device is exempt from this.
EFA traffic is not routable. Routable IP traffic from an EFA interface's ENA device is still available.
AWS Outposts does not support Elastic Fabric Adapter.
Only applications based on the AWS Cloud Digital Interface Software Development Kit (AWS CDI SDK) can use the EFA device of an EFA (EFA with ENA) interface on Windows instances. Without the additional EFA device capabilities, an EFA (EFA with ENA) interface works as an ENA interface when it is connected to a Windows instance for programs that are not based on the CDI SDK. Windows and Linux applications built using AWS CDI do not support the EFA-only interface.
Advantages
Quicker outcomes
For inter-instance communications, EFA’s special OS bypass networking technology offers a low-latency, low-jitter connection. This makes it possible for your distributed machine learning or tightly coupled HPC systems to grow to thousands of cores, which speeds up their operation.
Adaptable setup
You have the freedom to select the best computing configuration for your workload by turning on Elastic Fabric Adapter support on an expanding list of EC2 instances. Just enable EFA support on your new compute machines and adjust your cluster configurations as your needs evolve. There is no need for advance planning or bookings.
Smooth migration
Elastic Fabric Adapter communicates via the libfabric interface and libfabric APIs. This interface is supported by nearly all HPC programming models, so you may move your current HPC apps to the cloud with minimal changes.
How it operates
Use cases
Computational Fluid Dynamics
Engineers can now model ever-more complicated flow phenomena thanks to developments in Computational Fluid Dynamics (CFD) methods, while HPC speeds up turnaround times. Design engineers may now experiment with more adjustable parameters by scaling out their simulation jobs using Elastic Fabric Adapter, which produces faster and more accurate results.
Modeling the weather
To get accurate results, complex weather models need fast interconnects, large memory bandwidth, and reliable parallel file systems. Results are more accurate when the model’s grid spacing is closer, but it also uses more processing power. With the help of EFA’s quick interconnect, weather modeling apps may benefit from the AWS cloud’s nearly infinite scalability and produce more precise forecasts faster.
Machine Learning
Distributed computing on GPUs can greatly speed up the training of deep learning models. NCCL has already been implemented into top deep learning frameworks like Caffe, Caffe2, Chainer, MxNet, TensorFlow, and PyTorch to utilize its multi-GPU collectives for communications between nodes. Because EFA is tailored for NCCL on AWS, these training models have higher throughput and scalability, which produces quicker outcomes.
Elastic Fabric Adapter pricing
EFA is a free optional networking capability offered by Amazon EC2 that you can activate on any supported instance.
Read more on Govindhtech.com
#AWSElastic#AWSElasticFabric#AmazonEC2#ElasticFabricAdapter#EC2instance#EFAdevice#News#Technews#Technology#Technologynews#Technologytrends#Govindhtech
Text
Amazon DCV 2024.0 Supports Ubuntu 24.04 LTS With Security

NICE DCV has a new name. Along with improvements and bug fixes, NICE DCV is now known as Amazon DCV as of the 2024.0 release.
The DCV protocol that powers Amazon Web Services (AWS) managed services like Amazon AppStream 2.0 and Amazon WorkSpaces is now regularly referred to by its new moniker.
What’s new with version 2024.0?
A number of improvements and updates are included in Amazon DCV 2024.0 for better usability, security, and performance. The 2024.0 release now supports the latest Ubuntu 24.04 LTS, bringing extended long-term support to ease system maintenance along with the most recent security patches. Wayland support is incorporated into the DCV client on Ubuntu 24.04, which improves application isolation and graphical rendering efficiency. Furthermore, DCV 2024.0 now activates the QUIC UDP protocol by default, providing clients with optimal streaming performance. Additionally, the update adds the option to blank the Linux host screen when a remote user connects, blocking local access and interaction with the remote session.
What is Amazon DCV?
Customers may securely provide remote desktops and application streaming from any cloud or data center to any device, over a variety of network conditions, with Amazon DCV, a high-performance remote display protocol. Customers can run graphic-intensive programs remotely on EC2 instances and stream their user interface to less complex client PCs, doing away with the requirement for pricey dedicated workstations, thanks to Amazon DCV and Amazon EC2. Customers use Amazon DCV for their remote visualization needs across a wide spectrum of HPC workloads. Moreover, well-known services like Amazon Appstream 2.0, AWS Nimble Studio, and AWS RoboMaker use the Amazon DCV streaming protocol.
Advantages
Elevated Efficiency
You don’t have to pick between responsiveness and visual quality when using Amazon DCV. With no loss of image accuracy, it can respond to your apps almost instantly thanks to the bandwidth-adaptive streaming protocol.
Reduced Costs
Customers may run graphics-intensive apps remotely and avoid spending a lot of money on dedicated workstations or moving big volumes of data from the cloud to client PCs thanks to a very responsive streaming experience. It also allows several sessions to share a single GPU on Linux servers, which further reduces server infrastructure expenses for clients.
Adaptable Implementations
Service providers have access to a reliable and adaptable protocol for streaming apps that supports both on-premises and cloud usage thanks to browser-based access and cross-OS interoperability.
Comprehensive Security
To protect customer data privacy, it sends pixels rather than geometry. To further guarantee the security of client data, it uses TLS protocol to secure end-user inputs as well as pixels.
Features
In addition to native clients for Windows, Linux, and MacOS and an HTML5 client for web browser access, it supports remote environments running both Windows and Linux. Multiple displays, 4K resolution, USB devices, multi-channel audio, smart cards, stylus/touch capabilities, and file redirection are all supported by native clients.
With DCV Session Manager, the lifecycle of DCV sessions can be easily created and managed programmatically across a fleet of servers. Developers can create personalized Amazon DCV web browser client applications with the help of the Amazon DCV web client SDK.
How to Install DCV on Amazon EC2?
Deploy:
Sign up for an AWS account and activate it.
Open the AWS Management Console and log in.
Either download and install the relevant Amazon DCV server on your EC2 instance, or choose the proper Amazon DCV AMI from the Amazon Web Services Marketplace, then create an AMI using your application stack.
After confirming that traffic on port 8443 is permitted by your security group’s inbound rules, deploy EC2 instances with the Amazon DCV server installed.
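For that security-group step, a minimal sketch (boto3; the security group ID and CIDR range are placeholders) of opening port 8443 looks like this:

import boto3

ec2 = boto3.client("ec2")

# Allow inbound DCV traffic (TCP 8443) from a trusted CIDR range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 8443,
            "ToPort": 8443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "DCV clients"}],
        }
    ],
)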
Connect:
On your device, download and install the relevant Amazon DCV native client.
Use the web client or the native Amazon DCV client to connect to your remote computer at https://:8443.
Stream:
Use Amazon DCV to stream your graphics apps across several devices.
Use cases
Visualization of 3D Graphics
HPC workloads are becoming more complicated and consuming enormous volumes of data in a variety of industrial verticals, including Oil & Gas, Life Sciences, and Design & Engineering. The streaming protocol offered by Amazon DCV makes it unnecessary to send output files to client devices and offers a seamless, bandwidth-efficient remote streaming experience for HPC 3D graphics.
Application Access via a Browser
The Web Client for Amazon DCV is compatible with all HTML5 browsers and offers a mobile device-portable streaming experience. By removing the need to manage native clients without sacrificing streaming speed, the Web Client significantly lessens the operational pressure on IT departments. With the Amazon DCV Web Client SDK, you can create your own DCV Web Client.
Personalized Remote Apps
The simplicity with which it offers streaming protocol integration might be advantageous for custom remote applications and managed services. With native clients that support up to 4 monitors at 4K resolution each, Amazon DCV uses end-to-end AES-256 encryption to safeguard both pixels and end-user inputs.
Amazon DCV Pricing
In the AWS Cloud:
Using Amazon DCV on AWS does not incur any additional fees. Clients only have to pay for the EC2 resources they really utilize.
On-premises and third-party clouds
Please get in touch with DCV distributors or resellers in your area here for more information about licensing and pricing for Amazon DCV.
Read more on Govindhtech.com
#AmazonDCV#Ubuntu24.04LTS#Ubuntu#DCV#AmazonWebServices#AmazonAppStream#EC2instances#AmazonEC2#News#TechNews#TechnologyNews#Technologytrends#technology#govindhtech