#amazonlinux
Quote
What will happen to Amazon Linux?
[B! linux] Red Hat criticizes clone OS vendors: "Merely rebuilding code without adding any value is a threat to open source"
Link

For years, one of our favorite Linux management tools has been Webmin. If you'd like to use it with AWS EC2 on Amazon Linux 2, here's a quick tutorial:
Coffield Web - Install Webmin on AWS Amazon Linux 2
Today we're going to install one of our favorite Linux management tools. Webmin has been around for a very long time and is great if you're not a command-line guru. Or perhaps you just prefer a nice GUI to look at. Webmin has a great documentation wiki here. So let's...
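As a rough sketch of what such an install typically involves on Amazon Linux 2 (the repo URL and signing key below are Webmin's standard upstream locations, not taken from this post — see the linked tutorial for the exact steps):
# Add the Webmin yum repository, then install
sudo tee /etc/yum.repos.d/webmin.repo <<'EOF'
[Webmin]
name=Webmin Distribution Neutral
baseurl=https://download.webmin.com/download/yum
enabled=1
gpgcheck=1
gpgkey=https://download.webmin.com/jcameron-key.asc
EOF
sudo yum install -y webmin
# Webmin then serves its GUI on https://<instance-ip>:10000 (open TCP 10000 in the security group)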
Text
Running the amazonlinux image in Docker and installing PHP 7.3 into the image
First, grab the image from the Docker Hub repository:
docker pull amazonlinux
Run it and get a bash shell inside it:
docker run -it --rm amazonlinux /bin/bash
You can run any other flavor as well, such as centos.
Then, once it is running, you can perform any package installation needed — e.g., httpd and php7.3 in my case.
yum install -y httpd
amazon-linux-extras install -y php7.3
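To sanity-check the result inside the container (a minimal sketch; exact version strings will vary):
httpd -v   # expect something like "Server version: Apache/2.4.x"
php -v     # expect PHP 7.3.x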
N…
Text
Creating a React development environment on Amazon Linux 2 with Docker
A site I wanted to use as a reference was Alpine-based, so I decided on a whim to build it on an AL2 base instead. These are my notes.
References: "Reactの開発環境をDockerで構築してみた" / "AWS EC2 AmazonLinux2 Node.jsをインストールしてnpmコマンドを使用できる様にする"
Preparation
・Prepare an environment where Docker and docker-compose work
・In the directory where you will run the Docker commands, create the following files: Dockerfile
FROM amazonlinux:2
RUN yum update -y \
 && curl -sL https://rpm.nodesource.com/setup_12.x | bash - \
 && yum -y install --enablerepo=nodesource nodejs
WORKDIR /usr/src/app
docker-compose.yml
version: '3'
services:
  node:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./:/usr/src/app
    command: sh -c "cd react-sample && npm start"
    ports:
      - "3000:3000"
・Move to the directory containing the files and build the image
$ docker-compose build
・Install React
$ docker-compose run --rm node sh -c "npm install -g create-react-app && create-react-app react-sample"
・Start the container
$ docker-compose up -d
If you can access localhost:3000 and see the demo page, you're done.
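To watch or stop the detached container afterwards, the standard docker-compose subcommands apply:
$ docker-compose logs -f node   # follow the React dev-server output
$ docker-compose down           # stop and remove the container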
Text
Amazon Linux vs. Ubuntu: Differences
In this article, we will cover the differences between Amazon Linux and Ubuntu. You can read more about Ubuntu vs. Debian here. Amazon Linux is a Linux image produced by Amazon. Its main purpose is for use in Amazon Web Services on the Amazon Elastic Compute Cloud. It has two main versions, Amazon Linux AMI and Amazon Linux 2, the latter of which is the newer version with a…
Text
#Get Ready With Me.
THIS JUST IN: YOU LOOK AMAZING. DON’T THINK TOO HARD. The world is burning, but here's what’s in my cart 💅 #GRWM During the Collapse 12 Looks That Survived the Fire Hauls More Explosive Than the Headlines Best Lip Glosses for Evacuation Day These Leggings Ended a Regime “This is Bisan from Gaza”
Wake up Wake up Wake up Scroll Scroll Scroll Scroll ScrollWake up ScrollWake up ScrollWake up ScrollMake your jaw look snatched Scroll - Scroll - Scroll Scroll Scroll Scroll Scroll - Scroll - Scroll Scroll Scroll Scroll Scroll - Scroll - Scroll Scroll Scroll Scroll Scroll#GRWM while ScrollWhile ScrollWhile ScrollWhile... Interact from the bottom up please See me Anyone See me Please see me Wake up Wake up This is Bisan from Gaza See me Anyone? Like comment share please Someone see me Scroll This can’t be the end Scroll
Text
Impress Websites: DevOps/Developer - Europe only
Headquarters: Västerås, Sweden
URL: http://impress.website
Impress is a website-as-a-service provider purpose-built for resellers. We host and fully manage websites for companies that sell website solutions, but do not want to technically manage their clients' websites.
Now, we are looking to expand our team with remote positions, starting with IT operations. Remote is not new to us as a company, we have always had offices in Serbia and Sweden. We also have several team members working completely or partly remote.
We offer a fast-paced and fun workplace with a friendly and open atmosphere. Our whole team meets in different European locations twice a year for a team-building/company meetup.
We are looking for someone who can work independently and has great communication skills. We expect our team members to be able to work in a self-driven mode while understanding and keeping track of their responsibilities.
Responsibilities:
Design, estimate, and code new features
Participate in software design discussions
Ability to work in a collaborative team environment
Coordination with team leads/managers and tester(s) during development
Skills:
Excellent verbal and written communication skills in English
Good knowledge of design and architectural patterns and development best practices
Familiar with Service-oriented architecture (SOA) / Microservices design patterns
Familiar with message brokers and event-based communication (RabbitMQ)
Designing Restful API
Extensive knowledge and experience of the .NET Core Web API framework
MySQL database schema design
Familiar with Linux
Huge plus if you know:
Understand and design system architecture
Solid networking knowledge: firewall, proxy, routing, ...
Solid Linux administration/operation knowledge: CentOS/AmazonLinux, Debian, Ubuntu...
Containers & Orchestration: Docker, Kubernetes
Amazon Web Services: RDS, S3, EFS, CloudFront, EKS, ECR, Route53...
Infrastructure as Code principles and tools: Ansible, Cloudformation, ...
Scripting: Bash, python...
What we offer:
Work where you’re most productive
Flexible working hours so you are free to plan the day
Whatever equipment you need to do great work
Gym or other sports/fitness contribution
Health and pension insurance
If you like working from a Coworking space, you can do it and we will pay (and yes it is allowed to just buy lattes so the cafe doesn’t kick you out)
Keep growing by attending a local paid conference a year
2 x annual team retreats
If you feel that you are the person we are looking for and that you are up for a challenge, we are eager to meet you. Please send your CV to [email protected] and make sure to write a personal note specifically for this application. We value written English and we would love to hear why you are a great fit for us.
To apply: [email protected]
Photo
Serving Laravel with Nginx and php-fpm https://ift.tt/2ZiNpeO
Hi, this is Wakamatsu.
Last time, I got Laravel running in Docker and slimmed down the image. https://cloudpack.media/48190
This time, instead of starting the server with php artisan, I'll go as far as serving Laravel with Nginx + php-fpm.
Configuration
Directory structure
docker/
├─ docker-compose.yml
├─ nginx/
|  ├─ Dockerfile
|  └─ default.conf
└─ laravel/
   └─ Dockerfile
docker-compose.yml
version: '2'
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  laravel:
    image: laravel
Dockerfile(nginx)
FROM nginx:1.17-alpine
# Copy the config file in from the local machine
COPY default.conf /etc/nginx/conf.d/default.conf
default.conf(nginx)
server {
    listen 80;
    server_name localhost;

    location / {
        # Document root
        root /var/www/laravel/public;
        fastcgi_split_path_info ^(.+\.(?:php|phar))(/.*)$;
        fastcgi_intercept_errors on;
        fastcgi_index index.php;
        include fastcgi_params;
        # Point FastCGI at the Laravel container
        fastcgi_pass laravel:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Dockerfile(laravel)
FROM amazonlinux:2 as vender

# Install PHP
RUN amazon-linux-extras install -y php7.3
RUN yum install -y php-pecl-zip php-mbstring php-dom

# Install Composer
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php -r "if (hash_file('sha384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer

# Set environment variables
ENV COMPOSER_ALLOW_SUPERUSER 1
ENV COMPOSER_HOME "/opt/composer"
ENV PATH "$PATH:/opt/composer/vendor/bin"

# Install Laravel
RUN composer global require "laravel/installer"

# Create the Laravel project
WORKDIR /var/www
RUN composer create-project laravel/laravel laravel

FROM php:7.3-fpm-alpine

# Copy only the needed content from the build container
COPY --from=vender --chown=www-data:www-data /var/www/ /var/www/
Operations
All commands are assumed to be run from the top-level docker/ directory.
Build the Nginx container
docker build nginx/. -t nginx --squash
Build the Laravel container
docker build laravel/. -t laravel --squash
Start with docker-compose
docker-compose up
Check the page in a browser
Access http://localhost (the compose file maps port 80) and the following sample page appears.
Explanation
Nginx container
Base image
FROM nginx:1.17-alpine
I used nginx:1.17-alpine from the official Nginx repository as the base image: 1.17 was the latest Nginx as of 2019/7/14, and I wanted the Alpine Linux variant to keep the image small. Opening port 80 and starting Nginx are already configured in the base image, so this Dockerfile doesn't include them.
default.conf
COPY default.conf /etc/nginx/conf.d/default.conf
The locally prepared default.conf is copied into the image. Most of the FastCGI settings are generic, so I'll explain only the distinctive ones.
root
root /var/www/laravel/public;
The document root points to the directory in the Laravel container where the application is located.
fastcgi_pass
fastcgi_pass laravel:9000;
Either a Unix socket or TCP can be specified, but a Unix socket can't cross container boundaries, so TCP is used. The address uses Docker's namespace (the compose service name).
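As a quick check that the service name resolves from inside the Nginx container (a sketch; assumes the busybox tools included in the stock Alpine image):
docker-compose exec nginx nslookup laravel   # should resolve to the laravel container's address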
Laravel container
Only the differences from the previous Dockerfile are explained here.
Base image
FROM php:7.3-fpm-alpine
The base of the runtime image changed from php:7.3-alpine to php:7.3-fpm-alpine. This way php-fpm is installed by default and only needs configuring. Opening port 9000 and starting php-fpm are already configured in the base image, so this Dockerfile doesn't include them.
Changing the owner of the Laravel content
COPY --from=vender --chown=www-data:www-data /var/www/ /var/www/
In php:7.3-fpm-alpine's default php-fpm configuration, the php-fpm workers run as the www-data user. Content placed with COPY ends up owned by root, which would cause permission errors as-is, so the --chown option is used to change the owner to www-data. --chown behaves like Bash's chown -R, so directories are processed recursively.
docker-compose
With two containers now in play, docker-compose was introduced to simplify operations. docker-compose offers many features — startup options, multi-container builds, dependency control — but only container start/stop and the port option are used here.
Why docker-compose isn't used to build the containers
Honestly, I wanted to use it. But as of 2019/7/14 it doesn't support BuildKit or the squash option, so the builds deliberately use the docker command.
Tips
Copying a file from a container image to the local machine
When writing a config file, it's common to copy the default config locally and modify it. SCP isn't available for containers, so use the docker cp command instead. For this default.conf, the copy looked like this:
docker run -d --name nginx nginx:1.17-alpine
docker cp $(docker ps --filter name=nginx -aq):/etc/nginx/conf.d/default.conf .
Checking a container image's history
A base image used in FROM may already configure things like open ports and daemon startup — Nginx and php-fpm in this case. To check, use the docker history command. As an example, let's look at php-fpm's history:
docker history --format '{{.CreatedBy}}' php:7.3-fpm-alpine
/bin/sh -c #(nop) CMD ["php-fpm"]
/bin/sh -c #(nop) EXPOSE 9000
/bin/sh -c #(nop) STOPSIGNAL SIGQUIT
/bin/sh -c set -eux; cd /usr/local/etc; if…
/bin/sh -c #(nop) WORKDIR /var/www/html
/bin/sh -c #(nop) ENTRYPOINT ["docker-php-e…
/bin/sh -c docker-php-ext-enable sodium
...
A long list of commands like this scrolls by, ordered newest first by timestamp. At the end of the base image's build, port 9000 is opened and php-fpm is executed, so this Dockerfile doesn't need to open the port or start the daemon.
Summary
In addition to Nginx + php-fpm, docker-compose also made an appearance. Nothing here is especially exotic, but I hope it conveyed how to write a Dockerfile and the container-specific configuration.
The original article is here
"Serving Laravel with Nginx and php-fpm"
July 29, 2019 at 04:00PM
Text
Git Install on Ubuntu/CentOS/Amazon Linux/Windows
This tutorial explains how to install Git on Ubuntu, CentOS, Amazon Linux, and Windows.
Git
Git is the most commonly used Distributed Version Control System these days.
To learn more about what a Distributed Version Control System is and what Git is, read Git Tutorial for Beginners.
Git Install on Ubuntu
Here is the step-by-step guide to installing Git on Ubuntu.
Step 1- Update your Ubuntu Linux…
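The truncated steps presumably reduce to the standard apt sequence (a sketch, not the tutorial's exact text):
sudo apt-get update
sudo apt-get install -y git
git --version   # confirm the install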
Text
How to run AWS CloudHSM workloads on Docker containers
AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. Your HSMs are part of a CloudHSM cluster. CloudHSM automatically manages synchronization, high availability, and failover within a cluster.
CloudHSM is part of the AWS Cryptography suite of services, which also includes AWS Key Management Service (KMS) and AWS Certificate Manager Private Certificate Authority (ACM PCA). KMS and ACM PCA are fully managed services that are easy to use and integrate. You’ll generally use AWS CloudHSM only if your workload needs a single-tenant HSM under your own control, or if you need cryptographic algorithms that aren’t available in the fully-managed alternatives.
CloudHSM offers several options for you to connect your application to your HSMs, including PKCS#11, Java Cryptography Extensions (JCE), or Microsoft CryptoNG (CNG). Regardless of which library you choose, you’ll use the CloudHSM client to connect to all HSMs in your cluster. The CloudHSM client runs as a daemon, locally on the same Amazon Elastic Compute Cloud (EC2) instance or server as your applications.
The deployment process is straightforward if you’re running your application directly on your compute resource. However, if you want to deploy applications using the HSMs in containers, you’ll need to make some adjustments to the installation and execution of your application and the CloudHSM components it depends on. Docker containers don’t typically include access to an init process like systemd or upstart. This means that you can’t start the CloudHSM client service from within the container using the general instructions provided by CloudHSM. You also can’t run the CloudHSM client service remotely and connect to it from the containers, as the client daemon listens to your application using a local Unix Domain Socket. You cannot connect to this socket remotely from outside the EC2 instance network namespace.
This blog post discusses the workaround that you’ll need in order to configure your container and start the client daemon so that you can utilize CloudHSM-based applications with containers. Specifically, in this post, I’ll show you how to run the CloudHSM client daemon from within a Docker container without needing to start the service. This enables you to use Docker to develop, deploy and run applications using the CloudHSM software libraries, and it also gives you the ability to manage and orchestrate workloads using tools and services like Amazon Elastic Container Service (Amazon ECS), Kubernetes, Amazon Elastic Container Service for Kubernetes (Amazon EKS), and Jenkins.
Solution overview
My solution shows you how to create a proof-of-concept sample Docker container that is configured to run the CloudHSM client daemon. When the daemon is up and running, it runs the AESGCMEncryptDecryptRunner Java class, available on the AWS CloudHSM Java JCE samples repo. This class uses CloudHSM to generate an AES key, then it uses the key to encrypt and decrypt randomly generated data.
Note: In my example, you must manually enter the crypto user (CU) credentials as environment variables when running the container. For any production workload, you’ll need to carefully consider how to provide, secure, and automate the handling and distribution of your HSM credentials. You should work with your security or compliance officer to ensure that you’re using an appropriate method of securing HSM login credentials for your application and security needs.
Figure 1: Architectural diagram
Figure 1: Architectural diagram
Prerequisites
To implement my solution, I recommend that you have basic knowledge of the following:
CloudHSM
Docker
Java
Here’s what you’ll need to follow along with my example:
An active CloudHSM cluster with at least one active HSM. You can follow the Getting Started Guide to create and initialize a CloudHSM cluster. (Note that for any production cluster, you should have at least two active HSMs spread across Availability Zones.)
An Amazon Linux 2 EC2 instance in the same Amazon Virtual Private Cloud in which you created your CloudHSM cluster. The EC2 instance must have the CloudHSM cluster security group attached—this security group is automatically created during the cluster initialization and is used to control access to the HSMs. You can learn about attaching security groups to allow EC2 instances to connect to your HSMs in our online documentation.
A CloudHSM crypto user (CU) account created on your HSM. You can create a CU by following these user guide steps.
Solution details
On your Amazon Linux EC2 instance, install Docker:
# sudo yum -y install docker
Start the docker service:
# sudo service docker start
Create a new directory and step into it. In my example, I use a directory named “cloudhsm_container.” You’ll use the new directory to configure the Docker image.
# mkdir cloudhsm_container
# cd cloudhsm_container
Copy the CloudHSM cluster’s CA certificate (customerCA.crt) to the directory you just created. You can find the CA certificate on any working CloudHSM client instance under the path /opt/cloudhsm/etc/customerCA.crt. This certificate is created during initialization of the CloudHSM Cluster and is needed to connect to the CloudHSM cluster.
In your new directory, create a new file with the name run_sample.sh that includes the contents below. The script starts the CloudHSM client daemon, waits until the daemon process is running and ready, and then runs the Java class that is used to generate an AES key to encrypt and decrypt your data.
#! /bin/bash
# start cloudhsm client
echo -n "* Starting CloudHSM client ... "
/opt/cloudhsm/bin/cloudhsm_client /opt/cloudhsm/etc/cloudhsm_client.cfg &> /tmp/cloudhsm_client_start.log &
# wait for startup
while true
do
if grep 'libevmulti_init: Ready !' /tmp/cloudhsm_client_start.log &> /dev/null
then
echo "[OK]"
break
fi
sleep 0.5
done
echo -e "\n* CloudHSM client started successfully ... \n"
# start application
echo -e "\n* Running application ... \n"
java -ea -Djava.library.path=/opt/cloudhsm/lib/ -jar target/assembly/aesgcm-runner.jar --method environment
echo -e "\n* Application completed successfully ... \n"
In the new directory, create another new file and name it Dockerfile (with no extension). This file will specify that the Docker image is built with the following components:
The AWS CloudHSM client package.
The AWS CloudHSM Java JCE package.
OpenJDK 1.8. This is needed to compile and run the Java classes and JAR files.
Maven, a build automation tool that is needed to assist with building the Java classes and JAR files.
The AWS CloudHSM Java JCE samples that will be downloaded and built.
Cut and paste the contents below into Dockerfile.
Note: Make sure to replace the HSM_IP line with the IP of an HSM in your CloudHSM cluster. You can get your HSM IPs from the CloudHSM console, or by running the describe-clusters AWS CLI command.
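For instance, a hedged one-liner for pulling the HSM IPs (the --query path reflects my understanding of the describe-clusters response shape; verify against your CLI version):
aws cloudhsmv2 describe-clusters --query 'Clusters[].Hsms[].EniIp' --output text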
# Use the amazon linux image
FROM amazonlinux:2
# Install CloudHSM client
RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-latest.el7.x86_64.rpm
# Install CloudHSM Java library
RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-jce-latest.el7.x86_64.rpm
# Install Java, Maven, wget, unzip and ncurses-compat-libs
RUN yum install -y java maven wget unzip ncurses-compat-libs
# Create a work dir
WORKDIR /app
# Download sample code
RUN wget https://github.com/aws-samples/aws-cloudhsm-jce-examples/archive/master.zip
# unzip sample code
RUN unzip master.zip
# Change to the create directory
WORKDIR aws-cloudhsm-jce-examples-master
# Build JAR files
RUN mvn validate && mvn clean package
# Set HSM IP as an environmental variable
ENV HSM_IP <insert the IP address of an active CloudHSM instance here>
# Configure cloudhms-client
COPY customerCA.crt /opt/cloudhsm/etc/
RUN /opt/cloudhsm/bin/configure -a $HSM_IP
# Copy the run_sample.sh script
COPY run_sample.sh .
# Run the script
CMD ["bash","run_sample.sh"]
Now you’re ready to build the Docker image. Use the following command, with the name jce_sample_client. This command will let you use the Dockerfile you created in step 6 to create the image.
# sudo docker build -t jce_sample_client .
To run a Docker container from the Docker image you just created, use the following command. Make sure to replace the user and password with your actual CU username and password. (If you need help setting up your CU credentials, see prerequisite 3. For more information on how to provide CU credentials to the AWS CloudHSM Java JCE Library, refer to the steps in the CloudHSM user guide.)
# sudo docker run --env HSM_PARTITION=PARTITION_1 \
--env HSM_USER=<user> \
--env HSM_PASSWORD=<password> \
jce_sample_client
If successful, the output should look like this:
* Starting cloudhsm-client ... [OK]
* cloudhsm-client started successfully ...
* Running application ...
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors
to the console.
70132FAC146BFA41697E164500000000
Successful decryption
SDK Version: 2.03
* Application completed successfully ...
Conclusion
My solution provides an example of how to run CloudHSM workloads on Docker containers. You can use it as a reference to implement your cryptographic application in a way that benefits from the high availability and load balancing built in to AWS CloudHSM without compromising on the flexibility that Docker provides for developing, deploying, and running applications. If you have comments about this post, submit them in the Comments section below. Source: https://aws.amazon.com/blogs/security/how-to-run-aws-cloudhsm-workloads-on-docker-containers/
Beginners & Advanced level Docker Training Course in Mumbai. Asterix Solution's 25 Hour Docker Training gives broad hands-on practicals. For details, Visit :
Text
Favorite tweets
I always get lost with docker command operations, so I wrote a memo: "Commonly used docker commands, amazonlinux-based" https://t.co/9ynf80Xfzl
— cakephper (@cakephper) June 13, 2018
Text
A junior colleague said he needed to install QtWebKit but couldn't get it installed cleanly on Amazon Linux, so he wanted to switch to Ubuntu. I wondered what he had read — it was "AmazonLinux 2015.09 に Qt5 WebKit をインストール", which downloads and installs the RPM packages one by one. That looked painful. So instead:
root@suzuya /root 17:21:48
# cat /etc/yum.repos.d/cent6.repo
[CentOS6riken]
name=Extra Packages for Enterprise Linux 6 - $basearch
baseurl=ftp://ftp.riken.jp/Linux/centos/6/os/x86_64/
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
and then:
root@suzuya /root 17:19:04
# yum install qt5-qtwebkit-devel
読み込んだプラグイン:priorities, upgrade-helper
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
CentOS6riken            | 3.7 kB 00:00
CentOS6riken/group_gz   | 226 kB 00:00
CentOS6riken/primary_db | 4.7 MB 00:00
amzn-main/2014.09       | 2.1 kB 00:00
amzn-updates/2014.09    | 2.3 kB 00:00
bintray--sbt-rpm        | 1.3 kB 00:00
epel/x86_64/metalink    | 3.8 kB 00:00
http://repos.fedorapeople.org/repos/sic/qt48/epel-2014.09/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
他のミラーを試します。
4013 packages excluded due to repository priority protections
依存性の解決をしています
--> トランザクションの確認を実行しています。
---> パッケージ qt5-qtwebkit-devel.x86_64 0:5.6.0-3.el6 を インストール
--> 依存性の処理をしています: qt5-qtwebkit(x86-64) = 5.6.0-3.el6 のパッケージ: qt5-qtwebkit-devel-5.6.0-3.el6.x86_64
--> 依存性の処理をしています: qt5-qtdeclarative-devel(x86-64) のパッケージ: qt5-qtwebkit-devel-5.6.0-3.el6.x86_64
--> 依存性の処理をしています: qt5-qtbase-devel(x86-64) のパッケージ: qt5-qtwebkit-devel-5.6.0-3.el6.x86_64
--> 依存性の処理をしています: pkgconfig(Qt5Widgets) のパッケージ: qt5-qtwebkit-devel-5.6.0-3.el6.x86_64
--> 依存性の処理をしています: pkgconfig(Qt5Network) のパッケージ: qt5-qtwebkit-devel-5.6.0-3.el6.x86_64
--> 依存性の処理をしています: pkgconfig(Qt5Gui) のパッケージ: qt5-qtwebkit-devel-5.6.0-3.el6.x86_64
--> 依存性の処理をしています: pkgconfig(Qt5Core) のパッケージ: qt5-qtwebkit-devel-5.6.0-3.el6.x86_64
--> 依存性の処理をしています: libQt5WebKitWidgets.so.5()(64bit) のパッケージ: qt5-qtwebkit-devel-5.6.0-3.el6.x86_64
--> 依存性の処理をしています: libQt5WebKit.so.5()(64bit) のパッケージ: qt5-qtwebkit-devel-5.6.0-3.el6.x86_64
--> トランザクションの確認を実行しています。
---> パッケージ qt5-qtbase-devel.x86_64 0:5.6.0-13.el6 を インストール
--> 依存性の処理をしています: qt5-qtbase(x86-64) = 5.6.0-13.el6 のパッケージ: qt5-qtbase-devel-5.6.0-13.el6.x86_64
--> 依存性の処理をしています: qt5-rpm-macros のパッケージ: qt5-qtbase-devel-5.6.0-13.el6.x86_64
--> 依存性の処理をしています: qt5-qtbase-gui(x86-64) のパッケージ: qt5-qtbase-devel-5.6.0-13.el6.x86_64
--> 依存性の処理をしています: pkgconfig(gl) のパッケージ: qt5-qtbase-devel-5.6.0-13.el6.x86_64
(snip)
検証中 : libXcomposite-0.4.3-4.6.amzn1.x86_64 32/32
インストール:
  qt5-qtwebkit-devel.x86_64 0:5.6.0-3.el6
依存性関連をインストールしました:
  atk.x86_64 0:1.30.0-1.el6
  compat-libtiff3.x86_64 0:3.9.4-10.13.amzn1
  gdk-pixbuf2.x86_64 0:2.24.1-6.el6_7
  glx-utils.x86_64 0:10.1.2-2.32.amzn1
  gtk2.x86_64 0:2.24.23-8.el6
  hicolor-icon-theme.noarch 0:0.11-1.1.el6
  libXcomposite.x86_64 0:0.4.3-4.6.amzn1
  libXcursor.x86_64 0:1.1.14-2.1.9.amzn1
  libXdamage-devel.x86_64 0:1.1.3-4.7.amzn1
  libXfixes-devel.x86_64 0:5.0.1-2.1.8.amzn1
  libXft.x86_64 0:2.3.1-2.7.amzn1
  libXinerama.x86_64 0:1.1.2-2.7.amzn1
  libXrandr.x86_64 0:1.4.1-2.1.8.amzn1
  libXxf86vm-devel.x86_64 0:1.1.3-2.1.9.amzn1
  libdrm-devel.x86_64 0:2.4.52-4.12.amzn1
  libthai.x86_64 0:0.1.12-3.5.amzn1
  mesa-libGL-devel.x86_64 0:10.1.2-2.32.amzn1
  mesa-libGLU.x86_64 0:10.1.2-2.32.amzn1
  pango.x86_64 0:1.28.1-10.11.amzn1
  qt5-qtbase.x86_64 0:5.6.0-13.el6
  qt5-qtbase-common.noarch 0:5.6.0-13.el6
  qt5-qtbase-devel.x86_64 0:5.6.0-13.el6
  qt5-qtbase-gui.x86_64 0:5.6.0-13.el6
  qt5-qtdeclarative.x86_64 0:5.6.0-3.el6
  qt5-qtdeclarative-devel.x86_64 0:5.6.0-3.el6
  qt5-qtlocation.x86_64 0:5.6.0-3.el6
  qt5-qtsensors.x86_64 0:5.6.0-3.el6
  qt5-qtwebchannel.x86_64 0:5.6.0-3.el6
  qt5-qtwebkit.x86_64 0:5.6.0-3.el6
  qt5-qtxmlpatterns.x86_64 0:5.6.0-4.el6
  qt5-rpm-macros.noarch 0:5.6.0-13.el6
完了しました!
Photo
Starting Laravel with Docker https://ift.tt/2St4L69
Hi, this is Wakamatsu.
The previous article covered getting Laravel up and running. https://cloudpack.media/48181
This time I'll cover getting Laravel running with Docker.
Dockerfile
FROM amazonlinux:2

# Install PHP
RUN amazon-linux-extras install -y php7.3
RUN yum install -y php-pecl-zip php-mbstring php-dom

# Install Composer
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php -r "if (hash_file('sha384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer

# Set environment variables
ENV COMPOSER_ALLOW_SUPERUSER 1
ENV COMPOSER_HOME "/opt/composer"
ENV PATH "$PATH:/opt/composer/vendor/bin"

# Install Laravel
RUN composer global require "laravel/installer"

# Create the Laravel project
WORKDIR /var/www
RUN composer create-project laravel/laravel laravel

# Expose the port
EXPOSE 8000

WORKDIR /var/www/laravel
CMD ["php","artisan","serve","--host","0.0.0.0"]
Dockerfile details
Image
FROM amazonlinux:2
Since Amazon Linux 2 was used last time, this Docker image is also built on an Amazon Linux 2 base.
Installing PHP
RUN amazon-linux-extras install -y php7.3
RUN yum install -y php-pecl-zip php-mbstring php-dom
As before, install from the extras repository.
Installing Composer
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" RUN php -r "if (hash_file('sha384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" RUN php composer-setup.php RUN php -r "unlink('composer-setup.php');" RUN mv composer.phar /usr/local/bin/composer
Also as before, the Composer install uses the official commands verbatim. https://getcomposer.org/download/
Setting environment variables
ENV COMPOSER_ALLOW_SUPERUSER 1
ENV COMPOSER_HOME "/opt/composer"
ENV PATH "$PATH:/opt/composer/vendor/bin"
Use ENV rather than RUN for environment variables: defining them with RUN export applies only inside that layer's container, so they would not take effect in other layers.
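A minimal illustration of the difference, using a hypothetical FOO variable (not part of the original Dockerfile):
FROM amazonlinux:2
RUN export FOO=bar && echo "this layer sees: $FOO"   # visible only within this single RUN
RUN echo "next layer sees: [$FOO]"                   # prints [] — the export did not persist
ENV FOO bar                                          # baked into the image metadata
RUN echo "after ENV: $FOO"                           # prints bar; also set at container runtime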
Installing Laravel
RUN composer global require "laravel/installer"
As before, Laravel is installed with Composer.
Creating the Laravel project
WORKDIR /var/www
RUN composer create-project laravel/laravel laravel
Last time the project was created with laravel new, but laravel new appears to assume an interactive terminal and errors out when building from a Dockerfile, so the project is created with Composer instead. *I haven't dug into why Composer is used here, or why Composer can also create the project.
Exposing the port
EXPOSE 8000
Laravel listens on TCP 8000 by default, so port 8000 is exposed.
Starting the Laravel server
WORKDIR /var/www/laravel
CMD ["php","artisan","serve","--host","0.0.0.0"]
Writing this as CMD makes the command run at container startup, which starts the server.
Building the container
docker build -t laravel .
Build the container image according to the Dockerfile, assumed to be in the current directory. On success, a container image named laravel:latest is created.
Running the container
docker run -p 8000:8000 laravel
Run the container image you created. The -p 8000:8000 option connects local port 8000 to the container's port 8000, so the container can be reached on local port 8000.
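A quick smoke test from another terminal (sketch):
curl -I http://localhost:8000   # expect HTTP/1.1 200 OK from the Laravel welcome page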
Viewing the sample in a browser
Access http://localhost:8000 and the following sample page appears.
Summary
That covers getting the Laravel sample displayed with Docker. Next time I'd like to write about slimming down the container size with a multi-stage build.
The original article is here
"Starting Laravel with Docker"
July 22, 2019 at 02:00PM
Photo
Accessing an Amazon Managed Blockchain network from AWS Lambda using the Hyperledger Fabric SDK for Node.js http://bit.ly/2WH8ORP
An Amazon Managed Blockchain network is built inside a VPC, so accessing it from outside the VPC requires developing a client app or service to go through. Placing an AWS Lambda function inside the VPC should make the network reachable, so I tried it.
Configuring a Lambda function to access resources in an Amazon VPC – AWS Lambda https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/vpc.html
Prerequisites
Docker is used to build the development environment.
serverless is used for deploying to AWS Lambda.
Serverless – The Serverless Application Framework powered by AWS Lambda, API Gateway, and more https://serverless.com/
Since AWS Lambda is used, an AWS account and permissions are also required.
> docker --version
Docker version 18.09.2, build 6247962
> docker-compose --version
docker-compose version 1.23.2, build 1110ad01
> sls --version
1.43.0
A blockchain network already built with Amazon Managed Blockchain
This assumes a blockchain network has been built by following the two articles below, and that the fabcar sample works.
Building a Hyperledger Fabric blockchain network with Amazon Managed Blockchain – Qiita https://cloudpack.media/46963
Accessing a blockchain network built with Amazon Managed Blockchain using the Hyperledger Fabric SDK for Node.js – Qiita https://cloudpack.media/47382
Building the development environment
Running the Hyperledger Fabric SDK on AWS Lambda requires npm install to be run on Linux, so the development environment is built with Docker.
Starting the Docker container
> mkdir <any directory>
> cd <any directory>
> touch Dockerfile
> touch docker-compose.yml
The Node.js versions usable on AWS Lambda are 8.10 and 10.x. The Hyperledger Fabric SDK for Node.js runs on 8.x, so install 8.x in the Docker image as well.
AWS Lambda runtimes – AWS Lambda https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/lambda-runtimes.html
Dockerfile
FROM amazonlinux
RUN yum update -y && \
    curl -sL https://rpm.nodesource.com/setup_8.x | bash - && \
    yum install -y gcc-c++ make nodejs && \
    npm i -g serverless
docker-compose.yml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/src
    working_dir: /src
    tty: true
> docker-compose build
> docker-compose run app bash
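Once inside the container, it's worth confirming the toolchain matches the Lambda runtime (sketch):
$ node --version   # expect v8.x, matching the nodejs8.10 Lambda runtime
$ sls --version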
Creating the Node.js project
Once the container is up, create a project with serverless from the Node.js template.
Inside the container
$ sls create \
  --template aws-nodejs \
  --path fablic-app
$ cd fablic-app
$ npm init
Edit package.json so the Hyperledger Fabric SDK for Node.js can be used, then run npm install.
package.json
{ "name": "fabcar", "version": "1.0.0", "description": "Hyperledger Fabric Car Sample Application", "main": "fabcar.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "dependencies": { "fabric-ca-client": "~1.2.0", "fabric-client": "~1.2.0", "fs-extra": "^8.0.1", "grpc": "^1.6.0" }, "author": "", "license": "Apache-2.0", "keywords": [ ] }
Inside the container
$ npm install
Preparing the certificates
Accessing the blockchain network with the Hyperledger Fabric SDK for Node.js requires several certificates, so include them in the project. This part is done outside the Docker container.
Outside the container
> cd fablic-app
> aws s3 cp s3://us-east-1.managedblockchain-preview/etc/managedblockchain-tls-chain.pem ./managedblockchain-tls-chain.pem
# Fetch the hfc-key-store folder from the EC2 instance
> scp -r -i [path to the EC2 instance's pem file] [email protected]:/home/ec2-user/fabric-samples/fabcar/hfc-key-store ./hfc-key-store
Implementation
This implementation retrieves information from the blockchain network's state DB. query.js, also used in the article below, was adapted so it can run on AWS Lambda.
Accessing a blockchain network built with Amazon Managed Blockchain using the Hyperledger Fabric SDK for Node.js – Qiita https://cloudpack.media/47382
handler.js calls the method in query.js and returns the result.
> cd fabric-app
> tree -F -L 1 .
.
├── handler.js
├── hfc-key-store/
├── managedblockchain-tls-chain.pem
├── node_modules/
├── package-lock.json
├── package.json
├── query.js
└── serverless.yml
handler.js
var query = require('./query');

module.exports.hello = async (event) => {
  var result = await query.run();
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: JSON.parse(result),
      input: event,
    }, null, 2),
  };
};
The key implementation points:
Copy the certificate folder hfc-key-store to /tmp and use it from there
Expose module.exports.run = async () => {} so it can be called from handler.js
Use async/await to run the steps synchronously
query.js
'use strict';
/*
 * Copyright IBM Corp All Rights Reserved
 *
 * SPDX-License-Identifier: Apache-2.0
 */
/*
 * Chaincode query
 */
var Fabric_Client = require('fabric-client');
var path = require('path');
var util = require('util');
var os = require('os');
var fs = require('fs-extra');

var fabric_client = new Fabric_Client();

// setup the fabric network
var channel = fabric_client.newChannel('mychannel');
var peer = fabric_client.newPeer('grpcs://nd-xxxxxxxxxxxxxxxxxxxxxxxxxx.m-xxxxxxxxxxxxxxxxxxxxxxxxxx.n-xxxxxxxxxxxxxxxxxxxxxxxxxx.managedblockchain.us-east-1.amazonaws.com:30003', {
    pem: fs.readFileSync('./managedblockchain-tls-chain.pem').toString(),
    'ssl-target-name-override': null
});
channel.addPeer(peer);

var member_user = null;
var store_base_path = path.join(__dirname, 'hfc-key-store');
var store_path = path.join('/tmp', 'hfc-key-store');
console.log('Store path:' + store_path);
var tx_id = null;

// create the key value store as defined in the fabric-client/config/default.json 'key-value-store' setting
module.exports.run = async () => {
    // Copy the certificate files to the /tmp directory and use them there
    fs.copySync(store_base_path, store_path);
    console.log('Store copied!');
    return await Fabric_Client.newDefaultKeyValueStore({ path: store_path
    }).then((state_store) => {
        // assign the store to the fabric client
        fabric_client.setStateStore(state_store);
        var crypto_suite = Fabric_Client.newCryptoSuite();
        // use the same location for the state store (where the users' certificate are kept)
        // and the crypto store (where the users' keys are kept)
        var crypto_store = Fabric_Client.newCryptoKeyStore({path: store_path});
        crypto_suite.setCryptoKeyStore(crypto_store);
        fabric_client.setCryptoSuite(crypto_suite);
        // get the enrolled user from persistence, this user will sign all requests
        return fabric_client.getUserContext('user1', true);
    }).then((user_from_store) => {
        if (user_from_store && user_from_store.isEnrolled()) {
            console.log('Successfully loaded user1 from persistence');
            member_user = user_from_store;
        } else {
            throw new Error('Failed to get user1.... run registerUser.js');
        }
        // queryCar chaincode function - requires 1 argument, ex: args: ['CAR4'],
        // queryAllCars chaincode function - requires no arguments , ex: args: [''],
        const request = {
            //targets : --- letting this default to the peers assigned to the channel
            chaincodeId: 'fabcar',
            fcn: 'queryAllCars',
            args: ['']
        };
        // send the query proposal to the peer
        return channel.queryByChaincode(request);
    }).then((query_responses) => {
        console.log("Query has completed, checking results");
        // query_responses could have more than one results if there multiple peers were used as targets
        if (query_responses && query_responses.length == 1) {
            if (query_responses[0] instanceof Error) {
                console.error("error from query = ", query_responses[0]);
            } else {
                console.log("Response is ", query_responses[0].toString());
                return query_responses[0].toString();
            }
        } else {
            console.log("No payloads were returned from query");
        }
    }).catch((err) => {
        console.error('Failed to query successfully :: ' + err);
    });
};
serverless configuration
Edit serverless.yml so the function is placed inside the VPC on AWS Lambda. The article below was helpful for the vpc and iamRoleStatements definitions. Specify the same security group and subnets as the blockchain network built with Amazon Managed Blockchain.
Deploying Lambda inside a VPC with Serverless – Qiita https://qiita.com/70_10/items/ae22a7a9bca62c273495
serverless.yml
service: fabric-app

provider:
  name: aws
  runtime: nodejs8.10
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ec2:CreateNetworkInterface"
        - "ec2:DescribeNetworkInterfaces"
        - "ec2:DeleteNetworkInterface"
      Resource:
        - "*"
  vpc:
    securityGroupIds:
      - sg-xxxxxxxxxxxxxxxxx
    subnetIds:
      - subnet-xxxxxxxx
      - subnet-yyyyyyyy

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
Deploying to AWS Lambda
> sls deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service fabric-app.zip file to S3 (40.12 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
..............
Serverless: Stack update finished...
Service Information
service: fabric-app
stage: dev
region: us-east-1
stack: fabric-app-dev
resources: 10
api keys:
  None
endpoints:
  GET - https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/dev/hello
functions:
  hello: fabric-app-dev-hello
layers:
  None
Serverless: Removing old service artifacts from S3...
Once deployed, access the endpoint.
> curl https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/dev/hello
{
  "message": [
    {
      "Key": "CAR0",
      "Record": {
        "make": "Toyota",
        "model": "Prius",
        "colour": "blue",
        "owner": "Tomoko"
      }
    },
    {
      "Key": "CAR1",
      "Record": {
        "make": "Ford",
        "model": "Mustang",
        "colour": "red",
        "owner": "Brad"
      }
    },
    (snip)
  ],
  "input": {
    "resource": "/hello",
    "path": "/hello",
    "httpMethod": "GET",
    (snip)
  }
}
And there we go. The AWS Lambda function successfully accessed the blockchain network using the Hyperledger Fabric SDK for Node.js.
Because the Lambda function must be placed inside the VPC, constraints tied to ENIs (elastic network interfaces) and startup speed may become issues, so serious load testing is advisable before using this for real.
Notes on placing AWS Lambda inside a VPC | そるでぶろぐ https://devlog.arksystems.co.jp/2018/04/04/4807/
References
Configuring a Lambda function to access resources in an Amazon VPC – AWS Lambda https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/vpc.html
Serverless – The Serverless Application Framework powered by AWS Lambda, API Gateway, and more https://serverless.com/
Building a Hyperledger Fabric blockchain network with Amazon Managed Blockchain – Qiita https://cloudpack.media/46963
Accessing a blockchain network built with Amazon Managed Blockchain using the Hyperledger Fabric SDK for Node.js – Qiita https://cloudpack.media/47382
AWS Lambda runtimes – AWS Lambda https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/lambda-runtimes.html
Deploying Lambda inside a VPC with Serverless – Qiita https://qiita.com/70_10/items/ae22a7a9bca62c273495
Notes on placing AWS Lambda inside a VPC | そるでぶろぐ https://devlog.arksystems.co.jp/2018/04/04/4807/
The original article is here
"Accessing an Amazon Managed Blockchain network from AWS Lambda using the Hyperledger Fabric SDK for Node.js"
June 11, 2019 at 02:00PM