Cloud-Ops
55 posts
Blog by Jayan Kandathil on Linux, Amazon Web Services (AWS), and Microsoft Azure cloud operations, plus occasional posts on statistics and PostgreSQL. I work for Adobe's AEM/Campaign Managed Services team.
cloud-ops · 7 years ago
Text
How to Query PostgreSQL Data from R Studio
You can populate an R dataframe with the results of a SQL query against a PostgreSQL database.
1) Install the “RPostgreSQL” package in R Studio
2) Load the PostgreSQL R library
library(RPostgreSQL)
3) Establish a connection to PostgreSQL (running locally in this case)
conn = dbConnect(PostgreSQL(), user="postgres", dbname="aemlogdb", password="password", host="localhost", port=5432)
4) Populate the data frame named "sqlresultsdataframe" with data from a PostgreSQL table named "accesslog" ("aem" is the name of the database schema)
sqlresultsdataframe=dbGetQuery(conn,"SELECT * FROM aem.accesslog")
5) Print the number of records (tuples) in the dataframe
nrow(sqlresultsdataframe)
6) Tabulate the frequency of each value in the column http_response_bytes
table(sqlresultsdataframe$http_response_bytes)
7) Calculate the significant quantiles (25th, 50th (median), 75th, 90th, 95th, 99th, etc.) for the column http_response_bytes
quantile(sqlresultsdataframe$http_response_bytes,c(.25, .5, .75, .90, .95, 0.99, 0.999, 0.9999))
8) Disconnect from the database
dbDisconnect(conn)
9) Unload the DBI driver
dbUnloadDriver(PostgreSQL())
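Before wiring the query into R, it can help to sanity-check the connection parameters, schema, and table from the shell. A minimal sketch using psql with the same database, schema, and table names as above (assumes the psql client is installed on the same machine):
psql -h localhost -p 5432 -U postgres -d aemlogdb -c "SELECT count(*) FROM aem.accesslog;"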
Reference: this YouTube video
cloud-ops · 8 years ago
Text
Instance Metadata Commands for an Azure Linux Instance
If you only have SSH access to a Microsoft Azure Linux instance, there are commands you can run to learn more about the instance from the [Azure Instance Metadata Service].  More details here.  Example:
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01"
You can also use a command-line JSON parser such as jq to extract specific information from the returned JSON.
Get public IP address
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | /tmp/jq-linux64 '.network.interface[0].ipv4.ipAddress[0].publicIpAddress'
Get subnet details
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | /tmp/jq-linux64 '.network.interface[0].ipv4.subnet'
Get MAC address
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | /tmp/jq-linux64 '.network.interface[0].macAddress'
Get VM size
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | /tmp/jq-linux64 '.compute.vmSize'
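If you only need a single value, the metadata service can also return a leaf node as plain text, which avoids the need for jq. A sketch (assuming the same API version and its format=text query parameter):
curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"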
cloud-ops · 8 years ago
Text
Best Practices for Creating a JMeter Load Test Script for Use With BlazeMeter
1) Using the BlazeMeter Google Chrome extension, record your test scenario
Ensure that all calls are recorded.
2) Export the script as a JMX script.  Make sure that all domains are selected.  You can selectively delete them later, if required.
3) Start Apache JMeter on your local machine, and import the JMX script
4) Organize the requests into “Transactions”
5) Run a 5-minute, 5-user exploratory load test (see the command-line sketch after this list)
6) Delete calls that report 100% errors
7) Save the script
8) Log in to your BlazeMeter account and create a new test in BlazeMeter (the PRO account is required - USD $650 per month)
9) Make sure you choose multiple “engines” (load generator VMs) - maximum is five for the PRO account.
10) Ensure those engines are in the region you desire (you can choose from AWS, Azure, Google, etc.)
11) Run tests with at least 10 minutes of ramp-up time.  If there are errors, it's possible that the IP addresses of the BlazeMeter engines are not whitelisted.  Click on the [Engine Health] tab and obtain the IP addresses of all the engines from the [Hosts] field, then whitelist those in your security groups BEFORE the ramp-up time ends
12) Try not to go beyond 200 VUs per engine.  This means that for loads beyond 1,000 VUs, you need the ENTERPRISE account; to get that, you have to call BlazeMeter Support
13) During the test, verify engine health.  CPU can be at 90-95%, but make sure memory use stays below 60%.  In other words, make sure your engines don't skew the test result!  They totally can if you don't pay attention.
14) Always perform two tests, each at least 1 hour in duration, with 5 minutes of warm-up:
a) With all requests, to all domains including those not under your control, like Google, DoubleClick etc.
b) With requests that ONLY go against your domain, the infrastructure that you control
c) Compare the difference in TRANSACTION performance between the two tests
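For the exploratory run in step 5, the recorded script can be executed locally from the command line in JMeter's non-GUI mode, which keeps the GUI itself from skewing the measurements. A sketch (the .jmx and .jtl file names are placeholders):
jmeter -n -t recorded-scenario.jmx -l exploratory-results.jtl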
cloud-ops · 8 years ago
Text
How to Determine Round Trip Time Between AEM Author and Publish
Using Wireshark or tshark (Linux), capture network traffic from either the AUTHOR or the PUBLISH instance during a page activation
Once the capture is loaded in Wireshark, identify a particular packet between AEM AUTHOR and AEM PUBLISH
Apply it as a “conversation filter”.  For example ip.addr eq 10.20.30.41 and ip.addr eq 10.20.30.42
In the menu, choose Statistics->TCP Stream Graphs->Round Trip Time
For Throughput (bits/sec), choose  Statistics->TCP Stream Graphs->Throughput
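To keep the capture small, you can restrict tshark to traffic between the AUTHOR and PUBLISH instances while the page activation runs. A sketch using the example IP addresses above (the interface name is an assumption):
tshark -i eth0 -w /tmp/activation.pcap host 10.20.30.41 and host 10.20.30.42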
cloud-ops · 8 years ago
Text
How to Capture Network Traffic using TShark on Linux
Install Wireshark
yum install wireshark
Capture packets to file /tmp/tshark.pcap
tshark -w /tmp/tshark.pcap
Ctrl-C to stop capture.
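You can also let the capture stop itself after a fixed time and then read the file back with tshark. A sketch (the 300-second duration is arbitrary):
tshark -a duration:300 -w /tmp/tshark.pcap
tshark -r /tmp/tshark.pcap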
cloud-ops · 8 years ago
Text
How to Configure iPerf3 on RHEL 7.3
1) Install iPerf3
yum install iperf3
2) Open firewall port for 5201 (on both instances)
firewall-cmd --zone=public --add-port=5201/tcp --permanent
3) Reload the firewall rules (on both instances)
firewall-cmd --reload
4) Run iPerf3 in server mode
iperf3 -s
5) On the other machine, run it as a client in debug and verbose mode, with 5 parallel streams and a test duration of 60 seconds
iperf3 -d -V -c <ip address of server> -p 5201 -t 60 -P 5
Repeat the test by switching roles
If required, remove the firewall config changes
firewall-cmd --zone=public --remove-port=5201/tcp --permanent
firewall-cmd --reload
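As an alternative to switching roles, iperf3's reverse mode (-R) makes the server send and the client receive, so both directions can be measured from the same client. A sketch reusing the options above:
iperf3 -d -V -c <ip address of server> -p 5201 -t 60 -P 5 -R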
cloud-ops · 8 years ago
Text
Java 9 Module List
Java 9 comes with 98 modules of four types: java, javafx, jdk, and oracle.
java --list-modules
java.activation@9
java.base@9
java.compiler@9
java.corba@9
java.datatransfer@9
java.desktop@9
java.instrument@9
java.jnlp@9
java.logging@9
java.management@9
java.management.rmi@9
java.naming@9
java.prefs@9
java.rmi@9
java.scripting@9
java.se@9
java.se.ee@9
java.security.jgss@9
java.security.sasl@9
java.smartcardio@9
java.sql@9
java.sql.rowset@9
java.transaction@9
java.xml@9
java.xml.bind@9
java.xml.crypto@9
java.xml.ws@9
java.xml.ws.annotation@9
javafx.base@9
javafx.controls@9
javafx.deploy@9
javafx.fxml@9
javafx.graphics@9
javafx.media@9
javafx.swing@9
javafx.web@9
jdk.accessibility@9
jdk.attach@9
jdk.charsets@9
jdk.compiler@9
jdk.crypto.cryptoki@9
jdk.crypto.ec@9
jdk.crypto.mscapi@9
jdk.deploy@9
jdk.deploy.controlpanel@9
jdk.dynalink@9
jdk.editpad@9
jdk.hotspot.agent@9
jdk.httpserver@9
jdk.incubator.httpclient@9
jdk.internal.ed@9
jdk.internal.jvmstat@9
jdk.internal.le@9
jdk.internal.opt@9
jdk.internal.vm.ci@9
jdk.jartool@9
jdk.javadoc@9
jdk.javaws@9
jdk.jcmd@9
jdk.jconsole@9
jdk.jdeps@9
jdk.jdi@9
jdk.jdwp.agent@9
jdk.jfr@9
jdk.jlink@9
jdk.jshell@9
jdk.jsobject@9
jdk.jstatd@9
jdk.localedata@9
jdk.management@9
jdk.management.agent@9
jdk.management.cmm@9
jdk.management.jfr@9
jdk.management.resource@9
jdk.naming.dns@9
jdk.naming.rmi@9
jdk.net@9
jdk.pack@9
jdk.packager@9
jdk.packager.services@9
jdk.plugin@9
jdk.plugin.dom@9
jdk.plugin.server@9
jdk.policytool@9
jdk.rmic@9
jdk.scripting.nashorn@9
jdk.scripting.nashorn.shell@9
jdk.sctp@9
jdk.security.auth@9
jdk.security.jgss@9
jdk.snmp@9
jdk.unsupported@9
jdk.xml.bind@9
jdk.xml.dom@9
jdk.xml.ws@9
jdk.zipfs@9
oracle.desktop@9
oracle.net@9
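To see what any one of these modules requires and exports, the java launcher can describe it. For example:
java --describe-module java.sql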
cloud-ops · 8 years ago
Text
How to Update Windows PowerShell with the Latest AzureRM Modules
1) Run PowerShell ISE as Administrator
2) Go to the PowerShell Gallery to determine the latest stable release of the AzureRM Module
3) Install that latest version (currently 4.2.0)
Install-Module -Name AzureRM -RequiredVersion 4.2.0 -AllowClobber
4) Load the AzureRM modules
Import-Module AzureRM
5) Verify the updated AzureRM version
(Get-Module AzureRM).Version
6) Verify the updated versions of all Cmdlets
Get-Command -Module AzureRM
7) Update Help documentation
Update-Help -Force
cloud-ops · 8 years ago
Text
Optimal Pairing of Azure Instances with Disks
[Table image: Azure instance types and their disk performance specifications]
Based on the table above, disks can be paired with selected instance types as follows (other combinations are possible).
[Table image: suggested pairings of Azure instance types with disks]
cloud-ops · 8 years ago
Text
Instance Metadata Commands for an AWS Instance
If you only have SSH access to an AWS Linux instance, there are commands you can run to learn more about the instance.  More details here.  Example:
curl http://169.254.169.254/latest/dynamic/instance-identity/document
The following command gives you all the options:
curl http://169.254.169.254/latest/meta-data/
Example commands:
curl http://169.254.169.254/latest/meta-data/ami-id
curl http://169.254.169.254/latest/meta-data/block-device-mapping/
curl http://169.254.169.254/latest/meta-data/hostname
curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/latest/meta-data/instance-type
curl http://169.254.169.254/latest/meta-data/local-ipv4
curl http://169.254.169.254/latest/meta-data/public-ipv4
curl http://169.254.169.254/latest/meta-data/profile
curl http://169.254.169.254/latest/meta-data/security-groups
curl http://169.254.169.254/latest/meta-data/iam/info
curl http://169.254.169.254/latest/meta-data/placement/availability-zone
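The instance identity document shown earlier is JSON, so a command-line JSON parser such as jq can extract individual fields, much like in the Azure example above. A sketch (assumes jq is installed and on the PATH):
curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region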
cloud-ops · 8 years ago
Text
How to Configure SELinux for MongoDB on RHEL 7+ that uses systemd
If you use Red Hat Enterprise Linux 7.0 or newer, you will be using the new [systemd] to start and stop services such as MongoDB.  This works fine if you leave MongoDB's data and log folders in their default locations (/var/lib/mongo and /var/log/mongodb respectively).
If you change the default locations, then SELinux will block you from starting MongoDB via [systemd] until you put the proper SELinux labels on the folders and files.  Official MongoDB documentation does not call this out explicitly, and I had to engage MongoDB Tech Support.
Here are the SELinux labels required:
For the parent folder and the data folder: mongod_var_lib_t
For the log folder: mongod_log_t
Assuming your data folder is /opt/mongodb/data and the log folder is /opt/mongodb/log, here are the commands you'd need (RHEL 7.3 example):
semanage fcontext -a -t mongod_var_lib_t '/opt/mongodb'
restorecon -v /opt/mongodb
semanage fcontext -a -t mongod_var_lib_t '/opt/mongodb/data'
restorecon -v /opt/mongodb/data
semanage fcontext -a -t mongod_log_t '/opt/mongodb/log'
restorecon -v /opt/mongodb/log
Verify this with:
ls -Z /opt/mongodb
If startup still fails, install SELinux troubleshooting tools and check their output
yum install setroubleshoot setools
sealert -a /var/log/audit/audit.log > /tmp/readable_audit.log
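Recent AVC denials can also be listed directly from the audit log with ausearch, which ships with the audit package. A sketch:
ausearch -m avc -ts recent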
Good luck!
cloud-ops · 8 years ago
Text
How to Layout a Microsoft Azure [Managed Disk] for Cloning When Using RHEL 7.3
The new “Managed Disk” feature in Microsoft Azure lets you clone disks and attach copies to the same or other VMs.  In order for this to work reliably, the disk needs to be laid out properly
a) It should have at least one partition
b) The cloned disk should have its UUID changed so that it is different from its source disk
c) The /etc/fstab entry auto-mounting it on reboot should use the UUID instead of the disk partition's device name (e.g. /dev/sdd1).  See this for more details.
Here are the detailed steps (specific to RHEL 7.3)
1) Using the Azure Portal, create a new disk
2) In VMs->YourVM->Disks, attach the new disk to the VM
3) SSH into the VM, become root with sudo, and verify with
lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0      2:0    1    4K  0 disk
sda      8:0    0   30G  0 disk
├─sda1   8:1    0  500M  0 part /boot
└─sda2   8:2    0 28.8G  0 part /
sdc      8:32   0  256G  0 disk
/dev/sdc is the new disk
4) Assuming the new device is /dev/sdc, start the parted utility to create a partition on it
parted /dev/sdc
5) Create a disk label (this has to be msdos)
mklabel msdos
6) Create a primary partition that uses all of the disk’s storage
mkpart primary 0% 100%
7) Quit the parted utility
quit
8) Verify the new disk layout (the new partition will be /dev/sdc1)
lsblk
sdc      8:32   0  256G  0 disk
└─sdc1   8:33   0  256G  0 part /mnt
9) Put an ext4 filesystem on it
mkfs.ext4 /dev/sdc1
10) Verify
blkid -o list
11) Note down its UUID
blkid
/dev/sdc1: UUID="dec87457-492d-461a-91b6-531dacafee19" TYPE="ext4"
12) Create a mountpoint
mkdir /mnt2
13) Mount the new disk partition to that mountpoint
mount /dev/sdc1 /mnt2
14) For test purposes, move a known file to /mnt2
15) Edit /etc/fstab so that this mounting happens automatically on re-boot.  Start the vi editor
vi /etc/fstab
16) Add a new line in /etc/fstab
UUID=dec87457-492d-461a-91b6-531dacafee19  /mnt2  ext4  defaults  0 2
17) Save the file and exit vi
:x
18) Stop, wait, then start the VM using the Azure Portal
19) Verify that the partition is mounted, and that your file can be read
20) In the Azure Portal, under Snapshots, create a snapshot of the disk
21) Once done, under Disks, create a new disk from that snapshot
22) In VMs->YourVM->Disks, attach the new disk to the VM
23) SSH into the VM, become root with sudo, and verify with
lsblk
sdc      8:32   0  256G  0 disk
└─sdc1   8:33   0  256G  0 part /mnt
sdd      8:48   0  256G  0 disk
└─sdd1   8:49   0  256G  0 part
/dev/sdd is the new disk
24) Generate a new UUID, and note it down
uuidgen
25) Apply the newly-generated UUID to the new disk partition
tune2fs -U 5b7c2442-f0dd-46fd-9c45-aab1550fabac /dev/sdd1
26) Verify its new UUID
blkid /dev/sdd1
27) Create a mountpoint
mkdir /mnt3
28) Edit /etc/fstab so that this mounting happens automatically on re-boot
29) Start the vi editor
vi /etc/fstab
30) Add a new line in /etc/fstab
UUID=5b7c2442-f0dd-46fd-9c45-aab1550fabac  /mnt3  ext4  defaults  0 2
31) Save the file and exit vi
:x
32) Stop, wait, then start the VM using the Azure Portal
33) Verify that the /dev/sdd1 partition is mounted at /mnt3, and that your test file can be read from /mnt3
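If you prefer not to copy UUIDs by hand, blkid can print just the UUID of a partition for pasting into /etc/fstab. A sketch for the cloned partition used above:
blkid -s UUID -o value /dev/sdd1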
cloud-ops · 9 years ago
Link
Test your egress firewall rules.