#Cloud Security Wazuh
virtualizationhowto · 2 years ago
Wazuh Open Source SIEM: XDR for Enterprise and Home Lab
The cybersecurity landscape is evolving. Many commercial security platforms, including SIEMs, offer value. However, Wazuh stands out as a powerful open-source security platform, offering tools for threat detection, regulatory compliance, and much more. Let's look at Wazuh and better understand the components and features that help everyone, from a chief…
hackeocafe · 2 years ago
Video (YouTube): This Cybersecurity Platform is FREE
dixmata · 2 years ago
Wazuh Install Ubuntu 22.04
Wazuh is an open-source platform used for security detection, security monitoring, and security information alerting. Wazuh falls into the category of an IDS (Intrusion Detection System); an IDS plays an important role in a system's defenses, fending off the many kinds of dangerous threats associated with illegal access.
Wazuh unifies security through the XDR and SIEM approaches to protect workloads on local networks, in virtualized environments, or in the cloud. It is widely used by individuals, companies, and government institutions to protect their assets and data from security threats and unauthorized access.
In practice, a Wazuh deployment has two parts: the Wazuh server and the Wazuh agent. The Wazuh server is the central point for managing all the logs sent from clients by the Wazuh agent; its dashboard displays monitoring data such as file integrity, intrusions, and logs. The Wazuh agent runs on the client or endpoint device, reads the system, and collects the logs that will later be received by the Wazuh server.
READ ALSO: Install Wazuh Ubuntu | Medium
Prerequisites
The minimum hardware requirements are:
4 GB RAM
2 CPU cores
The recommended specifications are:
16 GB RAM
4 CPU cores
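A quick way to confirm the host meets these figures before installing; these are standard Linux commands, nothing Wazuh-specific:
# Show available memory and the CPU core count
free -h
nproc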
Steps to Install Wazuh on Ubuntu
Update the Linux system and install the packages required to run the Wazuh manager:
apt update
apt install vim curl apt-transport-https unzip wget libcap2-bin software-properties-common lsb-release gnupg2
Automated Wazuh Install on Ubuntu
Here we will install Wazuh using a script so that all packages are installed automatically. The script also detects the OS type and runs a health check to verify that the system meets the minimum requirements.
Download the script onto the Wazuh server:
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh
Once the script has been downloaded, run it with the following command:
bash ./wazuh-install.sh -a
The Wazuh installation process will now run; do not close the SSH session and keep the network connection stable.
Output
15/02/2023 02:56:57 INFO: Starting Wazuh installation assistant. Wazuh version: 4.3.10
15/02/2023 02:56:57 INFO: Verbose logging redirected to /var/log/wazuh-install.log
15/02/2023 02:57:06 INFO: Wazuh repository added.
15/02/2023 02:57:06 INFO: --- Configuration files ---
15/02/2023 02:57:06 INFO: Generating configuration files.
15/02/2023 02:57:09 INFO: Created wazuh-install-files.tar. It contains the Wazuh cluster key, certificates, and passwords necessary for installation.
15/02/2023 02:57:09 INFO: --- Wazuh indexer ---
15/02/2023 02:57:09 INFO: Starting Wazuh indexer installation.
15/02/2023 02:58:18 INFO: Wazuh indexer installation finished.
15/02/2023 02:58:19 INFO: Wazuh indexer post-install configuration finished.
15/02/2023 02:58:19 INFO: Starting service wazuh-indexer.
15/02/2023 02:58:33 INFO: wazuh-indexer service started.
15/02/2023 02:58:33 INFO: Initializing Wazuh indexer cluster security settings.
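While the assistant runs, the verbose log mentioned in the output above can be followed from a second SSH session; a minimal sketch:
# Follow the installer's verbose log (path taken from the output above)
tail -f /var/log/wazuh-install.log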
When the installation finishes, the output shows the username and password for logging in to the Wazuh dashboard, as below.
15/02/2023 03:01:32 INFO: Wazuh dashboard web application initialized.
15/02/2023 03:01:32 INFO: --- Summary ---
15/02/2023 03:01:32 INFO: You can access the web interface https://<wazuh-dashboard-ip>
   User: admin
   Password: QHqDmh1q1h89CbD6WfYRYE+xYCwlis20
15/02/2023 03:01:32 INFO: Installation finished.
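If these credentials scroll out of the terminal, they are also bundled in the wazuh-install-files.tar archive mentioned in the output earlier; a sketch, assuming the default wazuh-passwords.txt name inside the archive:
# List the archive created by the installer, then print the bundled password file without extracting it
tar -tf wazuh-install-files.tar
tar -O -xf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt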
The Wazuh manager installation is now complete, and we can access the Wazuh dashboard to manage security on the system.
Wazuh Dashboard
To access the Wazuh dashboard, open the server's hostname or IP address over HTTPS in a browser.
https://wazuh-ip-address
The Wazuh dashboard login page will then appear.
Enter the username and password shown in the installation output earlier.
We should now be logged in to the Wazuh management dashboard and can access and configure agents in the Wazuh manager.
Add an Agent (Install Wazuh Ubuntu)
Now we will add a Wazuh agent, which is used to monitor server security and the access and error logs on the server.
Go to the Wazuh menu > Agents.
Under Deploy a new agent, select the OS of the agent (the server to be monitored). See the example below.
In step 5, the installation command for the agent is displayed, as shown below.
Copy that command and run it on the server to be monitored.
curl -so wazuh-agent-4.3.10.deb https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.3.10-1_amd64.deb && sudo WAZUH_MANAGER='ip-server-wazuh' WAZUH_AGENT_GROUP='default' dpkg -i ./wazuh-agent-4.3.10.deb
Then start the Wazuh agent service:
sudo systemctl daemon-reload
sudo systemctl enable wazuh-agent
sudo systemctl start wazuh-agent
Make sure the Wazuh agent is running properly; verify it with the following command:
systemctl status wazuh-agent
Output
● wazuh-agent.service - Wazuh agent
     Loaded: loaded (/lib/systemd/system/wazuh-agent.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2023-02-15 10:37:32 UTC; 3s ago
    Process: 4334 ExecStart=/usr/bin/env /var/ossec/bin/wazuh-control start (code=exited, status=0/SUCCESS)
      Tasks: 30 (limit: 4611)
     Memory: 63.3M
     CGroup: /system.slice/wazuh-agent.service
             ├─4377 /var/ossec/bin/wazuh-execd
             ├─4388 /var/ossec/bin/wazuh-agentd
             ├─4401 /var/ossec/bin/wazuh-syscheckd
             ├─4414 /var/ossec/bin/wazuh-logcollector
             └─4430 /var/ossec/bin/wazuh-modulesd

Feb 15 10:37:24 server1 systemd[1]: Starting Wazuh agent...
Feb 15 10:37:24 server1 env[4334]: Starting Wazuh v4.3.10...
Feb 15 10:37:25 server1 env[4334]: Started wazuh-execd...
Feb 15 10:37:27 server1 env[4334]: Started wazuh-agentd...
Feb 15 10:37:28 server1 env[4334]: Started wazuh-syscheckd...
Feb 15 10:37:29 server1 env[4334]: Started wazuh-logcollector...
Feb 15 10:37:30 server1 env[4334]: Started wazuh-modulesd...
Feb 15 10:37:32 server1 env[4334]: Completed.
Feb 15 10:37:32 server1 systemd[1]: Started Wazuh agent.
Then allow ports 1514 and 1515 through the firewall so the agent can reach the server:
ufw allow 1514/tcp
ufw allow 1515/tcp
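Beyond the service status, it is worth confirming that the agent actually reached the manager. A hedged sketch follows; the exact log wording varies between Wazuh versions, and agent_control is assumed to be present at its default path on the manager:
# On the agent: look for a successful connection message in the agent log
grep -i "connected to the server" /var/ossec/logs/ossec.log | tail -n 3
# On the Wazuh server: list registered agents and their status
sudo /var/ossec/bin/agent_control -l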
Look at the Wazuh dashboard; the hostname of the agent we just added will appear.
Wazuh Agent Configuration
Now we will add a configuration for Recent Events in Wazuh. Recent Events is used to see every file change that occurs in the directories specified by this configuration. Follow these steps.
SSH to the Wazuh agent and open the ossec.conf configuration file with the following command:
nano /var/ossec/etc/ossec.conf
Then add the following line in the File Integrity Monitoring section:
<directories check_all="yes" realtime="yes">/var/www/html</directories>
Example:
<!-- Frequency that syscheck is executed default every 12 hours -->
   <frequency>43200</frequency>
   <scan_on_start>yes</scan_on_start>
   <!-- Directories to check  (perform all possible verifications) -->
   <directories>/etc,/usr/bin,/usr/sbin</directories>
   <directories>/bin,/sbin,/boot</directories>
   <directories check_all="yes" realtime="yes">/var/www/html</directories>
Note: The /var/www/html directory can be adjusted to suit your server's needs.
Save the file, then restart the Wazuh agent service (either command works):
systemctl restart wazuh-agent
service wazuh-agent restart
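To confirm that the new directory was picked up for real-time monitoring after the restart, the agent log can be searched for syscheck messages; a hedged sketch (the exact message wording differs between Wazuh versions):
# Look for syscheck/realtime start-up messages on the agent
grep -iE "syscheck|realtime" /var/ossec/logs/ossec.log | tail -n 10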
Testing Wazuh
The Wazuh installation is complete, so now we will test it. In the first test we will try out Recent Events by creating a directory and a file on the Wazuh agent.
Testing Recent Events (Install Wazuh Ubuntu)
Change into the /var/www/html directory:
cd /var/www/html
Create a test directory:
mkdir test
Then create a file inside the test directory:
cd test
touch file.txt
Look at the agent in the Wazuh dashboard.
Recent Events detail
Testing SQL Injection
The next test is SQL injection. We will attempt a SQL injection against the Wazuh agent and see whether it is detected by Wazuh on the Wazuh manager.
Use the following command to simulate a SQL injection:
curl -XGET "http://<IP-ADDRESS>/users/?id=SELECT+*+FROM+users";
Then open Security Events.
As the test above shows, Wazuh detects a SQL injection attack against the server. The event shows the source IP and the IP of the targeted server. All attack details are visible in the Wazuh manager, so we can easily analyze and respond to the attack.
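The same detection can also be confirmed from the command line on the Wazuh server by searching the alerts log; a minimal sketch, assuming the default log location (the rule description wording may differ between versions):
# Show the most recent alerts mentioning SQL injection
sudo grep -i "sql injection" /var/ossec/logs/alerts/alerts.log | tail -n 5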
Changing the Wazuh Password
SSH to the Wazuh server and download the wazuh-passwords tool with the following command:
curl -so wazuh-passwords-tool.sh https://packages.wazuh.com/4.3/wazuh-passwords-tool.sh
Then run the tool:
bash wazuh-passwords-tool.sh -u admin -p password
Note: The password must be between 8 and 64 characters long and must include uppercase letters, lowercase letters, numbers, and symbols (a quick way to generate one is shown after the output below).
Output
15/02/2023 15:53:08 INFO: Generating password hash
15/02/2023 15:53:14 WARNING: Password changed. Remember to update the password in the Wazuh dashboard and Filebeat nodes if necessary, and restart the services.
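If you need a value that meets those constraints, any generator that mixes upper- and lower-case letters, digits, and symbols will do; a possible sketch (check that the result really contains all required character classes before using it):
# Generate a random candidate password (base64 output mixes cases, digits, and the + / symbols)
openssl rand -base64 24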
Update the admin password in the Filebeat configuration:
nano /etc/filebeat/filebeat.yml
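Inside filebeat.yml, the Elasticsearch output credentials must be set to the new admin password; the exact key names depend on the Wazuh version, so a quick way to locate the relevant lines is:
# Show every credential-related line in Filebeat's configuration
sudo grep -n -i "password" /etc/filebeat/filebeat.yml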
Then restart the Wazuh manager:
systemctl restart wazuh-manager
Congratulations, we can now use Wazuh to monitor system security. Wazuh ships with many more features and rules, covering brute force, XSS, SQLMap, and other attacks. Hopefully this documentation helps you learn the Wazuh manager easily. :)
Source: Dixmata Studio installation guide
computingpostcom · 3 years ago
Today, with the increase in sophisticated cyber threats, there is a high need for real-time monitoring and analysis of systems to detect threats on time and act accordingly. Wazuh is a free and open-source monitoring solution used for threat detection, system integrity monitoring, and incident response. It provides lightweight, OS-level security using multi-platform agents. With Wazuh, one can collect, index, aggregate, and analyze security data, and thereby detect system intrusions and abnormalities. The Wazuh server can be used for:
Cloud security
Container security
Log analysis
Vulnerability detection
Security analysis
This guide demonstrates how to run the Wazuh server in Docker containers. Normally, there are two deployment options for Wazuh:
All-in-one deployment: both Wazuh and Open Distro for Elasticsearch are installed on a single host.
Distributed deployment: the components are installed on separate hosts as a single-node or multi-node cluster. This method is preferred for large environments since it provides high availability and scalability.
During the Wazuh installation, one can choose between two options:
Unattended installation: Wazuh is installed using an automated script that performs health checks to verify that the available system resources meet the minimum requirements.
Step-by-step installation: a manual installation with a detailed description of each step.
Docker is an open-source engine used to automate the deployment of applications inside software containers. In this guide, we will run the Wazuh all-in-one deployment in Docker containers. The Docker image contains:
Wazuh manager
Filebeat
Elasticsearch
Kibana
Nginx and Open Distro for Elasticsearch
Let's dive in!
Getting Started
Prepare your system for installation by updating the available packages and installing the required packages.
## On Debian/Ubuntu
sudo apt update && sudo apt upgrade
sudo apt install curl vim git
## On RHEL/CentOS/RockyLinux 8
sudo yum -y update
sudo yum -y install curl vim git
## On Fedora
sudo dnf update
sudo dnf -y install curl vim git
Step 1 – Docker Installation on Linux
The first thing here is to install docker and docker-compose if you do not have them installed. Docker can be installed on any Linux system using the dedicated guide below:
How To Install Docker CE on Linux Systems
Once installed, start and enable docker.
sudo systemctl start docker && sudo systemctl enable docker
Also, add your system user to the docker group.
sudo usermod -aG docker $USER
newgrp docker
With docker installed, install docker-compose using the below commands:
curl -s https://api.github.com/repos/docker/compose/releases/latest | grep browser_download_url | grep docker-compose-linux-x86_64 | cut -d '"' -f 4 | wget -qi -
chmod +x docker-compose-linux-x86_64
sudo mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
Verify the installation.
$ docker-compose version
Docker Compose version v2.3.0
Step 2 – Provision the Wazuh Server
Before we proceed, you need to make the following settings:
Increase max_map_count on your host
sudo sysctl -w vm.max_map_count=262144
If this is not set, Elasticsearch may fail to work.
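The sysctl command above only lasts until the next reboot; to make the setting persistent, a drop-in file can be added (the 99-wazuh.conf filename below is just an example):
# Persist vm.max_map_count across reboots
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-wazuh.conf
sudo sysctl --system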
Configure SELinux on RHEL-based systems
For the containers to start on RHEL-based systems, SELinux must allow access to the project directory: either set SELinux to permissive mode, or relabel the directory after cloning it, for example:
sudo chcon -R system_u:object_r:admin_home_t:s0 wazuh-docker/
All the required Wazuh components are available in a single repository built around Open Distro for Elasticsearch; pull it as below:
$ cd ~
$ git clone https://github.com/wazuh/wazuh-docker.git -b v4.2.5 --depth=1
Now navigate into the directory.
cd wazuh-docker
Step 3 – Run the Wazuh Container
In the directory, there is a docker-compose.yml used for the demo deployment. Run the containers in the background as below.
docker-compose up -d
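It can take a minute or two for Elasticsearch and Kibana to come up; a simple way to watch progress is to follow the compose logs (the elasticsearch service name is assumed from the repository's docker-compose.yml):
# Follow logs for all services, or just one of them
docker-compose logs -f
docker-compose logs -f elasticsearch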
Check if the containers are running:
$ docker ps
CONTAINER ID   IMAGE                                        COMMAND                  CREATED          STATUS          PORTS                                                                               NAMES
d64698a06cc4   wazuh/wazuh-kibana-odfe:4.2.5                "/bin/sh -c ./entryp…"   38 seconds ago   Up 36 seconds   0.0.0.0:443->5601/tcp, :::443->5601/tcp                                             wazuh-docker-kibana-1
2bb0d8088b0f   amazon/opendistro-for-elasticsearch:1.13.2   "/usr/local/bin/dock…"   48 seconds ago   Up 37 seconds   9300/tcp, 9600/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9650/tcp             wazuh-docker-elasticsearch-1
7eed74a2a2ae   wazuh/wazuh-odfe:4.2.5                       "/init"                  48 seconds ago   Up 36 seconds   0.0.0.0:1514-1515->1514-1515/tcp, :::1514-1515->1514-1515/tcp, 0.0.0.0:514->514/udp, :::514->514/udp, 0.0.0.0:55000->55000/tcp, :::55000->55000/tcp, 1516/tcp   wazuh-docker-wazuh-1
At this point, Wazuh can be accessed on port 443. This setup is intended for demo deployments; for a production deployment we need to make several configurations to these containers.
Production deployment
For a production deployment, production-cluster.yml is the base compose file, but before we run the containers we need to make a few configurations.
Data persistence
Create persistent volumes for the containers:
sudo mkdir /wazuh_logs
cd /wazuh_logs
sudo mkdir ossec-api-configuration
sudo mkdir ossec-etc
sudo mkdir ossec-logs
sudo mkdir ossec-queue
sudo mkdir ossec-var-multigroups
sudo mkdir ossec-integrations
sudo mkdir ossec-active-response
sudo mkdir ossec-agentless
sudo mkdir ossec-wodles
sudo mkdir filebeat-etc
sudo mkdir filebeat-var
sudo mkdir worker-ossec-api-configuration
sudo mkdir worker-ossec-etc
sudo mkdir worker-ossec-logs
sudo mkdir worker-ossec-queue
sudo mkdir worker-ossec-var-multigroups
sudo mkdir worker-ossec-integrations
sudo mkdir worker-ossec-active-response
sudo mkdir worker-ossec-agentless
sudo mkdir worker-ossec-wodles
sudo mkdir worker-filebeat-etc
sudo mkdir worker-filebeat-var
sudo mkdir elastic-data-1
sudo mkdir elastic-data-2
sudo mkdir elastic-data-3
To persist data on the local machine, edit the volumes in production-cluster.yml to match the paths created above.
cd ~/wazuh-docker/
sudo vim production-cluster.yml
For example, for the Wazuh container, set the paths as below:
    volumes:
      ...
      - /wazuh_logs/ossec-api-configuration:/var/ossec/api/configuration
      - /wazuh_logs/ossec-etc:/var/ossec/etc
      - /wazuh_logs/ossec-logs:/var/ossec/logs
      - /wazuh_logs/ossec-queue:/var/ossec/queue
      - /wazuh_logs/ossec-var-multigroups:/var/ossec/var/multigroups
      - /wazuh_logs/ossec-integrations:/var/ossec/integrations
      - /wazuh_logs/ossec-active-response:/var/ossec/active-response/bin
      - /wazuh_logs/ossec-agentless:/var/ossec/agentless
      - /wazuh_logs/ossec-wodles:/var/ossec/wodles
      - /wazuh_logs/filebeat-etc:/etc/filebeat
      - /wazuh_logs/filebeat-var:/var/lib/filebeat
      ....
Do this for all other containers by substituting the correct volume names.
Secure traffic
The bundled demo certificates need to be replaced for each node in the cluster. Use the command below to generate the certificates using generate-opendistro-certs.yml:
docker-compose -f generate-opendistro-certs.yml run --rm generator
Sample output:
[+] Running 15/15
 ⠿ generator Pulled               16.8s
 ⠿ d6ff36c9ec48 Pull complete      4.7s
 ⠿ c958d65b3090 Pull complete      5.2s
 ⠿ edaf0a6b092f Pull complete      5.6s
 ⠿ 80931cf68816 Pull complete      8.3s
 ⠿ bf04b6bbed0c Pull complete      9.3s
 ⠿ 8bf847804f9e Pull complete      9.5s
 ⠿ 6bf89641a7f2 Pull complete     13.2s
 ⠿ 040f240573da Pull complete     13.4s
 ⠿ ac14183eb55b Pull complete     13.8s
 ⠿ debf0fc68082 Pull complete     14.1s
 ⠿ 62fb2ae4a19e Pull complete     14.3s
 ⠿ d3aeb8473c73 Pull complete     14.4s
 ⠿ 939b8ae6540a Pull complete     14.6s
 ⠿ f8b27a6da615 Pull complete     14.8s
Root certificate and signing certificate have been sucessfully created.
Created 4 node certificates.
Created 1 client certificates.
Success!
Exiting.
At this point, you will have the certificates saved at production_cluster/ssl_certs.
$ ls -al production_cluster/ssl_certs
total 88
drwxr-xr-x 2 thor thor 4096 Mar 5 04:26 .
drwxr-xr-x 7 thor thor 4096 Mar 5 02:56 ..
-rw-r--r-- 1 root root 1704 Mar 5 04:26 admin.key
-rw-r--r-- 1 root root 3022 Mar 5 04:26 admin.pem
-rw-r--r-- 1 thor thor  888 Mar 5 04:26 certs.yml
-rw-r--r-- 1 root root  294 Mar 5 04:26 client-certificates.readme
-rw-r--r-- 1 root root 1158 Mar 5 04:26 filebeat_elasticsearch_config_snippet.yml
-rw-r--r-- 1 root root 1704 Mar 5 04:26 filebeat.key
-rw-r--r-- 1 root root 3067 Mar 5 04:26 filebeat.pem
-rw-r--r-- 1 root root 1801 Mar 5 04:26 intermediate-ca.key
-rw-r--r-- 1 root root 1497 Mar 5 04:26 intermediate-ca.pem
-rw-r--r-- 1 root root 1149 Mar 5 04:26 node1_elasticsearch_config_snippet.yml
-rw-r--r-- 1 root root 1704 Mar 5 04:26 node1.key
-rw-r--r-- 1 root root 3075 Mar 5 04:26 node1.pem
-rw-r--r-- 1 root root 1149 Mar 5 04:26 node2_elasticsearch_config_snippet.yml
-rw-r--r-- 1 root root 1704 Mar 5 04:26 node2.key
-rw-r--r-- 1 root root 3075 Mar 5 04:26 node2.pem
-rw-r--r-- 1 root root 1149 Mar 5 04:26 node3_elasticsearch_config_snippet.yml
-rw-r--r-- 1 root root 1704 Mar 5 04:26 node3.key
-rw-r--r-- 1 root root 3075 Mar 5 04:26 node3.pem
-rw-r--r-- 1 root root 1700 Mar 5 04:26 root-ca.key
-rw-r--r-- 1 root root 1330 Mar 5 04:26 root-ca.pem
Now, in the production-cluster.yml file, set up the SSL certificates for each container.
Wazuh container
For the Wazuh master container, set the SSL certificates as below.
......
environment:
  .....
  - FILEBEAT_SSL_VERIFICATION_MODE=full
  - SSL_CERTIFICATE_AUTHORITIES=/etc/ssl/root-ca.pem
  - SSL_CERTIFICATE=/etc/ssl/filebeat.pem
  - SSL_KEY=/etc/ssl/filebeat.key
volumes:
  - ./production_cluster/ssl_certs/root-ca.pem:/etc/ssl/root-ca.pem
  - ./production_cluster/ssl_certs/filebeat.pem:/etc/ssl/filebeat.pem
  - ./production_cluster/ssl_certs/filebeat.key:/etc/ssl/filebeat.key
......
Elasticsearch container
Elasticsearch has 3 nodes here; we will configure each of them as below:
elasticsearch:
  ....
  volumes:
    ...
    - ./production_cluster/ssl_certs/root-ca.pem:/usr/share/elasticsearch/config/root-ca.pem
    - ./production_cluster/ssl_certs/node1.key:/usr/share/elasticsearch/config/node1.key
    - ./production_cluster/ssl_certs/node1.pem:/usr/share/elasticsearch/config/node1.pem
    - ./production_cluster/ssl_certs/admin.pem:/usr/share/elasticsearch/config/admin.pem
    - ./production_cluster/ssl_certs/admin.key:/usr/share/elasticsearch/config/admin.key
    - ./production_cluster/elastic_opendistro/elasticsearch-node1.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    - ./production_cluster/elastic_opendistro/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
For elasticsearch-2, the configuration is almost identical to the above:
elasticsearch-2:
  ...
  volumes:
    - ./production_cluster/ssl_certs/root-ca.pem:/usr/share/elasticsearch/config/root-ca.pem
    - ./production_cluster/ssl_certs/node2.key:/usr/share/elasticsearch/config/node2.key
    - ./production_cluster/ssl_certs/node2.pem:/usr/share/elasticsearch/config/node2.pem
    - ./production_cluster/elastic_opendistro/elasticsearch-node2.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    - ./production_cluster/elastic_opendistro/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
For elasticsearch-3:
elasticsearch-3:
  ...
  volumes:
    - ./production_cluster/ssl_certs/root-ca.pem:/usr/share/elasticsearch/config/root-ca.pem
    - ./production_cluster/ssl_certs/node3.key:/usr/share/elasticsearch/config/node3.key
    - ./production_cluster/ssl_certs/node3.pem:/usr/share/elasticsearch/config/node3.pem
    - ./production_cluster/elastic_opendistro/elasticsearch-node3.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    - ./production_cluster/elastic_opendistro/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
Kibana container
Generate self-signed certificates for Kibana using the command:
bash ./production_cluster/kibana_ssl/generate-self-signed-cert.sh
Sample output:
Generating a RSA private key
...............................................+++++
.........................................................................................................................................+++++
writing new private key to 'key.pem'
-----
Now you will have certificates for Kibana. Set SSL to true and provide the certificate paths:
environment:
  - SERVER_SSL_ENABLED=true
  - SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/cert.pem
  - SERVER_SSL_KEY=/usr/share/kibana/config/key.pem
...
volumes:
  - ./production_cluster/kibana_ssl/cert.pem:/usr/share/kibana/config/cert.pem
  - ./production_cluster/kibana_ssl/key.pem:/usr/share/kibana/config/key.pem
Nginx container
The Nginx load balancer also requires certificates at ./production_cluster/nginx/ssl/. You can generate self-signed certificates using the command:
bash ./production_cluster/nginx/ssl/generate-self-signed-cert.sh
Add the certificates path for the container:
nginx:
  ....
  volumes:
    - ./production_cluster/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    - ./production_cluster/nginx/ssl:/etc/nginx/ssl:ro
The ./production_cluster/nginx/nginx.conf file contains the variables for the Nginx container.
Now you should have production-cluster.yml configured with the SSL certificates as above. Stop and remove the previously running demo containers and run the production deployment as below:
docker-compose -f production-cluster.yml up -d
Check if the containers are running:
$ docker ps
CONTAINER ID   IMAGE                                        COMMAND                  CREATED         STATUS              PORTS                                                                             NAMES
42d2b8882740   nginx:stable                                 "/docker-entrypoint.…"   2 minutes ago   Up About a minute   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:1514->1514/tcp, :::1514->1514/tcp   wazuh-docker-nginx-1
9395abddd27c   wazuh/wazuh-kibana-odfe:4.2.5                "/bin/sh -c ./entryp…"   2 minutes ago   Up 2 minutes        0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                         wazuh-docker-kibana-1
53aaa86606b6   amazon/opendistro-for-elasticsearch:1.13.2   "/usr/local/bin/dock…"   2 minutes ago   Up 2 minutes        9300/tcp, 9600/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9650/tcp           wazuh-docker-elasticsearch-1
771a5d5d6aaf   wazuh/wazuh-odfe:4.2.5                       "/init"                  2 minutes ago   Up 2 minutes        1514-1516/tcp, 514/udp, 55000/tcp                                                 wazuh-docker-wazuh-worker-1
327e32da3e61   wazuh/wazuh-odfe:4.2.5                       "/init"                  2 minutes ago   Up About a minute   1514/tcp, 0.0.0.0:1515->1515/tcp, :::1515->1515/tcp, 0.0.0.0:514->514/udp, :::514->514/udp, 1516/tcp, 0.0.0.0:55000->55000/tcp, :::55000->55000/tcp   wazuh-docker-wazuh-master-1
67da0a98a5a6   amazon/opendistro-for-elasticsearch:1.13.2   "/usr/local/bin/dock…"   2 minutes ago   Up 2 minutes        9200/tcp, 9300/tcp, 9600/tcp, 9650/tcp                                            wazuh-docker-elasticsearch-3-1
8874fa896370   amazon/opendistro-for-elasticsearch:1.13.2   "/usr/local/bin/dock…"   2 minutes ago   Up 2 minutes        9200/tcp, 9300/tcp, 9600/tcp, 9650/tcp                                            wazuh-docker-elasticsearch-2-1
Now we have all 7 containers running, with the web service exposed through the Nginx container.
Step 4 – Access the Wazuh Kibana Interface
The Kibana interface can be accessed on port 443, exposed by Nginx. If you have a firewall enabled, allow this port through it.
## For Firewalld
sudo firewall-cmd --add-port=443/tcp --permanent
sudo firewall-cmd --reload
## For UFW
sudo ufw allow 443/tcp
Now access the Kibana web interface using the URL https://IP_address or https://domain_name
Log in using the credentials set for Elasticsearch:
ELASTICSEARCH_USERNAME=admin
ELASTICSEARCH_PASSWORD=SecretPassword
Wazuh will initialize, the Wazuh dashboard will appear with its modules, and you can then create and view dashboards on Kibana.
That is it! You now have the Wazuh server set up for real-time monitoring and analysis, helping you detect threats and act in time. I hope this was helpful.
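As a final check, the stack can also be probed from the host itself. A hedged sketch using the demo credentials mentioned above (change them for production) and skipping certificate verification because the certificates are self-signed:
# Query Elasticsearch cluster health through the published 9200 port
curl -sk -u admin:SecretPassword "https://localhost:9200/_cluster/health?pretty"
# Confirm the Kibana/Wazuh web interface answers behind Nginx
curl -skI https://localhost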
oom-killer · 6 years ago
2019/02/12-17
*Going this far? Microsoft is closing in on AWS: where the intensifying cloud war is headed
https://www.itmedia.co.jp/enterprise/articles/1902/13/news042.html
>What made me go "huh?" was the reason given by Takuya Hirano, President and CEO of Microsoft Japan:
>
>  "The biggest factor is the three-year Extended Security Updates program."
>
>That program continues to provide security updates for three years after end of support, free of charge on Azure and as a paid offering for other environments.
*Learning from an intrusion: what is "Wazuh," the open-source security audit platform GMO Pepabo rolled out to more than 500 servers?
https://www.atmarkit.co.jp/ait/articles/1902/18/news012.html
*Best practices for configuring Amazon RDS for MySQL parameters, part 3: parameters related to security, operational manageability, and connection timeouts
https://aws.amazon.com/jp/blogs/news/best-practices-for-configuring-parameters-for-amazon-rds-for-mysql-part-3-parameters-related-to-security-operational-manageability-and-connectivity-timeout/
*TOTO replaces NTT Com's cloud with AWS
https://tech.nikkeibp.co.jp/atcl/nxt/column/18/00001/01659/
>The company had only just finished moving its business systems from in-house servers to NTT Communications' private cloud in October 2018. Even so, it decided to move to AWS in order to strengthen server redundancy and disaster response for BCP (business continuity planning), and to allow flexible use of AI and IoT-related services.
*On the need for DNS over HTTPS
https://qiita.com/migimigi_/items/1ca81163a79f4e154cdf
>The DNS resolution performed before an HTTPS connection is not encrypted, so an eavesdropper can learn which hostname you are accessing, and a spoofed response can be returned instead of the real one.
>Tampering and spoofing can now be countered, but using the SNI field sent when an HTTPS connection is established it is still possible to:
>
> - eavesdrop and learn which domain's website a user accessed
> - identify the domain a user is trying to reach and forcibly cut the connection
>Because the SNI weakness was hit by an easy-to-understand attack, namely government censorship, servers and clients that support Encrypted SNI (and DNS over HTTPS) will likely spread from here on.
*The challenges and struggles of AbemaTV as it enters its third year
https://speakerdeck.com/miyukki/the-challenge-and-anguish-of-abematv-celebrating-the-third-anniversary
> - On-premises was rejected: it had no advantage other than cost
> - The choice came down to AWS or GCP, where they already had expertise
>Why they chose GCP
> - A feature-rich L7 load balancer
> - GKE/Kubernetes
> - Low cost for network bandwidth
> - Rich log collection and aggregation services such as Stackdriver Logging and BigQuery
*FirstServer to be dissolved: absorption by merger into IDC Frontier announced; existing services will continue
https://www.publickey1.jp/blog/19/idc_3.html
*Storing logs from inside EC2 instances in CloudWatch Logs and an S3 bucket
https://dev.classmethod.jp/cloud/aws/ec2-to-cwl-s3/
>We tried an architecture that keeps application logs from EC2 in CloudWatch Logs for the short term and stores them in an S3 bucket for the long term.
*Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent
https://docs.aws.amazon.com/ja_jp/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
> - Collects more system-level metrics from Amazon EC2 instances than the standard EC2 metrics, including in-guest metrics.
>
> - Collects system-level metrics from on-premises servers.
>
> - Collects logs from Amazon EC2 instances and on-premises servers running Linux or Windows Server.
>
> - Retrieves custom metrics from applications or services using the StatsD and collectd protocols.
>
>As with other CloudWatch metrics, the metrics collected by the CloudWatch agent can be stored and viewed in CloudWatch.
*An 8-character Windows password can be cracked in just two and a half hours
>By combining NVIDIA's latest GPU, the GeForce RTX 2080 Ti, with the open-source password-cracking tool "hashcat," an 8-character Windows password can reportedly be broken in just two and a half hours.
>The brute-force attack Tinker carried out is effective against organizations that use Windows and Active Directory with NTLM authentication. NTLM is an old Windows authentication protocol, and Kerberos has since become the standard, but according to Tinker, NTLM is still used when Windows passwords are stored locally or in the Active Directory domain controller's database.
>Tinker therefore suggests "combining five random words" when choosing a password. For example, a passphrase of five random words such as "phonecoffeesilverrisebaseball" is stronger than a complex 8-character password like "Lm7xR0w". Enabling two-factor authentication is also important for strengthening security.
*Malware can be hidden inside Intel CPUs' SGX security feature, evading security software; proof-of-concept code has already been published
https://gigazine.net/news/20190214-intel-sgx-vulnerability/
*A tool that lets anyone easily install 64-bit ARM Windows 10 on a Raspberry Pi 3
https://gigazine.net/news/20190214-raspberry-pi-run-windows-10/
>WoA Installer is designed to be very simple, with no need to configure drivers or UEFI. The installer merely assists with deployment: the user obtains the 64-bit ARM Windows 10 image from Microsoft's servers, downloads the core package (a set of binaries), and runs the installer.
*[Report] Escaping meaningless alerts: start modern monitoring with Datadog #devsumi #devsumiB
https://dev.classmethod.jp/cloud/developers-summit-2019-15-b-7/
>Alert types
> - Record (urgency: low)
> - Notification (urgency: medium)
> - Page (urgency: high)