#systemd-analyse
How to find out how long Linux takes to boot
When we power on our Linux system, the first thing we see is the manufacturer's logo, probably a few messages on the screen, then the GRUB menu, then the splash image showing that the system is loading, and finally... the login screen. Have you ever stopped to wonder exactly how long your Linux system takes to boot? We all have a rough idea of our system's boot time, but we estimate it by eye, haha. Well, you should know that if, like the vast majority of distributions, your system uses systemd, you can measure exactly how long it took to boot. And we don't stop at timing the total: we can also find out how long each component and service took. Let's see how to extract this data.
Usually this is mere curiosity, but clearly, looking at the timings can help identify a potential problem. Systemd offers the systemd-analyze tool, which reports how many services ran during the last boot and how long they took to load. Run the following:

systemd-analyze

A real example of its output:

sergio@sololinux ~ $ systemd-analyze
Startup finished in 6.305s (kernel) + 18.296s (userspace) = 24.601s
sergio@sololinux ~ $

As you can see in the result above, the system took almost 25 seconds to reach the screen where you enter your username and password. Not bad at all. But we can break the boot time down further, per component and service. Now run:

systemd-analyze blame

Another real example of its output:

sergio@sololinux ~ $ systemd-analyze blame
9.487s dev-sda1.device
7.950s lvm2-monitor.service
7.855s systemd-tmpfiles-setup-dev.service
4.523s accounts-daemon.service
4.510s gpu-manager.service
3.466s NetworkManager.service
2.814s apt-daily.service
2.741s ModemManager.service
2.611s thermald.service
2.602s loadcpufreq.service
2.185s keyboard-setup.service
1.743s systemd-modules-load.service
1.431s systemd-journald.service
1.323s plymouth-quit-wait.service
1.303s alsa-restore.service
1.291s apport.service
1.247s ntp.service
1.241s irqbalance.service
1.133s wpa_supplicant.service
916ms systemd-remount-fs.service
875ms avahi-daemon.service
869ms sys-kernel-debug.mount
868ms dev-hugepages.mount
856ms systemd-logind.service
etc.

If you want to disable a non-essential service that takes too long, for example the NetworkManager service (don't worry, you will still have internet after boot), run the following command:

sudo systemctl disable NetworkManager.service

If you want to enable it again:

sudo systemctl enable NetworkManager.service

Do not disable services without knowing exactly what they are for; you may regret it. I hope this article is useful to you. You can help us keep the server running with a donation (PayPal), or simply by sharing our articles on your website, blog, forum or social networks.
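The post stops at blame, but two more subcommands of the same tool are worth knowing (standard systemd-analyze features, just not covered above). Keep in mind that blame times can overlap because units start in parallel, so critical-chain is often a better guide to what actually delays the login screen:

# Show the chain of units on the boot critical path;
# the time after "+" is how long that unit itself took to start
systemd-analyze critical-chain

# Render the whole boot sequence as an SVG timeline
systemd-analyze plot > boot.svg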
Graylog is an open-source log aggregation and management tool that can be used to store and analyse collected logs and to send alerts based on them. Using Elasticsearch and MongoDB, Graylog can analyse both structured and unstructured logs from a variety of sources: Windows systems, Linux systems, applications, micro-services, and so on. Graylog makes it easy to analyse and monitor all of these systems and applications from a single host.

Graylog has the following components:
Graylog server
MongoDB
Elasticsearch

Let us quickly jump into the installation of the Graylog server on an Ubuntu 22.04|20.04 host. We shall then configure SSL using Let’sEncrypt; to achieve this, we will install Nginx to serve as a reverse proxy on our system.

Similar articles: How To Forward Logs to Grafana Loki using Promtail

Setup pre-requisites
Before you install, make sure your host meets the following minimum requirements:
4 CPU cores
8 GB RAM
SSD disk space with high IOPS for Elasticsearch log storage
Ubuntu 22.04|20.04 LTS installed and updated, with all packages upgraded

With the above conditions met, let us begin the installation process.

Step 1 – Install Java on Ubuntu 22.04|20.04
Before installing Java, update and upgrade the system:

sudo apt update && sudo apt -y full-upgrade

We highly recommend you reboot after the upgrade:

[ -f /var/run/reboot-required ] && sudo reboot -f

Java 8 or above is required for the Graylog installation. In this post we use OpenJDK 11:

sudo apt update
sudo apt install vim apt-transport-https openjdk-11-jre-headless uuid-runtime pwgen curl dirmngr

You can verify the Java version you just installed with the java -version command:

$ java -version
openjdk version "11.0.15" 2022-04-19
OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.20.04.1)
OpenJDK 64-Bit Server VM (build 11.0.15+10-Ubuntu-0ubuntu0.20.04.1, mixed mode, sharing)

Step 2 – Install Elasticsearch on Ubuntu 22.04|20.04
Elasticsearch is the component that stores and analyses incoming logs from external sources, exposed through a web-based RESTful API.

Download and install the Elasticsearch GPG signing key:

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/elastic.gpg

Add the Elasticsearch repository to your sources list:

echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

Install Elasticsearch:

sudo apt update
sudo apt install elasticsearch-oss -y

Configure the cluster name for Graylog:

sudo vim /etc/elasticsearch/elasticsearch.yml

Set the cluster name to graylog:

cluster.name: graylog

Add the following line in the same file:

action.auto_create_index: false

Reload the systemd daemon, then start and enable the Elasticsearch service:
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch
sudo systemctl enable elasticsearch

You can check the status of the service with:

$ systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-11-08 12:36:10 UTC; 14s ago
Docs: http://www.elastic.co
Main PID: 1352139 (java)
Tasks: 15 (limit: 4582)
Memory: 1.1G
CGroup: /system.slice/elasticsearch.service
└─1352139 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.>
Nov 08 12:36:10 graylog.computingpost.com systemd[1]: Started Elasticsearch.

Elasticsearch listens on port 9200, which can be verified with curl:

curl -X GET http://localhost:9200

You should see your cluster name in the output:

$ curl -X GET http://localhost:9200
{
  "name" : "ubuntu",
  "cluster_name" : "graylog",
  "cluster_uuid" : "RsPmdLmDQUmNGKC-E4JPmQ",
  "version" : {
    "number" : "7.10.2",
    "build_flavor" : "oss",
    "build_type" : "deb",
    "build_hash" : "747e1cc71def077253878a59143c1f785afa92b9",
    "build_date" : "2021-01-13T00:42:12.435326Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Step 3 – Install MongoDB on Ubuntu 22.04|20.04
Download and install MongoDB from Ubuntu's base repository:

sudo apt update
sudo apt install mongodb-server -y

Start and enable MongoDB:

sudo systemctl start mongodb
sudo systemctl enable mongodb

$ systemctl status mongodb
● mongodb.service - An object/document-oriented database
Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-11-08 12:45:21 UTC; 1s ago
Docs: man:mongod(1)
Main PID: 1352931 (mongod)
Tasks: 3 (limit: 4582)
Memory: 27.9M
CGroup: /system.slice/mongodb.service
└─1352931 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf
Nov 08 12:45:21 graylog.computingpost.com systemd[1]: Started An object/document-oriented database.

Step 4 – Install Graylog Server on Ubuntu 22.04|20.04
Download and configure the Graylog repository:

wget https://packages.graylog2.org/repo/packages/graylog-4.3-repository_latest.deb
sudo dpkg -i graylog-4.3-repository_latest.deb

Install the Graylog server:

sudo apt update
sudo apt install graylog-server

Generate a secret used to secure user passwords with the pwgen command:

pwgen -N 1 -s 96

The output should look like:

FFP3LhcsuSTMgfRvOx0JPcpDomJtrxovlSrbfMBG19owc13T8PZbYnH0nxyIfrTb0ANwCfH98uC8LPKFb6ZEAi55CvuZ2Aum

Edit the Graylog config file to add the secret we just created:

sudo vim /etc/graylog/server/server.conf

Locate the password_secret = line and append the secret created above:

password_secret = FFP3LhcsuSTMgfRvOx0JPcpDomJtrxovlSrbfMBG19owc13T8PZbYnH0nxyIfrTb0ANwCfH98uC8LPKFb6ZEAi55CvuZ2Aum

If you would like to reach the Graylog interface via the server's IP address and a port, set http_bind_address to the public host name or a public IP address of the machine you can connect to:

$ sudo vim /etc/graylog/server/server.conf
http_bind_address = 0.0.0.0:9000

The next step is to create a SHA-256 hash of the administrator password. This is the password you will need to log in to the web interface:

$ echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1
Linux in 2020: 27.8 million lines of code in the kernel, 1.3 million in systemd
The Linux kernel has around 27.8 million lines of code in its Git repository, up from 26.1 million a year ago, while systemd now has nearly 1.3 million lines of code, according to GitHub stats analysed by Michael Larabel at Phoronix.
There were nearly 75,000 code commits to the kernel during 2019, which is actually slightly down from 2018 (80,000 commits) and the lowest number since 2013. The…
Pyruse 1.0: an alternative to Fail2ban and Epylog #ChrisTec
Every administrator of UNIX-like systems (Linux, etc.) knows fail2ban. Somewhat less known is Epylog. The inspired developer Yetl offers a very interesting free alternative to both tools. Fail2ban continuously scans the system logs looking for connection errors or attack attempts, and reacts by closing the attacked port to the attacking IP with iptables. Epylog continuously scans the system logs to analyse them, transcribe them into more human-readable data and email reports to the administrator. Both tools are indispensable to an administrator, but they have in common that they constantly poll the logs, which wastes resources. Despite the invaluable services they render, they also share the fact that their configuration options could be improved. Pyruse, also written in Python (a language that suits the job, and suits it so well :-) proceeds differently. Pyruse is, in effect, subscribed to the systemd journal, and simply pushes each entry it reads from the journal through a workflow of filters and actions. The filters are based on Perl-compatible regular expressions (PCRE), with capture of matched values. Very interestingly, the filters can also detect successful logins and whitelist the corresponding IPs for a set period of time. Pyruse is very easy to use, its configuration options are rich, and, being built on a modular architecture, it is very easily extensible. The author gladly welcomes contributions. A fine piece of software in a fine project. More information. Pyruse source code.
Category: system administration tools, security, Python
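Pyruse's real configuration is not shown in the post, but the event-driven idea (consuming journal entries as they arrive instead of polling log files) is easy to demonstrate. A rough shell analogy, with the unit name and the "ban" action as placeholders, not Pyruse's actual workflow:

# Follow the systemd journal as a stream (no polling) and react to
# failed SSH logins. The unit is "ssh" on Debian/Ubuntu, "sshd" elsewhere.
journalctl -f -u ssh -o cat | while read -r line; do
  if echo "$line" | grep -qE 'Failed password .* from [0-9.]+'; then
    ip=$(echo "$line" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -1)
    echo "would ban $ip here, e.g. with an iptables or nftables rule"
  fi
done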
SSTIC 2017 Wrap-Up Day #1
I’m in Rennes, France to attend my very first edition of the SSTIC conference. SSTIC is an event organised in France, by and for French people. The acronym means “Symposium sur la sécurité des technologies de l’information et des communications“. The event has a good reputation for its content, but is also known for how hard it is to get a ticket: usually, all of them sell out in a few minutes, spread across 3 waves. I was lucky to get one this year. So, here is my wrap-up! This is already the fifteenth edition, with a new venue hosting 600 security people. A live stream is also available, and a few hundred people are following the talks remotely.
The first presentation was given by Octave Klaba, the CEO of the OVH operator. OVH is a key player on the Internet, with many services; it is known via the BGP AS16276. Octave started with a complete overview of the backbone that he built from scratch a few years ago. Today it has a capacity of 11 Tbps and handles 2,500 BGP sessions. It’s impressive how well this CEO knows his “baby”. The next part of the talk was a deep description of “VAC”, the solution they deployed to handle DDoS attacks. For information, OVH handles ~1,200 attacks per day! They usually don’t communicate about them, except when customers are affected (Octave gave Mirai as an example). They chose the name “VAC” for “Vacuum Cleaner“: the goal is to clean the traffic as early as possible, before it enters the backbone. An interesting fact about anti-DDoS solutions: it is critical to detect attacks as soon as possible. Why? If your solution detects a DDoS within x seconds, attackers will simply launch attacks shorter than x seconds. Evil! The “VAC” can be seen as a big proxy and is based on multiple components that can filter specific types of protocols/attacks. Interesting: to better handle some DDoS attacks, the OVH teams reverse-engineered some gaming protocols to understand how they work. Octave described in deep detail how the solution has been implemented and how it is used today… for any customer, as a free service! It was really crazy to get so many technical details from a… CEO! Respect!
The second talk was “L’administration en silo” by Aurélien Bordes, focused on best practices for Windows service administration. Aurélien started with a fact: when you ask a company how its infrastructure is organised, they usually talk about users, data, computers, partners but… they don’t mention administrative accounts. From where, and how, are all the resources managed? Basically, there are three categories of assets, which can be classified by colour or tier:
Red: resources for admins
Yellow: core business
Green: computers
The most difficult layer to protect is… the yellow one. After some facts about the security of AD infrastructures, Aurélien explained how to improve the Kerberos protocol. The solution is based on FAST, a framework for strengthening Kerberos. Another interesting tool developed by Aurélien: the Terminal Server Security Auditor. An interesting presentation, but my conclusion is that it increases the complexity of Kerberos, which is already not easy to master.
During the previous talk, Aurélien showed a slide with potential privilege-escalation issues in an Active Directory environment. One of them was the WSUS server. That was the topic of the research presented by Romain Coltel and Yves Le Provost. During a pentest engagement, they compromised a network “A”, but they also discovered a network “B” completely disconnected from “A”. Completely? Not really: there were WSUS servers communicating between them. After a quick recap of the WSUS server and its features, they explained how they compromised the second network “B” via the WSUS server. Such a server is based on three major components:
A Windows service to sync
A web service to talk to clients (send configs & push packages)
A big database
This database is complex and contains all the data related to patches and systems. Attacking a WSUS server is not new: in 2015, a BlackHat presentation demonstrated how to perform a man-in-the-middle attack against a WSUS server. But this time, Romain and Yves took another approach: they wrote a tool to inject fake updates directly into the database. The important step is to use the stored procedures, so as not to break the database's integrity. Note that the tool takes a “social engineering” approach: fake information about the malicious patch can be injected too, to entice the admin into allowing the installation of the fake patch on the target system(s). To be deployed, the “patch” must be a binary signed by Microsoft. Good news (for the attackers): plenty of signed tools can be abused to perform malicious tasks. They used the tool psexec for the demo:
psexec -> cmd.exe -> net user /add
Since the DB is synced between the different WSUS servers, it was possible to compromise network “B”. The tool they developed to inject data into the WSUS database is called WUSpendu. A good recommendation is to put WSUS servers in the “red” zone (see above) and to consider them as critical assets. Very interesting presentation!
After two presentations focused on the Windows world, back to the UNIX world, and more precisely Linux, with the init system called systemd. Since it was adopted by the major Linux distributions, systemd has been at the centre of huge debates between the pro-init and pro-systemd camps. Same for me: I found it not easy to use, it introduces complexity, etc… But the presentation gave nice tips that can be used to improve the security of daemons started via systemd. The first, basic tip is to not use the root account, but many new features are really interesting:
seccomp-bpf can be used to disable access to certain syscalls (like chroot() or obsolete syscalls)
capabilities can be disabled (e.g. CAP_NET_BIND_SERVICE)
mount namespaces (e.g. /etc/secrets is not visible to the service)
Nice quick tips that can be easily implemented; see the sketch below.
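As an illustration, here is a minimal sketch of my own (the talk did not show this exact unit; mydaemon.service, its user and its paths are hypothetical), using standard systemd directives in a drop-in override:

# Create a drop-in override for a hypothetical mydaemon.service
sudo systemctl edit mydaemon.service

# In the editor, add:
[Service]
# do not run the daemon as root
User=mydaemon
# seccomp-bpf: allow the usual service syscalls, explicitly deny obsolete ones
SystemCallFilter=@system-service
SystemCallFilter=~@obsolete
# an empty set drops all capabilities (e.g. CAP_NET_BIND_SERVICE)
CapabilityBoundingSet=
# mount namespace: this path becomes invisible to the service
InaccessiblePaths=/etc/secrets

# Apply the change
sudo systemctl restart mydaemon.service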
The next talk was about Landlock, by Michael Salaün. The idea is to build a sandbox with unprivileged access rights and run your application in this restricted space. The perfect example used by Michael is a multimedia player: this kind of application includes many parsers and is, therefore, a good target for attacks or bugs. The recommended solution is, as always, to write good (read: safe) code, and the sandbox must be seen as an extra security control. Michael explained how the sandbox works and how to implement it. In the media-player example, write access to the filesystem is denied, except when the file is a pipe.
After lunch, a set of talks was scheduled around the same topic: code analysis. It started with “Static Analysis and Run-time Assertion Checking” by Dillon Pariente and Julien Signoles, who presented Frama-C, a framework for C code analysis.
Then Philippe Biondi, Raphaël Rigo, Sarah Zennou and Xavier Mehrenberger presented BinCAT (“Binary Code Analysis Tool”). It can analyse binaries (x86 only) but never executes the code: just by tracking memory, registers and other state, it can deduce a program's behaviour. BinCAT is integrated into IDA. They performed a nice demo against a keygen tool. BinCAT is available here and can also be run in a Docker container. The last talk in this set was “Désobfuscation binaire: Reconstruction de fonctions virtualisées” by Jonathan Salwan, Marie-Laure Potet and Sébastien Bardin. The principle of this binary protection is to make a binary harder to analyse/decode without changing its original capabilities. This is not the same as a packer: here, some kind of virtualisation emulates a proprietary bytecode. These three presentations represented a huge amount of work, but were too specialised for me.
Then, Geoffroy Couprie and Pierre Chifflier presented “Writing parsers like it is 2017“. Writing parsers is hard; just don't try to write your own parser, you'll probably fail. Yet parsers are present in many applications, and they are hard to maintain (old, handwritten code that is hard to test and refactor). Parser bugs can have huge security impacts, just remember the Cloudbleed bug! The proposed solution is to replace classic parsers with something stronger. The criteria are: memory safety, the ability to call and be called by C code and, if possible, no garbage collection. Rust is a language that fits, and nom is a Rust parser-combinator library built for exactly this. To test the approach, it has been used in projects like the VLC player and the Suricata IDS. Suricata was a good candidate, with many challenges (safety, performance); the candidate protocol was TLS. Regarding VLC and parsers, the recent vulnerability affecting its subtitle parser is a perfect example of why parsers are critical.
The last talk of the day was about caradoc. Developed at ANSSI (the French agency), it's a toolbox able to decode PDF files. The goal is not to extract and analyse potentially malicious streams from PDF files; like the previous talk, the main idea is to avoid parsing issues. After reviewing the basics of the PDF file format, Guillaume Endignoux and Olivier Levillain gave two demos. The first one opened the same PDF file in two readers (Acrobat and Fox-It): the displayed content was not the same, which could be used in phishing campaigns or to defeat an analyst. The second demo was a malicious PDF file that crashed Fox-It but not Adobe (a denial of service). Nice tool.
The day ended with a “rump” session (called lightning talks at other conferences). I'm really happy with the content of the first day. Stay tuned for more details tomorrow! If you want to follow the talks live, the streaming is available here.
[The post SSTIC 2017 Wrap-Up Day #1 was first published on /dev/random]