#rsyslogd
rodrigocarran · 1 year ago
One day, on the same server running Ubuntu 20.04, I noticed a lot of messages in /var/log/syslog:

CRON[1835247]: (root) CMD (/bin/bash /usr/local/bin/fail2ban_banned_db unban_db)
rsyslogd: action 'action-6-builtin:omfile' resumed (module 'builtin:omfile') [v8.2001.0 try https://www.rsyslog.com/e/2359 ]
rsyslogd: action 'action-6-builtin:omfile' suspended (module 'builtin:omfile'), retry 0.

There…
cloudshiftco · 4 years ago
How to Send Journald Logs From CoreOS to a Remote Logging Server?
https://cloudshift.co/gnu-linux/coreos/how-to-send-journald-logs-from-coreos-to-remote-logging-server/
CoreOS is an operating system that goes beyond the ordinary. When you need to send journald logs to a remote server, it is not entirely straightforward, but it is not too hard either.
You could configure rsyslogd, but on its own it will not pick up the journald logs: the journald daemon records events in a binary format, and the way to read that journal is the journalctl command-line utility. This means we cannot simply point rsyslogd at the journald logs to ship them to the remote logging server (Elasticsearch, Splunk, Graylog, etc…).
Instead, we create a systemd service unit that reads the journal and sends it to the remote server.
# vi /etc/systemd/system/sentjournaldlogs.service

[Unit]
Description=Send Journald Logs to Remote Logging Service
After=systemd-journald.service
Requires=systemd-journald.service

[Service]
ExecStart=/bin/sh -c "journalctl -f | ncat -u RemoteServerIP RemotePort"
TimeoutStartSec=0
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
Replace RemoteServerIP and RemotePort with your remote logging server's IP address and service port.
If the remote logging server is listening on the standard syslog port, that will be 514. If you look closely at the ncat command, the -u argument specifies that the connection will use UDP; if you want to use TCP instead, delete the -u argument.
# systemctl daemon-reload
# systemctl enable sentjournaldlogs.service
# systemctl start sentjournaldlogs.service
# systemctl status sentjournaldlogs.service

We reload the systemd daemon, enable the unit, and start the service. That's all.
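As a quick check (a sketch added here, not part of the original post; the test message text is arbitrary), you can write a test entry into the journal on the CoreOS host and watch for it on the remote side:

# on the CoreOS host: write a test entry into the journal
logger "journald forwarding test"

# on the remote logging server: confirm datagrams arrive on UDP 514
sudo tcpdump -A -n udp port 514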
artmetic-blog · 8 years ago
Missing logs in /var/log/syslog
If you have a problem with missing logs, reinstall the syslog daemon:

apt-get install --reinstall rsyslog
apt-get install inetutils-syslogd

Remember about the permissions on /var/log:

sudo chown syslog:adm /var/log
sudo chmod 0775 /var/log

Likely errors can be found in the file /etc/rsyslog.conf
and in the files
/etc/rsyslog.d/50-default.conf
You can also run the service in debug mode:

/etc/init.d/rsyslog stop
rsyslogd -n -i…
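Before digging through debug output, it can also help to let rsyslog validate its own configuration (a quick check added here, not part of the original note; -N1 is rsyslogd's config-validation level):

rsyslogd -N1 -f /etc/rsyslog.conf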
computingpostcom · 3 years ago
Logs on any Linux system are critical for analyzing and troubleshooting issues related to the system and applications. With the help of tools like Graylog, you can easily ship these logs to a centralized platform for easy visualization. In this guide, we will look at how to configure an Rsyslog centralized log server on Ubuntu 22.04|20.04|18.04 LTS.

On Linux, by default, all log files are located under the /var/log directory. There are several types of log files storing different kinds of messages: cron, kernel, security, events, users, etc. Most of these log files are controlled by the rsyslog service. On recent systems with systemd, some logs are managed by the journald daemon and are written in binary format. These logs are volatile, since they are written to RAM and do not survive a system reboot. They are usually found under /run/log/journal/. Note, however, that journald can also be configured to store log messages permanently by writing them to file.

Configure Rsyslog Log Server on Ubuntu 22.04|20.04|18.04

We're going to configure an rsyslog server as a central log management system. This follows the client-server model, where the rsyslog service listens on a UDP or TCP port; the default port used by rsyslog is 514. On the client system, rsyslog collects logs and ships them to the central rsyslog server over the network via UDP or TCP.

When working with syslog messages, there is a priority/severity level that characterizes each message:

emerg, panic (Emergency): Level 0 – system is unusable (the numerically lowest level)
alert (Alerts): Level 1 – action must be taken immediately
crit (Critical): Level 2 – critical conditions
err (Errors): Level 3 – error conditions
warn (Warnings): Level 4 – warning conditions
notice (Notification): Level 5 – normal but significant condition
info (Information): Level 6 – informational messages
debug (Debugging): Level 7 – debug-level messages (the numerically highest level)

Rsyslog is installed by default on a freshly installed Ubuntu system. If for any reason the package is not installed, you can install it by running:

sudo apt-get update
sudo apt-get install rsyslog

Once installed, check the service to see if it is running:

$ systemctl status rsyslog
● rsyslog.service - System Logging Service
   Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-07-18 11:30:10 UTC; 4 days ago
 Main PID: 903 (rsyslogd)
    Tasks: 4 (limit: 4704)
   CGroup: /system.slice/rsyslog.service
           └─903 /usr/sbin/rsyslogd -n

Configure rsyslog to run in Server Mode

Now configure the rsyslog service to run in server mode:

sudo vim /etc/rsyslog.conf

Uncomment the lines for UDP and TCP port binding:

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")

If you would like to limit access to a specific subnet, IP or domain, add a line like the one below:

$AllowedSender TCP, 127.0.0.1, 192.168.10.0/24, *.example.com

You can add the above line after the input(type="imtcp" port="514") line. Remember to substitute the given values with the correct ones.

Create a new template for receiving remote messages

Let's create a template that instructs the rsyslog server how to store incoming syslog messages. Add the template just before the GLOBAL DIRECTIVES section:

$template remote-incoming-logs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?remote-incoming-logs
& ~

The received logs will be parsed using the template above and stored under the /var/log/ directory.
The file naming follows the %HOSTNAME% and %PROGRAMNAME% variables, i.e. the client hostname and the client program that produced the log message. The & ~ instructs the rsyslog daemon to store the log message only in the specified file. Other variables that can be used include: %syslogseverity%, %syslogfacility%, %timegenerated%, %HOSTNAME%, %syslogtag%, %msg%, %FROMHOST-IP%, %PRI%, %MSGID%, %APP-NAME%, %TIMESTAMP%, %$year%, %$month%, %$day%

Restart the rsyslog service for the changes to take effect:

sudo systemctl restart rsyslog

Confirm that the service is listening on the configured ports:

$ ss -tunelp | grep 514
udp UNCONN 0 0 0.0.0.0:514 0.0.0.0:* users:(("rsyslogd",pid=15220,fd=5)) ino:441816 sk:2
udp UNCONN 0 0 [::]:514 [::]:* users:(("rsyslogd",pid=15220,fd=6)) ino:441817 sk:5 v6only:1
tcp LISTEN 0 25 0.0.0.0:514 0.0.0.0:* users:(("rsyslogd",pid=15220,fd=7)) ino:441820 sk:a
tcp LISTEN 0 25 [::]:514 [::]:* users:(("rsyslogd",pid=15220,fd=8)) ino:441821 sk:11 v6only:1

Configure Rsyslog firewall

If you have the ufw firewall service running, allow the rsyslog ports:

sudo ufw allow 514/tcp
sudo ufw allow 514/udp

Configure Rsyslog as a Client

Once you're done configuring the rsyslog server, head over to your rsyslog client machines and configure them to send logs to the remote rsyslog server.

sudo vim /etc/rsyslog.conf

Allow preservation of the FQDN:

$PreserveFQDN on

Add the remote rsyslog server at the end:

*.* @ip-address-of-rsyslog-server:514

You can also use an FQDN instead of the server IP address:

*.* @fqdn-of-rsyslog-server:514

The above lines send logs over UDP; for TCP use @@ instead of a single @:

*.* @@ip-address-of-rsyslog-server:514
#OR
*.* @@fqdn-of-rsyslog-server:514

Also add the following queue settings so logs are buffered while the rsyslog server is down:

$ActionQueueFileName queue
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount -1

Then restart the rsyslog service:

sudo systemctl restart rsyslog

Also check: Manage Logs with Graylog server on Ubuntu 18.04, and How to Install Graylog 2.4 with Elasticsearch on CentOS 7.
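As a quick end-to-end test (a sketch added here, not part of the original guide; the tag is arbitrary and <client-hostname> is a placeholder, but the file path follows the template defined above):

# on a client
logger -t mytestapp "central logging test"

# on the rsyslog server
ls /var/log/<client-hostname>/
cat /var/log/<client-hostname>/mytestapp.log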
kcpolh · 3 years ago
In the very early stages of system initialization, logging daemons such as syslogd or rsyslogd are not yet up and running. The startup processes then pick up the baton and complete the initialization of the operating system. To avoid losing notable error messages and warnings from this phase of initialization, the kernel contains a ring buffer that it uses as a message store.

A ring buffer is a memory space reserved for messages. It is simple in design, and of a fixed size. Conceptually it can be thought of as a "circular buffer": when it is full, newer messages overwrite the oldest messages. The kernel ring buffer stores information such as the initialization messages of device drivers, messages from hardware, and messages from kernel modules. Because it contains these low-level startup messages, the ring buffer is a good place to start an investigation into hardware errors or other startup issues.

The dmesg command allows you to review the messages that are stored in the ring buffer. By default, you need to use sudo to use dmesg:

sudo dmesg

All of the messages in the ring buffer are displayed in the terminal window. Now we can scroll through the messages looking for items of interest. Obviously, what we need to do is pipe it through less:

sudo dmesg | less

You can use the search function within less to locate and highlight items and terms you're interested in. Start the search function by pressing the forward slash key "/" in less.

Removing the Need for sudo

If you want to avoid having to use sudo each time you use dmesg, you can use this command:

sudo sysctl -w kernel.dmesg_restrict=0

But, be aware: it lets anyone with a user account on your computer use dmesg without having to use sudo.

By default, dmesg will probably be configured to produce colored output. If it isn't, you can tell dmesg to colorize its output using the -L (color) option. To force dmesg to always default to a colorized display, use this command:

sudo dmesg --color=always

By default, dmesg uses a timestamp notation of seconds and nanoseconds since the kernel started. To have this rendered in a more human-friendly format, use the -H (human) option; the output is automatically displayed in less. The timestamps then show the date and time with a minute resolution, and the messages that occurred in each minute are labeled with the seconds and nanoseconds from the start of that minute. If you don't require nanosecond accuracy, but you do want timestamps that are easier to read than the defaults, use the -T (human readable) option.
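Putting these options together (a small sketch added here, not from the original article; the flags are those of util-linux dmesg):

# human-readable timestamps, paged through less
sudo dmesg -T | less

# human-friendly output; dmesg pages and colorizes it itself
sudo dmesg -H

# force color even when piping, and keep it in less
sudo dmesg --color=always | less -R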
certificacaolinux-blog · 4 years ago
The pstree command on Linux (lists processes) [Basic Guide]
The pstree command on Linux shows the whole process tree, from init or systemd down to the last running process. It is similar to the command ps –auxf. It is useful for understanding the process hierarchy on Linux. Besides the options below, pstree can also show the hierarchy belonging to a given user, or to a specific process identified by its PID.

-a: shows the command line used to start each process;
-c: disables merging identical processes at the same level of the hierarchy;
-G: uses VT100 line-drawing characters for the lines between processes instead of plain keyboard characters;
-h: highlights the processes attached to the current terminal;
-p: includes the PID of each process in the listing.

Example:

$ pstree -G -c -p
systemd(1)─┬─acpid(2813)
├─agetty(2674)
├─agetty(2675)
├─atd(2570)
├─auditd(1884)───{auditd}(1885)
├─chronyd(1940)
├─crond(2590)
├─dbus-daemon(1909)
├─dhclient(2211)
├─firewalld(1915)───{firewalld}(2274)
├─nginx(10982)─┬─nginx(10983)
│ ├─nginx(10984)
│ └─nginx(10985)
├─php-fpm(2387)─┬─php-fpm(2412)
│ ├─php-fpm(2413)
│ └─php-fpm(30458)
├─rsyslogd(2542)─┬─{rsyslogd}(2559)
│ └─{rsyslogd}(2744)
├─systemd-journal(1421)
├─systemd-logind(1919)
└─systemd-udevd(1787)

https://youtu.be/LEH2EI9bu5Y
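As mentioned above, pstree also accepts a PID or a user name as an argument; a couple of illustrative invocations (the PID comes from the example output above, the user name is a placeholder):

# only the subtree rooted at PID 2387 (php-fpm in the example above), with PIDs
pstree -p 2387

# only the trees of processes owned by a given user, with command lines
pstree -a www-data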
awsexchage · 5 years ago
How to set up syslog in a Docker Ubuntu container https://ift.tt/2PTJLWg
The Ubuntu image used with Docker (18.04) ships with only a minimal set of packages, so if you want to use syslog, install it separately with the following steps.
Prepare a Dockerfile and a script that starts the rsyslog daemon
⦿ Dockerfile
FROM ubuntu:18.04

RUN apt-get update && \
    apt-get install -y rsyslog

COPY startup.sh /startup.sh
RUN chmod 744 /startup.sh

CMD ["/startup.sh"]
⦿ startup.sh
#!/usr/bin/env bash
service rsyslog start
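One caveat (a note added here, not from the original article): because service rsyslog start backgrounds the daemon, the CMD script exits and the container may stop right away. A variant that keeps rsyslogd in the foreground, assuming the same image:

#!/usr/bin/env bash
# run rsyslogd in the foreground (-n) so the container's PID 1 keeps running
exec rsyslogd -n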
Build & run the Docker image
⦿ Build
$ docker image build -t test .
⦿ Run
$ docker run --privileged test
--privileged is an option that makes all devices on the host accessible to the container; without it, docker run fails with the following permission error and the rsyslog daemon cannot start.
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
rsyslogd: activation of module imklog failed [v8.32.0 try http://www.rsyslog.com/e/2145 ]
Reference sites
Official Docker documentation (English)
Docker documentation Japanese translation project
Docker information (Qiita)
The original article is here:
"How to set up syslog in a Docker Ubuntu container"
December 24, 2019 at 04:00PM
9maf4you · 6 years ago
How to do SaltStack right
SaltStack is a very powerful and flexible configuration management tool. Unfortunately, since there is no agreed-upon convention (policy / RFC, call it what you like), few people think about "clean configuration".
By clean configuration I mean states that are easy to read and easy to maintain in the future. For myself I have settled on a few principles that make that "clean configuration" possible.
The basic approach is to build states the way it is done in Debian repositories and packages, plus a few nuances taken from my own practice.
A state, like a deb package, is a self-contained and transferable entity.
Each state should live in the directory of the entity it belongs to.
Don't shy away from copy-paste if it spares you programming in Jinja. Simple, primitive things don't break.
It is important to split states into layers. Whatever can be installed on every host belongs in base. Everything else should go into other environments (production / stage / service). Overwriting.
Pillars are settings (configs); Grains are metadata (information about the minion, for example the number of hard disks).
Now let's look at each point in more detail.
A state, like a deb package, is a self-contained and transferable entity.
First of all, this matters because the state becomes portable and self-contained. As an example, think of a fat binary in Go or an .app bundle on macOS.
Going the other way, you can run into the typical dependency problem.
Example
You want to bring some exporter onto your host to collect metrics, or a Python script written for python3.6.
Let's go through both examples.
node_exporter is a binary, and to run it we need a service unit and possibly a config, if we don't launch it with command-line arguments alone.
So our state-package will look like this.
/path/to/node-exporter-service-unit:
  file.managed:
    - source: salt://node-exporter-service-unit

packages:
  pkg.installed:
    - pkgs:
      - node_exporter

node-exporter-service-unit:
  service.running:
    - enable: True
    - reload: True
/path/to/my-powerfull-script.py: # Bring the script itself
  file.managed:
    - source: salt://my-powerfull-script.py

packages:
  pkg.installed:
    - pkgs:
      - python3.6 # Yes, we bring python even if you are sure it is already on the host
      - python3-pip
      - systemd

python-click-pkg:
  pip.installed:
    - name: click
    - require:
      - pkg: python3-pip

/path/to/my-powerfull-script.py:
  cron.present:
    - user: root
    - minute: random
    - hour: 2
This state brings its required dependencies with it: python, pip3 and even systemd. That guarantees one hundred percent that the script will run.
It is also easy to read, because everything is right in front of you in a single file.
It is easy to move anywhere in the directory tree, or simply hand over to someone else for reuse.
Each state should live in the directory of the entity it belongs to.
To keep the structure easy to understand and read, each state should be placed in the directory of the entity it belongs to.
This is analogous to deb package repositories: there it is less explicit, but a package carries Group metadata that says which category of package you are installing.
This approach makes it clear which states are bound to a minion when you look at state.show_top.
Example
Suppose we want to configure our rsyslog to ship logs to different destinations: ELK and an rsyslog server.
We also want to collect metrics with node_exporter, and rotate the logs.
The tree will look like this
.
└── base
   ├── logs
   │   ├── logrotate.sls
   │   ├── rsyslog.sls
   │   └── rsyslog-elk.sls
   ├── monitoring
   │   ├── node_exporter.sls
   │   └── redis.sls
   └── system
For ELK we will need a separate module from the repository plus a template; for the second destination, only a template. And all of them require rsyslogd to be installed.
As shown in point 1, every state must bring its own dependencies.
So do we duplicate the dependency? The answer is "YES": in each state we list rsyslog as a dependency.
P.S. There are cases where it is not clear how to classify a particular state. In those cases I put it into system, but I try hard to avoid this.
A bad example is naming directories backend / frontend: that is not a class, it is a role.
Don't shy away from copy-paste.
Many people disagree with this, but as I already mentioned, it makes states easier to detach for moving and reuse.
In the example above we need to configure rsyslog, and both states share a single dependency: the rsyslog package.
It might seem natural to pull it out into a separate state called packages.sls, but that risks breaking things: someone else may decide rsyslog is not needed there, because it ships with the base system, and in their own way they would be right.
packages:
  pkg.installed:
    - pkgs:
      - rsyslog

/etc/rsyslog.d/10-main.conf:
  file.managed:
    - source: salt://10-main.conf
packages:
  pkg.installed:
    - pkgs:
      - rsyslog
      - rsyslog-elasticsearch

/etc/rsyslog.d/30-elasticsearch.conf:
  file.managed:
    - source: salt://30-elasticsearch.conf
It is important to split states into layers. Environments: base / production / stage / service, and overwriting.
Think of the layers like this: base is what is identical for everyone, for example the list of installed packages; production / stage / servicename is the service-specific layer.
In most cases (mostly in small projects) there is no need for any environment other than base.
What if everyone needs node_exporter, but it is started with different arguments (for example the port it listens on, or a debug flag) for each service or in each environment (production / stage)?
Where should such a state go: base, or the service-specific layer? The answer: "both".
When a single team is responsible for infrastructure (the lower layer: providing VMs / hardware and so on), it should deliver a ready-made solution: a machine or VM with a minimal working configuration and some defaults.
The service teams are then free to decide how exactly to reconfigure those defaults: overwriting.
Example
.
├── base
│   ├── logs
│   │   ├── logrotate.sls
│   │   ├── rsyslog
│   │   └── rsyslog-elk.sls
│   ├── monitoring
│   │   ├── node_exporter.sls # Base configuration: node_exporter listens on port 9117
│   │   └── redis.sls
│   └── system
└── service-name
   └── monitoring
       └── node_exporter.sls # Overwritten configuration: node_exporter listens on port 8080 and starts with a debug flag
Pillars are settings, Grains are metadata.
Put very simply: Pillars are settings, Grains are metadata. That is basically all you need to know about them.
Pillars hold things like variables for configs and secrets: passwords / certificates.
Grains are what describes the minion, i.e. metadata: the number of hard disks, the CPU model. At our company it is also the host's membership in a group (hostgroups).
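To see what metadata a minion actually exposes, you can query grains directly on the minion (an illustrative check added here, not from the original post; grain names such as cpu_model and disks may vary between Salt versions):

# all grains of the local minion
salt-call grains.items

# just a few specific grains
salt-call grains.item cpu_model disks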
The worst thing you can do with these entities is to target on them directly inside states.
Bad example
base:
  '*':
    - node_exporter
packages:
  pkg.installed:
    - pkgs:
      - node_exporter

run-exporter:
  cmd.run:
{% if grains['hostgroup'] != 'production-server' %}
    - name: 'node_exporter --arg1 foo'
{% else %}
    - name: 'node_exporter --arg1 foo --arg2 bar'
{% endif %}
Don't use them for targeting inside states; it only increases entropy. Targeting belongs in top.sls.
Good example
base:
  'G:production-server':
    - path.to.node_exporter
  'G:dev-server':
    - other.path.to.node_exporter
packages:
  pkg.installed:
    - pkgs:
      - node_exporter

run-exporter:
  cmd.run:
    - name: 'node_exporter --arg1 foo'
packages:
  pkg.installed:
    - pkgs:
      - node_exporter

run-exporter:
  cmd.run:
    - name: 'node_exporter --arg1 foo --arg2 bar'
rafi1228 · 5 years ago
The best Linux training, where students will perform Linux administration tasks just like in a corporate environment
What you’ll learn
By the end of this course you will be a professional Linux administrator and be able to apply for Linux jobs
You will learn almost 150+ Linux commands
You will gain advanced Linux systems administration skills and have a deep understanding of Linux fundamentals and concepts
You will be able to troubleshoot everyday Linux related issues
You will manage Linux servers in a corporate environment
You will write basic to advanced-level shell scripts
Requirements
Desire to learn Linux and a computer
Description
Linux is the number ONE operating system for the Corporate Enterprise world.
If you want to start your career in Linux and have little or no knowledge of Linux, then I can help. In this course you will learn Linux installation, configuration, administration, troubleshooting, the command line, OS tools and much more…  I have also included a Resume and Interview workshop that will definitely help you get your dream IT job.
In addition to the lectures there will be quizzes, homework and hand-out material, just like a live classroom training.
I have been teaching this exact course in a classroom environment in New York City. Please note that 80% of my students who took this course got a job in Linux within months. Imagine how productive this training can be for those who take my course simply to level up their career.
Following is the list of topics I will cover in this course:
Module 1 – Understanding Linux Concepts
What is Linux?
Everyday use of Linux
Unix vs. Linux
Quiz, Homework and Handouts
  Module 2 – Download, Install and Configure
What is Oracle Virtual Box?
Downloading and Installing Oracle Virtual Box
Creating virtual machine
Linux Distributions
Different way to install Linux
Downloading and Installing Linux (CentOS)
Redhat Linux installation (Optional)
Linux Desktop (GUI)
Virtual Machine Management
Linux vs. Windows
Who Uses Linux?
Quiz, Homework and Handouts
Module 3 – System Access and File System
Accessing Linux system
Download and install Putty
Connect Linux VM via Putty
Important Things to Remember in Linux
Introduction to File System
File system structure description
File system navigation commands
File System Paths
Directory listing overview
Creating Files and Directories
Finding Files and Directories (find, locate)
Changing Password
Wildcard (*, $, ^)
Combining and Splitting Files (cat and cut)
Soft and Hard Links (ln)
Quiz, Homework and Handouts
  Module 4 – Linux Fundamentals
Commands Syntax
File Permissions (chmod)
File Ownership (chown, chgrp)
Getting Help (man, whatis etc.)
TAB completion and up arrow keys
Adding text to file
Pipes ( | )
File Maintenance Commands
File Display Commands
Filters / Text Processing Commands (cut, sort, grep, awk, uniq, wc)
Compare Files (diff, cmp)
Compress and Un-compress files/directories (tar, gzip, gunzip)
Combining and Splitting Files
Linux vs. Windows Commands
Quiz, Homework and Handouts
  Module 5 – System Administration
Linux File Editors (vi text editor)
sed Command
User account management
Switch users and Sudo access
Monitor users
Talking to users (users, wall, write)
Linux Directory Service – Account Authentication
Difference between Active Directory, LDAP, IDM, WinBIND, OpenLDAP etc.
System utility commands (date, uptime, hostname, which, cal, bc etc.)
Processes and schedules (systemctl, ps, top, kill, crontab and at)
Process Management
System Monitoring Commands (top, df, dmesg, iostat 1, netstat, free etc.)
OS Maintenance Commands (shutdown, reboot, halt, init etc.)
Changing System Hostname (hostnamectl)
Finding System Information (uname, cat /etc/redhat-release, cat /etc/*rel*, dmidecode)
System Architecture (arch)
Terminal control keys
Terminal Commands (clear, exit, script)
Recover root Password (single user mode)
SOS Report
Quiz, Homework and Handouts
  Module 6 – Shell Scripting
Linux Kernel
What is a Shell?
Types of Shells
Shell scripting
Basic Shell scripts
If-then scripts
For loop scripts
do-while scripts
Case statement scripts
Aliases
Shell History
Command history
  Module 7 – Networking, Servers and System Updates
Enabling internet in Linux VM
Network Components
Network files and commands (ping, ifconfig, netstat, tcpdump, networking config files)
NIC Information (ethtool)
NIC or Port Bonding
Downloading Files or Apps (wget)
curl and ping Commands
File Transfer Commands
System updates and repositories (rpm and yum)
System Upgrade/Patch Management
Create Local Repository from CD/DVD
Advance Package Management
SSH and Telnet
DNS
Hostname and IP Lookup (nslookup and dig)
NTP
chronyd (Newer version of NTP)
Sendmail
Apache Web Server (http)
Central Logger (rsyslogd)
Securing Linux Machine (OS Hardening)
OpenLDAP Installation
Quiz, Homework and Handouts
  Module 8 – Disk Management and Run Levels
System run levels
Linux Boot Process
Message of the Day
Disk partition (df, fdisk, etc.)
Storage
Logical Volume Management (LVM)
LVM Configuration during Installation
Add Disk and Create Standard Partition
Add Disk and Create LVM Partition
LVM Configuration during Installation
Add Virtual Disk and Create New LVM Partition (pvcreate, vgcreate, lvcreate,)
Extend Disk using LVM
Adding Swap Space
RAID
Quiz, Homework and Handouts
  Module 9 – All About Resume
Resume workshop
Cover Letter
Linux job description or duties
Exposure to Other Technologies
Homework and Handouts
  Module 10 – All About Interview
IT Components
IT Job Statistics
Linux Around Us
Linux Operating System Jobs
IT Management Jobs
Post Resume and What to Expect
Interview workshop
Join Linux Community
200+ interview questions
Homework
Course Recap
Commands We Have Learned
Don’t Give up
Congratulations
Recap – Handouts
You can reach me at [email protected] for any questions or concerns
— Imran Afzal
Who this course is for:
Anyone who wants to start a career in Linux
Anyone who wants to have complete Linux training to get a job in IT
Anyone who wants to advance his/her career
Anyone who wants to master the Linux command line skills
Anyone who wants help and advice with resumes and interviews
Created by Imran Afzal · Last updated 4/2019 · English · English [Auto-generated]
Size: 9.10 GB
   Download Now
https://ift.tt/2QlN9a5.
The post Complete Linux Training Course to Get Your Dream IT Job 2019 appeared first on Free Course Lab.
sysadminworks-blog · 7 years ago
Rsyslogd imuxsock begins to drop messages from pid due to rate limiting
I was setting up filesystem auditing with logger and was getting lots of
rsyslogd-2177: imuxsock begins to drop messages from pid 48885 due to rate-limiting
messages. It appears that rsyslog by default has a limit of 200 messages per 5 seconds.
You can change the limits, or disable rate limiting entirely, by editing (adding) the following lines in /etc/rsyslog.conf:
$SystemLogRateLimitInterval 0
$SystemLogRateLimitBurst 200
In my configuration LimitBurst doesn't mean anything, as I have set LimitInterval to 0, which means rate limiting is disabled.
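For completeness (a note added here, not in the original post), rsyslog has to be restarted after the change for the new limits to take effect; on a systemd-based system:

sudo systemctl restart rsyslog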
lbcybersecurity · 8 years ago
The clang thread sanitizer
Finding threading bugs is hard. The clang thread sanitizer makes it easier. The thread sanitizer instruments the code under test and emits useful information about actions that look suspicious (most importantly data races). This is a great aid in development and for QA. Thread sanitizer is faster than valgrind's helgrind, which makes it applicable to more use cases. Note however that helgrind and thread sanitizer sometimes each detect issues that the other one does not.
This is how thread sanitizer can be activated:
install clang package (the OS package is usually good enough, but if you want to use clang 5.0, you can obtain packages from http://apt.llvm.org/)
export CC=clang // or CC=clang-5.0 for the LLVM packages
export CFLAGS="-g -fsanitize=thread -fno-omit-frame-pointer"
re-run configure (very important, else CFLAGS is not picked up!)
make clean (important, else make does not detect that it needs to build some files due to change of CFLAGS)
make
install as usual
If you came to this page trying to debug a rsyslog problem, we strongly suggest to run your instrumented version interactively. To do so:
stop the rsyslog system service
sudo -i (you usually need root privileges for a typical rsyslogd configuration)
execute /path/to/rsyslogd -n ...other options... Here "/path/to" may not be required and is often just "/sbin" (so "/sbin/rsyslogd"); "other options" is whatever is specified in your OS startup scripts, most often nothing
let rsyslog run; thread sanitizer will spit out messages to stdout/stderr (or nothing if all is well)
press ctl-c to terminate rsyslog run
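Optionally (a sketch added here, not from the original post; log_path is a generic sanitizer runtime option, not anything rsyslog-specific), reports can be written to files instead of stderr, which makes longer interactive runs easier to review:

# in the root shell from the steps above
TSAN_OPTIONS="log_path=/tmp/tsan-rsyslog" /path/to/rsyslogd -n ...other options...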
Note that the thread sanitizer will display some false positives at the start (related to pthread_cancel, maybe localtime_r). The stack trace shall contain exact location info. If it does not, the ASAN_SYMBOLIZER is not correctly set, but usually it "just works".
Doc on thread sanitizer is available here: https://clang.llvm.org/docs/ThreadSanitizer.html
The post The clang thread sanitizer appeared first on Security Boulevard.
lapismoon · 14 years ago
Separating the slapd logs
/var/log/messages was getting buried in slapd logs, to the point where other information had to be dug out before it could be seen, so I decided to give the slapd logs a file of their own.
What is installed is rsyslogd, but its configuration is written in mostly syslogd-compatible notation:
<facility>.<priority>   <action>
In this notation, the facility is roughly the category/source of the message, the priority is something like a log level, and the action specifies where the log is written.
slapd's default facility is local4, so a line like
local4.*    -/var/log/slapd.log
does the job. A wildcard can be used for the priority. The - in the action means asynchronous output; with a high log volume, synchronous output causes frequent waits for the sync and slows things down, so it is apparently a good idea to specify it.
However, openSUSE's rsyslogd rules include one that saves all logs to /var/log/messages:
*.*;mail.none;news.none     -/var/log/messages
Since this rule appears fairly late in the file, even if you drop a new configuration into /etc/rsyslog.d, the rule above is still applied and the logs end up being recorded twice.
In the end I could not find a way to solve this entirely inside /etc/rsyslog.d, so I modified /etc/rsyslog.conf as follows and added an exclusion rule.
Manpage of SYSLOG.CONF
# Log info messages to /var/log/messages.
*.=info;\
mail,news.none          /var/log/messages
With this directive, syslogd logs all messages of priority info to the file /var/log/messages, except that messages from the mail and news facilities are not saved.
What I want to exclude this time is local4, slapd's facility, so the line becomes:
*.*;mail.none;news.none;local4.none     -/var/log/messages
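After editing, restart rsyslog and send a test message on the local4 facility (a sketch added here, not from the original post; the restart command depends on the distro and init system):

sudo systemctl restart rsyslog    # or, on older openSUSE: rcsyslog restart
logger -p local4.info "slapd log routing test"
tail /var/log/slapd.log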
If you know of a way to do this exclusion without touching rsyslog.conf, please let me know.
tikusrumput-blog · 11 years ago
Silly rsyslogd
Today I found that, after upgrading my Ubuntu VPS from 12.04 to 14.04, one of my CPU cores was stuck at 100%. I ran htop and found rsyslogd consuming 100% CPU.
My VPS is OpenVZ, and long story short, I found how to fix it. Just run these commands on your OpenVZ VPS:
service rsyslog stop
# comment out the $ModLoad imklog line so rsyslog stops loading the
# kernel-log input module, which is not available inside an OpenVZ container
sed -i -e 's/^\$ModLoad imklog/#\$ModLoad imklog/g' /etc/rsyslog.conf
service rsyslog start
Quite easy, but don't ask me what the second line means, I'm bad at explaining, lol
eventenrichment · 11 years ago
New Post has been published on Event Enrichment.Org - http://www.eventenrichment.org/event-enrichment-unix-rsyslogd-imuxsock-message-drops/
Event Enrichment : Unix : Rsyslogd : IMUXSOCK MESSAGE DROPS
This is the first in our sample Event Enrichment series!
While enjoying your shift in the quiet solitude of the NOC ;), you suddenly receive an alert from PagerDuty or your NMS. Depending on your level of expertise, you would typically need to open a runbook or Ops Wiki to determine how to handle the event.
Instead, let’s explore a different method, Event Enrichment, using the following syslog entry as our reference.
It all starts with an alert…
Jan 28 12:01:20 zenoss-mon rsyslogd-2177: imuxsock lost 5 messages from pid 1169 due to rate-limiting
This event has some useful information (we are losing messages which could be important, or even critical), but requires user intervention in order to investigate the problem. Now, assume that this same event arrives at the NOC already enriched with the steps required to handle the event. Mean Time to Repair (MTTR) would decrease, given that the information required to properly triage the problem is already included in the initial alert.
Event Enrichments are composed of two components: remediation and escalation. Remediation consists of the steps necessary to rectify the problem, beginning with troubleshooting. The escalation includes the information to pass along as well as the intended recipient of said information (team or individual engineer).
REMEDIATION 
The first step in investigating this alert is to log into the device / server generating the error.
ops@noc-jump:~$ ssh ops@zenoss-mon Last login: Tue Jan 28 17:38:54 2014 from 172.25.230.5 [ops@zenoss-mon ~]#
The next step is to determine the process associated with the PID referenced in the alert.
Example
[ops@zenoss-mon ~]# ps aux | grep 1169 root 1169 0.2 0.1 203312 10368 ? S Jan23 22:48 /usr/sbin/snmpd -LS0-6d -Lf /dev/null -p /var/run/snmpd.pid
From the result of this command we can conclude that snmpd is generating more events than the configured rate-limiting threshold for rsyslogd (default is 200 events in a 5 second interval). On-call systems engineering will need to investigate the cause of this message suppression.
ESCALATION
Escalate to the on-call SysEng team using the PagerDuty SysEng Service (or other alerting mechanism) and include the following information:
Original Event Summary:  [Jan 28 12:01:20 zenoss-mon rsyslogd-2177: imuxsock lost 5 messages from pid 1169 due to rate-limiting]
Verified Findings:
This error is being generated due to an issue with /usr/sbin/snmpd.
[ops@zenoss-mon ~]# ps aux | grep 1169 root 1169 0.2 0.1 203312 10368 ? S Jan23 22:48 /usr/sbin/snmpd -LS0-6d -Lf /dev/null -p /var/run/snmpd.pid
      Adopting the Event Enrichment methodology enhances the standardization and scalability of your NOC and on-call processes.
Now check out the Beginner’s Guide to Event Enrichment to deepen your understanding of the methodology.
myowan · 12 years ago
rsyslog/rsyslog2 troubleshooting
# run interactively
/sbin/rsyslogd -f /etc/rsyslog.conf

# use debug mode
/sbin/rsyslogd -c3 -dn > logfile

# check the config file
/sbin/rsyslogd -f /etc/rsyslog.conf -N1