too tired to keep this personal log updated, but we gained a new member. a new kid.
dunno what to call them rn so we're calling them chibi for now. a literal kid, messy when talking, but for some reason they have full memory access. perhaps they've been here longer than we thought? we don't know tho. they also front a lot through sudden switches, tho then again we're all pretty drained these days. chibi has been nice and agreeable anyway when we talk about the fronting, kinda cute. they just want to help and kinda try to defend us in some way too.
chibi is byelingual, barely able to say a full sentence without braining too hard. very honest. nice kid overall, even tho i worry about how our system will work in public now.
hey, i am new to OpenSIPS/OpenSER and just finished writing my first config. OpenSIPS is a GPL implementation of a multi-functional SIP server that targets a high-level technical solution (performance, security). The goal of the implementation is to load balance requests from my SIP provider to a farm of 10 Asterisk servers for media processing. The config starts by loading the required modules:

loadmodule "textops.so"
loadmodule "maxfwd.so"
loadmodule "siputils.so"
loadmodule "xlog.so"

We use the xlog() function for logging the processing details on the screen. The xlog module provides the possibility to print user-formatted log or debug messages from OpenSIPS scripts, similar to the printf function: a C-style printf specifier is replaced with a part of the SIP request or other variables from the system. Pseudo-variables can be used with the following modules of OpenSER: avpops (function avp_printf()) and xlog (functions xlog() and xdbg()). The pseudo-variable marker is the character '$'; the predefined pseudo-variables are listed in alphabetical order in the module documentation.

We need the mysql module to store user locations in a database: by setting the usrloc parameter db_mode to 2 we tell OpenSER to use MySQL for storing contact information (and not memory). While testing, we run the OpenSER server in debug mode as a terminal process.

In Kamailio, we often wish to add headers, view the contents of headers, and perform an action on or rewrite headers (with the disclaimer that rewriting Vias goes beyond the purview of a SIP proxy). The SIP RFC allows multiple SIP headers to have the same name; for example, it is very common to have many Via headers present in a request.

Two management interfaces are useful while debugging:

# opensips-cli -x mi subscribers_list
If no parameter is specified, the command returns information about all events and their subscribers. If an event is specified, only the external applications subscribed for that event are returned; if the socket is also specified, only one subscriber's information is returned. Parameters: event (optional), socket (optional) - external application socket, pid (optional) - Unix pid (validated by OpenSIPS).

The log_level MI command gets or sets the logging level of one or all OpenSIPS processes. If no argument is passed, it prints a table with the current logging levels of all processes. If a logging level (-3..4, see the meaning of the values) is given, it is set for each process; if a pid is also given, the logging level changes only for that process.

The tcp_list command lists all ongoing TCP/TLS connections from OpenSIPS. Output: an array with one object per connection, with the following attributes: ID, type, state, source, destination, lifetime, alias port.
Is anyone’s allergies worse this year? I used to just be snotty and sneezy but now I physically cannot breathe cause of my sinuses being clogged. Not with a cold or anything, it’s just xlogged.
It’s literally physically unbearable being myself right now
This feature optimizes the WalInsertLock mechanism by using log sequence numbers (LSNs) and log record counts (LRCs) to record the copy progress of each backend. The backend can directly copy logs to the WalBuffer without contending for the WalInsertLock. In addition, a dedicated WALWriter thread is used to write logs, and the backend thread does not need to ensure the Xlog flushing.
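The reserve-then-copy idea can be illustrated with a toy sketch (this is an analogy in Python, not openGauss code; names like wal_insert are hypothetical): each backend takes only a tiny critical section to reserve a byte range at the current insert LSN, then copies its record into the shared buffer concurrently with the other backends instead of serializing the whole copy behind one global insert lock.

```python
# Toy sketch (not openGauss source) of the "reserve, then copy" idea behind
# the WalInsertLock optimization: reservation is a cheap atomic-style bump of
# the insert LSN; the expensive memcpy runs without any global lock.
import threading

BUF_SIZE = 1 << 16
wal_buffer = bytearray(BUF_SIZE)
reserve_lock = threading.Lock()   # stands in for an atomic fetch-add on the LSN
next_lsn = 0

def wal_insert(record: bytes) -> int:
    """Reserve space for `record`, then copy it in outside the lock."""
    global next_lsn
    with reserve_lock:            # tiny critical section: just advance the LSN
        start = next_lsn
        next_lsn += len(record)
    # the copy proceeds concurrently with other backends' copies
    wal_buffer[start:start + len(record)] = record
    return start

records = [f"rec{i:03d};".encode() for i in range(100)]
threads = [threading.Thread(target=wal_insert, args=(r,)) for r in records]
for t in threads:
    t.start()
for t in threads:
    t.join()

# every record landed exactly once, each at its reserved offset
written = bytes(wal_buffer[:next_lsn]).decode()
assert sorted(written.split(";")[:-1]) == sorted(f"rec{i:03d}" for i in range(100))
```

In the real implementation the per-backend copy progress would additionally be tracked with LSNs and LRCs so the dedicated WALWriter thread knows how far the buffer is valid; the sketch only shows the contention-splitting idea.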
The time for a standby node to replay can be delayed.
Benefits
By default, the standby server replays Xlog records from the primary server as soon as possible. This feature allows you to delay replay on a standby node. You can then query a copy of the data as it was some time ago, which helps correct errors such as misoperations.
Description
The GUC parameter recovery_min_apply_delay can be used to set the delay time so that a standby server can replay Xlog records from the primary server after a delay time.
Value range: an integer ranging from 0 to INT_MAX. The unit is ms.
Default value: 0 (no delay)
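For illustration, a hypothetical standby postgresql.conf entry that delays replay by five minutes (the value is an integer in milliseconds, per the range above) might look like:

```ini
# standby node only; recovery_min_apply_delay is invalid on the primary
recovery_min_apply_delay = 300000    # 300000 ms = 5 minutes
```

With this setting, a misoperation on the primary leaves a five-minute window during which the standby still holds the pre-error data.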
Enhancements
None
Constraints
The recovery_min_apply_delay parameter is invalid on the primary node. It must be set on the standby node to be delayed.
The delay time is calculated based on the timestamp of transaction commit on the primary server and the current time on the standby server. Therefore, ensure that the clocks of the primary and standby servers are the same.
Operations without transactions are not delayed.
After the primary/standby switchover, if the original primary server needs to be delayed, you need to manually set this parameter.
When synchronous_commit is set to remote_apply, synchronous replication is affected by the delay. Each commit message is returned only after the replay on the standby server is complete.
Using this feature also delays hot_standby_feedback, which may cause the primary server to bloat, so be careful when using both.
If a DDL operation that takes an AccessExclusive lock (such as DROP or TRUNCATE) is performed on the primary server, queries against that object on the standby server return only after the lock is released during the delayed replay of the record on the standby.
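The apply rule implied by the constraints above can be sketched as follows (a hypothetical helper, not openGauss source): a commit record becomes applicable once the standby's clock reaches the primary's commit timestamp plus recovery_min_apply_delay, which is exactly why clock skew between the two servers distorts the effective delay.

```python
# Minimal sketch of the delayed-replay rule: a commit record may be applied
# once standby_now >= primary_commit_ts + delay. The delay is computed from
# the primary's commit timestamp and the standby's own clock, so the two
# clocks must be synchronized.
from datetime import datetime, timedelta

def can_apply(commit_ts: datetime, standby_now: datetime, delay_ms: int) -> bool:
    return standby_now >= commit_ts + timedelta(milliseconds=delay_ms)

commit = datetime(2024, 1, 1, 12, 0, 0)
delay_ms = 5 * 60 * 1000                      # e.g. recovery_min_apply_delay = 300000

assert not can_apply(commit, datetime(2024, 1, 1, 12, 4, 59), delay_ms)
assert can_apply(commit, datetime(2024, 1, 1, 12, 5, 0), delay_ms)
```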
PostgreSQL is an open-source relational database that offers dependability and resilience. It extends the SQL language with many features that store and scale the most complicated workloads. The database system dates back to 1986, when it was introduced as part of the Postgres project at the University of California, Berkeley. Since then, over 30 years of active development on the core platform have earned it its current reputation for reliability, data integrity, a robust feature set, extensibility, etc.
One of the amazing features offered by PostgreSQL is data replication. Streaming replication, a feature added in PostgreSQL 9.0, offers the capability to ship and apply WAL XLOG records. It provides several functions, including:
Log-shipping – XLOG records generated on the primary are continuously shipped to the standby via the network.
Continuous recovery – shipped XLOG records are replayed as soon as possible, without waiting until an XLOG file has been filled.
Connection settings and authentication – users can configure an SR connection much like a normal connection, for example keepalive settings and pg_hba.conf entries.
Progress reporting – the primary and standby report the progress of log-shipping in the PS display.
Multiple standbys – multiple standbys can establish a connection to the primary for SR.
Graceful shutdown – when a shutdown is executed, the primary waits until the XLOG records up to the shutdown checkpoint record have been sent to the standby.
Activation – the standby can keep waiting for activation as long as required by the user.
This guide provides an in-depth illustration of how to configure the PostgreSQL Replication on Rocky Linux 8|AlmaLinux 8.
Step 1 – Install PostgreSQL on All Rocky Linux 8|AlmaLinux 8 Nodes
For streaming replication to occur, you need to have PostgreSQL installed on all the nodes. This can be done using the steps below:
List the available versions available in the default Rocky Linux 8|AlmaLinux 8 repositories.
$ dnf module list postgresql
....
Name Stream Profiles Summary
postgresql 9.6 client, server [d] PostgreSQL server and client module
postgresql 10 [d] client, server [d] PostgreSQL server and client module
postgresql 12 client, server [d] PostgreSQL server and client module
postgresql 13 client, server [d] PostgreSQL server and client module
Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
For PostgreSQL 13
From the provided list, you can install a preferred version, say PostgreSQL 13.
sudo dnf -qy module enable postgresql:13
sudo dnf install postgresql-server postgresql-contrib
For PostgreSQL 14
For this guide, we will use PostgreSQL 14, installed by adding an extra repository.
sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
To avoid conflicts disable the default repository,
sudo dnf -qy module disable postgresql
Install PostgreSQL 14 on Rocky Linux 8|AlmaLinux 8 with the command:
sudo dnf install -y postgresql14-server postgresql14-contrib
Once installed, initialize your PostgreSQL database. For the modular PostgreSQL 13 install:
sudo postgresql-setup --initdb
##OR, for the PGDG PostgreSQL 14 install:
sudo /usr/pgsql-14/bin/postgresql-14-setup initdb
Start and enable the service with the command:
sudo systemctl start postgresql-14
sudo systemctl enable postgresql-14
Verify that the service is running:
$ systemctl status postgresql-14
● postgresql-14.service - PostgreSQL 14 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-14.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2022-06-16 07:35:42 EDT; 15s ago
Docs: https://www.postgresql.org/docs/14/static/
Main PID: 6914 (postmaster)
Tasks: 8 (limit: 23544)
Memory: 16.7M
CGroup: /system.slice/postgresql-14.service
├─6914 /usr/pgsql-14/bin/postmaster -D /var/lib/pgsql/14/data/
├─6916 postgres: logger
├─6918 postgres: checkpointer
├─6919 postgres: background writer
├─6920 postgres: walwriter
├─6921 postgres: autovacuum launcher
├─6922 postgres: stats collector
└─6923 postgres: logical replication launcher
Allow the service through the firewall.
sudo firewall-cmd --add-port=5432/tcp --permanent
sudo firewall-cmd --reload
Step 2 – Configure the PostgreSQL Primary Host
Now proceed and make the below configuration to the primary host.
sudo vim /var/lib/pgsql/14/data/postgresql.conf
Make the below adjustments.
# line 60 : uncomment and change
listen_addresses = '*'
# line 205 : uncomment
wal_level = replica
# line 210 : uncomment
synchronous_commit = on
# line 298 : uncomment (max number of concurrent connections from streaming clients)
max_wal_senders = 10
# line 302 : uncomment and change (minimum amount of past WAL to keep for standbys; wal_keep_segments was replaced by wal_keep_size in PostgreSQL 13)
wal_keep_size = 160MB
# line 312 : uncomment and change
synchronous_standby_names = '*'
Also, open the below file for editing.
sudo vim /var/lib/pgsql/14/data/pg_hba.conf
Make the below changes.
# at the end of the file : comment out the existing replication lines and add the new entries below
# host replication [replication user] [allowed network] [authentication method]
#local replication all peer
#host replication all 127.0.0.1/32 scram-sha-256
#host replication all ::1/128 scram-sha-256
host replication rep_user 192.168.205.2/32 md5
host replication rep_user 192.168.205.3/32 md5
Remember to replace the IP Addresses for the primary and replica hosts. Save the file and restart the service:
sudo systemctl restart postgresql-14
Next, create a replication user.
sudo su - postgres
createuser --replication -P rep_user
Provide a desired password and exit.
exit
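As an alternative to the createuser wrapper above, the same role can be created with plain SQL (rep_user and the password here are placeholders; this is a standard-PostgreSQL sketch, not taken from the original guide):

```sql
-- run as the postgres superuser; equivalent to: createuser --replication -P rep_user
CREATE ROLE rep_user WITH REPLICATION LOGIN PASSWORD 'choose-a-strong-password';
```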
Step 3 – Configure the PostgreSQL Replica Host
Now proceed and make configurations to the replica host. Begin by stopping the PostgreSQL service:
sudo systemctl stop postgresql-14
Remove the existing data.
sudo rm -rf /var/lib/pgsql/14/data/*
Now obtain a backup from the primary host (192.168.205.2 in this case):
sudo su - postgres
pg_basebackup -R -h 192.168.205.2 -U rep_user -D /var/lib/pgsql/14/data/ -P
Provide the password for the replication user created on the primary host to obtain the backup
Password:
27205/27205 kB (100%), 1/1 tablespace
Once complete as shown above, exit.
exit
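For reference, the -R flag passed to pg_basebackup above writes the standby configuration automatically: it creates an empty standby.signal file in the data directory and appends a primary_conninfo setting to postgresql.auto.conf, roughly of the following shape (the exact contents depend on your environment):

```ini
# /var/lib/pgsql/14/data/postgresql.auto.conf (written by pg_basebackup -R)
primary_conninfo = 'host=192.168.205.2 port=5432 user=rep_user'
```

This is why no manual recovery configuration is needed on the replica before starting the service.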
Edit the PostgreSQL configuration.
sudo vim /var/lib/pgsql/14/data/postgresql.conf
Make the below changes.
# line 60 : uncomment and change
listen_addresses = '*'
# line 325 : uncomment
hot_standby = on
Also, edit the hba.conf file.
sudo vim /var/lib/pgsql/14/data/pg_hba.conf
Edit the file as shown.
# end of file : these entries should already be present, copied from the primary
# host replication [replication user] [allowed network] [authentication method]
#local replication all peer
#host replication all 127.0.0.1/32 scram-sha-256
#host replication all ::1/128 scram-sha-256
host replication rep_user 192.168.205.2/32 md5
host replication rep_user 192.168.205.3/32 md5
The lines above already exist since the files have been copied from the primary host. Save the file and start the PostgreSQL service.
sudo systemctl start postgresql-14
Step 4 – Test Streaming Replication on Rocky Linux 8|AlmaLinux 8
Once the above configurations have been made, validate that replication is happening.
sudo su - postgres
psql -c "select usename, application_name, client_addr, state, sync_priority, sync_state from pg_stat_replication;"
Sample Output:
Next, confirm that data propagates: we will create a database on the primary host and check whether it appears on the replica host.
Access the PostgreSQL shell on the primary host
psql
Create a test database.
# CREATE DATABASE testdb;
CREATE DATABASE
Now on the replica hosts, check if it exists:
$ sudo -u postgres psql
psql (14.3)
Type "help" for help.
postgres=# \l
Sample Output:
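Alternatively, a quick check from the replica's psql session; it returns a row only if testdb (the database created on the primary above) has replicated:

```sql
-- should return a single row containing "testdb"
SELECT datname FROM pg_database WHERE datname = 'testdb';
```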
Final Thoughts
That is it! We now have PostgreSQL replication on Rocky Linux 8|AlmaLinux 8 configured and running as expected. I hope this added value for you.
by x (helped by a few others, mainly real life buds)
Decided to post this in case you need some help self-diagnosing to determine whether you are a DID/OSDD system. It is not the best, but this is what we personally did to help figure things out. You can of course try other methods; people experience things differently. Once again, this is a personal list and not 100% correct. We are glad if it helps, but if it doesn’t, you can always figure out other ways.
Before we start: this is optional, but it’s recommended to have at least one real-life person you trust (and the alters who joined this thing) to accompany you in some of these. If you can’t find anyone, just pick the items in this list that don’t require another person-in-another-body.
This whole thing requires at least two participating alters cooperating, and a medium to write down your results (book/phone notes/anything). A book is recommended though; it’s faster, plus it might help you recognize different writing patterns between the participating alters.
It’s also HIGHLY RECOMMENDED to still see a professional even if you end up sure about the results. You can at least bring the results to a professional for confirmation. This whole thing can only help you figure things out; it is not the confirmation itself.
These are not much, but we hope they help.
try to take a picture of yourself when you are fronting, and tell the alter to do the same when they front after you. try doing a co-front selfie and a totally -alone- selfie.
try to focus on switching and record it on video. watch it afterwards.
recite each of your own memories. note which are blurry and which are not. try to write those down.
look at your favorite series/character/anything and note your reaction and your alter’s, if they differ.
have one of you write down something for the other while they are fronting, then have the other read it later.
try to “guess” (or remember) what they wrote down before opening it to read.
if you have another person helping you with this, ask them to help write down your differences in behavior, even small ones (and facial details like differences in eye width or so).
have your friend show one alter something, anything; something simple enough to memorize. after a switch, have the other alter recite what it was.
determine switch triggers and reactions to certain callings (easier with a friend helping).
try all of those with the non-fronting alter turning their co-consciousness on and off and see if there are differences. (i don’t even know how to put it in words because somehow i am always co-con) note the differences between clear remembering, blurry remembering, and that remember-but-on-the-tip-of-your-tongue feeling; write those all down.
examine all of your handwriting.
when you are not fronting but the other is, try to take note of their behavior in public, and later write it down under their name. let your alter do the same for you, including talking patterns and habits. (much easier with a friend, really)
try talking (in-head or not) when co-fronting and see if there is any difference.
most of these are mainly to determine whether you really differ from your alter (if you doubt whether you two are the same person or not), and some are perhaps a bit too focused on a few aspects. if the results show differentiation, memory lapses, and differing emotions, you can tell that you (and that alter) are two different people.
Sorry it’s not really written in the best way. But I kinda tried. Feel free to add.
openGauss|High Performance-Kunpeng NUMA Architecture Optimization
Kunpeng NUMA Architecture Optimization
Availability
This feature is available since openGauss 1.0.0.
Introduction
Kunpeng NUMA architecture optimization mainly focuses on Kunpeng processor architecture features and ARMv8 instruction set, and optimizes the system from multiple layers, including OS, software architecture, lock concurrency, logs, atomic operations, and cache access. This greatly improves the openGauss performance on the Kunpeng platform.
Benefits
Transactions per minute (TPM) is a key performance indicator of the database competitiveness. Under the same hardware costs, a higher database performance means the database can process more services, thereby reducing the usage cost of customers.
Description
openGauss optimizes the Kunpeng NUMA architecture based on the architecture characteristics. This reduces cross-core memory access latency and maximizes multi-core Kunpeng computing capabilities. The key technologies include redo log batch insertion, NUMA distribution of hotspot data, and CLog partitions, greatly improving the TP system performance.
Based on the ARMv8.1 architecture used by the Kunpeng chip, openGauss uses the LSE instruction set to implement efficient atomic operations, effectively improving the CPU usage, multi-thread synchronization performance, and Xlog write performance.
Based on the wider L3 cache line provided by the Kunpeng chip, openGauss optimizes hotspot data access, effectively improving the cache access hit ratio, reducing the cache consistency maintenance overhead, and greatly improving the overall data access performance of the system.
Reference configuration and result: Kunpeng 920, 2P server (64 cores x 2, memory: 768 GB), 10 GE network, I/O: 4 NVMe PCIe SSDs, TPC-C with 1000 warehouses, performance: 1,500,000 tpmC.
Enhancements
Batch redo log insertion and CLog partition are supported, improving the database performance on the Kunpeng platform.
Efficient atomic operations using the LSE instruction set are supported, improving multi-thread synchronization performance.