#friendly reminder that Linux exists and most versions are user friendly
Version 324
youtube
windows: zip, exe
os x: app, tar.gz
linux: tar.gz
source: tar.gz
I had a great week. The downloader overhaul is almost done.
pixiv
Just as Pixiv recently moved their art pages to a new phone-friendly, dynamically drawn format, they are now moving their regular artist gallery results to the same system. If your username isn't switched over yet, it likely will be in the coming week.
The change breaks our old html parser, so I have written a new downloader and json api parser. The way their internal api works is unusual and over-complicated, so I had to write a couple of small new tools to get it to work. However, it does seem to work again.
All of your subscriptions and downloaders will try to switch over to the new downloader automatically, but some might not handle it quite right, in which case you will have to go into edit subscriptions and update their gallery manually. You'll get a popup on updating to remind you of this, and if any don't line up right automatically, the subs will notify you when they next run. The api gives all content--illustrations, manga, ugoira, everything--so there unfortunately isn't a simple way to refine to just one content type as we previously could. But it does neatly deliver everything in just one request, so artist searching is now significantly faster.
Let me know if pixiv gives any more trouble. Now that we can parse their json, we might be able to reintroduce the arbitrary tag search, which broke some time ago due to the same move to javascript galleries.
twitter
In a similar theme, given our fully developed parser and pipeline, I have now wangled a twitter username search! It should be added to your downloader list on update. It is a bit hacky and may be ultimately fragile if they change something on their end, but it otherwise works great. It skips retweets and fetches 19-20 tweets per gallery 'page' fetch. You should be able to set up subscriptions and everything, although I generally recommend you go at it slowly until we know this new parser works well. BTW: I think twitter only 'browses' 3200 tweets into the past, anyway. Note that tweets with no images will be 'ignored', so any typical twitter search will end up with a lot of 'Ig' results--this is normal. Also, if the account ever retweets more than 20 times in a row, the search will stop there, due to how the clientside pipeline works (it'll think that page is empty).
Again, let me know how this works for you. This is some fun new stuff for hydrus, and I am interested to see where it does well and badly.
misc
In order to be less annoying, the 'do you want to run idle jobs?' dialog on shutdown will now only ask at most once per day! You can edit the time unit under options->maintenance and processing.
Under options->connection, you can now change max total network jobs globally and per domain. The defaults are 15 and 3. I don't recommend you increase them unless you know what you are doing, but if you want a slower/more cautious client, please do set them lower.
The new advanced downloader ui has a bunch of quality of life improvements, mostly related to the handling of example parseable data.
full list
downloaders:
after adding some small new parser tools, wrote a new pixiv downloader that should work with their new dynamic gallery api. it fetches all an artist's work in one page. some existing pixiv download components will be renamed and detached from your existing subs and downloaders. your existing subs may switch over to the correct pixiv downloader automatically, or you may need to manually set them (you'll get a popup to remind you).
wrote a twitter username lookup downloader. it should skip retweets. it is a bit hacky, so it may collapse if they change something small with their internal javascript api. it fetches 19-20 tweets per 'page', so if the account has 20 rts in a row, it'll likely stop searching there. also, afaik, twitter browsing only works back 3200 tweets or so. I recommend proceeding slowly.
added a simple gelbooru 0.1.11 file page parser to the defaults. it won't link to anything by default, but it is there if you want to put together some booru.org stuff
you can now set your default/favourite download source under options->downloading
.
misc:
the 'do idle work on shutdown' system will now only ask/run once per x time units (including if you say no to the ask dialog). x is one day by default, but can be set in 'maintenance and processing'
added 'max jobs' and 'max jobs per domain' to options->connection. defaults remain 15 and 3
the colour selection buttons across the program now have a right-click menu to import/export #FF0000 hex codes from/to the clipboard
tag namespace colours and namespace rendering options are moved from 'colours' and 'tags' options pages to 'tag summaries', which is renamed to 'tag presentation'
the Lain import dropper now supports pngs with single gugs, url classes, or parsers--not just fully packaged downloaders
fixed an issue where trying to remove a selection of files from the duplicate system (through the advanced duplicates menu) would only apply to the first pair of files
improved some error reporting related to too-long filenames on import
improved error handling for the folder-scanning stage in import folders--now, when it runs into an error, it will preserve its details better, notify the user better, and safely auto-pause the import folder
png export auto-filenames will now be sanitized of \, /, :, *-type OS-path-invalid characters as appropriate as the dialog loads
the 'loading subs' popup message should appear more reliably (after a 1s delay) if the first subs are big and slow to load
fixed the 'fullscreen switch' hover window button for the duplicate filter
deleted some old hydrus session management code and db table
some other things that I lost track of. I think it was mostly some little dialog fixes :/
.
advanced downloader stuff:
the test panel on pageparser edit panels now has a 'post pre-parsing conversion' notebook page that shows the given example data after the pre-parsing conversion has occurred, including error information if it failed. it has a summary size/guessed type description and copy and refresh buttons.
the 'raw data' copy/fetch/paste buttons and description are moved down to the raw data page
the pageparser now passes up this post-conversion example data to sub-objects, so they now start with the correctly converted example data
the subsidiarypageparser edit panel now also has a notebook page, also with brief description and copy/refresh buttons, that summarises the raw separated data
the subsidiary page parser now passes up the first post to its sub-objects, so they now start with a single post's example data
content parsers can now sort the strings their formulae get back. you can sort strict lexicographic or the new human-friendly sort that does numbers properly, and of course you can go ascending or descending--if you can get the ids of what you want but they are in the wrong order, you can now easily fix it!
some json dict parsing code now iterates through dict keys lexicographically ascending by default. unfortunately, due to how the python json parser I use works, there isn't a way to process dict items in the original order
the json parsing formula now uses a string match when searching for dictionary keys, so you can now match multiple keys here (as in the pixiv illusts|manga fix). existing dictionary key look-ups will be converted to 'fixed' string matches
the json parsing formula can now get the content type 'dictionary keys', which will fetch all the text keys in the dictionary/Object, if the api designer happens to have put useful data in there, wew
formulae now remove newlines from their parsed texts before they are sent to the StringMatch! so, if you are grabbing some multi-line html and want to test for 'Posted: ' somewhere in that mess, it is now easy.
next week
After culling the redundant and completed issues from my downloader overhaul megajob (bringing my total todo from 1568 down to 1471!), I only have 15 jobs left to go. It is mostly some quality of life stuff and refreshing some out of date help. I should be able to clear most of them out next week, and the last few can be folded into normal work.
So I am now planning the login manager. After talking with several users over the past few weeks, I think it will be fundamentally very simple, supporting any basic user/pass web form, and will relegate complicated situations to some kind of improved browser cookies.txt import workflow. I suspect it will take 3-4 weeks to hash out, and then I will be taking four weeks to update to python 3, and then I am a free agent again. So, absent any big problems, please expect the 'next big thing to work on poll' to go up around the end of October, and for me to get going on that next big thing at the end of November. I don't want to finalise what goes on the poll yet, but I'll open up a full discussion as the login manager finishes.
2020: The year in review for Amazon DynamoDB
2020 has been another busy year for Amazon DynamoDB. We released new and updated features that focus on making your experience with the service better than ever in terms of reliability, encryption, speed, scale, and flexibility. The following 2020 releases are organized alphabetically by category and then by dated releases, with the most recent release at the top of each category. It can be challenging to keep track of a service’s changes over the course of a year, so use this handy, one-page post to catch up or remind yourself about what happened with DynamoDB in 2020. Let us know @DynamoDB if you have questions.
Amazon CloudWatch Application Insights
June 8 – Amazon CloudWatch Application Insights now supports MySQL, DynamoDB, custom logs, and more. CloudWatch Application Insights launched several new features to enhance observability for applications. CloudWatch Application Insights has expanded monitoring support for two databases, in addition to Microsoft SQL Server: MySQL and DynamoDB. This enables you to easily configure monitors for these databases on Amazon CloudWatch and detect common errors such as slow queries, transaction conflicts, and replication latency.
Amazon CloudWatch Contributor Insights for DynamoDB
April 2 – Amazon CloudWatch Contributor Insights for DynamoDB is now available in the AWS GovCloud (US) Regions. CloudWatch Contributor Insights for DynamoDB is a diagnostic tool that provides an at-a-glance view of your DynamoDB tables’ traffic trends and helps you identify your tables’ most frequently accessed keys (also known as hot keys). You can monitor each table’s item access patterns continuously and use CloudWatch Contributor Insights to generate graphs and visualizations of the table’s activity. This information can help you better understand the top drivers of your application’s traffic and respond appropriately to unsuccessful requests.
April 2 – CloudWatch Contributor Insights for DynamoDB is now generally available.
Amazon Kinesis Data Streams for DynamoDB
November 23 – Now you can use Amazon Kinesis Data Streams to capture item-level changes in your DynamoDB tables. Enable streaming to a Kinesis data stream on your table with a single click in the DynamoDB console, or via the AWS API or AWS CLI. You can use this new capability to build advanced streaming applications with Amazon Kinesis services.
AWS Pricing Calculator
November 23 – AWS Pricing Calculator now supports DynamoDB. Estimate the cost of DynamoDB workloads before you build them, including the cost of features such as on-demand capacity mode, backup and restore, DynamoDB Streams, and DynamoDB Accelerator (DAX).
Backup and restore
November 23 – You can now restore DynamoDB tables even faster when recovering from data loss or corruption. The increased efficiency of restores and their ability to better accommodate workloads with imbalanced write patterns reduce table restore times across base tables of all sizes and data distributions. To accelerate the speed of restores for tables with secondary indexes, you can exclude some or all secondary indexes from being created with the restored tables.
September 23 – You can now restore DynamoDB table backups as new tables in the Africa (Cape Town), Asia Pacific (Hong Kong), Europe (Milan), and Middle East (Bahrain) Regions. You can use DynamoDB backup and restore to create on-demand and continuous backups of your DynamoDB tables, and then restore from those backups.
February 18 – You can now restore DynamoDB table backups as new tables in other AWS Regions.
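The Kinesis Data Streams item above notes that streaming can be enabled from the DynamoDB console, the AWS API, or the AWS CLI. Here is a minimal CLI sketch, assuming a table and an existing Kinesis data stream; the table name and stream ARN are placeholders, not values from the post:
# placeholders: table name and Kinesis stream ARN
$ aws dynamodb enable-kinesis-streaming-destination \
    --table-name Music \
    --stream-arn arn:aws:kinesis:us-east-1:123456789012:stream/music-item-changes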
Data export to Amazon S3
November 9 – You can now export your DynamoDB table data to your data lake in Amazon S3 to perform analytics at any scale. Export your DynamoDB table data to your data lake in Amazon Simple Storage Service (Amazon S3), and use other AWS services such as Amazon Athena, Amazon SageMaker, and AWS Lake Formation to analyze your data and extract actionable insights. No code-writing is required.
DynamoDB Accelerator (DAX)
August 11 – DAX now supports next-generation, memory-optimized Amazon EC2 R5 nodes for high-performance applications. R5 nodes are based on the AWS Nitro System and feature enhanced networking based on the Elastic Network Adapter. Memory-optimized R5 nodes offer memory size flexibility from 16–768 GiB.
February 6 – Use the new CloudWatch metrics for DAX to gain more insights into your DAX clusters’ performance. Determine more easily whether you need to scale up your cluster because you are reaching peak utilization, or if you can scale down because your cache is underutilized.
DynamoDB local
May 21 – DynamoDB local adds support for empty values for non-key String and Binary attributes and 25-item transactions. DynamoDB local (the downloadable version of DynamoDB) has added support for empty values for non-key String and Binary attributes, up to 25 unique items in transactions, and 4 MB of data per transactional request. With DynamoDB local, you can develop and test applications in your local development environment without incurring any additional costs.
Empty values for non-key String and Binary attributes
June 1 – DynamoDB support for empty values for non-key String and Binary attributes in DynamoDB tables is now available in the AWS GovCloud (US) Regions. Empty value support gives you greater flexibility to use attributes for a broader set of use cases without having to transform such attributes before sending them to DynamoDB. List, Map, and Set data types also support empty String and Binary values.
May 18 – DynamoDB now supports empty values for non-key String and Binary attributes in DynamoDB tables.
Encryption
November 6 – Encrypt your DynamoDB global tables by using your own encryption keys. Choosing a customer managed key for your global tables gives you full control over the key used for encrypting your DynamoDB data replicated using global tables. Customer managed keys also come with full AWS CloudTrail monitoring so that you can view every time the key was used or accessed.
Global tables
October 6 – DynamoDB global tables are now available in the Europe (Milan) and Europe (Stockholm) Regions. With global tables, you can give massively scaled, global applications local access to a DynamoDB table for fast read and write performance. You also can use global tables to replicate DynamoDB table data to additional AWS Regions for higher availability and disaster recovery.
April 8 – DynamoDB global tables are now available in the China (Beijing) Region, operated by Sinnet, and the China (Ningxia) Region, operated by NWCD. With DynamoDB global tables, you can create fully replicated tables across Regions for disaster recovery and high availability of your DynamoDB tables. With this launch, you can now add a replica table in one AWS China Region to your existing DynamoDB table in the other AWS China Region. When you use DynamoDB global tables, you benefit from an enhanced 99.999% availability SLA at no additional cost.
March 16 – You can now update your DynamoDB global tables from version 2017.11.29 to the latest version (2019.11.21) with a few clicks on the DynamoDB console. By upgrading the version of your global tables, you can easily increase the availability of your DynamoDB tables by extending your existing tables into additional AWS Regions, with no table rebuilds required. There is no additional cost for this update, and you benefit from improved replicated write efficiencies after you update to the latest version of global tables.
February 6 – DynamoDB global tables are now available in the Asia Pacific (Mumbai), Canada (Central), Europe (Paris), and South America (São Paulo) Regions.
NoSQL Workbench
May 4 – NoSQL Workbench for DynamoDB adds support for Linux. NoSQL Workbench for DynamoDB is a client-side application that helps developers build scalable, high-performance data models, and simplifies query development and testing. NoSQL Workbench is available for Ubuntu 12.04, Fedora 21, Debian 8, and any newer versions of these Linux distributions, in addition to Windows and macOS.
March 3 – NoSQL Workbench for DynamoDB is now generally available.
On-demand capacity mode
March 16 – DynamoDB on-demand capacity mode is now available in the Asia Pacific (Osaka-Local) Region. On-demand is a flexible capacity mode for DynamoDB that is capable of serving thousands of requests per second without requiring capacity planning. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests, so you only pay for what you use, making it easy to balance cost and performance.
PartiQL support
November 23 – You can now use PartiQL, a SQL-compatible query language, to query, insert, update, and delete table data in DynamoDB. PartiQL makes it easier to interact with DynamoDB and run queries on the AWS Management Console.
Training
June 17 – Coursera offers a new digital course about building DynamoDB-friendly apps. AWS Training and Certification has launched “DynamoDB: Building NoSQL Database-Driven Applications,” a self-paced, digital course now available on Coursera.
About the Author
Craig Liebendorfer is a senior technical editor at Amazon Web Services. He also runs the @DynamoDB Twitter account.
https://aws.amazon.com/blogs/database/2020-the-year-in-review-for-amazon-dynamodb/
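As an illustration of two of the more hands-on items in this digest (data export to Amazon S3 and PartiQL support), here is a minimal, hedged AWS CLI sketch. The table ARN, bucket name, table name, and attribute values are placeholders, and the export call assumes point-in-time recovery is enabled on the table:
# placeholders: table ARN, bucket name, table and attribute names
$ aws dynamodb export-table-to-point-in-time \
    --table-arn arn:aws:dynamodb:us-east-1:123456789012:table/Music \
    --s3-bucket my-dynamodb-exports
$ aws dynamodb execute-statement \
    --statement "SELECT * FROM Music WHERE Artist = 'Acme Band'"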
Master the Linux 'ls' command
Master the Linux 'ls' command https://red.ht/2YfHYfe
The ls command lists files on a POSIX system. It's a simple command, often underestimated, not in what it can do (because it really does only one thing), but in how you can optimize your use of it.
Of the 10 most essential terminal commands to know, the humble ls command is in the top three, because ls doesn't just list files, it tells you important information about them. It tells you things like who owns a file or directory, when each file was last modified, and even what kind of file it is. And then there's its incidental function of giving you a sense of where you are, what nearby objects are lying around, and what you can do with them.
If your experience with ls is limited to whatever your distribution aliases it to in .bashrc, then you're probably missing out.
GNU or BSD?
Before looking at the hidden powers of ls, you must determine which ls command you're running. The two most popular versions are the GNU version, included in the GNU coreutils package, and the BSD version. If you're running Linux, then you probably have the GNU version installed. If you're running BSD or macOS, then you have the BSD version. There are differences, for which this article accounts.
You can find out which version is on your computer with the --version option:
$ ls --version
If this returns information about GNU coreutils, then you have the GNU version. If it returns an error, you're probably running the BSD version (run man ls | head to be sure).
You should also investigate what presets your distribution may have in place. Customizations to terminal commands are frequently placed in $HOME/.bashrc or $HOME/.bash_aliases or $HOME/.profile, and they're accomplished by aliasing ls to a more complex ls command. For example:
alias ls='ls --color'
The presets provided by distributions are very helpful, but they do make it difficult to discern what ls does on its own and what its additional options provide. Should you ever want to run ls and not the alias, you can "escape" the command with a backslash:
$ \ls
Classify
Run on its own, ls simply lists files in as many columns as can fit into your terminal:
$ ls ~/example
bunko         jdk-10.0.2
chapterize    otf2ttf.ff
despacer      overtar.sh
estimate.sh   pandoc-2.7.1
fop-2.3       safe_yaml
games         tt
It's useful information, but all of those files look basically the same without the convenience of icons to quickly convey which is a directory, or a text file, or an image, and so on.
Use the -F option (or --classify on GNU) to show indicators after each entry that identify the kind of file it is:
$ ls ~/example
bunko          jdk-10.0.2/
chapterize*    otf2ttf.ff*
despacer*      overtar.sh*
estimate.sh    pandoc@
fop-2.3/       pandoc-2.7.1/
games/         tt*
With this option, items listed in your terminal are classified by file type using this shorthand:
A slash (/) denotes a directory (or "folder").
An asterisk (*) denotes an executable file. This includes a binary file (compiled code) as well as scripts (text files that have executable permission).
An at sign (@) denotes a symbolic link (or "alias").
An equals sign (=) denotes a socket.
On BSD, a percent sign (%) denotes a whiteout (a method of file removal on certain file systems).
On GNU, an angle bracket (>) denotes a door (inter-process communication on Illumos and Solaris).
A vertical bar (|) denotes a FIFO.
A simpler version of this option is -p, which only differentiates a file from a directory.
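As a minimal sketch against the same example directory used above (the file names are just the illustrative samples from earlier), -p marks only the directories:
$ ls -p ~/example    # only directories get a trailing slash
bunko  chapterize  despacer  estimate.sh  fop-2.3/  games/  jdk-10.0.2/  [...]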
Long list
Getting a "long list" from ls is so common that many distributions alias ll to ls -l. The long list form provides many important file attributes, such as permissions, the user who owns each file, the group to which the file belongs, the file size in bytes, and the date the file was last changed:
$ ls -l
-rwxrwx---. 1 seth users 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth users 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth users 20697793 Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth users 6210 May 22 10:22 geteltorito
-rwxrwx---. 1 seth users 177 Nov 12 2018 html4mutt.sh
[...]
If you don't think in bytes, add the -h flag (or --human-readable in GNU, which also accepts the abbreviation --human) to translate file sizes to more human-friendly notation:
$ ls -l --human
-rwxrwx---. 1 seth users 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth seth 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth users 20M Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth seth 6.1K May 22 10:22 geteltorito
-rwxrwx---. 1 seth users 177 Nov 12 2018 html4mutt.sh
You can see just a little less information by showing only the owner column with -o or only the group column with -g:
$ ls -o
-rwxrwx---. 1 seth 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth 20M Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth 6.1K May 22 10:22 geteltorito
-rwxrwx---. 1 seth 177 Nov 12 2018 html4mutt.sh
Combine both options to show neither.
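For example, a minimal sketch with the same illustrative files, combining -o and -g so that neither the owner nor the group column appears:
$ ls -og    # long listing without owner or group columns
-rwxrwx---. 1 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 662 Apr 29 22:27 factorial
[...]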
Time and date format
The long list format of ls usually looks like this:
-rwxrwx---. 1 seth users 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth users 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth users 20697793 Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth users 6210 May 22 10:22 geteltorito
-rwxrwx---. 1 seth users 177 Nov 12 2018 html4mutt.sh
The names of months aren't easy to sort, either computationally or (depending on whether your brain tends to prefer strings or integers) by recognition. You can change the format of the time stamp with the --time-style option plus the name of a format. Available formats are:
full-iso (1970-01-01 21:12:00)
long-iso (1970-01-01 21:12)
iso (01-01 21:12)
locale (uses your locale settings)
posix-STYLE (replace STYLE with a locale definition)
You can also create a custom style using the formal notation of the date command.
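For instance, a minimal sketch of a built-in style and a custom one (GNU ls only; the custom form uses the same +FORMAT notation as the date command, and the file name is just the sample from above):
# built-in long-iso style
$ ls -l --time-style=long-iso estimate.sh
# custom style using date-style +FORMAT notation
$ ls -l --time-style=+%Y-%m-%d_%H:%M estimate.sh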
Sort by time
Usually, the ls command sorts alphabetically. You can make it sort according to which file was most recently changed (the newest is listed first) with the -t option.
For example:
$ touch foo bar baz
$ ls
bar  baz  foo
$ touch foo
$ ls -t
foo  bar  baz
List type
The standard output of ls balances readability with space efficiency, but sometimes you want your file list in a specific arrangement.
For a comma-separated list of files, use -m:
$ ls -m ~/example
bar, baz, foo
To force one file per line, use the -1 option (that's the number one, not a lowercase L):
$ ls -1 ~/bin/
bar
baz
foo
To sort entries by file extension rather than the filename, use -X (that's a capital X):
$ ls
bar.xfc  baz.txt  foo.asc
$ ls -X
foo.asc  baz.txt  bar.xfc
Hide the clutter
There are a few entries in some ls listings that you may not care about. For instance, the metacharacters . and .. represent "here" and "back one level," respectively. If you're familiar with navigating in a terminal, you probably already know that each directory refers to itself as . and to its parent as .., so you don't need to be constantly reminded of it when you use the -a option to show hidden files.
To show almost all hidden files (the . and .. excluded), use the -A option:
$ ls -a
.  ..  .android  .atom  .bash_aliases
[...]
$ ls -A
.android  .atom  .bash_aliases
[...]
With many good Unix tools, there's a tradition of saving backup files by appending some special character to the name of the file being saved. For instance, in Vim, backups get saved with the ~ character appended to the name.
These kinds of backup files have saved me from stupid mistakes on several occasions, but after years of enjoying the sense of security they provide, I don't feel the need to have visual evidence that they exist. I trust Linux applications to generate backup files (if they claim to do so), and I'm happy to take it on faith that they exist.
To hide backup files from view, use -B or --ignore-backups to conceal common backup formats (this option is not available in BSD ls):
$ ls
bar.xfc  baz.txt  foo.asc~  foo.asc
$ ls -B
bar.xfc  baz.txt  foo.asc
Of course, the backup file still exists; it's just filtered out so that you don't have to look at it.
GNU Emacs saves backup files (unless otherwise configured) with a hash character (#) at the start and end of the file name (#file#). Other applications may use a different style. It doesn't matter what pattern is used, because you can create your own exclusions with the --hide option:
$ ls
bar.xfc  baz.txt  #foo.asc#  foo.asc
$ ls --hide="#*#"
bar.xfc  baz.txt  foo.asc
List directories with recursion
The contents of directories are not listed with the ls command unless you run ls on that directory specifically. To descend into directories and list their contents as well, use the -R (recursive) option:
$ ls -F
example/  quux*  xyz.txt
$ ls -R
quux  xyz.txt
./example:
bar.xfc  baz.txt  #foo.asc#  foo.asc
Make it permanent with an alias
The ls command is probably the command used most often during any given shell session. It's your eyes and ears, providing you with context and confirming the results of commands. While it's useful to have lots of options, part of the beauty of ls is its brevity: two characters and the Return key, and you know exactly where you are and what's nearby. If you have to stop to think about (much less type) several different options, it becomes less convenient, so typically even the most useful options are left off.
The solution is to alias your ls command so that when you use it, you get the information you care about the most.
To create an alias for a command in the Bash shell, create a file in your home directory called .bash_aliases (you must include the dot at the beginning). In this file, list the command you want to create an alias for and then the alias you want to create. For example:
alias ls='ls -A -F -B --human --color'
This line causes your Bash shell to interpret the ls command as ls -A -F -B --human --color.
You aren't limited to redefining existing commands. You can create your own aliases:
alias ll='ls -l'
alias la='ls -A'
alias lh='ls -h'
For aliases to work, your shell must know that the .bash_aliases configuration file exists. Open the .bashrc file in an editor (or create it, if it doesn't exist), and include this block of code:
if [ -e $HOME/.bash_aliases ]; then
    source $HOME/.bash_aliases
fi
Each time .bashrc is loaded (which is any time a new Bash shell is launched), Bash will load .bash_aliases into your environment. You can close and relaunch your Bash session or just force it to do that now:
$ source ~/.bashrc
If you forget whether you have aliased a command, the which command tells you:
$ which ls
alias ls='ls -A -F -B --human --color'
        /usr/bin/ls
If you've aliased the ls command to itself with options, you can override your own alias at any time by prefacing ls with a backslash. For instance, in the example alias, backup files are hidden using the -B option, which means there's no way to back up files with the ls command. Override the alias to see the backup files:
$ ls
bar  baz  foo
$ \ls
bar  baz  baz~  foo
Do one thing and do it well
The ls command has a staggering number of options, many of which are niche or highly dependent upon the terminal you use. Take a look at info ls on GNU systems or man ls on GNU or BSD systems for more options.
You might find it strange that a system famous for the premise that each tool "does one thing and does it well" would weigh down its most common command with 50 options. But ls does only one thing: it lists files. And with 50 options to allow you to control how you receive that list, ls does its one job very, very well.