dougsparlingdotorg
12 posts
dougsparlingdotorg · 6 years ago
Text
How do I find all symlinks in a directory tree
The first method:
$ ls -lR /path/to/folder | grep ^l
with output like this:
 818410      0 lrwxrwxrwx   1 webdog   webdog         45 Jul 11 13:58 ./releases/20190711185424/tmp/pids -> /usr/local/sites/www.example.com/shared/pids
 397271      0 lrwxrwxrwx   1 webdog   webdog         44 Jul 11 13:58 ./releases/20190711185424/log -> /usr/local/sites/www.uexpress.com/shared/log
The second method:
$ find ./ -type l -print0 | xargs -0 ls -plah
with output like this:
lrwxrwxrwx 1 webdog webdog  44 Aug 13 09:31 ./releases/20190813142947/log -> /usr/local/sites/www.uexpress.com/shared/log
lrwxrwxrwx 1 webdog webdog  47 Aug 13 09:31 ./releases/20190813142947/public/assets -> /usr/local/sites/www.exaample.com/shared/assets
Source: https://stackoverflow.com/questions/8513133/how-do-i-find-all-of-the-symlinks-in-a-directory-tree
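A quick way to sanity-check either command is a throwaway directory; the paths below are made up purely for the demo:

```shell
# Demo in a temp directory: create a symlink, then find it.
# The target path is hypothetical, for illustration only.
tmp=$(mktemp -d)
mkdir -p "$tmp/releases"
ln -s /usr/local/sites/shared/log "$tmp/releases/log"

# Prints the symlink's path:
find "$tmp" -type l

rm -rf "$tmp"
```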
dougsparlingdotorg · 7 years ago
Text
MySQL export schema without data
mysqldump -u root -p --no-data dbname > schema.sql
dougsparlingdotorg · 7 years ago
Text
How to remove all empty directories
find . -type d -empty -delete
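Since -delete is irreversible, it can be worth previewing with -print first. A minimal sketch in a temp directory (the layout is made up for the demo):

```shell
# Demo in a throwaway directory: preview empty dirs, then remove them.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b" "$tmp/c"
touch "$tmp/c/file.txt"             # c is not empty, so it survives

find "$tmp" -type d -empty -print   # dry run: lists only a/b
find "$tmp" -type d -empty -delete  # -delete implies depth-first, so a
                                    # is removed too once b is gone
rm -rf "$tmp"
```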
dougsparlingdotorg · 7 years ago
Text
Crawl and download website using wget
$ wget -r -l0 www.example.com
If you want to save the wget output to a file:
$ wget -r -l0 www.example.com 2>out.txt
dougsparlingdotorg · 7 years ago
Text
Using cURL to Test the WordPress Theme and Plugin APIs
Occasionally (rarely) when trying to search for a theme or plugin via the WordPress dashboard, you may see an error like this:
An unexpected error occurred. Something may be wrong with WordPress.org or this server’s configuration. If you continue to have problems, please try the support forums.
This means that the request WordPress made to the theme or plugin API has failed, or that the body of the response is bad or empty. Often web hosts will turn off outbound HTTP requests, and this will be the source of your problem. However, a myriad of other issues can cause this error. WordPress will use one of three “transports,” searching for them on your server in this order: cURL, streams, and fsockopen. Since the focus of this article is on using cURL, that’s what we’ll use at the command line.
To check whether cURL is installed on your server, use the Unix ‘which’ command to find its install location:
$ which curl
and you should get a response something like this (your path may vary):
/usr/bin/curl
To simply check connectivity with the WordPress theme and plugin APIs, you can make an HTTP HEAD request with cURL:
$ curl -I http://api.wordpress.org/plugins/info/1.0/
and
$ curl -I http://api.wordpress.org/themes/info/1.0/
You should see output something like this:
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 09 Jul 2018 16:41:02 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Vary: Accept-Encoding
X-Frame-Options: SAMEORIGIN
If you don’t have connectivity, you may see something like this:
curl: (6) Could not resolve host: api.wordpress.org; nodename nor servname provided, or not known
If you want to duplicate the request made when you’re actually on the WordPress dashboard, you’ll have to make a POST request with serialized data parameters. To mimic a search for a “blue” theme, use this cURL command:
$ curl --data 'action=query_themes&request=O:8:"stdClass":4:{s:4:"page";i:1;s:8:"per_page";i:36;s:6:"fields";N;s:6:"search";s:4:"blue";}' http://api.wordpress.org/themes/info/1.0/
To mimic a search for a “cache” plugin, use this command:
$ curl --data 'action=query_plugins&request=O:8:"stdClass":3:{s:4:"page";i:1;s:8:"per_page";i:30;s:6:"search";s:5:"cache";}' http://api.wordpress.org/plugins/info/1.0/
A successful request will return quite a bit of HTML and serialized data (which I won’t post here).
dougsparlingdotorg · 7 years ago
Text
Sort directory by file size with formatted size field using du -k | sort -nr | awk
du -k | sort -nr | awk '
BEGIN {
    split("KB,MB,GB,TB", Units, ",");
}
{
    u = 1;
    while ($1 >= 1024) {
        $1 = $1 / 1024;
        u += 1;
    }
    $1 = sprintf("%.1f %s", $1, Units[u]);
    print $0;
}' > sort_file.txt
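On systems with GNU coreutils, sort's -h flag understands human-readable sizes directly, so `du -h | sort -hr` is a shorter (if differently formatted) alternative. The ordering it produces can be seen on fixed input:

```shell
# GNU sort -h compares human-readable sizes (K/M/G suffixes),
# so mixed units sort correctly, largest first with -r:
printf '512K\n2.0M\n1.0G\n' | sort -hr
```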
dougsparlingdotorg · 7 years ago
Link
If you're on a Unix system, time and nslookup work just fine:
--jth
$ time nslookup www.arstechnica.com 1.2.3.4
Server:   a.com
Address:  1.2.3.4

Non-authoritative answer:
Name:     www.arstechnica.com
Address:  216.110.36.107

real    0m0.24s
user    0m0.00s
sys     0m0.02s

$ time nslookup www.arstechnica.com 192.168.1.2
Server:   b.com
Address:  192.168.1.2

Non-authoritative answer:
Name:     www.arstechnica.com
Address:  216.110.36.107

real    0m0.16s
user    0m0.00s
sys     0m0.02s
dougsparlingdotorg · 7 years ago
Link
$ lsb_release -a
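lsb_release isn't installed everywhere; on most modern Linux distributions /etc/os-release carries the same information, and uname works on any Unix:

```shell
# Fallbacks when lsb_release is missing:
cat /etc/os-release 2>/dev/null   # distro name/version on most modern Linuxes
uname -srm                        # kernel name, release, and machine (any Unix)
```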
dougsparlingdotorg · 7 years ago
Link
From this brilliant blog post...  https://blog.josephscott.org/2011/10/14/timing-details-with-curl/
cURL supports formatted output for the details of the request (see the cURL manpage for details, under -w, --write-out <format>). For our purposes we’ll focus just on the timing details that are provided.
1. Create a new file, curl-format.txt, and paste in:
    time_namelookup:  %{time_namelookup}\n
       time_connect:  %{time_connect}\n
    time_appconnect:  %{time_appconnect}\n
   time_pretransfer:  %{time_pretransfer}\n
      time_redirect:  %{time_redirect}\n
 time_starttransfer:  %{time_starttransfer}\n
                    ----------\n
         time_total:  %{time_total}\n
2. Make a request:
curl -w "@curl-format.txt" -o /dev/null -s "http://wordpress.com/"
Or on Windows, it's...
curl -w "@curl-format.txt" -o NUL -s "http://wordpress.com/"
What this does:
-w "@curl-format.txt" tells cURL to use our format file
-o /dev/null redirects the output of the request to /dev/null
-s tells cURL not to show a progress meter
"http://wordpress.com/" is the URL we are requesting. Use quotes, particularly if your URL has "&" query string parameters
And here is what you get back:
   time_namelookup:  0.001
      time_connect:  0.037
   time_appconnect:  0.000
  time_pretransfer:  0.037
     time_redirect:  0.000
time_starttransfer:  0.092
                    ----------
        time_total:  0.164
dougsparlingdotorg · 7 years ago
Link
$ wget http://picasaweb.google.com 2>&1 | grep Location:
Location: /home [following]
Location: https://www.google.com/accounts/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true [following]
Location: https://accounts.google.com/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%3A%2F%2Fpicasaweb.google.com%2Fhome&service=lh2&ltmpl=gp&passive=true [following]