# Windows automatic ssh tunnel manager
dpterri · 3 years ago
PuTTY is an SSH and telnet client, developed originally by Simon Tatham for the Windows platform. PuTTY is open source software that is available with source code and is developed and supported by a group of volunteers. You can download PuTTY, a free SSH and telnet client for Windows, from the project site. The suggestions below are independent of PuTTY and are not endorsements by the PuTTY project.
PuTTY Tunnel Manager allows you to easily open tunnels that are defined in a PuTTY session from the system tray. You can also move the tunnels from PuTTY to PuTTY Tunnel Manager. This allows you to use PuTTY just for SSH shell sessions (without opening tunnels), and use PuTTY Tunnel Manager just for tunneling.
SSH Tunnel Manager is described as a 'tool to manage SSH Tunnels (commonly invoked with -L and -R arguments in the console). With SSH Tunnel Manager you can set up as many tunnels as you wish, each one containing as many port redirections as you wish' and is an app. If allowed by the SSH server, it is also possible to reach a private server from outside. (Fig1: How to connect to a service blocked by a firewall through an SSH tunnel.) With X11 tunneling enabled, an X-Windows window will automatically open whenever you start an X-Windows program on any remote Unix host that supports SSH and X11 tunneling.
Bitvise SSH Client is an SSH and SFTP client for Windows. It is developed and supported professionally by Bitvise. The SSH Client is robust, easy to install, easy to use, and supports all features supported by PuTTY, as well as dynamic port forwarding through an integrated proxy. Bitvise SSH Server is an SSH, SFTP and SCP server for Windows. The SSH Server is likewise developed and supported professionally by Bitvise; it is robust, easy to install, easy to use, and works well with a variety of SSH clients, including Bitvise SSH Client, OpenSSH, and PuTTY.
To set up a PuTTY session for your NPS server: enter 192.168.4.101 under Host Name (or IP address), enter 22 under Port, and select SSH under Protocol. In the Auto-login username field, specify the username with which you want to log in to your NPS server. In the Saved Sessions field, enter a name for the profile. Using PSM for SSH, Security Managers can control access by determining which users can access different target systems.
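The -L and -R tunnels these managers wrap (plus dynamic forwarding) can also be opened directly with a stock OpenSSH client. A minimal sketch; all host names, ports, and the gateway address below are placeholders, not from the post:

```shell
# Local forward: connections to localhost:8080 are relayed
# through the SSH server to intranet-web:80.
ssh -N -L 8080:intranet-web:80 user@gateway.example.com

# Remote forward: the server's port 9090 is relayed back
# to port 3000 on this machine.
ssh -N -R 9090:localhost:3000 user@gateway.example.com

# Dynamic forward: a local SOCKS proxy on port 1080,
# comparable to Bitvise's integrated proxy.
ssh -N -D 1080 user@gateway.example.com
```

The -N flag keeps the session tunnel-only (no remote shell), which is effectively what a tunnel manager does in the background.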
huntermillionaire752 · 4 years ago
Winscp For Mac Free
RemoteFinder v.0.12 RemoteFinder is a graphical SCP program for Mac OS X. It will provide features similar to other programs such as WinSCP. The look and feel will be Mac-like. In the future, other protocols such as FTP and WebDAV will be ...
Download Scp For Windows
Sftp Client For Mac
How to install the WinSCP open source software in Ubuntu 18.04: first, download the packages from here https. WinSCP 5.17 is a major application update. New features and enhancements include improvements to sessions and workspace management, so that WinSCP can now easily restore tabs that were open when it was last closed. Verdict: WinSCP contains many more features and functionalities like connection tunneling, workspaces, master password, directory caching, file masks, etc. Price: WinSCP is a free and open-source tool. Website: WinSCP. Jan 03, 2018: Still on the hunt for a WinSCP equivalent for Mac OSX? Cyberduck is open source, a Mac spawn of WinSCP; just google Cyberduck for Mac and download it for free! Find the best programs like WinSCP for Mac: more than 19 alternatives to choose from, including FileZilla and Cyberduck (free download, platform Mac, open source SFTP).
Beyond CVS Eclipse Plug-In v.201003051612 BeyondCVS is an Eclipse plug-in that enables using Beyond Compare (externally) for comparing files and folders. It also allows comparing a single file to a previous revision in CVS/SVN or Local History. There is also support for opening Putty and ...
DatacenterManager v.1.0 Remotely inventory and poll UNIX servers in seconds (without installing extra software on your servers, just by SSH communication and plain old UNIX commands). Your entire datacenter can be automatically inventoried by supplying hostname, username & ...
SSH System Administration Tool v.201211071651 An ssh Java interface for Unix, Linux and MS Windows system administration. Allows you to remotely access and control your servers through Google Talk. Automates firewall rule checks, exporting the results into Excel. Allows you to run multiple ...
Download Scp For Windows
Today's Top Ten Downloads for Winscp
DatacenterManager Remotely Inventory and Poll UNIX servers in seconds.
Beyond CVS Eclipse Plug-In BeyondCVS is an Eclipse plug-in that enables using Beyond
RemoteFinder RemoteFinder is a graphical SCP program for Mac OS X. It
SSH System Administration Tool ssh Java interface for Unix, Linux and MS Windows system
Sftp Client For Mac
dritanext · 3 years ago
Windows apple remote desktop client
When connecting, a "The Certificate or associated chain is not valid." error will prompt, and answering "Connect" on "Do you want to connect to this computer anyway?" does not bypass this error. It will prompt again and you will end up in an infinite loop. To be fair, this is an extremely old version of the Microsoft Remote Desktop client, so things that stopped working seem normal. But that doesn't mean you should just give up; it's still usable. Go to Preferences > Security tab > and select "Always connect, even if authentication fails". This bypasses the check that verifies you are connecting to the "correct" Windows-based computer. Give it a try to see if this change works.
Let's face it: running Microsoft's remote desktop on a Mac isn't the best experience. There are mainly two apps from Microsoft that you can use to remote desktop into any Windows machine. Microsoft Remote Desktop Connection Client for Mac Version 2.1.1 is an app that comes with Mac Office 2011. Microsoft stopped bundling a newer version of the remote desktop client with Mac Office 2016; instead, you can get it standalone from the Mac App Store. In this guide, we will only focus on the legacy Microsoft Remote Desktop Connection Client for Mac (latest v2.1.1). After upgrading one of my work Macs to macOS Sierra (10.12+), Microsoft's Windows Remote Desktop stopped working.
Jump Desktop is a remote desktop application that lets you securely connect to any computer in the world. Compatible with both RDP and VNC, Jump Desktop is secure, reliable, and very easy to set up. Jump encrypts the connection between computers to ensure privacy and security: automatic connections are always encrypted by default, it supports NLA and TLS/SSL encryption for RDP, and SSH tunnelling and SSL/TLS encryption for VNC. Seamlessly switch between computers without changing the way you use the keyboard; it is fully customizable, so you can configure your own shortcuts or use the built-in defaults. Quickly search and launch computers directly from Spotlight. It is tightly integrated into Mavericks' and Yosemite's power-saving technology to give you the best battery life on the go.
With Apple Remote Desktop you can get started immediately with over 40 actions. Chain actions together to create powerful system administration workflows, combine actions with other application actions to create end-to-end solutions, and save workflows as plug-ins to provide simple, customized interfaces to Apple Remote Desktop features. Gather reports on more than 200 Mac hardware attributes, see reports on user logins and application use, and use a Task Server to assemble inventory reports, even from mobile systems not connected to the network.
Transfer files between Mac computers using drag and drop, and copy and paste information between any two computers. Prevent end-users from viewing the screen while you control their systems with Curtain Mode. Control Virtual Network Computing (VNC)-enabled computers, including Windows, Linux and UNIX systems. Perform over a dozen commands securely on remote Mac systems: remotely lock screens, sleep, wake, restart and shut down Mac systems; execute UNIX shell scripts or commands on your client systems; and perform lightning-fast searches with Remote Spotlight search.
Configure a Task Server to assist with package installations on offline computers. Encrypt network data when copying packages and files.
Let's begin with a list of the best remote desktop software for macOS; let us know if you want us to include your app here by contacting us. Apple Remote Desktop is the best way to manage the Mac computers on your network. Distribute software, provide real-time online help to end-users, create detailed software and hardware reports, and automate routine management tasks, all from your own Mac. Easily copy and install software on remote Mac systems.
The term remote desktop refers to a software or operating system feature that allows a personal computer's desktop environment to be run remotely on one system, though the concept applies equally to a server. Remote access can also be explained as remote control of a computer by using another device connected via the internet or another network. This is widely used by many computer manufacturers and large businesses' help desks for technical troubleshooting of their customers' problems. There are various professional first-party, third-party, open source, and freeware remote desktop applications, some of which are cross-platform across various versions of Windows, Mac OS X, UNIX, and Linux.
#1. Microsoft Remote Desktop App For Mac.
aftitta · 3 years ago
Royal tsx mac keystroke focus
Royal TSX provides easy and secure access to your remote systems. It is the perfect tool for server admins, system engineers, developers, and IT-focused information workers using macOS who constantly need to access remote systems with different protocols. Royal TSX for Mac is focused firmly on system administrators and professionals who need a remote desktop solution with a very high level of security. It is very well designed and makes it straightforward to manage connections in the left-hand sidebar, with convenient tabs to control sessions.
Royal downloads: our products can be downloaded, installed and used for free without any time limit, license key or registration. This allows you to get started quickly, and if you only have a small environment you can continue using our products free of charge in 'Shareware Mode'.
SSH-based tunneling (Secure Gateway) support is tightly integrated in Royal TSX. Command Tasks and Key Sequence Tasks make it easy to quickly automate repetitive tasks. You can share a list of connections without sharing your personal credentials.
Built-in credential management: use the add button to add a new credential and the edit button to edit the selected credential. No worries, Royal TSX got you covered! You can use an existing task by choosing a command task from the drop-down list, or Royal TS will look for a configured connect task in the parent folder. This option is not available on the document level.
Keyboard handling: most of the supported keyboard shortcuts can be found in the menus. If you open the Window menu, for instance, there are menu items for 'Select Next Tab' and 'Select Previous Tab' with their respective keyboard shortcuts. If set to true, keyboard accelerators are passed to the server. If set to true, Windows key combinations are passed to the remote session. If set to true, certain macOS native keyboard shortcuts are passed to the remote session as their Windows equivalents.
Other features include a Navigation Panel to browse and organize documents, connections, folders, tasks and credentials; the panel can be automatically hidden when available screen space is at a premium, and there is colorization support in the navigation tree. A dedicated Notes toolbar item allows you to view and edit the notes of the selected object. Altogether it is a comprehensive remote management solution.
There is also a repository containing various automation scripts for Royal TS (for Windows) and Royal TSX (for macOS). The collection consists of scripts by the Royal Apps team and contributions from our great user base (most recent commit 2 months ago). Also included are dynamic folder samples.
chaoticstudentcomputer · 4 years ago
Iproxy Download
Download and run checkra1n on your device. Open two terminal tabs. In the first: iproxy 2222 44 <device udid>. In the second: ssh root@localhost -p 2222. Then remount the filesystem with mount -o rw,union,update /.
SSH over USB using usbmuxd
You can either download a binary and run that or use a python script. The python script is a lot slower than the binary version. On Linux the python method is mostly deprecated; use the binary version provided by libimobiledevice. There is also a newer solution called gandalf.
Using binary
On Windows, ensure iTunes is installed, then download itunnel_mux_rev71.zip from Google Code. Unzip to a directory of choice.
On OS X and Linux, install usbmuxd from your package manager.
Then:
Windows: Run path/to/itunnel_mux.exe --iport 22 --lport 2222
OS X/Linux: iproxy 2222 22
Connect to localhost -p 2222 as you would over wifi.
If you have multiple devices connected, it may be useful to run multiple instances, specifying UDIDs and ports like so:
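The wiki page stops short of the actual commands here; the syntax also differs between iproxy versions, so treat this as a hedged sketch (the UDIDs below are made-up placeholders):

```shell
# Recent libimobiledevice builds take LOCAL:DEVICE port pairs
# and select the device with -u:
iproxy -u 00008020-000111222333444A 2222:22 &
iproxy -u 00008030-000555666777888B 2223:22 &

# Older builds use positional arguments instead:
#   iproxy 2222 22 00008020-000111222333444A

# Each device is then reachable on its own local port:
ssh root@localhost -p 2222
ssh root@localhost -p 2223
```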
Making iproxy run automatically in the background on OS X
Install it with Homebrew (brew install libimobiledevice).
Create the file ~/Library/LaunchAgents/com.usbmux.iproxy.plist with the contents:
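The original snippet is missing from this copy of the page; a minimal launchd agent along these lines should work (the /usr/local/bin/iproxy path is an assumption based on a typical Homebrew install; adjust it to wherever `which iproxy` points):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Unique label; matches the file name by convention. -->
    <key>Label</key>
    <string>com.usbmux.iproxy</string>
    <!-- Equivalent of running: iproxy 2222 22 -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/iproxy</string>
        <string>2222</string>
        <string>22</string>
    </array>
    <!-- Restart the relay if it exits. -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```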
Run launchctl load ~/Library/LaunchAgents/com.usbmux.iproxy.plist.
You now don't have to run the iproxy binary every time you want to SSH over USB as the iproxy software is always running in the background.
If you have several devices you can create a daemon with a specific port for each one.
Create a file in ~/Library/LaunchAgents/ but name it using the device UDID, name or an identifier of your choice (like com.usbmux.iproxy.iPhone7,2.plist).
Replace UDID_HERE in the following snippet with the device UDID. The label should be unique and is best to match the filename you used.
Run launchctl load ~/Library/LaunchAgents/FILE_NAME_OF_YOUR_CHOICE.
You now don't have to run the iproxy binary every time you want to SSH over USB as the iproxy software is always running in the background.
Using python
Tested on OS X and Windows.
You will need to have Python installed on your system.
Get usbmuxd source package and unpack. (Or if the linked usbmuxd package doesn't work, try libusbmuxd.)
Go into folder python-client
chmod +x tcprelay.py
Run ./tcprelay.py -t 22:2222
Now you can log into your device via ssh mobile@localhost -p 2222
The -t switch tells tcprelay to run threaded and allow more than one ssh over the same port.
Proxy Download For Google Chrome
See ./tcprelay.py --help for further options.
Using gandalf
Tested on OS X and Linux, each with up to 29 devices connected at the same time. The advantage of using gandalf is that it is written in a functional programming language, which practically means that it won't give you seg faults, and it is actively maintained: https://github.com/onlinemediagroup/ocaml-usbmux
Installation
You need to have opam installed, it is OCaml's package manager.
On OS X you can do:
(If on Linux, then get opam via your package manager; details are available at https://opam.ocaml.org/doc/Install.html. Ubuntu users please pay attention: you need to use a PPA for opam.) It is important that your compiler is up to date; you can check with opam switch. Make sure it's at least >= 4.02.0
then
This will install the command line tool gandalf and an OCaml library.
gandalf usage.
Tumblr media
The following are a series of usages of gandalf. All short-form arguments have long forms as well, and -v can be added at any time.
1) See with realtime updates what devices are connected
This will start up gandalf in listen mode; that is, it will print out whenever a device connects or disconnects and, more crucially, it will print out the UDID of each device.
2) Start with a mapping file which is of the form
So an example mapping file would be:
and the gandalf invocation is:
2.1) You can also daemonize gandalf with the -d flag. *NOTE*: You might need to do that under sudo, as gandalf needs to make a pid file under /var/run.
3) To see a pretty JSON representation of devices and their ports that are currently connected, do:
4) To reload gandalf with a new set of mappings, do:
This will cancel all running threads and reload from the original mappings file, so make your changes there.
5) To cleanly exit gandalf, do: *NOTE* This might require super user permissions.
Check out the man page, accessible with:
or
Simple invocation:
Important Notes and Catches
1) If you are running this on Linux, then you might get issues with usbmuxd when more than around 7 devices are plugged in. This is because multiple threads are trying to call various libxml2 freeing functions. I have a forked version of libplist that usbmuxd uses, sans the memory-freeing calls. It's available here. Compile and install that, then compile and install usbmuxd from source. This will leak memory, but it's not that much at all, and I believe it to be a fixed amount.
2) Another issue you might have is USB 3.0. The Linux kernel might crap out on you after 13 devices. This is a combination of the kernel not giving enough resources and the host controller on your motherboard being crappy. The solution to this problem is to disable USB 3.0 in your BIOS. To verify that USB 3.0 isn't working, check with lsusb
SSH over USB using the iFunBox GUI (Windows only)
This feature only exists in the Windows build of iFunBox.
Get the latest Windows build of iFunBox and install it.
Click on 'Quick Toolbox,' then 'USB Tunnel.'
Assign ports as you see fit.
SSH over USB using iPhoneTunnel Menu Bar Application (macOS only)
Turn Tunnel On
Tools -> SSH
Theos usage
Export the following variables in your shell in order to deploy builds to the connected device:
export THEOS_DEVICE_IP=localhost
export THEOS_DEVICE_PORT=2222
SSH without password
Run the following commands one time and you will not be asked to type your password again.
You must create an SSH key with ssh-keygen if you have not created one. A passphrase isn’t required but still recommended. You can use ssh-agentas described here to keep the passphrase in memory and not be prompted for it constantly.
Then run the following command: ssh-copy-id root@DEVICE_IP
On OS X, ssh-copy-id will need to be installed with brew install ssh-copy-id.
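End to end, combined with the USB tunnel from the earlier sections, the passwordless flow looks roughly like this (port 2222 assumes the iproxy mapping used above; the ed25519 key type is my own reasonable default, not mandated by the wiki):

```shell
# One-time: generate a key pair if you don't already have one.
ssh-keygen -t ed25519

# Copy the public key to the device through the USB tunnel.
ssh-copy-id -p 2222 root@localhost

# From now on, logins use the key instead of the password.
ssh -p 2222 root@localhost
```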
Retrieved from 'https://iphonedevwiki.net/index.php?title=SSH_Over_USB&oldid=5201'
Q: What is checkra1n? A: checkra1n is a community project to provide a high-quality semi-tethered jailbreak to all, based on the ‘checkm8’ bootrom exploit.
Q: How does it work? A: Magic hax.
Q: Why was the beta release delayed? A: We didn't want the release quality to end up like iOS 13.2, you deserve better.
Q: wen eta? A: bruh we're past that.
Q: How do I use it? A: Open the checkra1n app, and follow the instructions to put your device into DFU mode. Hax happens auto-magically from that point and the device will boot into jailbroken mode. If you reboot the device without checkra1n, it will revert to stock iOS, and you will not be able to use any 3rd party software installed until you enter DFU and checkra1n the device again.
Q: Ugh, I don't like GUI? A: Ok, you can use './checkra1n.app/Contents/MacOS/checkra1n -c' from the console, or download a Linux CLI build.
Q: Is it safe to jailbreak? Can it harm my device / wipe my data? A: We believe jailbreaking is safe and take precautions to avoid data loss. However, as with any software, bugs can happen and *no warranty is provided*. We do recommend you backup your device before running checkra1n.
Q: I have a problem or issue to report after jailbreaking. A: Many problems and bootloops can be caused by buggy or incompatible tweaks. Remember many tweaks never saw iOS 13 in the pre-checkra1n era. If you suspect a recently installed tweak, you may attempt to enter no-substrate mode by holding vol-up during boot (starting with Apple logo until boot completes). If the issue goes away, a bad tweak is very likely the culprit, and you should contact the tweak developers.
Q: I have a problem or issue to report and I don't think it's related to a bad tweak. A: Please check here and follow the bug report template.
Q: I lost my passcode. Can checkra1n decrypt my data or get access to a locked device? A: No.
Q: Can I ssh into my device? A: Yes! An SSH server is deployed on port 44 on localhost only. You can expose it on your local machine using iproxy via USB.
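In concrete terms, that means forwarding a free local port to the device's port 44 and pointing ssh at it; a sketch (the local port 4444 is an arbitrary choice, not from the FAQ):

```shell
iproxy 4444 44               # relay localhost:4444 to device port 44 over USB
ssh root@localhost -p 4444   # default root password on iOS is 'alpine'
```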
Q: I love the project! Can I donate? A: Thanks, we love it too! The project does not currently take any donations. If anyone asks for donations, it's a scam.
Q: Where are the sources? I want to write a dark-mode theme and publish the jailbreak as my own. A: checkra1n is released in binary form only at this stage. We plan to open-source later in 2020.
Q: When is Windows support coming? A: We need to write a kernel driver to support Windows (which is a very complex piece of code!) which will take time. Rest assured however, we are working hard on it.
t-baba · 5 years ago
How to Install MySQL
Almost all web applications require server-based data storage, and MySQL continues to be the most-used database solution. This article discusses various options for using MySQL on your local system during development.
MySQL is a free, open-source relational database. MariaDB is a fork of the database created in 2010 following concerns about the Oracle acquisition of MySQL. (It is functionally identical, so most of the concepts described in this article also apply to MariaDB.)
While NoSQL databases have surged in recent years, relational data is generally more practical for the majority of applications. That said, MySQL also supports NoSQL-like data structures such as JSON fields so you can enjoy the benefits of both worlds.
The following sections examine three primary ways to use MySQL in your local development environment:
cloud-based solutions
using Docker containers
installing on your PC.
Cloud-based MySQL
MySQL services are offered by AWS, Azure, Google Cloud, Oracle, and many other specialist hosting services. Even low-cost shared hosts offer MySQL with remote HTTPS or tunneled SSH connections. You can therefore use a MySQL database remotely in local development. The benefits:
no database software to install or manage
your production environment can use the same system
more than one developer can easily access the same data
it's ideal for those using cloud-based IDEs or lower-specification devices such as Chromebooks
features such as automatic scaling, replication, sharding, and backups may be included.
The downsides:
set-up can still take considerable time
connection libraries and processes may be subtly different across hosts
experimentation is more risky; any developer can accidentally wipe or alter the database
development will cease when you have no internet connection
there may be eye-watering usage costs.
A cloud-based option may be practical for those with minimal database requirements or large teams working on the same complex datasets.
Run MySQL Using Docker
Docker is a platform which allows you to build, share, and run applications in containers. Think of a container as an isolated virtual machine with its own operating system, libraries, and the application files. (In reality, containers are lightweight processes which share resources on the host.)
A Docker image is a snapshot of a file system which can be run as a container. The Docker Hub provides a wide range of images for popular applications, and databases including MySQL and MariaDB. The benefits:
all developers can use the same Docker image on macOS, Linux, and Windows
MySQL installation configuration and maintenance is minimal
the same base image can be used in development and production environments
developers retain the benefits of local development and can experiment without risk.
Docker is beyond the scope of this article, but key points to note:
Docker is a client–server application. The server is responsible for managing images and containers and can be controlled via a REST API using the command line interface. You can therefore run the server daemon anywhere and connect to it from another machine.
Separate containers should be used for each technology your web application requires. For example, your application could use three containers: a PHP-enabled Apache web server, a MySQL database, and an Elasticsearch engine.
By default, containers don’t retain state. Data saved within a file or database will be lost the next time the container restarts. Persistency is implemented by mounting a volume on the host.
Each container can communicate with others in their own isolated network. Specific ports can be exposed to the host machine as necessary.
A commercial, enterprise edition of Docker is available. This article refers to the open-source community edition, but the same techniques apply.
Install Docker
Instructions for installing the latest version of Docker on Linux are available on Docker Docs. You can also use official repositories, although these are likely to have older editions. For example, on Ubuntu:
sudo apt-get update sudo apt-get remove docker docker-engine docker.io sudo apt install docker.io sudo systemctl start docker sudo systemctl enable docker
Installation will vary on other editions of Linux, so search the Web for appropriate instructions.
Docker CE Desktop for macOS Sierra 10.12 and above and Docker CE Desktop for Windows 10 Professional are available as installable packages. You must register at Docker Hub and sign in to download.
Docker on Windows 10 uses the Hyper-V virtualization platform, which you can enable from the Turn Windows features on or off panel accessed from Programs and Features in the Control Panel. Docker can also use the Windows Subsystem for Linux 2 (WSL2, currently in beta).
To ensure Docker can access the Windows file system, choose Settings from the Docker tray icon menu, navigate to the Shared Drives pane, and check which drives the server is permitted to use.
Check Docker has successfully installed by entering docker version at your command prompt. Optionally, try docker run hello-world to verify Docker can pull images and start containers as expected.
Run a MySQL Container
To make it easier for Docker containers to communicate, create a bridged network named dbnet or whatever name you prefer (this step can be skipped if you just want to access MySQL from the host device):
docker network create --driver bridge dbnet
Now create a data folder on your system where MySQL tables will be stored — such as mkdir data.
The most recent MySQL 8 server can now be launched with:
docker run -d --rm --name mysql --net dbnet -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysecret -v $PWD/data:/var/lib/mysql mysql:8
Arguments used:
-d runs the container as a background service.
--rm removes the container when it stops running.
--name mysql assigns a name of mysql to the container for easier management.
-p 3306:3306 forwards the container port to the host. If you wanted to use port 3307 on the host, you would specify -p 3307:3306.
-e defines an environment variable, in this case the default MySQL root user password is set to mysecret.
-v mounts a volume so the /var/lib/mysql MySQL data folder in the container will be stored at the current folder's data subfolder on the host.
$PWD is the current folder, but this only works on macOS and Linux. Windows users must specify the whole path using forward slash notation — such as /c/mysql/data.
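If you prefer a declarative setup, the same container can be expressed as a Compose file. This is a sketch equivalent to the docker run command above; the service name and file name are my own choices, not from the article:

```yaml
# docker-compose.yml: equivalent of the docker run invocation above.
services:
  mysql:
    image: mysql:8
    container_name: mysql
    ports:
      - "3306:3306"                  # host:container
    environment:
      MYSQL_ROOT_PASSWORD: mysecret  # matches the -e flag above
    volumes:
      - ./data:/var/lib/mysql        # persist tables on the host
    networks:
      - dbnet
networks:
  dbnet:
    driver: bridge
```

Start it with docker compose up -d, and the rest of the article applies unchanged.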
The first time you run this command, MySQL will take several minutes to start as the Docker image is downloaded and the MySQL container is configured. Subsequent restarts will be instantaneous, presuming you don’t delete or change the original image. You can check progress at any time using:
docker logs mysql
Using the Container MySQL Command-line Tool
Once started, open a bash shell on the MySQL container using:
docker exec -it mysql bash
Then connect to the MySQL server as the root user:
mysql -u root -pmysecret
-p is followed by the password set in Docker's -e argument shown above. Don’t add a space!
Any MySQL commands can now be used — such as show databases;, create database new; and so on.
Use a MySQL client
Any MySQL client application can connect to the server on port 3306 of the host machine.
If you don't have a MySQL client installed, Adminer is a lightweight PHP database management tool which can also be run as a Docker container!
docker run -d --rm --name adminer --net dbnet -p 8080:8080 adminer
Once started, open http://localhost:8080 in your browser and enter mysql as the server name, root as the username, and mysecret as the password:
Databases, users, tables, and associated settings can now be added, edited, or removed.
The post How to Install MySQL appeared first on SitePoint.
by Craig Buckler via SitePoint https://ift.tt/2U399ve
deep-context-tech · 6 years ago
Putty Version 0.61 and Putty Connection Manager
I wanted to update this post. Putty Connection Manager is no longer active. I have switched to SuperPuTTY, which is available here: SuperPuTTY. SuperPuTTY does not use "login scripts". For SSH connections you can pass a username and password; however, you cannot do this for telnet. Here is a SuperPuTTY new-session configuration using SSH and passing a username and password.
I use Putty for telnet, SSH, and serial connections. Putty is an excellent terminal program. I also use Putty Connection Manager for tabbed windows and login scripts. Putty Connection Manager is a free Putty client add-on: http://puttycm.free.fr/cms/index.php

Simon Tatham has released version 0.61 of Putty. Putty 0.61 can be downloaded at: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

2011-07-12 PuTTY 0.61 is released. PuTTY 0.61 is out, after over four years (sorry!), with new features, bug fixes, and compatibility updates for Windows 7 and various SSH server software. These features are new in beta 0.61 (released 2011-07-12):

- Kerberos/GSSAPI authentication in SSH-2.
- Local X11 authorisation support on Windows. (Unix already had it, of course.)
- Support for non-fixed-width fonts on Windows.
- GTK 2 support on Unix.
- Specifying the logical host name independently of the physical network address to connect to.
- Crypto and flow control optimisations.
- Support for the [email protected] SSH-2 compression method.
- Support for new Windows 7 UI features: Aero resizing and jump lists.
- Support for OpenSSH AES-encrypted private key files in PuTTYgen.
- Bug fix: handles OpenSSH private keys with primes in either order.
- Bug fix: corruption of port forwarding is fixed (we think).
- Bug fix: various crashes and hangs when exiting on failure.
- Bug fix: hang in the serial back end on Windows.
- Bug fix: Windows clipboard is now read asynchronously, in case of deadlock due to the clipboard owner being at the far end of the same PuTTY's network connection (either via X forwarding or via tunnelled rdesktop).
Putty Connection Manager Features:

- Tabs and dockable windows for PuTTY instances.
- Fully compatible with PuTTY configuration (using registry).
- Easily customizable to optimize workspace (fullscreen, minimize to tray, add/remove toolbar, etc.).
- Automatic login feature regardless of protocol restrictions (user keyboard simulation).
- Post-login commands (execute any shell command when logged in).
- Connection Manager: manage a large number of connections with specific configuration (auto-login, specific PuTTY session, post-command, etc.).
- Quick connect toolbar to quickly launch a PuTTY connection.
- Import/export whole connection information to XML format (generate your configuration automatically from another tool and import it, or export your configuration for backup purposes).
- Encrypted configuration database option available to store connection information safely (external library supporting the AES algorithm with key sizes of 128, 192 and 256 bits; please refer to the legal status of encryption software in your country).
- Standalone executable, no setup required.
- Localizable: English (default) and French available (only when using the setup version; the standalone is English only).
- Completely free for commercial and personal use: PuTTY Connection Manager is freeware.
just4programmers · 6 years ago
How to use Windows 10's built-in OpenSSH to automatically SSH into a remote Linux machine
In working on getting Remote debugging with VS Code on Windows to a Raspberry Pi using .NET Core on ARM in my last post, I was looking for optimizations and realized that I was using plink/putty for my SSH tunnel. Putty is one of those tools that we (as developers) often take for granted, but ideally I could do stuff like this without installing yet another tool. Being able to use out of the box tools has a lot of value.
A friend pointed out this part where I'm using plink.exe to ssh into the remote Linux machine to launch the VS Debugger:
"pipeTransport": { "pipeCwd": "${workspaceFolder}", "pipeProgram": "${env:ChocolateyInstall}\\bin\\PLINK.EXE", "pipeArgs": [ "-pw", "raspberry", "[email protected]" ], "debuggerPath": "/home/pi/vsdbg/vsdbg" }
I could use the Linux/bash that's been built into Windows 10 for years now. As you may know, Windows 10 can run many Linuxes out of the box. If I have a Linux distro configured, I can call Linux commands locally from CMD or PowerShell. For example, here you see I have three Linuxes and one is the default. I can call "wsl" and any command line is passed in.
C:\Users\scott> wslconfig /l Windows Subsystem for Linux Distributions: Ubuntu-18.04 (Default) WLinux Debian C:\Users\scott> wsl ls ~/ forablog forablog.2 forablog.2.save forablog.pub myopenaps notreal notreal.pub test.txt
So theoretically I could "wsl ssh" and use that Linux's ssh, but again, that requires setup and it's a little silly. Windows 10 now supports OpenSSH already!
Open an admin PowerShell to see if you have it installed. Here I have the client software installed but not the server.
C:\> Get-WindowsCapability -Online | ? Name -like 'OpenSSH*' Name : OpenSSH.Client~~~~0.0.1.0 State : Installed Name : OpenSSH.Server~~~~0.0.1.0 State : NotPresent
You can then add the client (or server) with this one-time command:
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
You'll get all the standard OpenSSH stuff that one would want.
Let's say now that I want to be able to ssh (shoosh!) into a remote Linux machine using SSH keys rather than a password. It's much more convenient and secure. I'll be ssh'ing with my Windows SSH into a remote Linux machine. You can see where ssh is installed:
C:\Users\scott>where ssh C:\Windows\System32\OpenSSH\ssh.exe
Level set - What are we doing and what are we trying to accomplish?
I want to be able to type "ssh pi@crowpi" from my Windows machine and automatically be logged in.
I will
Make a key on my Window machine. The FROM. I want to ssh FROM here TO the Linux machine.
Tell the Linux machine (by transferring it over) about the public piece of my key and add it to a specific user's authorized_keys.
PROFIT
Here's what I did. Note you can do this in several ways. You can gen the key on the Linux side and scp it over, you can use a custom key and give it a filename, you can use a password as you like. Just get the essence right.
Below, note that when the command line is C:\ I'm on Windows and when it's $ I'm on the remote Linux machine/Raspberry Pi.
gen the key on Windows with ssh-keygen
I ssh'ed over to Linux and note I'm prompted for a password, as expected.
I "ls" to see that I have a .ssh/ folder. Cool. You can see authorized_keys is in there; you may or may not have this file or folder. Make the ~/.ssh folder if you don't.
Exit out. I'm in Windows now.
Look closely here. I'm "scott" on Windows so my public key is in c:\users\scott\.ssh\id_rsa.pub. Yours could be in a file you named earlier, be conscious.
I'm type'ing (cat on Linux is type on Windows) that text file out and piping it into SSH where I log in to that remote machine with the user pi, and I then cat (on the Linux side now) and append >> that text to the .ssh/authorized_keys file. The ~ folder is implied but could be added if you like.
Now when I ssh pi@crowpi I should NOT be prompted for a password.
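That >> append is worth dwelling on. Here is the same idiom locally, with no ssh involved and made-up file contents, showing why it is safe to run against an existing authorized_keys:

```shell
# The >> redirection appends rather than overwrites, so existing keys survive.
printf 'existing-key\n' > /tmp/authorized_keys            # pretend this is the remote file
printf 'new-key\n' | cat >> /tmp/authorized_keys          # same pipe-into-append idiom
cat /tmp/authorized_keys
```

If you had used a single > instead, the existing keys would have been clobbered.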
Here's the whole thing.
C:\Users\scott\Desktop> ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (C:\Users\scott/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in C:\Users\scott/.ssh/id_rsa. Your public key has been saved in C:\Users\scott/.ssh/id_rsa.pub. The key fingerprint is: SHA256:x2vJHHXwosSSzLHQWziyx4II+scott@IRONHEART The key's randomart image is: +---[RSA 2048]----+ | . .... . | |..+. .=+=. o | | .. | +----[SHA256]-----+ C:\Users\scott\Desktop> ssh pi@crowpi pi@crowpi's password: Linux crowpi 2018 armv7l pi@crowpi:~ $ ls .ssh/ authorized_keys id_rsa id_rsa.pub known_hosts pi@crowpi:~ $ exit logout Connection to crowpi closed. C:\Users\scott\Desktop> type C:\Users\scott\.ssh\id_rsa.pub | ssh pi@crowpi 'cat >> .ssh/authorized_keys' pi@crowpi's password: C:\Users\scott\Desktop> ssh pi@crowpi pi@crowpi: ~ $
Fab. At this point I could go BACK to my Windows' Visual Studio Code launch.json and simplify it to NOT use Plink/Putty and just use ssh and the ssh key management that's included with Windows.
"pipeTransport": { "pipeCwd": "${workspaceFolder}", "pipeProgram": "ssh", "pipeArgs": [ "[email protected]" ], "debuggerPath": "/home/pi/vsdbg/vsdbg" }
Cool!
NOTE: In my previous blog post some folks noted I am logging in as "root." That's an artifact of the way that .NET Core is accessing the GPIO pins. That won't be like that forever.
Thoughts? I hope this helps someone.
Sponsor: Your code is bad, but that’s ok thanks to Sentry’s full stack error monitoring that enables you to track and fix application errors in real time. Stop garbage code from becoming garbage fires.
© 2018 Scott Hanselman. All rights reserved.
savetopnow · 7 years ago
2018-03-18 00 LINUX now
LINUX
Linux Academy Blog
Linux Academy Weekly Roundup 110
Announcing Python 3 for System Administrators
Linux Academy Weekly Roundup 109
The Story of Python 2 and 3
Happy International Women’s Day!
Linux Insider
New Raspberry Pi Packs More Power
SpaceChain, Arch Aim to Archive Human Knowledge in Space
Deepin Desktop Props Up Pardus Linux
Kali Linux Security App Lands in Microsoft Store
Microsoft Gives Devs More Open Source Quantum Computing Goodies
Linux Journal
Weekend Reading: All Things Bash
Security: 17 Things
Private Internet Access Goes Open Source, New Raspbian Image Available, Scarlett Johansson Image an Attack Vector on PostgreSQL and More
Oracle Patches Spectre for Red Hat
Linus Bashes CTS Labs, GNOME 3.28 Released, Project ACRN and More
Linux Magazine
OpenStack Queens Released
Kali Linux Comes to Windows
Ubuntu to Start Collecting Some Data with Ubuntu 18.04
CNCF Illuminates Serverless Vision
LibreOffice 6.0 Released
Linux Today
Intel Outlines Plans for Cascade Lake Xeon Scalable Processors
NATS Messaging Project Joins Cloud Native Computing Foundation
How to reset a Windows password with Linux
Linux man Command Tutorial for Beginners (8 Examples)
Gogo - Create Shortcuts to Long and Complicated Paths in Linux
Linux.com
How to Encrypt Files From Within a File Manager
Linux Beats Windows To Become The Most Popular Development Platform: Stack Overflow Survey 2018
Container Isolation Gone Wrong
ONAP Set to Speed Standards, Network Automation
Introducing Agones: Open Source, Multiplayer, Dedicated Game-Server Hosting Built on Kubernetes
Reddit Linux
UBports Ubuntu Touch Q&A 25 Live @ 19:00 UTC
Humble Bundle: DIY Electronics digital books
Raspbian Remix Lets You Create Your Own Spin That You Can Install on PC or Mac
Ubuntu Has Made its Minimal Images Even More Minimal — Just 28MB!
FAI (Fully Automatic Installation) for Debian GNU/Linux
Riba Linux
MX Linux 17.1 overview | simple configuration, high stability, solid performance
How to install Neptune 5.0
Neptune 5.0 overview | an elegant out of the box experience.
How to install Pardus 17.2
Pardus 17.2 overview | a competitive and sustainable operating system
Slashdot Linux
North Carolina Police Obtained Warrants Demanding All Google Users Near Four Crime Scenes
The Ordinary Engineering Behind the Horrifying Florida Bridge Collapse
Ford's Badly Needed Plan To Catch Up On Hybrid, Electric Cars
Apple's Newest iPhone X Ad Captures an Embarrassing iOS 11 Bug
Amazon Alexa's 'Brief Mode' Makes the Digital Assistant Way Less Chatty
Softpedia
Wine 3.4
Linux Kernel 4.15.10 / 4.16 RC5
Linux Kernel 4.14.27 LTS / 4.9.87 LTS / 4.4.121 LTS / 4.1.50 LTS / 3.18.99 EOL / 3.16.55 LTS
WebKitGTK+ 2.20.0
gscan2pdf 2.0.1
Tecmint
Gogo – Create Shortcuts to Long and Complicated Paths in Linux
5 ‘hostname’ Command Examples for Linux Newbies
Get GOOSE VPN Subscriptions to Browse Anonymously and Securely
AMP – A Vi/Vim Inspired Text Editor for Linux Terminal
How to Install Rust Programming Language in Linux
nixCraft
Raspberry PI 3 model B+ Released: Complete specs and pricing
Debian Linux 9.4 released and here is how to upgrade it
400K+ Exim MTA affected by overflow vulnerability on Linux/Unix
Book Review: SSH Mastery – OpenSSH, PuTTY, Tunnels & Keys
How to use Chomper Internet blocker for Linux to increase productivity
technteacher · 5 years ago
OpenSSH 8.2 Released With FIDO/U2F Hardware Authentication
OpenSSH 8.2 Adds Supports FIDO/U2F Hardware Authentication.
OpenSSH is a suite of secure networking utilities based on the Secure Shell protocol for remote login. It encrypts all traffic to eliminate eavesdropping, connection hijacking, and other attacks.
In addition, OpenSSH provides a large suite of secure tunneling capabilities, several authentication methods, and sophisticated configuration options.
OpenSSH is a complete SSH protocol 2.0 implementation and includes SFTP client and server support.
Future deprecation notice
It is now possible to perform chosen-prefix attacks against the SHA-1 hash algorithm for less than USD $50K. For this reason, we will be disabling the “ssh-rsa” public key signature algorithm that depends on SHA-1 by default in a near-future release.
This algorithm is unfortunately still used widely despite the existence of better alternatives, being the only remaining public key signature algorithm specified by the original SSH RFCs.
The better alternatives include:
The RFC8332 RSA SHA-2 signature algorithms rsa-sha2-256/512. These algorithms have the advantage of using the same key type as “ssh-rsa” but use the safe SHA-2 hash algorithms. These have been supported since OpenSSH 7.2 and are already used by default if the client and server support them.
The ssh-ed25519 signature algorithm. It has been supported in OpenSSH since release 6.5.
The RFC5656 ECDSA algorithms: ecdsa-sha2-nistp256/384/521. These have been supported by OpenSSH since release 5.7.
To check whether a server is using the weak ssh-rsa public key algorithm for host authentication, try to connect to it after removing the ssh-rsa algorithm from ssh(1)’s allowed list:
ssh -oHostKeyAlgorithms=-ssh-rsa user@host
If the host key verification fails and no other supported host key types are available, the server software on that host should be upgraded.
A future release of OpenSSH will enable UpdateHostKeys by default to allow the client to automatically migrate to better algorithms. Users may consider enabling this option manually.
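For example, a hypothetical ~/.ssh/config stanza that opts in now, rather than waiting for the default to change, might look like:

```
# ~/.ssh/config: enable automatic host key rotation ahead of the default change
Host *
    UpdateHostKeys yes
```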
“SHA-1 is a Shambles: First Chosen-Prefix Collision on SHA-1 and Application to the PGP Web of Trust” Leurent, G and Peyrin, T (2020) https://ift.tt/35vtzPF
Security
ssh(1), sshd(8), ssh-keygen(1): this release removes the “ssh-rsa” (RSA/SHA1) algorithm from those accepted for certificate signatures (i.e. the client and server CASignatureAlgorithms option) and will use the rsa-sha2-512 signature algorithm by default when the ssh-keygen(1) CA signs new certificates.
Certificates are at special risk to the aforementioned SHA1 collision vulnerability as an attacker has effectively unlimited time in which to craft a collision that yields them a valid certificate, far more than the relatively brief LoginGraceTime window that they have to forge a host key signature.
The OpenSSH certificate format includes a CA-specified (typically random) nonce value near the start of the certificate that should make exploitation of chosen-prefix collisions in this context challenging, as the attacker does not have full control over the prefix that actually gets signed. Nonetheless, SHA1 is now a demonstrably broken algorithm, and further improvements in attacks are highly likely.
OpenSSH releases prior to 7.2 do not support the newer RSA/SHA2 algorithms and will refuse to accept certificates signed by an OpenSSH 8.2+ CA using RSA keys unless the unsafe algorithm is explicitly selected during signing (“ssh-keygen -t ssh-rsa”).
Older clients/servers may use another CA key type such as ssh-ed25519 (supported since OpenSSH 6.5) or one of the ecdsa-sha2-nistp256/384/521 types (supported since OpenSSH 5.7) instead if they cannot be upgraded.
Potentially-incompatible changes
This release includes a number of changes that may affect existing configurations:
ssh, sshd: the above removal of “ssh-rsa” from the accepted CASignatureAlgorithms list.
ssh, sshd: this release removes diffie-hellman-group14-sha1 from the default key exchange proposal for both the client and server.
ssh-keygen: the command-line options related to the generation and screening of safe prime numbers used by the diffie-hellman-group-exchange-* key exchange algorithms have changed. Most options have been folded under the -O flag.
sshd: the sshd listener process title visible to ps(1) has changed to include information about the number of connections that are currently attempting authentication and the limits configured by MaxStartups.
ssh-sk-helper: this is a new binary. It is used by the FIDO/U2F support to provide address-space isolation for token middleware libraries (including the internal one). It needs to be installed in the expected path, typically under /usr/libexec or similar.
Changes since OpenSSH 8.1
This release contains some significant new features.
FIDO/U2F Support
This release adds support for FIDO/U2F hardware authenticators to OpenSSH. U2F/FIDO are open standards for inexpensive two-factor authentication hardware that are widely used for website authentication. In OpenSSH FIDO devices are supported by new public key types “ecdsa-sk” and “ed25519-sk”, along with corresponding certificate types.
ssh-keygen(1) may be used to generate a FIDO token-backed key, after which they may be used much like any other key type supported by OpenSSH, so long as the hardware token is attached when the keys are used. FIDO tokens also generally require the user explicitly authorise operations by touching or tapping them.
Generating a FIDO key requires the token be attached, and will usually require the user tap the token to confirm the operation:
$ ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk
Generating public/private ecdsa-sk key pair.
You may need to touch your security key to authorize key generation.
Enter file in which to save the key (/home/djm/.ssh/id_ecdsa_sk):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/djm/.ssh/id_ecdsa_sk
Your public key has been saved in /home/djm/.ssh/id_ecdsa_sk.pub
This will yield a public and private key-pair. The private key file should be useless to an attacker who does not have access to the physical token. After generation, this key may be used like any other supported key in OpenSSH and may be listed in authorized_keys, added to ssh-agent, etc. The only additional stipulation is that the FIDO token that the key belongs to must be attached when the key is used.
FIDO tokens are most commonly connected via USB but may be attached via other means such as Bluetooth or NFC. In OpenSSH, communication with the token is managed via a middleware library, specified by the SecurityKeyProvider directive in ssh/sshd_config or the $SSH_SK_PROVIDER environment variable for ssh-keygen(1) and ssh-add(1). The API for this middleware is documented in the sk-api.h and PROTOCOL.u2f files in the source distribution.
OpenSSH includes a middleware ("SecurityKeyProvider=internal") with support for USB tokens. It is automatically enabled in OpenBSD and may be enabled in portable OpenSSH via the configure flag --with-security-key-builtin. If the internal middleware is enabled then it is automatically used by default.
This internal middleware requires that libfido2 (https://ift.tt/2Mx8mO6) and its dependencies be installed. We recommend that packagers of portable OpenSSH enable the built-in middleware, as it provides the lowest-friction experience for users.
Note: FIDO/U2F tokens are required to implement the ECDSA-P256 “ecdsa-sk” key type, but hardware support for Ed25519 “ed25519-sk” is less common. Similarly, not all hardware tokens support some of the optional features such as resident keys.
The protocol-level changes to support FIDO/U2F keys in SSH are documented in the PROTOCOL.u2f file in the OpenSSH source distribution.
There are a number of supporting changes to this feature:
ssh-keygen: add a “no-touch-required” option when generating FIDO-hosted keys, that disables their default behaviour of requiring a physical touch/tap on the token during authentication.
Note: not all tokens support disabling the touch requirement.
sshd: add a sshd_config PubkeyAuthOptions directive that collects miscellaneous public key authentication-related options for sshd. At present it supports only a single option “no-touch-required”. This causes sshd to skip its default check for FIDO/U2F keys that the signature was authorised by a touch or press event on the token hardware.
ssh, sshd, ssh-keygen: add a “no-touch-required” option for authorized_keys and a similar extension for certificates. This option disables the default requirement that FIDO key signatures attest that the user touched their key to authorize them, mirroring the similar PubkeyAuthOptions sshd_config option.
ssh-keygen: add support for writing the FIDO attestation information that is returned when new keys are generated via the "-O write-attestation=/path" option. FIDO attestation certificates may be used to verify that a FIDO key is hosted in trusted hardware. OpenSSH does not currently make use of this information, beyond optionally writing it to disk.
FIDO2 resident keys
FIDO/U2F OpenSSH keys consist of two parts: a “key handle” part stored in the private key file on disk, and a per-device private key that is unique to each FIDO/U2F token and that cannot be exported from the token hardware. These are combined by the hardware at authentication time to derive the real key that is used to sign authentication challenges.
For tokens that are required to move between computers, it can be cumbersome to have to move the private key file first. To avoid this requirement, tokens implementing the newer FIDO2 standard support “resident keys”, where it is possible to effectively retrieve the key handle part of the key from the hardware.
OpenSSH supports this feature, allowing resident keys to be generated using the ssh-keygen “-O resident” flag. This will produce a public/private key pair as usual, but it will be possible to retrieve the private key part from the token later.
This may be done using “ssh-keygen -K”, which will download all available resident keys from the tokens attached to the host and write public/private key files for them. It is also possible to download and add resident keys directly to ssh-agent without writing files to the file-system using “ssh-add -K”.
Resident keys are indexed on the token by the application string and user ID. By default, OpenSSH uses an application string of “ssh:” and an empty user ID. If multiple resident keys on a single token are desired then it may be necessary to override one or both of these defaults using the ssh-keygen “-O application=” or “-O user=” options. Note: OpenSSH will only download and use resident keys whose application string begins with “ssh:”
Storing both parts of a key on a FIDO token increases the likelihood of an attacker being able to use a stolen token device. For this reason, tokens should enforce PIN authentication before allowing download of keys, and users should set a PIN on their tokens before creating any resident keys.
Other New Features
sshd: add an Include sshd_config keyword that allows including additional configuration files via glob patterns.
ssh/sshd: make the LE (low effort) DSCP code point available via the IPQoS directive.
ssh: when AddKeysToAgent=yes is set and the key contains no comment, add the key to the agent with the key’s path as the comment. bz2564
ssh-keygen, ssh-agent(1): expose PKCS#11 key labels and X.509 subjects as key comments, rather than simply listing the PKCS#11 provider library path.
ssh-keygen: allow PEM export of DSA and ECDSA keys.
ssh, sshd: make zlib compile-time optional, available via the Makefile.inc ZLIB flag on OpenBSD or via the --with-zlib configure option for OpenSSH portable.
sshd: when clients get denied by MaxStartups, send a notification prior to the SSH2 protocol banner according to RFC4253 section 4.2.
ssh, ssh-agent(1): when invoking the $SSH_ASKPASS prompt program, pass a hint to the program to describe the type of desired prompt. The possible values are “confirm” (indicating that a yes/no confirmation dialog with no text entry should be shown), “none” (to indicate an informational message only), or blank for the original ssh-askpass behaviour of requesting a password/phrase.
ssh: allow forwarding a different agent socket to the path specified by $SSH_AUTH_SOCK, by extending the existing ForwardAgent option to accept an explicit path or the name of an environment variable in addition to yes/no.
ssh-keygen: add a new signature operation "find-principals" to look up the principal associated with a signature from an allowed-signers file.
sshd: expose the number of currently-authenticating connections along with the MaxStartups limit in the process title visible to “ps”.
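Several of the entries above are configuration directives. As a sketch of the new Include keyword (the file layout here is hypothetical), an sshd_config can pull in per-role snippets via a glob:

```
# /etc/ssh/sshd_config (hypothetical layout)
Include /etc/ssh/sshd_config.d/*.conf

# /etc/ssh/sshd_config.d/10-hardening.conf
PasswordAuthentication no
```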
Bugfixes
sshd: make ClientAliveCountMax=0 have sensible semantics:
it will now disable connection killing entirely rather than the current behaviour of instantly killing the connection after the first liveness test regardless of success.
sshd: clarify order of AllowUsers / DenyUsers vs AllowGroups / DenyGroups in the sshd manual page.
sshd: better describe HashKnownHosts in the manual page.
sshd: clarify that permitopen=/PermitOpen do no name or address translation in the manual page. bz3099
sshd: allow the UpdateHostKeys feature to function when multiple known_hosts files are in use. When updating host keys, ssh will now search subsequent known_hosts files, but will add updated host keys to the first specified file only.
All: replace all calls to signal(2) with a wrapper around sigaction(2). This wrapper blocks all other signals during the handler preventing races between handlers, and sets SA_RESTART which should reduce the potential for short read/write operations.
sftp: fix a race condition in the SIGCHLD handler that could turn into a kill(-1).
sshd: fix a case where valid (but extremely large) SSH channel IDs were being incorrectly rejected.
ssh: when checking host key fingerprints as answers to new hostkey prompts, ignore whitespace surrounding the fingerprint itself.
All: wait for file descriptors to be readable or writeable during non-blocking connect, not just readable. Prevents a timeout when the server doesn’t immediately send a banner (e.g. multiplexers like sslh)
sshd_config: document the [email protected] key exchange algorithm.
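With the corrected ClientAliveCountMax semantics described in the first fix above, a configuration that probes for liveness but never kills the connection might look like:

```
# sshd_config: send keepalive probes every 60s, but never terminate on missed replies
ClientAliveInterval 60
ClientAliveCountMax 0
```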
Portability
 sshd: multiple adjustments to the Linux seccomp sandbox:
Non-fatally deny IPC syscalls in sandbox
Allow clock_gettime64() in sandbox (MIPS / glibc >= 2.31)
Allow clock_nanosleep_time64 in sandbox (ARM)
Allow clock_nanosleep() in sandbox (recent glibc)
Explicit check for memmem declaration and fix up declaration if the system headers lack it.
Checksums:
- SHA1 (openssh-8.2.tar.gz) = 77584c22fbb89269398acdf53c1e554400584ba8
- SHA256 (openssh-8.2.tar.gz) = UttLaaSYXVK1O65cYvyQzyQ5sCfuJ4Lwrs8zNsPrluQ=
- SHA1 (openssh-8.2p1.tar.gz) = d1ab35a93507321c5db885e02d41ce1414f0507c
- SHA256 (openssh-8.2p1.tar.gz) = Q5JRUebPbO4UUBkMDpr03Da0HBJzdhnt/4vOvf9k5nE=
Please note that the SHA256 signatures are base64 encoded and not hexadecimal (which is the default for most checksum tools). The PGP key used to sign the releases is available as RELEASE_KEY.asc from the mirror sites.
Reporting Bugs:
Security bugs should be reported directly to [email protected].
The post OpenSSH 8.2 Released With FIDO/U2F Hardware Authentication appeared first on HackersOnlineClub.
from HackersOnlineClub https://ift.tt/31Y8Eo8 from Blogger https://ift.tt/3byf6qq
lbcybersecurity · 8 years ago
The command-line, for cybersec
On Twitter I made the mistake of asking people about command-line basics for cybersec professionals. I got a lot of useful responses, which I summarize in this long (5k words) post. It’s mostly driven by the tools I use, with a bit of input from the tweets I got in response to my query.

bash

By command-line this document really means bash. There are many types of command-line shells. Windows has two, 'cmd.exe' and 'PowerShell'. Unix started with the Bourne shell ‘sh’, and there have been many variations of this over the years: ‘csh’, ‘ksh’, ‘zsh’, ‘tcsh’, etc. When GNU rewrote Unix user-mode software independently, they called their shell “Bourne Again Shell” or “bash” (cue "JSON Bourne" shell jokes here). Bash is the default shell for Linux and macOS. It’s also available on Windows, as part of the special “Windows Subsystem for Linux”. The Windows version of ‘bash’ has become my most used shell. For Linux IoT devices, BusyBox is the most popular shell. It’s easy to learn, as it includes feature-reduced versions of popular commands.

man

‘Man’ is the command you should not run if you want help for a command. Man pages are designed to drive away newbies. They are only useful if you are already mostly an expert with the command you desire help on. Man pages list all possible features of a program, but do not highlight examples of the most common features, or the most common way to use the commands. Take ‘sed’ as an example. It’s used most commonly to do a search-and-replace in files, like so:

$ sed 's/rob/dave/' foo.txt

This usage is so common that many non-geeks know of it. Yet, if you type ‘man sed’ to figure out how to do a search and replace, you’ll get nearly incomprehensible gibberish, and no example of this most common usage. I point this out because most guides on using the shell recommend ‘man’ pages to get help. This is wrong; it’ll just endlessly frustrate you. Instead, google the commands you need help on, or better yet, search StackExchange for answers.
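To round out that sed example: the variants you will actually reach for most often add the global flag and, with GNU sed, in-place editing. A quick sketch with a throwaway file:

```shell
printf 'rob met rob\n' > /tmp/foo.txt
sed 's/rob/dave/' /tmp/foo.txt        # replaces only the FIRST match per line
sed 's/rob/dave/g' /tmp/foo.txt       # /g replaces every match on the line
sed -i 's/rob/dave/g' /tmp/foo.txt    # -i (GNU sed) rewrites the file in place
cat /tmp/foo.txt                      # dave met dave
```

Note that -i behaves differently on BSD/macOS sed, which requires a backup-suffix argument.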
You might try asking questions, like on Twitter or forum sites, but this requires a strategy. If you ask a basic question, self-important dickholes will respond by telling you to “rtfm” or “read the fucking manual”. A better strategy is to exploit their dickhole nature, such as saying “too bad command xxx cannot do yyy”. Helpful people will gladly explain why you are wrong, carefully explaining how xxx does yyy. If you must use 'man', use the 'apropos' command to find the right man page. Sometimes multiple things in the system have the same or similar names, leading you to the wrong page.

apt-get install yum

Using the command-line means accessing that huge open-source ecosystem. Most of the things in this guide do not already exist on the system. You have to either compile them from source, or install them via a package manager. Linux distros ship with a small footprint, but have a massive database of precompiled software “packages” in the cloud somewhere. Use the package manager to install the software from the cloud. On Debian-derived systems (like Ubuntu, Kali, Raspbian), type “apt-get install masscan” to install “masscan” (as an example). Use “apt-cache search scan” to find a bunch of scanners you might want to install. On RedHat systems, use “yum” instead. On BSD, use the “ports” system, which you can also get working for macOS. If no pre-compiled package exists for a program, then you’ll have to download the source code and compile it. There’s about an 80% chance this will work easily, following the instructions. There is a 20% chance you’ll experience “dependency hell”, for example, needing to install two mutually incompatible versions of Python.

Bash is a scripting language

Don’t forget that shells are really scripting languages. The bit that executes a single command is just a degenerate use of the scripting language.
For example, you can do a traditional for loop like:

$ for i in $(seq 1 9); do echo $i; done

In this way, ‘bash’ is no different than any other scripting language, like Perl, Python, NodeJS, PHP CLI, etc. That’s why a lot of stuff on the system actually exists as short ‘bash’ programs, a.k.a. shell scripts. Few want to write bash scripts, but you are expected to be able to read them, either to tweak existing scripts on the system, or to read StackExchange help.

File system commands

The macOS “Finder” or Windows “File Explorer” are just graphical shells that help you find files, open them, and save them. The first commands you learn are for the same functionality on the command-line: pwd, cd, ls, touch, rm, rmdir, mkdir, chmod, chown, find, ln, mount. The command “rm -rf /” removes everything starting from the root directory. This will also follow mounted server directories, deleting files on the server. I point this out to give an appreciation of the raw power you have over the system from the command-line, and how easily you can disrupt things. Of particular interest is the “mount” command. Desktop versions of Linux typically mount USB flash drives automatically, but on servers, you need to do it manually, e.g.:

$ mkdir ~/foobar
$ mount /dev/sdb ~/foobar

You’ll also use the ‘mount’ command to connect to file servers, using the “cifs” package if they are Windows file servers:

# apt-get install cifs-utils
# mkdir /mnt/vids
# mount -t cifs -o username=robert,password=foobar123 //192.168.1.11/videos /mnt/vids

Linux system commands

The next commands you’ll learn are for sysadmin’ing the Linux system: ps, top, who, history, last, df, du, kill, killall, lsof, lsmod, uname, id, shutdown, and so on. The first thing hackers do when hacking into a system is run “uname” (to figure out what version of the OS is running) and “id” (to figure out which account they’ve acquired, like “root” or some other user).
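The for-loop above scales up into small scripts. A minimal sketch (the file names and function are invented for illustration) showing a loop, a function, and command substitution working together:

```shell
#!/bin/bash
# Count the lines in each .txt file in the current directory.

count_lines() {
    # $1 is the file name; wc -l reads from stdin and prints the line count
    wc -l < "$1"
}

for f in *.txt; do
    # Skip the literal '*.txt' pattern when no .txt files exist
    [ -e "$f" ] || continue
    echo "$f: $(count_lines "$f") lines"
done
```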
The Linux system command I use most is “dmesg” (or ‘tail -f /var/log/dmesg’), which shows you the raw system messages. For example, when I plug in USB drives to a server, I look in ‘dmesg’ to find out which device was added so that I can mount it. I don’t know if this is the best way, it’s just the way I do it (servers don’t automount USB drives like desktops do).

Networking commands

The permanent state of the network (what gets configured on the next bootup) is configured in text files somewhere. But there are a wealth of commands you’ll use to view the current state of networking, make temporary changes, and diagnose problems. The ‘ifconfig’ command has long been used to view the current TCP/IP configuration and make temporary changes. Learning how TCP/IP works means playing a lot with ‘ifconfig’. Use “ifconfig -a” for even more verbose information. Use the “route” command to see if you are sending packets to the right router. Use the ‘arp’ command to make sure you can reach the local router. Use ‘traceroute’ to make sure packets are following the correct route to their destination. You should learn the nifty trick it’s based on (TTLs). You should also play with the TCP, UDP, and ICMP options. Use ‘ping’ to see if you can reach the target across the Internet. It usefully measures the latency in milliseconds, and congestion (via packet loss). For example, ping Netflix throughout the day, and notice how the ping latency increases substantially during “prime time” viewing hours. Use ‘dig’ to make sure DNS resolution is working right. (Some use ‘nslookup’ instead.) Dig is useful because it’s the raw universal DNS tool – every time they add some new standard feature to DNS, they add that feature into ‘dig’ as well. The ‘netstat -tualn’ command views the current TCP/IP connections and which ports are listening. I forget what the various options “tualn” mean, only that it’s the output I always want to see, rather than the raw “netstat” command by itself.
You’ll want to use ‘ethtool -k’ to turn off checksum and segmentation offloading. These are features that break packet captures sometimes. There is this newfangled ‘ip’ system for Linux networking, replacing many of the above commands, but as an old timer, I haven’t looked into that. Some other tools for diagnosing local network issues are ‘tcpdump’, ‘nmap’, and ‘netcat’. These are described in more detail below.

ssh

In general, you’ll remotely log into a system in order to use the command-line. We use ‘ssh’ for that. It uses a protocol similar to SSL in order to encrypt the connection. There are two ways to use ‘ssh’ to login, with a password or with a client-side certificate. When using SSH with a password, you type “ssh username@servername”. The remote system will then prompt you for a password for that account. When using client-side certificates, use “ssh-keygen” to generate a key, then either copy the public-key of the client to the server manually, or use “ssh-copy-id” to copy it using the password method above. How this works is a basic application of public-key cryptography. When logging in with a password, you get a copy of the server’s public-key the first time you login, and if it ever changes, you get a nasty warning that somebody may be attempting a man-in-the-middle attack.

$ ssh [email protected]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!

When using client-side certificates, the server trusts your public-key. This is similar to how client-side certificates work in SSL VPNs. You can use SSH for things other than logging into a remote shell. You can script ‘ssh’ to run commands remotely on a system in a local shell script. You can use ‘scp’ (SSH copy) to transfer files to and from a remote system.
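A related convenience: ssh reads per-host settings from ~/.ssh/config, so you don't have to retype user names, key paths, and options. A sketch (the host alias, address, and key path are made-up examples):

```
# ~/.ssh/config
Host myserver
    HostName 192.168.1.50
    User robert
    IdentityFile ~/.ssh/id_ed25519
    # Reuse one TCP connection for multiple sessions
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With this in place, "ssh myserver" replaces the full command line, and "scp file myserver:" works the same way.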
You can do tricks with SSH to create tunnels, which is a popular way to bypass the restrictive rules of your local firewall nazi.

openssl

This is your general cryptography toolkit, doing everything from simple encryption, to public-key certificate signing, to establishing SSL connections. It is extraordinarily user hostile, with terrible inconsistency among options. You can only figure out how to do things by looking up examples on the net, such as on StackExchange. There are competing SSL libraries with their own command-line tools, like GnuTLS and Mozilla NSS, that you might find easier to use. The fundamental use of the ‘openssl’ tool is to create public-keys, “certificate requests”, and self-signed certificates. All the website certificates I’ve ever obtained have been via the openssl command-line tool to create CSRs. You should practice using the ‘openssl’ tool to encrypt files, sign files, and check signatures. You can use openssl just like PGP for encrypted emails/messages, but following the “S/MIME” standard rather than the PGP standard. You might consider learning the ‘pgp’ command-line tools, or the open-source ‘gpg’ or ‘gpg2’ tools as well. You should learn how to use the “openssl s_client” feature to establish SSL connections, as well as the “openssl s_server” feature to create an SSL proxy for a server that doesn’t otherwise support SSL. Learning all the ways of using the ‘openssl’ tool to do useful things will go a long way in teaching somebody about crypto and cybersecurity. I can imagine an entire class consisting of nothing but learning ‘openssl’.

netcat (nc, socat, cryptocat, ncat)

A lot of Internet protocols are based on text. That means you can create a raw TCP connection to the service and interact with it using your keyboard. The classic tool for doing this is known as “netcat”, abbreviated “nc”.
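For practicing the encrypt/sign/verify workflow mentioned above, here is a hedged sketch (file names and the passphrase are placeholders; the -pbkdf2 option needs OpenSSL 1.1.1 or later, and exact flags vary a little between versions):

```shell
# Symmetric encryption of a file with AES-256, key derived from a passphrase:
echo "secret data" > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:mypassword \
    -in plain.txt -out cipher.bin

# Decrypt it again with the same passphrase:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:mypassword \
    -in cipher.bin -out roundtrip.txt

# Hash a file (e.g. to compare against a published checksum):
openssl dgst -sha256 plain.txt
```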
For example, connect to Google’s web server at port 80 and type the HTTP HEAD command followed by a blank line (hit [return] twice):

$ nc www.google.com 80
HEAD / HTTP/1.0

HTTP/1.0 200 OK
Date: Tue, 17 Jan 2017 01:53:28 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: NID=95=o7GT1uJCWTPhaPAefs4CcqF7h7Yd7HEqPdAJncZfWfDSnNfliWuSj3XfS5GJXGt67-QJ9nc8xFsydZKufBHLj-K242C3_Vak9Uz1TmtZwT-1zVVBhP8limZI55uXHuPrejAxyTxSCgR6MQ; expires=Wed, 19-Jul-2017 01:53:28 GMT; path=/; domain=.google.com; HttpOnly
Accept-Ranges: none
Vary: Accept-Encoding

Another classic example is to connect to port 25 on a mail server to send email, spoofing the “MAIL FROM” address. There are several versions of ‘netcat’ that work over SSL as well. My favorite is ‘ncat’, which comes with ‘nmap’, as it’s actively maintained. In theory, “openssl s_client” should also work this way.

nmap

At some point, you’ll need to port scan. The standard program for this is ‘nmap’, and it’s the best. The classic way of using it is something like:

# nmap -A scanme.nmap.org

The ‘-A’ option means to enable all the interesting features like OS detection, version detection, and basic scripts on the most common ports that a server might have open. It takes a while to run. The “scanme.nmap.org” is a good site to practice on. Nmap is more than just a port scanner. It has a rich scripting system for probing more deeply into a system than just a port, and to gather more information useful for attacks. The scripting system essentially contains some attacks, such as password guessing. Scanning the Internet, finding services identified by ‘nmap’ scripts, and interacting with them with tools like ‘ncat’ will teach you a lot about how the Internet works.
BTW, if ‘nmap’ is too slow, use ‘masscan’ instead. It’s a lot faster, though it has much more limited functionality.

Packet sniffing with tcpdump and tshark

All Internet traffic consists of packets going between IP addresses. You can capture those packets and view them using “packet sniffers”. The most important packet-sniffer is “Wireshark”, a GUI. For the command-line, there is ‘tcpdump’ and ‘tshark’. You can run tcpdump on the command-line to watch packets go in/out of the local computer. This performs a quick “decode” of packets as they are captured. It’ll reverse-lookup IP addresses into DNS names, which means its buffers can overflow, dropping new packets while it’s waiting for DNS name responses for previous packets.

# tcpdump -p -i eth0

A common task is to create a round-robin set of files, saving the last 100 files of 1-gig each. Older files are overwritten. Thus, when an attack happens, you can stop capture, go backward in time, and view the contents of the network traffic using something like Wireshark:

# tcpdump -p -i eth0 -s65535 -C 1000 -W 100 -w cap

Instead of capturing everything, you’ll often set “BPF” filters to narrow down to traffic from a specific target, or a specific port. The above examples use the -p option to capture traffic destined to the local computer. Sometimes you may want to look at all traffic going to other machines on the local network. You’ll need to figure out how to tap into wires, or set up “monitor” ports on switches, for this to work. A more advanced command-line program is ‘tshark’. It can apply much more complex filters. It can also be used to extract the values of specific fields and dump them to text files.

base64/hexdump/xxd/od

These are some rather trivial commands, but you should know them. The ‘base64’ command encodes binary data in text. The text can then be passed around, such as in email messages. Base64 encoding is often automatic in the output from programs like openssl and PGP.
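The base64 command in action (note: -d is the GNU decode flag; older BSD/macOS versions spell it -D):

```shell
# Encode bytes as printable text:
echo "packet payload" | base64
# -> cGFja2V0IHBheWxvYWQK

# Decode it back:
echo "cGFja2V0IHBheWxvYWQK" | base64 -d
# -> packet payload

# A quick hex view of raw bytes with the POSIX 'od' tool:
printf 'ABC' | od -An -tx1
```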
In many cases, you’ll need to view a hex dump of some binary data. There are many programs to do this, such as hexdump, xxd, od, and more.

grep

Grep searches for a pattern within a file. More importantly, it searches for a regular expression (regex) in a file. The fu of Unix is that a lot of stuff is stored in text files, and you use grep with regex patterns in order to extract stuff stored in those files. The power of this tool really depends on your mastery of regexes. You should master enough that you can understand StackExchange posts that explain almost what you want to do, and then tweak them to make them work. Grep, by default, shows only the matching lines. In many cases, you only want the part that matches. To do that, use the -o option. (This is not available in all versions of grep.) You’ll probably want the better, “extended” regular expressions, so use the -E option. You’ll often want “case-insensitive” matching (both upper and lower case), so use the -i option. For example, to extract all MAC addresses from a text file, you might do something like the following. This extracts all strings that are twelve hex digits.

$ grep -Eio '[0-9A-F]{12}' foo.txt

Text processing

Grep is just the first of the various “text processing filters”. Other useful ones include ‘sed’, ‘cut’, ‘sort’, and ‘uniq’. You’ll become an expert at piping the output of one to the input of the next. You’ll use “sort | uniq” as god (Dennis Ritchie) intended and not the heresy of “sort -u”. You might want to master ‘awk’. It’s its own programming language, but once you master it, it’ll be easier than other mechanisms. You’ll end up using ‘wc’ (word-count) a lot. All it does is count the number of lines, words, and characters in a file, but you’ll find yourself wanting to do this a lot.

csvkit and jq

You get data in CSV format and JSON format a lot.
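Combining the filters above, a typical pipeline extracts matches with grep -Eio, normalizes them, then counts the unique ones with "sort | uniq". (The input here is inline sample data for illustration.)

```shell
# Extract 12-hex-digit tokens, lowercase them, then count duplicates,
# most frequent first:
printf 'aa:deadbeef0001 x\nzz deadbeef0001\nother CAFEBABE0002\n' \
  | grep -Eio '[0-9a-f]{12}' \
  | tr '[:upper:]' '[:lower:]' \
  | sort \
  | uniq -c \
  | sort -rn
# ->       2 deadbeef0001
#          1 cafebabe0002
```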
The tools ‘csvkit’ and ‘jq’, respectively, help you deal with those formats: converting the files into other formats, sticking the data in databases, and so forth. It’ll be easier using these tools that understand these text formats to extract data than trying to write ‘awk’ commands or ‘grep’ regexes.

strings

Most files are binary, with a few readable ASCII strings. You use the program ‘strings’ to extract those strings. This one simple trick sounds stupid, but it’s more powerful than you’d think. For example, I knew that a program probably contained a hard-coded password. I then blindly grabbed all the strings in the program’s binary file and sent them to a password cracker to see if they could decrypt something. And indeed, one of the 100,000 strings in the file worked, thus finding the hard-coded password.

tail -f

So ‘tail’ is just a standard Linux tool for looking at the end of files. If you want to keep checking the end of a live file that’s constantly growing, then use “tail -f”. It’ll sit there waiting for something new to be added to the end of the file, then print it out. I do this a lot, so I thought it’d be worth mentioning.

tar -xvfz, gzip, xz, 7z

In prehistorical times (like the 1980s), Unix was backed up to tape drives. The tar command could be used to combine a bunch of files into a single “archive” to be sent to the tape drive, hence “tape archive” or “tar”. These days, a lot of stuff you download will be in tar format (ending in .tar). You’ll need to learn how to extract it:

$ tar -xvf something.tar

Nobody knows what the “xvf” options mean anymore, but these letters must be specified in that order. I’m joking here, but only a little: somebody did a survey once and found that virtually nobody knows how to use ‘tar’ other than the canned formulas such as this. Along with combining files into an archive, you also need to compress them.
In prehistoric Unix, the “compress” command would be used, which would replace a file with a compressed version ending in ‘.z’. This was found to be encumbered with patents, so everyone switched to ‘gzip’ instead, which replaces a file with a new one ending with ‘.gz’.

$ ls foo.txt*
foo.txt
$ gzip foo.txt
$ ls foo.txt*
foo.txt.gz

Combined with tar, you get files with either the “.tar.gz” extension, or simply “.tgz”. You can untar and uncompress at the same time:

$ tar -xvfz something.tar.gz

Gzip is always good enough, but nerds gonna nerd and want to compress with slightly better compression programs. They’ll have extensions like “.bz2”, “.7z”, “.xz”, and so on. There are a ton of them. Some of them are supported directly by the ‘tar’ program:

$ tar -xvfj something.tar.bz2

Then there is the “zip/unzip” program, which supports the Windows .zip file format. To create compressed archives these days, I don’t bother with tar, but just use the ZIP format. For example, this will recursively descend a directory, adding all files to a ZIP file that can easily be extracted under Windows:

$ zip -r test.zip ./test/

dd

I should include this under the system tools at the top, but it’s interesting for a number of purposes. The usage is simply to copy one file to another, the in-file to the out-file.

$ dd if=foo.txt of=foo2.txt

But that’s not interesting. What’s interesting is using it to write to “devices”. The disk drives in your system also exist as raw devices under the /dev directory. For example, if you want to create a boot USB drive for your Raspberry Pi:

# dd if=rpi-ubuntu.img of=/dev/sdb

Or, you might want to hard erase an entire hard drive by overwriting random data:

# dd if=/dev/urandom of=/dev/sdc

Or, you might want to image a drive on the system, for later forensics, without stumbling on things like open files:

# dd if=/dev/sda of=/media/Lexar/infected.img

The ‘dd’ program has some additional options, like block size and so forth, that you’ll want to pay attention to.
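The archive-and-compress steps above can be exercised end to end. A small sketch (directory and file names invented; -z tells tar to run the data through gzip while creating or extracting):

```shell
# Create a directory with a couple of files:
mkdir -p demo
echo "hello" > demo/a.txt
echo "world" > demo/b.txt

# Archive and compress in one step (creates demo.tar.gz):
tar -czf demo.tar.gz demo

# Extract it into another directory:
mkdir -p out
tar -xzf demo.tar.gz -C out
cat out/demo/a.txt
# -> hello
```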
screen and tmux

You log in remotely and start some long-running tool. Unfortunately, if you log out, all the processes you started will be killed. If you want them to keep running, then you need a tool to do this. I use ‘screen’. Before I start a long-running port scan, I run the “screen” command. Then, I type [ctrl-a][ctrl-d] to disconnect from that screen, leaving it running in the background. Then later, I type “screen -r” to reconnect to it. If there is more than one screen session, using ‘-r’ by itself will list them all. Use “-r pid” to reattach to the proper one. If you can’t, then use “-D pid” or “-D -RR pid” to force the other session to detach from whoever is using it. Tmux is an alternative to screen that many use. It’s also cool for having lots of terminal screens open at once.

curl and wget

Sometimes you want to download files from websites without opening a browser. The ‘curl’ and ‘wget’ programs do that easily. Wget is the traditional way of doing this, but curl is a bit more flexible. I use curl for everything these days, except mirroring a website, in which case I just do “wget -m website”. The thing that makes ‘curl’ so powerful is that it’s really designed as a tool for poking and prodding all the various features of HTTP. That it’s also useful for downloading files is a happy coincidence. When playing with a target website, curl will allow you to do lots of complex things, which you can then script via bash. For example, hackers often write their cross-site scripting/forgeries in bash scripts using curl.

node/php/python/perl/ruby/lua

As mentioned above, bash is its own programming language. But it’s weird, and annoying. So sometimes you want a real programming language. Here are some useful ones. Yes, PHP is a language that runs in a web server for creating web pages. But if you know the language well, it’s also a fine command-line language for doing stuff. Yes, JavaScript is a language that runs in the web browser.
But if you know it well, it’s also a great language for doing stuff, especially with the “nodejs” version. Then there are other good command-line languages, like Python, Ruby, Lua, and the venerable Perl. What makes all these great is the large library support. Somebody has already written a library that nearly does what you want, and it can be made to work with a little bit of extra code of your own. My general impression is that Python and NodeJS have the largest libraries likely to have what you want, but you should pick whichever language you like best, whichever makes you most productive. For me, that’s NodeJS, because of the great Visual Code IDE/debugger.

iptables, iptables-save

I shouldn’t include this in the list. Iptables isn’t a command-line tool as such. The tool is the built-in firewalling/NAT features within the Linux kernel. Iptables is just the command to configure it. Firewalling is an important part of cybersecurity. Everyone should have some experience playing with a Linux system doing basic firewalling tasks: basic rules, NATting, and transparent proxying for MITM attacks. Use ‘iptables-save’ in order to persistently save your changes.

MySQL

Similar to ‘iptables’, ‘mysql’ isn’t a tool in its own right, but a way of accessing a database maintained by another process on the system. Filters acting on text files only go so far. Sometimes you need to dump the data into a database, and make queries on that database. There is also the offensive skill needed to learn how targets store things in a database, and how attackers get the data. Hackers often publish raw SQL data they’ve stolen in their hacks (like the Ashley Madison dump). Being able to stick those dumps into your own database is quite useful. Hint: disable transaction logging while importing mass data. If you don’t like SQL, you might consider NoSQL tools like Elasticsearch, MongoDB, and Redis that can similarly be useful for arranging and searching data.
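To give a feel for what ‘iptables-save’ output looks like, here is a minimal rule-set sketch (interface assumptions and port numbers are examples only; review before loading anything like this on a machine you care about, since a default-deny INPUT policy can lock you out):

```
# Sketch in iptables-save format: default-deny inbound, allow loopback,
# established traffic, and inbound ssh on port 22
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

A file in this format can be loaded with ‘iptables-restore’, which is how persistent firewall rules are typically applied at boot.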
You’ll probably have to learn some JSON tools for formatting the data.

Reverse engineering tools

A cybersecurity specialty is “reverse engineering”. Some want to reverse engineer the target software being hacked, to understand vulnerabilities. This is needed for commercial software and device firmware where the source code is hidden. Others use these tools to analyze viruses/malware. The ‘file’ command uses heuristics to discover the type of a file. There’s a whole skillset for analyzing PDF and Microsoft Office documents. I play with pdf-parser. There’s a long list at this website: https://zeltser.com/analyzing-malicious-documents/ There’s a whole skillset for analyzing executables. Binwalk is especially useful for analyzing firmware images. Qemu is a useful virtual machine. It can emulate full systems, such as an IoT device based on the MIPS processor. Like some other tools mentioned here, it’s more a full subsystem than a simple command-line tool. On a live system, you can use ‘strace’ to view what system calls a process is making. Use ‘lsof’ to view which files and network connections a process is making.

Password crackers

A common cybersecurity specialty is “password cracking”. There are two kinds: online and offline password crackers. Typical online password crackers are ‘hydra’ and ‘medusa’. They can take files containing common passwords and attempt to log on to various protocols remotely, like HTTP, SMB, FTP, Telnet, and so on. I used ‘hydra’ recently in order to find the default/backdoor passwords to many IoT devices I’ve bought recently in my test lab. Online password crackers must open TCP connections to the target, and try to log on. This limits their speed. They also may be stymied by systems that lock accounts, or introduce delays, after too many bad password attempts. Typical offline password crackers are ‘hashcat’ and ‘jtr’ (John the Ripper). They work off of stolen encrypted passwords.
They can attempt billions of passwords per second, because there’s no network interaction, nothing slowing them down. Understanding offline password crackers means getting an appreciation for the exponential difficulty of the problem. A sufficiently long and complex encrypted password is uncrackable. Instead of brute-force attempts at all possible combinations, we must use tricks, like mutating the top million most common passwords. I use hashcat because of the great GPU support, but John is also a great program.

WiFi hacking

A common specialty in cybersecurity is WiFi hacking. The difficulty in WiFi hacking is getting the right WiFi hardware that supports the features (monitor mode, packet injection), then the right drivers installed in your operating system. That’s why I use Kali rather than some generic Linux distribution, because it’s got the right drivers installed. The ‘aircrack-ng’ suite is the best for doing basic hacking, such as packet injection. When the parents are letting the iPad babysit their kid with a loud movie at the otherwise quiet coffeeshop, use ‘aircrack-ng’ to deauth the kid. The ‘reaver’ tool is useful for hacking into sites that leave WPS wide open and misconfigured.

Remote exploitation

A common specialty in cybersecurity is pentesting. Nmap, curl, and netcat (described above) are useful tools for this. Some useful DNS tools are ‘dig’ (described above) and dnsrecon/dnsenum/fierce, which try to enumerate and guess as many names as possible within a domain. These tools all have unique features, but also have a lot of overlap. Nikto is a basic tool for probing for common vulnerabilities, out-of-date software, and so on. It’s not really a vulnerability scanner like Nessus used by defenders, but more of a tool for attack. SQLmap is a popular tool for probing for SQL injection weaknesses. Then there is ‘msfconsole’. It has some attack features. This is humor – it has all the attack features.
Metasploit is the most popular tool for running remote attacks against targets, exploiting vulnerabilities.

Text editor

Finally, there is the decision of text editor. I use ‘vi’ variants. Others like ‘nano’ and variants. There’s no wrong answer as to which editor to use, unless that answer is ‘emacs’.

Conclusion

Obviously, not every cybersecurity professional will be familiar with every tool in this list. If you don’t do reverse-engineering, then you won’t use reverse-engineering tools. On the other hand, regardless of your specialty, you need to know basic crypto concepts, so you should know something like the ‘openssl’ tool. You need to know basic networking, so things like ‘nmap’ and ‘tcpdump’. You need to be comfortable processing large dumps of data, manipulating it with any tool available. You shouldn’t be frightened by a little sysadmin work. The above list is therefore a useful starting point for cybersecurity professionals. Of course, those new to the industry won’t have much familiarity with them. But it’s fair to say that I’ve used everything listed above at least once in the last year, and the year before that, and the year before that. I spend a lot of time on StackExchange and Google searching the exact options I need, so I’m not an expert, but I am familiar with the basic use of all these things.

from The command-line, for cybersec
lbcybersecurity · 8 years ago
On Twitter I made the mistake of asking people about command-line basics for cybersec professionals. A got a lot of useful responses, which I summarize in this long (5k words) post. It’s mostly driven by the tools I use, with a bit of input from the tweets I got in response to my query. bash By command-line this document really means bash. There are many types of command-line shells. Windows has two, 'cmd.exe' and 'PowerShell'. Unix started with the Bourne shell ‘sh’, and there have been many variations of this over the years, ‘csh’, ‘ksh’, ‘zsh’, ‘tcsh’, etc. When GNU rewrote Unix user-mode software independently, they called their shell “Bourne Again Shell” or “bash” (queue "JSON Bourne" shell jokes here). Bash is the default shell for Linux and macOS. It’s also available on Windows, as part of their special “Windows Subsystem for Linux”. The windows version of ‘bash’ has become my most used shell. For Linux IoT devices, BusyBox is the most popular shell. It’s easy to clear, as it includes feature-reduced versions of popular commands. man ‘Man’ is the command you should not run if you want help for a command. Man pages are designed to drive away newbies. They are only useful if you already mostly an expert with the command you desire help on. Man pages list all possible features of a program, but do not highlight examples of the most common features, or the most common way to use the commands. Take ‘sed’ as an example. It’s used most commonly to do a search-and-replace in files, like so: $ sed 's/rob/dave/' foo.txt This usage is so common that many non-geeks know of it. Yet, if you type ‘man sed’ to figure out how to do a search and replace, you’ll get nearly incomprehensible gibberish, and no example of this most common usage. I point this out because most guides on using the shell recommend ‘man’ pages to get help. This is wrong, it’ll just endlessly frustrate you. Instead, google the commands you need help on, or better yet, search StackExchange for answers. 
You might try asking questions, like on Twitter or forum sites, but this requires a strategy. If you ask a basic question, self-important dickholes will respond by telling you to “rtfm” or “read the fucking manual”. A better strategy is to exploit their dickhole nature, such as saying “too bad command xxx cannot do yyy”. Helpful people will gladly explain why you are wrong, carefully explaining how xxx does yyy. If you must use 'man', use the 'apropos' command to find the right man page. Sometimes multiple things in the system have the same or similar names, leading you to the wrong page. apt-get install yum Using the command-line means accessing that huge open-source ecosystem. Most of the things in this guide do no already exist on the system. You have to either compile them from source, or install via a package-manager. Linux distros ship with a small footprint, but have a massive database of precompiled software “packages” in the cloud somewhere. Use the "package manager" to install the software from the cloud. On Debian-derived systems (like Ubuntu, Kali, Raspbian), type “apt-get install masscan” to install “masscan” (as an example). Use “apt-cache search scan” to find a bunch of scanners you might want to install. On RedHat systems, use “yum” instead. On BSD, use the “ports” system, which you can also get working for macOS. If no pre-compiled package exists for a program, then you’ll have to download the source code and compile it. There’s about an 80% chance this will work easy, following the instructions. There is a 20% chance you’ll experience “dependency hell”, for example, needing to install two mutually incompatible versions of Python. Bash is a scripting language Don’t forget that shells are really scripting languages. The bit that executes a single command is just a degenerate use of the scripting language. 
For example, you can do a traditional for loop like: $ for i in $(seq 1 9); do echo $i; done In this way, ‘bash’ is no different than any other scripting language, like Perl, Python, NodeJS, PHP CLI, etc. That’s why a lot of stuff on the system actually exists as short ‘bash’ programs, aka. shell scripts. Few want to write bash scripts, but you are expected to be able to read them, either to tweek existing scripts on the system, or to read StackExchange help. File system commands The macOS “Finder” or Windows “File Explorer” are just graphical shells that help you find files, open, and save them. The first commands you learn are for the same functionality on the command-line: pwd, cd, ls, touch, rm, rmdir, mkdir, chmod, chown, find, ln, mount. The command “rm –rf /” removes everything starting from the root directory. This will also follow mounted server directories, deleting files on the server. I point this out to give an appreciation of the raw power you have over the system from the command-line, and how easy you can disrupt things. Of particular interest is the “mount” command. Desktop versions of Linux typically mount USB flash drives automatically, but on servers, you need to do it automatically, e.g.: $ mkdir ~/foobar $ mount /dev/sdb ~/foobar You’ll also use the ‘mount’ command to connect to file servers, using the “cifs” package if they are Windows file servers: # apt-get install cifs-utils # mkdir /mnt/vids # mount -t cifs -o username=robert,password=foobar123  //192.168.1.11/videos /mnt/vids Linux system commands The next commands you’ll learn are about syadmin the Linux system: ps, top, who, history, last, df, du, kill, killall, lsof, lsmod, uname, id, shutdown, and so on. The first thing hackers do when hacking into a system is run “uname” (to figure out what version of the OS is running) and “id” (to figure out which account they’ve acquired, like “root” or some other user). 
The Linux system command I use most is “dmesg” (or ‘tail -f /var/log/dmesg’), which shows you the raw system messages. For example, when I plug a USB drive into a server, I look in ‘dmesg’ to find out which device was added, so that I can mount it. I don’t know if this is the best way, it’s just the way I do it (servers don’t automount USB drives like desktops do).

Networking commands

The permanent state of the network (what gets configured on the next bootup) is configured in text files somewhere. But there are a wealth of commands you’ll use to view the current state of networking, make temporary changes, and diagnose problems.

The ‘ifconfig’ command has long been used to view the current TCP/IP configuration and make temporary changes. Learning how TCP/IP works means playing a lot with ‘ifconfig’. Use “ifconfig -a” for even more verbose information.

Use the “route” command to see if you are sending packets to the right router.

Use the ‘arp’ command to make sure you can reach the local router.

Use ‘traceroute’ to make sure packets are following the correct route to their destination. You should learn the nifty trick it’s based on (TTLs). You should also play with the TCP, UDP, and ICMP options.

Use ‘ping’ to see if you can reach the target across the Internet. It usefully measures the latency in milliseconds, and congestion (via packet loss). For example, ping Netflix throughout the day, and notice how the ping latency increases substantially during “prime time” viewing hours.

Use ‘dig’ to make sure DNS resolution is working right. (Some use ‘nslookup’ instead.) Dig is useful because it’s the raw universal DNS tool: every time they add some new standard feature to DNS, they add that feature into ‘dig’ as well.

The ‘netstat -tualn’ command views the current TCP/IP connections and which ports are listening. I forget what the various options “tualn” mean, only that it’s the output I always want to see, rather than the raw “netstat” command by itself.
You’ll want to use ‘ethtool -K’ to turn off checksum and segmentation offloading. These are features that sometimes break packet captures.

There is this newfangled ‘ip’ system for Linux networking, replacing many of the above commands, but as an old timer, I haven’t looked into that.

Some other tools for diagnosing local network issues are ‘tcpdump’, ‘nmap’, and ‘netcat’. These are described in more detail below.

ssh

In general, you’ll remotely log into a system in order to use the command-line. We use ‘ssh’ for that. It uses a protocol similar to SSL in order to encrypt the connection. There are two ways to use ‘ssh’ to log in, with a password or with a client-side certificate.

When using SSH with a password, you type “ssh username@servername”. The remote system will then prompt you for a password for that account.

When using client-side certificates, use “ssh-keygen” to generate a key, then either copy the public-key of the client to the server manually, or use “ssh-copy-id” to copy it using the password method above.

How this works is a basic application of public-key cryptography. When logging in with a password, you get a copy of the server’s public-key the first time you log in, and if it ever changes, you get a nasty warning that somebody may be attempting a man-in-the-middle attack.

$ ssh [email protected]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!

When using client-side certificates, the server trusts your public-key. This is similar to how client-side certificates work in SSL VPNs.

You can use SSH for things other than logging into a remote shell. You can script ‘ssh’ to run commands remotely on a system from a local shell script. You can use ‘scp’ (SSH copy) to transfer files to and from a remote system.
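The key-generation half of that can be practiced locally without any server. A sketch (the path and key name are chosen for illustration; ~/.ssh/id_ed25519 is the usual default):

```shell
# Generate an ed25519 key pair for client-certificate logins.
# -N '' sets an empty passphrase -- fine for a throwaway demo key only.
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/id_demo"

ls "$dir"    # id_demo (private key) and id_demo.pub (public key)
# The .pub half is what ends up in the server's ~/.ssh/authorized_keys,
# e.g. via: ssh-copy-id -i "$dir/id_demo.pub" user@server
```

The private key never leaves your machine; only the public half travels.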
You can do tricks with SSH to create tunnels, which is a popular way to bypass the restrictive rules of your local firewall nazi.

openssl

This is your general cryptography toolkit, doing everything from simple encryption, to public-key certificate signing, to establishing SSL connections. It is extraordinarily user hostile, with terrible inconsistency among options. You can only figure out how to do things by looking up examples on the net, such as on StackExchange. There are competing SSL libraries with their own command-line tools, like GnuTLS and Mozilla NSS, that you might find easier to use.

The fundamental use of the ‘openssl’ tool is creating public-keys, “certificate requests”, and self-signed certificates. All the web-site certificates I’ve ever obtained have been created using the openssl command-line tool to create CSRs.

You should practice using the ‘openssl’ tool to encrypt files, sign files, and check signatures. You can use openssl just like PGP for encrypted emails/messages, but following the “S/MIME” standard rather than the PGP standard. You might consider learning the ‘pgp’ command-line tools, or the open-source ‘gpg’ or ‘gpg2’ tools as well.

You should learn how to use the “openssl s_client” feature to establish SSL connections, as well as the “openssl s_server” feature to create an SSL proxy for a server that doesn’t otherwise support SSL.

Learning all the ways of using the ‘openssl’ tool to do useful things will go a long way in teaching somebody about crypto and cybersecurity. I can imagine an entire class consisting of nothing but learning ‘openssl’.

netcat (nc, socat, cryptocat, ncat)

A lot of Internet protocols are based on text. That means you can create a raw TCP connection to the service and interact with it using your keyboard. The classic tool for doing this is known as “netcat”, abbreviated “nc”.
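Before moving on: the file-encryption practice suggested for ‘openssl’ above can be a simple round-trip (a sketch; in real use the passphrase shouldn’t go on the command line where other users can see it in ‘ps’):

```shell
# Encrypt a file with AES-256-CBC using PBKDF2 key derivation
# (the -pbkdf2 flag needs OpenSSL 1.1.1 or later), then decrypt
# it again and verify the round-trip.
printf 'attack at dawn\n' > /tmp/plain.txt

openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:demo123 \
    -in /tmp/plain.txt -out /tmp/cipher.bin

openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo123 \
    -in /tmp/cipher.bin -out /tmp/roundtrip.txt

cmp -s /tmp/plain.txt /tmp/roundtrip.txt && echo round-trip OK
```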
For example, you can use ‘nc’ to connect to Google’s web server at port 80 and type the HTTP HEAD command followed by a blank line (hit [return] twice):

$ nc www.google.com 80
HEAD / HTTP/1.0

HTTP/1.0 200 OK
Date: Tue, 17 Jan 2017 01:53:28 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: NID=95=o7GT1uJCWTPhaPAefs4CcqF7h7Yd7HEqPdAJncZfWfDSnNfliWuSj3XfS5GJXGt67-QJ9nc8xFsydZKufBHLj-K242C3_Vak9Uz1TmtZwT-1zVVBhP8limZI55uXHuPrejAxyTxSCgR6MQ; expires=Wed, 19-Jul-2017 01:53:28 GMT; path=/; domain=.google.com; HttpOnly
Accept-Ranges: none
Vary: Accept-Encoding

Another classic example is to connect to port 25 on a mail server to send email, spoofing the “MAIL FROM” address.

There are several versions of ‘netcat’ that work over SSL as well. My favorite is ‘ncat’, which comes with ‘nmap’, as it’s actively maintained. In theory, “openssl s_client” should also work this way.

nmap

At some point, you’ll need to port scan. The standard program for this is ‘nmap’, and it’s the best. The classic way of using it is something like:

# nmap -A scanme.nmap.org

The ‘-A’ option means to enable all the interesting features, like OS detection, version detection, and basic scripts on the most common ports that a server might have open. It takes a while to run. The “scanme.nmap.org” is a good site to practice on.

Nmap is more than just a port scanner. It has a rich scripting system for probing more deeply into a system than just a port, and for gathering more information useful for attacks. The scripting system essentially contains some attacks, such as password guessing. Scanning the Internet, finding services identified by ‘nmap’ scripts, and interacting with them with tools like ‘ncat’ will teach you a lot about how the Internet works.
BTW, if ‘nmap’ is too slow, use ‘masscan’ instead. It’s a lot faster, though it has much more limited functionality.

Packet sniffing with tcpdump and tshark

All Internet traffic consists of packets going between IP addresses. You can capture those packets and view them using “packet sniffers”. The most important packet-sniffer is “Wireshark”, a GUI. For the command-line, there are ‘tcpdump’ and ‘tshark’.

You can run tcpdump on the command-line to watch packets go in/out of the local computer. This performs a quick “decode” of packets as they are captured. It’ll reverse-lookup IP addresses into DNS names, which means its buffers can overflow, dropping new packets while it’s waiting for DNS name responses for previous packets.

# tcpdump -p -i eth0

A common task is to create a round-robin set of files, saving the last 100 files of 1-gig each. Older files are overwritten. Thus, when an attack happens, you can stop capture, go backward in time, and view the contents of the network traffic using something like Wireshark:

# tcpdump -p -i eth0 -s65535 -C 1000 -W 100 -w cap

Instead of capturing everything, you’ll often set “BPF” filters to narrow down to traffic from a specific target, or a specific port.

The above examples use the -p option to capture traffic destined to the local computer. Sometimes you may want to look at all traffic going to other machines on the local network. You’ll need to figure out how to tap into wires, or set up “monitor” ports on switches, for this to work.

A more advanced command-line program is ‘tshark’. It can apply much more complex filters. It can also be used to extract the values of specific fields and dump them to a text file.

Base64/hexdump/xxd/od

These are some rather trivial commands, but you should know them. The ‘base64’ command encodes binary data as text. The text can then be passed around, such as in email messages. Base64 encoding is often automatic in the output from programs like openssl and PGP.
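A quick feel for these encoders, round-tripping a few bytes and then looking at them in hex:

```shell
# base64 round-trip, plus a hex view of the same bytes with od.
printf 'hello' | base64          # prints: aGVsbG8=
printf 'aGVsbG8=' | base64 -d    # prints: hello
printf 'hello' | od -An -tx1     # prints the raw bytes: 68 65 6c 6c 6f
```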
In many cases, you’ll need to view a hex dump of some binary data. There are many programs to do this, such as hexdump, xxd, od, and more.

grep

Grep searches for a pattern within a file. More importantly, it searches for a regular expression (regex) in a file. The fu of Unix is that a lot of stuff is stored in text files, and you use grep with regex patterns to extract stuff stored in those files.

The power of this tool really depends on your mastery of regexes. You should master enough that you can understand StackExchange posts that explain almost what you want to do, and then tweak them to make them work.

Grep, by default, shows only the matching lines. In many cases, you only want the part that matches. To do that, use the -o option. (This is not available in all versions of grep.) You’ll probably want the better, “extended” regular expressions, so use the -E option. You’ll often want “case-insensitive” matching (both upper and lower case), so use the -i option.

For example, to extract all MAC addresses from a text file, you might do something like the following. This extracts all strings that are twelve hex digits.

$ grep -Eio ‘[0-9A-F]{12}’ foo.txt

Text processing

Grep is just the first of the various “text processing filters”. Other useful ones include ‘sed’, ‘cut’, ‘sort’, and ‘uniq’. You’ll become an expert at piping the output of one to the input of the next. You’ll use “sort | uniq” as god (Dennis Ritchie) intended, and not the heresy of “sort -u”.

You might want to master ‘awk’. It’s a whole programming language of its own, but once you master it, it’ll be easier than other mechanisms.

You’ll end up using ‘wc’ (word-count) a lot. All it does is count the number of lines, words, and characters in a file, but you’ll find yourself wanting to do this a lot.

csvkit and jq

You get data in CSV format and JSON format a lot.
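The grep and “sort | uniq” idioms above compose into the classic frequency-count pipeline. A self-contained example (the input strings are made up for the demo):

```shell
# Pull twelve-hex-digit tokens out of a stream with extended,
# case-insensitive, match-only grep:
printf 'id=00DEADBEEF99 ok\n' | grep -Eio '[0-9A-F]{12}'
# prints: 00DEADBEEF99

# The classic "most common lines first" pipeline:
printf 'cat\ndog\ncat\nbird\ncat\ndog\n' | sort | uniq -c | sort -rn
# prints counts with leading spaces: "3 cat", "2 dog", "1 bird"
```

‘uniq -c’ only counts adjacent duplicates, which is why the first ‘sort’ is mandatory.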
The tools ‘csvkit’ and ‘jq’ respectively help you deal with those formats, converting the files into other formats, sticking the data in databases, and so forth. It’ll be easier using these tools, which understand the text formats, to extract data than trying to write an ‘awk’ command or ‘grep’ regexes.

strings

Most files are binary, with a few readable ASCII strings. You use the program ‘strings’ to extract those strings.

This one simple trick sounds stupid, but it’s more powerful than you’d think. For example, I knew that a program probably contained a hard-coded password. I then blindly grabbed all the strings in the program’s binary file and sent them to a password cracker to see if they could decrypt something. And indeed, one of the 100,000 strings in the file worked, thus finding the hard-coded password.

tail -f

So ‘tail’ is just a standard Linux tool for looking at the end of files. If you want to keep checking the end of a live file that’s constantly growing, then use “tail -f”. It’ll sit there waiting for something new to be added to the end of the file, then print it out. I do this a lot, so I thought it’d be worth mentioning.

tar -xvfz, gzip, xz, 7z

In prehistoric times (like the 1980s), Unix was backed up to tape drives. The tar command could be used to combine a bunch of files into a single “archive” to be sent to the tape drive, hence “tape archive” or “tar”.

These days, a lot of stuff you download will be in tar format (ending in .tar). You’ll need to learn how to extract it:

$ tar -xvf something.tar

Nobody knows what the “xvf” options mean anymore, but these letters must be specified in that order. I’m joking here, but only a little: somebody did a survey once and found that virtually nobody knows how to use ‘tar’ other than canned formulas such as this.

Along with combining files into an archive, you also need to compress them.
In prehistoric Unix, the “compress” command would be used, which would replace a file with a compressed version ending in ‘.z’. This was found to be encumbered with patents, so everyone switched to ‘gzip’ instead, which replaces a file with a new one ending in ‘.gz’.

$ ls foo.txt*
foo.txt
$ gzip foo.txt
$ ls foo.txt*
foo.txt.gz

Combined with tar, you get files with either the “.tar.gz” extension, or simply “.tgz”. You can untar and uncompress at the same time:

$ tar -xzvf something.tar.gz

Gzip is always good enough, but nerds gonna nerd and want to compress with slightly better compression programs. They’ll have extensions like “.bz2”, “.7z”, “.xz”, and so on. There are a ton of them. Some of them are supported directly by the ‘tar’ program:

$ tar -xjvf something.tar.bz2

Then there is the “zip/unzip” program, which supports the Windows .zip file format. To create compressed archives these days, I don’t bother with tar, but just use the ZIP format. For example, this will recursively descend a directory, adding all files to a ZIP file that can easily be extracted under Windows:

$ zip -r test.zip ./test/

dd

I should include this under the system tools at the top, but it’s interesting for a number of purposes. The basic usage is simply to copy one file to another, the in-file to the out-file.

$ dd if=foo.txt of=foo2.txt

But that’s not interesting. What’s interesting is using it to write to “devices”. The disk drives in your system also exist as raw devices under the /dev directory.

For example, if you want to create a boot USB drive for your Raspberry Pi:

# dd if=rpi-ubuntu.img of=/dev/sdb

Or, you might want to hard-erase an entire hard drive by overwriting it with random data:

# dd if=/dev/urandom of=/dev/sdc

Or, you might want to image a drive on the system, for later forensics, without stumbling on things like open files:

# dd if=/dev/sda of=/media/Lexar/infected.img

The ‘dd’ program has some additional options, like block size and so forth, that you’ll want to pay attention to.
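The file-to-file form of ‘dd’ can be played with safely before you ever point it at a /dev device (everything below stays in /tmp; the device examples above are the dangerous ones):

```shell
# Copy a file with dd, choosing an explicit (tiny) block size.
# status=none is a GNU dd flag that silences the transfer summary.
printf 'some data worth copying\n' > /tmp/in.bin
dd if=/tmp/in.bin of=/tmp/out.bin bs=4 status=none
cmp -s /tmp/in.bin /tmp/out.bin && echo identical
```

The bs= value matters for throughput against real devices; for ordinary files it just changes how many read/write calls dd makes.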
screen and tmux

You log in remotely and start some long-running tool. Unfortunately, if you log out, all the processes you started will be killed. If you want it to keep running, then you need a tool to handle this.

I use ‘screen’. Before I start a long-running port scan, I run the “screen” command. Then, I type [ctrl-a][ctrl-d] to disconnect from that screen, leaving it running in the background. Later, I type “screen -r” to reconnect to it. If there is more than one screen session, using ‘-r’ by itself will list them all. Use “-r pid” to reattach to the proper one. If you can’t, then use “-D pid” or “-D -RR pid” to force the other session to detach from whoever is using it.

Tmux is an alternative to screen that many use. It’s also cool for having lots of terminal screens open at once.

curl and wget

Sometimes you want to download files from websites without opening a browser. The ‘curl’ and ‘wget’ programs do that easily.

Wget is the traditional way of doing this, but curl is a bit more flexible. I use curl for everything these days, except mirroring a website, in which case I just do “wget -m website”.

The thing that makes ‘curl’ so powerful is that it’s really designed as a tool for poking and prodding at all the various features of HTTP. That it’s also useful for downloading files is a happy coincidence. When playing with a target website, curl will allow you to do lots of complex things, which you can then script via bash. For example, hackers often write their cross-site scripting/forgeries in bash scripts using curl.

node/php/python/perl/ruby/lua

As mentioned above, bash is its own programming language. But it’s weird, and annoying. So sometimes you want a real programming language. Here are some useful ones.

Yes, PHP is a language that runs in a web server for creating web pages. But if you know the language well, it’s also a fine command-line language for doing stuff.

Yes, JavaScript is a language that runs in the web browser.
But if you know it well, it’s also a great language for doing stuff, especially with the “nodejs” version.

Then there are other good command-line languages, like Python, Ruby, Lua, and the venerable Perl.

What makes all these great is the large library support. Somebody has already written a library that nearly does what you want, and it can be made to work with a little bit of extra code of your own.

My general impression is that Python and NodeJS have the largest libraries likely to have what you want, but you should pick whichever language you like best, whichever makes you most productive. For me, that’s NodeJS, because of the great Visual Code IDE/debugger.

iptables, iptables-save

I shouldn’t include this in the list. Iptables isn’t a command-line tool as such. The tool is the built-in firewalling/NAT feature within the Linux kernel. Iptables is just the command to configure it.

Firewalling is an important part of cybersecurity. Everyone should have some experience playing with a Linux system doing basic firewalling tasks: basic rules, NATting, and transparent proxying for MITM attacks. Use ‘iptables-save’ in order to persistently save your changes.

MySQL

Similar to ‘iptables’, ‘mysql’ isn’t a tool in its own right, but a way of accessing a database maintained by another process on the system.

Filters acting on text files only go so far. Sometimes you need to dump the data into a database, and make queries on that database.

There is also the offensive skill of learning how targets store things in a database, and how attackers get at the data. Hackers often publish raw SQL data they’ve stolen in their hacks (like the Ashley-Madison dump). Being able to stick those dumps into your own database is quite useful. Hint: disable transaction logging while importing mass data.

If you don’t like SQL, you might consider NoSQL tools like Elasticsearch, MongoDB, and Redis, which can similarly be useful for arranging and searching data.
You’ll probably have to learn some JSON tools for formatting the data.

Reverse engineering tools

A cybersecurity specialty is “reverse engineering”. Some want to reverse engineer the target software being hacked, to understand vulnerabilities. This is needed for commercial software and device firmware where the source code is hidden. Others use these tools to analyze viruses/malware.

The ‘file’ command uses heuristics to discover the type of a file.

There’s a whole skillset for analyzing PDF and Microsoft Office documents. I play with pdf-parser. There’s a long list at this website: https://zeltser.com/analyzing-malicious-documents/

There’s a whole skillset for analyzing executables. Binwalk is especially useful for analyzing firmware images.

Qemu is a useful virtual machine. It can emulate full systems, such as an IoT device based on the MIPS processor. Like some other tools mentioned here, it’s more a full subsystem than a simple command-line tool.

On a live system, you can use ‘strace’ to view what system calls a process is making. Use ‘lsof’ to view which files and network connections a process has open.

Password crackers

A common cybersecurity specialty is “password cracking”. There are two kinds: online and offline password crackers.

Typical online password crackers are ‘hydra’ and ‘medusa’. They can take files containing common passwords and attempt to log on to various protocols remotely, like HTTP, SMB, FTP, Telnet, and so on. I used ‘hydra’ recently in order to find the default/backdoor passwords to many IoT devices I’ve bought for my test lab.

Online password crackers must open TCP connections to the target, and try to log on. This limits their speed. They may also be stymied by systems that lock accounts, or introduce delays, after too many bad password attempts.

Typical offline password crackers are ‘hashcat’ and ‘jtr’ (John the Ripper). They work off of stolen encrypted passwords.
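The core idea behind offline cracking can be sketched with nothing but a hash utility. This is a toy dictionary attack, not how ‘hashcat’ is implemented (real crackers use optimized GPU code and rule-based mutations):

```shell
# Toy offline "cracker": given a stolen SHA-256 hash, hash each word
# in a small dictionary until one matches. No network, no rate limits.
stolen=$(printf 'letmein' | sha256sum | cut -d' ' -f1)

found=
for guess in password 123456 qwerty letmein dragon; do
    h=$(printf '%s' "$guess" | sha256sum | cut -d' ' -f1)
    if [ "$h" = "$stolen" ]; then
        found=$guess
        break
    fi
done
echo "cracked: ${found:-no match}"    # prints: cracked: letmein
```

Everything the cracker needs is local, which is why the only defenses are slow hash functions and passwords that aren’t in any dictionary.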
They can attempt billions of passwords per second, because there’s no network interaction, nothing slowing them down.

Understanding offline password crackers means getting an appreciation for the exponential difficulty of the problem. A sufficiently long and complex encrypted password is uncrackable. Instead of brute-force attempts at all possible combinations, we must use tricks, like mutating the top million most common passwords.

I use hashcat because of the great GPU support, but John is also a great program.

WiFi hacking

A common specialty in cybersecurity is WiFi hacking. The difficulty in WiFi hacking is getting the right WiFi hardware that supports the features (monitor mode, packet injection), then getting the right drivers installed in your operating system. That’s why I use Kali rather than some generic Linux distribution, because it’s got the right drivers installed.

The ‘aircrack-ng’ suite is the best for doing basic hacking, such as packet injection. When the parents are letting the iPad babysit their kid with a loud movie at the otherwise quiet coffeeshop, use ‘aircrack-ng’ to deauth the kid.

The ‘reaver’ tool is useful for hacking into sites that leave WPS wide open and misconfigured.

Remote exploitation

A common specialty in cybersecurity is pentesting. Nmap, curl, and netcat (described above) are useful tools for this.

Some useful DNS tools are ‘dig’ (described above) and dnsrecon/dnsenum/fierce, which try to enumerate and guess as many names as possible within a domain. These tools all have unique features, but also have a lot of overlap.

Nikto is a basic tool for probing for common vulnerabilities, out-of-date software, and so on. It’s not really a vulnerability scanner like Nessus, used by defenders, but more of a tool for attack.

SQLmap is a popular tool for probing for SQL injection weaknesses.

Then there is ‘msfconsole’. It has some attack features. This is humor: it has all the attack features.
Metasploit is the most popular tool for running remote attacks against targets, exploiting vulnerabilities.

Text editor

Finally, there is the decision of text editor. I use ‘vi’ variants. Others like ‘nano’ and variants. There’s no wrong answer as to which editor to use, unless that answer is ‘emacs’.

Conclusion

Obviously, not every cybersecurity professional will be familiar with every tool in this list. If you don’t do reverse-engineering, then you won’t use reverse-engineering tools.

On the other hand, regardless of your specialty, you need to know basic crypto concepts, so you should know something like the ‘openssl’ tool. You need to know basic networking, so things like ‘nmap’ and ‘tcpdump’. You need to be comfortable processing large dumps of data, manipulating them with any tool available. You shouldn’t be frightened by a little sysadmin work.

The above list is therefore a useful starting point for cybersecurity professionals. Of course, those new to the industry won’t have much familiarity with them. But it’s fair to say that I’ve used everything listed above at least once in the last year, and the year before that, and the year before that. I spend a lot of time on StackExchange and Google searching for the exact options I need, so I’m not an expert, but I am familiar with the basic use of all these things.

from The command-line, for cybersec