
useful commands

Marcel Schmalzl edited this page Sep 18, 2024 · 38 revisions

ddrescue

Recover hard disks, scratched CDs, DVDs, ...

```
ddrescue -d -r 3 -b 2048 /dev/sr1 /media/sdd1/mystuff.iso /media/sdd1/mystuff.log
```
  • -d: direct access to the drive without any caching
  • -r 3: try 3 times before giving up
  • -b 2048: set block size; 2048 is the standard for CD/DVD
  • /dev/sr1: device (CD/DVD/...)
  • /media/sdd1/mystuff.iso: rescue file = output
  • /media/sdd1/mystuff.log: log file (a kind of index of what has been recovered)

For more info: https://html5.litten.com/how-to-fix-recover-data-from-a-scratched-or-damaged-cd-or-dvd/

grep

Print lines matching a pattern. If no files are specified, or if - is given, grep searches standard input.

Interesting options

Invert match (exclude)

```
grep -v "THISwillBEexcluded"
```
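A quick sanity check of the invert flag (the input lines are made up):

```shell
# -v drops every line matching the pattern
printf 'keep\nTHISwillBEexcluded\nalso keep\n' | grep -v "THISwillBEexcluded"
# -> keep
# -> also keep
```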

Treat binary files as text

In case you have binaries with some bits of text in them that you wish to search:

```
grep -a "pattern" some_binary
```
(`-a` is short for `--binary-files=text`)

Extended regular expressions (`-E`)

This is shorter:

```
find . -type f | grep -vE "\.dep|\.dla|\.bat"
```

Instead of:

```
find . -type f | grep -v "\.dep" | grep -v "\.dla" | grep -v "\.bat"
```

Print files with prefixed line numbers with grep

```
# Print files with prefixed line numbers
grep -n "." ./myFile.json
```

Full-text search

Performs a full-text search recursively in the current folder:

```
# Performs a full-text search recursively in the current folder
grep --color=auto -rnw '.' -e "$@"
```

Always show file name

Grep does not show the filename when it matches only one file. Force with /dev/null:

```
$ find . -name '*.tf' | grep zis-heart | grep -v version | xargs grep CI_TYPE
    CI_TYPE = "dummy" // Define CI type

$ find . -name '*.tf' | grep zis-heart | grep -v version | xargs grep CI_TYPE /dev/null
./zis-heartbeat/main.tf:    CI_TYPE = "dummy" // Define CI type
```
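The effect is easy to reproduce with a throwaway file (the file name and content below are made up):

```shell
tmpdir=$(mktemp -d)
echo 'CI_TYPE = "dummy"' > "$tmpdir/main.tf"

# Single file: grep omits the filename
grep CI_TYPE "$tmpdir/main.tf"
# -> CI_TYPE = "dummy"

# With /dev/null grep sees two files and prefixes the filename
grep CI_TYPE "$tmpdir/main.tf" /dev/null
# -> <tmpdir>/main.tf:CI_TYPE = "dummy"

rm -r "$tmpdir"
```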

Hide own grep process

Hide your (own) grep process when you search for a process:

```
$ ps -e -o pid,args | grep [s]omeprocess
```

-> `[s]` is a bracket expression that matches the single character `s`, so the pattern still finds `someprocess` in the `ps` output. But grep's own command line contains the literal text `[s]omeprocess`, which the pattern does not match, so the grep process itself is filtered out.
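A minimal demonstration of the bracket trick with fixed input (the strings are made up):

```shell
# [s] matches the single character 's', so "someprocess" is found ...
printf 'pid someprocess\n' | grep '[s]omeprocess'
# -> pid someprocess

# ... but the literal text "[s]omeprocess" (as it appears in grep's
# own command line in the ps output) is NOT matched:
printf 'pid grep [s]omeprocess\n' | grep -c '[s]omeprocess'
# -> 0
```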

Easy table generation for markdown

Install csv2md: pip3 install csv2md.

  1. Create your table via LibreOffice Calc or MS Excel, ...
  2. Save it as .csv
  3. Run csv2md -d ';' filename (-d: use ; as delimiter)

Parallelise with GNU parallel

```
# apt-get install parallel
# pacman -S parallel
parallel -j20 -k 'echo {}; sleep 2' ::: {a..g}
```
  • -j: max number of parallel jobs (j0 spawns 252 jobs; j10 spawns 2520 jobs)
  • -k: Keep the output in the same order as the input. If not set, output is printed as soon as a job finishes (order can be mixed): no slow-down, but it needs 4 file handles per job; if parallel runs out of file handles it waits until it can fetch new ones
  • --progress: shows progress and duration of each job
  • --dry-run: shows the commands which would be executed
  • --pipe: splits an input file into blocks (1 MB by default; change with --block, e.g. --block 100m for 100 MB blocks, which is basically the maximum possible). By default \n marks the block end (override with --recend)

Run commands on remote machines / clusters

See here. This lets you distribute tasks across several remote machines, taking CPU count and memory into account, just by using ssh and parallel.


Examples

See also here: https://www.gnu.org/software/parallel/parallel_examples.html

```
# Nested for loops
parallel echo {1} {2} ::: red green blue ::: S M L XL XXL | sort
```

Speed-up fast jobs

```
# 1| fast job - lots of overhead due to process spawning
seq -w 0 9999 | parallel touch pict{}.jpg

# 2| -X groups arguments into fewer jobs -> command must support
#    multiple arguments as input
seq -w 0 9999 | parallel -X touch pict{}.jpg
```

-> for more about speed-ups see: https://www.gnu.org/software/parallel/parallel_examples.html#example-speeding-up-fast-jobs

> On a normal GNU/Linux system you can spawn 32000 jobs using this technique with no problems. To raise the 32000 jobs limit raise /proc/sys/kernel/pid_max to 4194303.

(from https://www.gnu.org/software/parallel/parallel_examples.html#example-speeding-up-fast-jobs)

Output of 1 and 2 in a dry run:

```
$ seq -w 0 3 | parallel --dry-run touch pict{}.jpg
touch pict0.jpg
touch pict1.jpg
touch pict2.jpg
touch pict3.jpg

$ seq -w 0 10 | parallel --dry-run -X touch pict{}.jpg
touch pict00.jpg pict01.jpg
touch pict02.jpg pict03.jpg
touch pict04.jpg pict05.jpg
touch pict06.jpg pict07.jpg
touch pict08.jpg pict09.jpg
touch pict10.jpg

$ seq -w 0 30 | parallel --dry-run -X touch pict{}.jpg
touch pict00.jpg pict01.jpg pict02.jpg pict03.jpg
touch pict04.jpg pict05.jpg pict06.jpg pict07.jpg
touch pict08.jpg pict09.jpg pict10.jpg pict11.jpg
touch pict12.jpg pict13.jpg pict14.jpg pict15.jpg
touch pict16.jpg pict17.jpg pict18.jpg pict19.jpg
touch pict20.jpg pict21.jpg pict22.jpg pict23.jpg
touch pict24.jpg pict25.jpg pict26.jpg pict27.jpg
touch pict28.jpg pict29.jpg pict30.jpg
```

Explore disk usage - ncdu

ncdu (NCurses Disk Usage) is a command line tool to explore the current disk usage:

```
# Install
sudo apt install ncdu

# Usage
ncdu -x .
#    ^  ^-- path
#    +----- handle only the local filesystem
```

diffs

Of course Kdiff3 and Meld are great tools. When it comes to huge amounts of data and speed, other tools might be more beneficial (note: in my case, diffing 2 x 1 TiB of data made Kdiff3 hang and Meld crash):

  • diff -qr dir1/ dir2/
    • Reads both folders in parallel (Kdiff3 and Meld do sequential reads 👎 )
    • Good if you only want to see if a file differs
  • git diff --no-index --summary dir1/ dir2
    • Good if you want to compare file contents
    • You can also get just a summary and hide the actual content diffs: --summary (only tells you whether a file differs) or --compact-summary (additions/deletions style)

Find exec

Some examples with find execute:

Replace invalid NTFS characters

Invalid characters

```
# Find all invalid file names
find ./ \( -iname "*:*" -o -iname "*|*" -o -iname "*\?*" -o -iname "*\**" \)
#          ^^ expressions, combined with -o (OR)
# (see: https://unix.stackexchange.com/a/50613/116710)

# Replacement example with perl-rename (on some distributions it's just `rename`)
find ./ -iname "*:*" -exec perl-rename -n 's/:/-/' {} \;
#                                      ^^ dry-run
#                                         ^^^^^^^ perl substitute pattern
```

For perl substitute patterns see also https://perldoc.perl.org/perlre.html.

Find files smaller than x KiB

Here we also specify the file type (via -name) to make sure we don't delete anything else:

```
find . -type f -name "*.jpg" -size -2k -delete
```
  • -2k: Smaller than 2 KiB
  • 2k: Exactly 2 KiB
  • +2k: Greater than 2 KiB
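A sketch of the size filter on throwaway files (the file names are made up; `truncate` creates sparse files of the given size):

```shell
tmpdir=$(mktemp -d)
truncate -s 1K "$tmpdir/small.jpg"  # 1 KiB: smaller than 2 KiB
truncate -s 4K "$tmpdir/big.jpg"    # 4 KiB: not matched

find "$tmpdir" -type f -name "*.jpg" -size -2k
# -> only .../small.jpg

rm -r "$tmpdir"
```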

Remove files from file

```
while IFS= read -r file ; do rm -- "$file" ; done < /path/to/file/which/contains/files/to/delete.txt
```
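A self-contained run of the loop above (all file names are made up):

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/a.tmp" "$tmpdir/b.tmp" "$tmpdir/keep.txt"
printf '%s\n' "$tmpdir/a.tmp" "$tmpdir/b.tmp" > "$tmpdir/delete.txt"

# IFS= and -r keep leading spaces/backslashes in the names intact;
# -- protects against names starting with a dash
while IFS= read -r file ; do rm -- "$file" ; done < "$tmpdir/delete.txt"

ls "$tmpdir"
# -> delete.txt  keep.txt

rm -r "$tmpdir"
```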

Wacom dual monitor config

Say your monitor is right to your other (main) display and you only want to use your graphics tablet on your right monitor:

```
#!/bin/bash
xsetwacom --set 20 MapToOutput "2560x1440+1920+0"
```

Columnise console/bash output

```
$ printf "flah ;klj jk;lj " | column -t -s ";"
flah   klj jk  lj
```
  • -t : creates a table
  • -s : separator

Recursive/partial copy - cp

```
cp -r --parents ./source ./dest
```
  • --parents: recreate the full source path under ./dest (i.e. ./dest/source/...)
  • -r: recursive; copies every file below the given path

cut

Extract/cut information column-wise.

Usage

Input file:

```
## text.txt ##
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
…
```

Example usage:

```
$ cut -d : -f 1 text.txt
root
daemon
bin
…
```

Useful options

  • -c, --characters=LIST select only these characters
  • -d, --delimiter=DELIM use DELIM instead of TAB for field delimiter
  • -f, --fields=LIST select only these fields; also print any line that contains no delimiter character, unless the -s option is specified
  • -s, --only-delimited do not print lines not containing delimiters
  • --output-delimiter=STRING use STRING as the output delimiter; the default is to use the input delimiter
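The `-s` and `--output-delimiter` options combined (the input lines are made up):

```shell
printf 'root:x:0\nno delimiter here\n' | cut -d : -f 1,3 -s --output-delimiter=' '
# -> root 0
# The second line contains no ':' and is dropped because of -s
```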

Get date/time

```
$ date +%F\ %T
2018-05-15 16:50:11
```

Password generators

apg

Generates random and memorisable passwords:

Sample output:

```
$ apg
Please enter some random data (only first 16 are significant)
(eg. your old password):>
ust1Glayduf (ust-ONE-Glayd-uf)
ojNumhis6 (oj-Num-his-SIX)
RemkibIr4 (Rem-kib-Ir-FOUR)
QuicHap7 (Quic-Hap-SEVEN)
MulOcnowit2 (Mul-Oc-now-it-TWO)
yegEkEc6 (yeg-Ek-Ec-SIX)
```

pwgen

An alternative is pwgen:

```
$ pwgen 40
shee8joM5zeic8iohoosohrahfimi3aer9OazuCh
Food1olei6feeghohHeneechaiTifaiz9Eighaec
```

readlink

Get an absolute path of a file:

```
readlink -f file.txt
```
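`readlink -f` also normalises `.`/`..` components, not only symlinks (the paths below are made up):

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/file.txt"
mkdir "$tmpdir/sub"

# The sub/.. detour is collapsed into a clean absolute path
readlink -f "$tmpdir/sub/../file.txt"
# -> canonical absolute path of file.txt

rm -r "$tmpdir"
```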

tee

Duplicate output: write it both to stdout and to a file (the name comes from the T-splitter used in plumbing):

```
ls -l / | tee out.file | grep b
#             |
#             +--> full listing goes to out.file;
#                  grep only filters the stdout copy
```
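A reproducible variant of the pipe above, feeding tee fixed input instead of `ls -l /` (the input lines are made up):

```shell
tmpdir=$(mktemp -d)
printf 'bin\nboot\netc\n' | tee "$tmpdir/out.file" | grep b
# -> bin
# -> boot

# The file received the complete, unfiltered input:
cat "$tmpdir/out.file"
# -> bin
# -> boot
# -> etc

rm -r "$tmpdir"
```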

touch

"Touch" file access.

  • Changes date to now
  • Creates empty file if not existing -> This is a very convenient side-effect
```
touch -c <filename>
```
  • -c : Do not create new files
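The contrast between plain `touch` and `touch -c` (the file names are made up):

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/new.txt"       # side-effect: file is created
touch -c "$tmpdir/ghost.txt"  # -c: timestamps only, never creates

ls "$tmpdir"
# -> new.txt

rm -r "$tmpdir"
```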

watch

Execute commands periodically:

```
watch [options] command
```

Options (extract):

  • -c, --color: interpret ANSI color and style sequences
  • -d, --differences[=<permanent>]: highlight changes between updates
  • -n, --interval <secs>: seconds to wait between updates
  • -p, --precise: attempt to run command in precise intervals
  • -t, --no-title: turn off header

Example 1

Print the current second every second ;)

```
$ watch -n 1 -pd 'date +%S'
```

Output (the screen is cleared on every update):

```
Every 1.0s: date +%S          PZI13774: Thu Aug 10 13:16:42 2017

42
```

Example 2

Monitor the files in a folder (a repeated `ls`). For instance, while copying something with rsync you can check what has already been copied:

```
watch ls
```

Get path of executable

Find paths of executables

```
$ whereis gparted
gparted: /usr/bin/gparted /usr/share/man/man8/gparted.8.gz
$ whereis cp
cp: /usr/bin/cp /usr/share/man/man1/cp.1p.gz /usr/share/man/man1/cp.1.gz
```

or:

```
$ which cp
/bin/cp
```

xargs

xargs reads items from standard input, delimited by blanks or newlines, and executes the command (default: /bin/echo) one or more times with any initial arguments followed by the items read from standard input.

```
command1 | xargs [options] [command2]
```

Example

Find files in the current directory, exclude some file endings (using grep) and output each file content using cat:

```
find . -type f | grep -vE "\.dep|\.dla|\.bat" | xargs cat
```
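How xargs batches its input can be seen with echo (its default command) and `-n` to limit items per invocation (the input items are made up):

```shell
# All items end up as arguments of a single echo call:
printf 'one\ntwo\nthree\n' | xargs echo files:
# -> files: one two three

# -n 1 runs the command once per item:
printf 'one\ntwo\n' | xargs -n 1 echo item:
# -> item: one
# -> item: two
```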

curl

Proxy

Configuration if you are behind a proxy server:

```
# ~/.curlrc
proxy = 192.0.2.111:2222
proxy-user = "<uname>:<password>"
```

netrc

  • Allow different logins for destination/machine/sites
  • Save credentials for commands like curl
```
# ~/.netrc
machine url.or.ip.org   login <uname>   password <password>
machine 192.168.0.123   login <uname>   password <password>
# This is a comment
# If you have the same credentials for multiple destinations/machines
# you still have to have one entry per machine
```

Notes:

  • Use curl -n (--netrc) to make curl read .netrc (-k to disable SSL verification)
    • Alternatively you can use curl --netrc-file /path/to/netrc/file
  • Windows
    • On Windows it is _netrc (not .netrc), unless you use MSYS2, etc. (then it is .netrc).
    • You must set the HOME environment variable on Windows to your home directory (where _netrc lies) so that -n works

wget

Credentials for wget:

```
# Tell wget that it should use a proxy
use_proxy=yes
http_proxy=http://<uname>:<password>@192.0.1.123:1234
https_proxy=https://<uname>:<password>@192.0.1.123:1234
proxy_user=<uname>
proxy_password=<password>

# Credentials for access
http_user=<uname>
http_password=<password>
```

Get open connections - netstat

```
sudo netstat -tulpn
```

Results in:

```
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address   State   PID/Program name
tcp        0      0 127.0.0.1:631           0.0.0.0:*         LISTEN  780/cupsd
tcp6       0      0 ::1:631                 :::*              LISTEN  780/cupsd
udp        0      0 0.0.0.0:5353            0.0.0.0:*                 6267/chromium
udp        0      0 0.0.0.0:5353            0.0.0.0:*                 296/avahi-daemon: r
udp        0      0 192.168.1.200:68        0.0.0.0:*                 298/NetworkManager
udp        0      0 0.0.0.0:33896           0.0.0.0:*                 296/avahi-daemon: r
udp6       0      0 :::5353                 :::*                      6267/chromium
udp6       0      0 :::5353                 :::*                      296/avahi-daemon: r
udp6       0      0 fe80::368:3588:6aa5:546 :::*                      298/NetworkManager
udp6       0      0 :::58036                :::*                      296/avahi-daemon: r
```

Check actual traffic

To see if it works you can execute the following command on the listener to check on your desired port:

```
tcpdump -i eth0 -s 1500 port 443
```

socat

socat enables data transfers between two addresses using multiple protocols (e.g. TCP, UDP, SOCKS4, RAW IP, OpenSSL, other ifaces, ...).

```
┌──────────────────────┐                  ┌──────────────────────┐
│     <Connector>      │                  │      <Listener>      │
│  PROTO:ADDRESS:PORT  │ ◄──► socat ◄──►  │  PROTO:ADDRESS:PORT  │
└──────────────────────┘                  └──────────────────────┘
```

Syntax:

```
socat [GENERAL_OPTIONS] <PROTO>:<PORT> <AddressType>:<ARGS>,<OPTIONS>
```

ARGS are usually prefixed by `CW` in the documentation (you need to strip away the `CW` part), e.g.:

```
socat -d -d TCP-LISTEN:2223 OPEN:inputDump.txt,creat
```

Transfer messages

Create TCP listener to std::out

```
socat -d -d TCP-LISTEN:2223 -
```
  • -d: verbosity (repeat to increase; -d -d: fatal, error, notice)
  • -: print to stdout

Connect to listener via IP and port and use std::in

```
socat -d -d TCP-CONNECT:191.1.1.123:2223 -
```

Send a message to the listener by typing into stdin, e.g.:

```
hello listener
```

Transfer files

Create a file:

```
socat -d -d TCP-LISTEN:2223 OPEN:inputDump.txt,creat
#                           <CMD>:<ARGS>,<OPTION>
```
  • flags
    • creat: Create the file if it does not exist
    • trunc: Overwrite file if present
    • excl: Do not overwrite if file present
    • append: Append to file

Transfer file to listener

```
socat -d -d TCP-CONNECT:191.1.1.123:2223 FILE:/path/to/some/file/to/transfer.txt
```

Protocol converter / Port forwarding

Use-cases:

  • Circumvent firewall protocol/port restrictions (e.g. proxy a UDP to a TCP connection)
  • Encryption

Create an openssl key:

```
openssl req -newkey rsa:2048 -nodes -keyout cert.key -x509 -days 1000 -out cert.crt
# Combine both files to make them usable for socat:
cat cert.key cert.crt > sslkey.pem
```

Add OpenSSL listener:

```
socat -d -d OPENSSL-LISTEN:2223,cert=sslkey.pem,verify=0
```
  • verify=0: since it's self-signed

Connector (create a bash shell on the other host):

```
socat -d -d OPENSSL-CONNECT:191.1.1.123:2223,verify=0 EXEC:/bin/bash
```

ssh via HTTPS/OPENSSH

Let the connector connect to the listener via ssh:

```
# Create cert
openssl req -newkey rsa:2048 -nodes -keyout sshsocatcert.key -x509 -days 1000 -out sshsocatcert.crt
# Combine both files to make them usable for socat:
cat sshsocatcert.key sshsocatcert.crt > sshsocatcert.pem

# Listener
sudo socat -ddd OPENSSL-LISTEN:443,fork,cipher=aNULL,verify=0,cert=sshsocatcert.pem,compress=auto TCP-CONNECT:localhost:22

# Connector
ssh -o ProxyCommand='socat -dd STDIO OPENSSL-CONNECT:%h:443,compress=auto,cipher=HIGH,verify=0' 192.1.1.123
```

Alternatively (you must do it this way if you use sshfs because it does not support the -o option), put the proxy config into the ~/.ssh/config file:

```
Host 192.1.1.123
    ProxyCommand socat STDIO OPENSSL-CONNECT:%h:443,cipher=aNULL,verify=0,compress=auto
```

Now you could mount a folder via sshfs like this (requires that you added the config (-> ~/.ssh/config) options like before):

```
sshfs 192.1.1.123:/ /tmp/mountFolder
```

Note: as an alternative you could use rsync too; it reads the settings from ~/.ssh/config and applies them accordingly.

Verify that port 443 is used:

```
sudo tcpdump -ni any port 443
# Now do something (like navigating within your mounted folder)
# Assuming you don't have any other traffic on port 443, it should
# otherwise be very quiet (in comparison to scanning port 22)
```

Troubleshooting

no shared cipher

No cert is provided; create a cert and add the relevant option. I.e.:

Create openssl key (if you haven't already):

```
openssl req -newkey rsa:2048 -nodes -keyout socatsshcert.key -x509 -days 100 -out socatsshcert.crt
# Combine both files to make them usable for socat:
cat socatsshcert.key socatsshcert.crt > socatsshcert.pem
```

For the OpenSsl listener run with cert:

```
socat OPENSSL-LISTEN:2223,cert=socatsshcert.pem,verify=0
# or similar
```
