While I like the acronym so, this tool would actually be better described as se: an interface to the StackExchange network. In particular, one thing that differentiates it from similar tools is that you can simultaneously search any number of sites across the network:
```shell
# search using your default configuration
$ so how do i reverse a list in python

# search for a latex solution
$ so --site tex how to put tilde over character

# use google to search stackoverflow.com, askubuntu.com, and unix.stackexchange.com
$ so -e google -s askubuntu -s stackoverflow -s unix how do i install linux
```

On Arch Linux, you can install the AUR package so (tracks the latest release) or so-git (tracks master), e.g.
```shell
yay -S so-git
```

On FreeBSD, you can install the package so via
```shell
pkg install so
```

On NetBSD, you can install the package so via
```shell
pkgin install so
```

You can install the Homebrew formula
```shell
brew install so
```

Alternatively, you can use MacPorts to install so:
```shell
sudo port install so
```

On Windows, if you have scoop, you can install via the extras bucket:
```shell
# add extras bucket
scoop bucket add extras

# install so
scoop install so
```

For any OS, you can install the crate so directly:
```shell
# everything but windows
cargo install so

# windows
cargo install so --no-default-features --features windows
```

For more information on the feature flags, see selecting a backend.
Static binaries are available on the releases page for common Linux, MacOS, and Windows targets. You can quickly install the one you need to a directory DEST with:
```shell
curl --proto '=https' --tlsv1.2 -sSf https://samtay.github.io/so/install.sh \
  | bash -s -- --to DEST
```

Right now I'm only building the most common targets, but in theory it should be easy to add more; if you don't see what you're looking for, just open an issue and I can add it. Here's a list of the supported targets. If you don't know what you need, you can install rustc and open an issue with the output of `rustc -Vv | grep host | cut -d' ' -f2`.
The configuration files for e.g. a user Alice can be found in the following directories:
- Linux: `/home/alice/.config/so`
- Windows: `C:\Users\Alice\AppData\Roaming\Sam Tay\so`
- MacOS: `/Users/Alice/Library/Preferences/io.Sam-Tay.so`
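On Linux, the path above follows the XDG convention, so a quick way to print the directory so should read from (this is an inference from the Alice example above, not an official command) is:

```shell
# derive the config directory the same way XDG-aware tools do;
# falls back to ~/.config when XDG_CONFIG_HOME is unset
echo "${XDG_CONFIG_HOME:-$HOME/.config}/so"
```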
The config.yml file lets you specify your CLI defaults. So if you dislike the lucky prompt, always search serverfault.com and unix.stackexchange.com, and want the fastest search engine, you can set your config file like this:
```yaml
# config.yml
---
api_key: ~
limit: 10
lucky: false
sites:
  - serverfault
  - unix
search_engine: stackexchange
```

Run so --help to see your current defaults.
In the same directory you'll find colors.toml which is self-documented. The default theme attempts to blend in with your default terminal theme, but you can change it as necessary. In particular, you may want to change the highlight_text if the current selection is difficult to read. There are some themes in the themes directory as well.
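As a purely illustrative sketch (only highlight_text is named above; the value here is an arbitrary example, so consult the comments in your generated colors.toml for the real set of keys and accepted color values):

```toml
# colors.toml (illustrative override; see the file's own comments
# for the full set of keys and accepted values)
highlight_text = "black"
```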
There's a very primitive integration in place to copy the contents of the currently focused question or answer to the system clipboard. This requires some command in your PATH that can accept stdin and pipe it to the clipboard. On macOS and Windows, this works out of the box, with the defaults set to pbcopy and clip respectively. On Linux, I've made the assumption that xclip is likely the most popular, but if you use something else (e.g. wl-copy on Wayland), you'll need to set the command directly:
```yaml
# config.yml
---
copy_cmd: copy --option-to-take-stdin
```

If you want to use your own StackExchange API key, you can set it via
```shell
so --set-api-key <KEY>
```

You can also choose to use no key by editing your configuration to `api_key: ~`. If for some reason my API key is globally throttled, you can still hit the StackExchange API with no key up to 300 times per day per IP, which I imagine is fine for most users.
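Every StackExchange API response also reports your remaining quota. Here's a rough sketch of pulling it out of a response body (the sample JSON below is hand-written for illustration, but quota_max and quota_remaining are the real field names in the API's response wrapper):

```shell
# a hand-written sample of the wrapper object the API returns
response='{"items": [], "quota_max": 300, "quota_remaining": 297}'

# extract the remaining daily quota (a rough grep; jq would be more robust)
echo "$response" | grep -o '"quota_remaining": *[0-9]*' | grep -o '[0-9]*$'
```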
The available search engines are StackExchange, DuckDuckGo, and Google. StackExchange will always be the fastest to search because it doesn't require an additional request or any HTML parsing; however, it is also very primitive. DuckDuckGo is in second place for speed, as its response HTML is much smaller than Google's. I've found that it performs well for my queries, and it was originally the default search engine; however, DuckDuckGo sometimes blocks requests, so it is no longer the default.
As stated in the docs,

> If a single IP is making more than 30 requests a second, new requests will be dropped.
So, don't go crazy with the multi-site search, since it is all done in parallel. In particular, if you specify more than 30 sites, SE will likely ban you for a short time.
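Since each site in a multi-site search becomes one parallel request, a trivial way to sanity-check a big query is to count the sites first (just an illustration of the arithmetic, not part of so itself):

```shell
# each of these would become one -s flag, i.e. one parallel API request
sites=(askubuntu serverfault unix)
echo "${#sites[@]} parallel requests"
```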
If you're installing from source, you can choose from a number of available backend rendering engines. Note that the default and windows feature flags do not have an ncurses dependency, for the sake of portability. The default backend is termion, a bindless library written in pure Rust that seems to work quite well on Linux, MacOS, BSD, and Redox. On Windows, the default backend is crossterm; while its level of support is awesome, it does come at a price in performance. On my machine, the app kind of flashes between draws. So if you are on Mac, Linux, or Redox, your best bet is to compile with default features, which uses the termion backend. If you are on Windows, use crossterm, but know it will be slightly jumpy.
If the crossterm folks figure out a fix for allowing ncurses to receive resize events, and you have ncurses installed on your system, then the ncurses and pancurses backends are likely the most performant. Just know that currently, if you choose this option and run the --lucky prompt, you won't be able to resize the terminal window while the TUI is open.
Available backends:

- termion-backend
- ncurses-backend
- pancurses-backend
- crossterm-backend
E.g. to use ncurses-backend:
```shell
cargo install so --no-default-features --features ncurses-backend
```

See more information about this choice here.
Warning: this was my first time writing Rust and there is very likely some non-idiomatic and straight up ugly code throughout this project, so don't come looking here for a good Rust example! That being said, I would love to improve the codebase. Feel free to check out the contributing guidelines and submit any refactoring issues or pull requests.
Credit to my good friend Charles for logo design.

