This was added a long time ago, but I feel I should warn you all that for every pipeline you add, you will be increasing process time.
If you are processing thousands of large files over a network, that per-file pipe overhead adds up fast and will slow things down considerably.
You are far better off writing a small C++ binary that computes and outputs the digests for you, using a library such as Qt or OpenSSL (the digest routines live in libcrypto).
Otherwise you will want to let the shell handle the output instead. Use a flexible shell such as zsh for this:
mysum=${$(sha1sum $filename)[1]}
This grabs only the sum and not the filename (which could have been accessed with [2]).
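To make the indexing concrete, here is a rough sketch (the names out, mysum and myname are just placeholders, and it assumes $filename contains no whitespace):
out=( $(sha1sum $filename) )   # zsh splits the output into words: [1]=sum, [2]=filename
mysum=$out[1]
myname=$out[2]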
That said, you are far better off running sha1sum against ALL the files you need verified, saving the bulk of the results in a single chunk, and then extracting the sums in one go. In zsh you could do it like this:
sums=( $(sha1sum *(.)) )   # assumes filenames contain no whitespace
for sum filename in $sums; do
  # each pair is neatly tucked into $sum and $filename for your choosing!
done
This lets you easily get at the sum for each filename, and cuts the read time down considerably. The *(.) zsh-ism is an ordinary glob with a qualifier that selects only regular files (not directories or symlinks). You can invert it with *(^.) to get everything that is not a regular file, or use *(/) to get just directories and no symbolic links at all.
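If you want to sanity-check what each qualifier actually matches before feeding it to sha1sum, you can just print the glob on its own (a quick sketch using zsh's print builtin):
print -rl -- *(.)    # regular files only
print -rl -- *(^.)   # everything except regular files
print -rl -- *(/)    # directories only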
If you are wondering whether zsh is less efficient or has more overhead than bash, it really does not. I have replaced all my bash and sh scripts with zsh, and in many cases things run faster, especially since you can byte-compile any zsh script or function into wordcode (.zwc) with zcompile quite easily, which cuts the parse and load time on later runs.
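For reference, a minimal zcompile sketch (myscript.zsh is a placeholder name; zcompile writes a .zwc wordcode file next to the source, and zsh uses the newer .zwc automatically when the script is sourced or autoloaded):
zcompile myscript.zsh    # produces myscript.zsh.zwc (wordcode, not a standalone executable)
source myscript.zsh      # zsh picks up the newer .zwc and skips re-parsing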
I have made this transition to zsh even on my potato devices (such as my very old 2-core laptop) with nothing but improvement.
sha512sum testfile | awk '{print $1}'