$ cat test-c.bash
#! /bin/bash
echo -e "Enter file name: \c"
read file_name
if [ -c "$file_name" ]
then
    echo "Character special file $file_name found"
else
    echo "Character special file $file_name not found"
fi
$ bash test-c.bash
Enter file name: /dev/tty
Character special file /dev/tty found

$ cat test-b.bash
#! /bin/bash
echo -e "Enter file name: \c"
read file_name
if [ -b "$file_name" ]
then
    echo "Block special file $file_name found"
else
    echo "Block special file $file_name not found"
fi
$ bash test-b.bash
Enter file name: /dev/sda
Block special file /dev/sda found
Except for the missing fi in your programs, they do what one would expect. (I also quoted "$file_name" above, so that empty input or names containing spaces do not break the test.)
Your assumption...
test.txt is a character special file
img.jpg is a block special file
...is probably wrong, unless you deliberately created them as such, e.g. using mknod.
(See man mknod.)
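For example, on Linux you can create a character special file yourself with mknod (this needs root; mynull is just an arbitrary name chosen here, and 1 3 are the major/minor numbers of /dev/null):

$ sudo mknod mynull c 1 3
$ [ -c mynull ] && echo "mynull is a character special file"
mynull is a character special file
$ sudo rm mynull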
So if your test files are what their names hint at, they are just plain, normal (regular) files.
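You can verify that with the matching test operators: for a plain file created with touch, -f succeeds while -c fails:

$ touch test.txt
$ [ -f test.txt ] && echo "test.txt is a regular file"
test.txt is a regular file
$ [ -c test.txt ] || echo "test.txt is not a character special file"
test.txt is not a character special file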
What were you looking for?
A way to distinguish between text files and binary files like in DOSish operating systems?
This is a relic of CP/M days, or even older. That filesystem tracked file sizes only in whole blocks, and therefore text files needed an end-of-file character (Ctrl-Z, 0x1A) to mark where the valid text ended.
The consequence is that binary files and text files have to be concatenated differently (DOS copy, for example, distinguishes /a for ASCII from /b for binary).
Unixish filesystems track a file's size both in blocks and in actually used bytes, so there is no need to flag the end of the text with a special character.
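You can see both numbers with GNU stat; the values below are only an illustration and depend on your file and filesystem:

$ stat -c 'bytes: %s, blocks allocated: %b' calendar.txt
bytes: 1234, blocks allocated: 8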
If you want to guess what's inside a file, you can use the file command:
$ file 20170130-094911-GMT.png
20170130-094911-GMT.png: PNG image data, 744 x 418, 8-bit/color RGBA, non-interlaced
$ file calendar.txt
calendar.txt: ASCII text
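file also recognizes the special files from the beginning of this answer; depending on your version of file, it may additionally print the major/minor device numbers, which can differ on your system:

$ file /dev/tty /dev/sda
/dev/tty: character special (5/0)
/dev/sda: block special (8/0)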
HTH!