You need to traverse the whole directory tree and check the size of each file in order to find the largest one.
In zsh, there's an easy way to sort files by size, thanks to the oL glob qualifier (D includes dot files, . restricts the matches to regular files, and oL orders them by file length, i.e. size):
print -rl -- **/*(D.oL)
To see just the largest file:
echo **/*(D.oL[-1])
To see the 10 largest files:
print -rl -- **/*(D.oL[-10,-1])
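The glob expands to the file names only; if you also want to see the sizes, you can pass the result to ls. A sketch (the -S makes ls itself sort by size, since otherwise it would re-order its arguments by name):
ls -lS -- **/*(D.oL[-10,-1])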
You can also use ls -S to sort files by size; the command below shows the 10 largest. In bash, you need to run shopt -s globstar first to enable recursive globbing with **; in ksh93, run set -o globstar first; in zsh it works out of the box. Note that this only works if there aren't so many files that the combined length of their names exceeds the command line length limit.
ls -Sd **/* | head -n 10
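For instance, in bash the full sequence would be the following (a sketch; globstar requires bash 4.0 or later):
shopt -s globstar
ls -Sd **/* | head -n 10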
If there are a lot of files, collecting the information can take a long time, so it's best to traverse the filesystem only once and save the output to a text file. Since you're interested in individual files, use the -S option of GNU du in addition to -a; this way, the figure shown for a directory doesn't include the sizes of files in its subdirectories, only of files directly in that directory, which reduces the noise.
du -Sak >du.txt
sort -k1n du.txt | tail -n 10
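If you prefer human-readable sizes in the final report, here's a variant sketch that assumes GNU du's -h and GNU sort's -h (--human-numeric-sort) options:
du -Sah >du.txt
sort -h du.txt | tail -n 10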
If you only want the sizes of regular files, you can use GNU find's -printf action.
find -type f -printf '%s\t%P\n' | sort -k1n >file-sizes.txt
tail file-sizes.txt
Note that if you have file names that contain newlines, this will mess up automated processing. Most GNU utilities have a way to use null bytes (which cannot appear in file names) instead, e.g. du -0, sort -z, \0 instead of \n in find's -printf format, etc.
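For example, a null-delimited version of the find pipeline above could look like this (a sketch assuming GNU find and GNU coreutils; file-sizes.bin is an arbitrary output name, and the tr step converts the NULs back to newlines purely for viewing):
find -type f -printf '%s\t%P\0' | sort -z -k1n >file-sizes.bin
tr '\0' '\n' <file-sizes.bin | tail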