You seem to have left out the hours.
Assuming you have GNU date, you can handle this with its date calculations. Do you have to worry about switches between winter and summer (standard and daylight saving) time? If so, there'll be some entertainment to be had with a gap of an hour in the spring and a period in the fall when the local date/time values repeat.
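To make the fall entertainment concrete, here's a small demonstration (assuming GNU date and a tzdata install that knows America/Los_Angeles, the canonical name for US/Pacific): two timestamps an hour apart render as the same local wall-clock minute on the fall-back Sunday.

```shell
# 1981-10-25 was the end of US daylight saving time: 02:00 PDT
# fell back to 01:00 PST, so the local hour 01:00-01:59 occurred
# twice.  Two distinct epoch values, one local label.
for t in '1981-10-25 01:30:00 PDT' '1981-10-25 01:30:00 PST'
do
    TZ=America/Los_Angeles date -d "$t" +'%s  %Y-%m-%d %H:%M:%S %Z'
done
```

The epoch values (first column) differ by 3600 even though the local date/time is identical, which is exactly the repetition you'd hit when mapping local-time file names back to timestamps.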
    $ /opt/gnu/bin/date -d '1981-01-01 00:00:00' +'%s %Y-%m-%d %H:%M:%S'
    347184000 1981-01-01 00:00:00
    $ /opt/gnu/bin/date -d '2000-12-31 23:50:00' +'%s %Y-%m-%d %H:%M:%S'
    978335400 2000-12-31 23:50:00
    $
That gives you start and end times in Unix timestamp notation (and in the US/Pacific time zone — adjust to suit your needs). You could then use a loop such as:
    now=347184000
    end=978335400
    while [ "$now" -le "$end" ]
    do
        url=$(date -d "@$now" +'www.example.com/%y/%m/%d/%H/%M.txt')
        echo wget "$url"
        now=$(($now + 600))
    done
There are multiple ways of writing that. I've assumed that there's a directory of hourly files, and within that the 10-minute files, but you can tweak the format to suit your requirements. The use of @ in the argument to -d is crucial: it tells GNU date to interpret the value as seconds since the Unix epoch rather than as a calendar date.
You might prefer to use a scripting language such as Perl or Python instead of repeatedly invoking date as shown.
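For example, a sketch of the same loop in Python (the host name and directory layout are the same placeholders as in the shell version, not a real site):

```python
#!/usr/bin/env python3
# Generate one URL per 10-minute interval without forking `date`
# once per interval; datetime arithmetic replaces the epoch maths.
from datetime import datetime, timedelta

def urls(start, end, step=timedelta(minutes=10)):
    """Yield a URL for each interval from start to end, inclusive."""
    t = start
    while t <= end:
        # Same layout as the shell loop: hourly directories
        # containing the 10-minute files.
        yield t.strftime('www.example.com/%y/%m/%d/%H/%M.txt')
        t += step

# The full range would be datetime(1981, 1, 1) through
# datetime(2000, 12, 31, 23, 50); the first hour is shown here
# to keep the output readable.
for url in urls(datetime(1981, 1, 1, 0, 0), datetime(1981, 1, 1, 0, 50)):
    print(url)
```

One difference worth noting: naive datetimes step wall-clock time, so this produces each local label exactly once, whereas stepping epoch seconds skips an hour in spring and repeats one in fall. Which behaviour you want depends on how the site names its files.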
Note that you have a vast number of files to collect. With about 31.5 million seconds per year and 600 seconds per 10-minute interval, you're looking at over 52,000 files per year; over the 20-year range, that's roughly a million files in total. The target (victim) web site might not be happy with you running that flat out. You'd probably need to pace the retrieval operations — check their terms and conditions.
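One way to pace things is to generate the whole URL list into a file and hand it to wget's --wait option, which pauses between retrievals (--random-wait varies the pause). A sketch, shortened to the first hour so it finishes quickly, and with the wget command echoed rather than run:

```shell
now=347184000
end=347187000            # first hour only; the real end is 978335400
while [ "$now" -le "$end" ]
do
    date -d "@$now" +'www.example.com/%y/%m/%d/%H/%M.txt'
    now=$(($now + 600))
done > url-list.txt

# --wait pauses the given number of seconds between retrievals;
# pick a rate the site's terms and conditions allow.
echo wget --wait=2 --random-wait --input-file=url-list.txt
```

This also means a failed run can be resumed by trimming the already-fetched URLs off the front of the list instead of recomputing everything.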