I'd also like to refer to the NNGroup article @Neil posted. However, enforcing a timeout just because you believe it will improve the user experience is beyond foolish.
Fast requests improve the user experience; timeouts that generate error messages do not.
Saying something along the lines of "The request timed out, try again" is not likely to create any happy users. If anything, it creates more frustration, since your job is to make sure the request works in the first place. In other words, the timeout has not improved the user experience in any conceivable way.
Now, let's move on to another area: what happens if requests actually start slowing down for a very mundane reason, such as increased load on the servers? Your users will be fine as long as a request takes less than one second.
What happens once requests start taking more than one second? Suddenly you have a lot of users whose requests are timing out, and they are seeing error messages about a problem they have no way to fix. And what do people do when requests time out? In my experience, they start refreshing and clicking buttons repeatedly, which is only going to make your load problem worse.
The question becomes: what's a good timeout value and how can you determine one from data rather than feeling like you're shooting in the dark?
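If you do want a number, derive it from measured latencies rather than intuition. Here is a minimal sketch of that idea (not from the original discussion): take a sample of observed request durations, pick a high percentile, and add a safety margin. The sample values and the 3x factor are assumptions for illustration only.

```python
# Sketch: suggest a timeout from observed latencies instead of guessing.
import statistics

def suggest_timeout(latencies_ms, percentile=99, safety_factor=3.0):
    """Return a candidate timeout (ms) based on a high percentile of observed latency."""
    ordered = sorted(latencies_ms)
    # Nearest-rank index of the requested percentile.
    rank = max(0, int(round(percentile / 100 * len(ordered))) - 1)
    return ordered[rank] * safety_factor

# Hypothetical latencies collected from access logs or an APM tool.
observed = [120, 180, 150, 210, 950, 175, 160, 140, 3000, 190]
print(f"median latency: {statistics.median(observed)} ms")
print(f"suggested timeout: {suggest_timeout(observed):.0f} ms")
```

The point of the safety margin is that the timeout should only ever catch requests that are genuinely stuck, not the slow tail of normal traffic.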
This depends a lot on the request. Specifically, is it reasonable for this request to take a long time? Users are often willing to wait for things they expect to take time. Generating a report is a typical task where a delay of at least a few seconds is expected, and in those cases you can usually help the user anticipate the wait, as in the sketch below.
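For those long-running cases, reporting progress tends to work better than cutting the request off. The sketch below assumes a hypothetical report API (the endpoints and field names are made up) that returns a job id which the client polls while telling the user how far along the job is.

```python
# Sketch: poll a hypothetical long-running report job and surface progress.
import time
import requests  # third-party HTTP client

BASE = "https://example.com/api"  # hypothetical service

def generate_report():
    # Kick off the long task; the per-call timeout only guards against a dead connection.
    job = requests.post(f"{BASE}/reports", timeout=10).json()
    while True:
        status = requests.get(f"{BASE}/reports/{job['id']}", timeout=10).json()
        if status["state"] == "done":
            return status["result_url"]
        # Keep the user informed instead of failing the job with an arbitrary deadline.
        print(f"Generating report... {status.get('progress', 0)}% complete")
        time.sleep(2)  # poll gently so the client doesn't add to the load
```

Note the design choice: each HTTP call has a short timeout, but the job as a whole is allowed to take as long as it needs, and the user sees progress instead of an error.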
As a closing point: setting one second as a target for average request time is a great ambition, but don't impose such an arbitrary rule on your users.