A few thoughts. Consider range partitioning on "timestamp"; it will reduce the amount of work your query has to do. A further optimization would be to calculate and store the aggregates for "closed" partitions, i.e. partitions whose time range lies entirely in the past. You will of course have to recalculate those stored aggregates whenever you modify historical data.
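A rough sketch of what that could look like. This uses the declarative partitioning syntax (PostgreSQL 10+); on older versions the same idea is done with table inheritance and CHECK constraints. The partition bounds and the market_trades_agg summary table are illustrative, not taken from your schema:

    -- Parent table, range-partitioned by "timestamp".
    CREATE TABLE market_trades (
        id          bigserial,
        "timestamp" timestamp NOT NULL
        -- ... your remaining columns ...
    ) PARTITION BY RANGE ("timestamp");

    -- One partition per week (example bounds; pick what fits your reporting).
    CREATE TABLE market_trades_2013_w34 PARTITION OF market_trades
        FOR VALUES FROM ('2013-08-19') TO ('2013-08-26');

    -- Pre-aggregated counts for closed partitions;
    -- recalculate a row whenever its historical data changes.
    CREATE TABLE market_trades_agg (
        time_range  tsrange PRIMARY KEY,
        trade_count bigint  NOT NULL
    );

    INSERT INTO market_trades_agg (time_range, trade_count)
    SELECT tsrange('2013-08-19', '2013-08-26'), count(*)
    FROM market_trades
    WHERE "timestamp" >= '2013-08-19' AND "timestamp" < '2013-08-26';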
As a bonus, it becomes much easier to roll out (drop or archive) historical data that is no longer needed.
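With partitions that is a metadata operation instead of a bulk DELETE. For example, again assuming the declarative partitioning above (the partition name is the example one from the sketch):

    -- Detach first if you want to archive the data, or drop outright:
    ALTER TABLE market_trades DETACH PARTITION market_trades_2013_w34;
    DROP TABLE market_trades_2013_w34;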
As mentioned, these are just some thoughts and reflections; they may or may not fit your situation. One pitfall is the number of different intervals you use for reporting: if the larger intervals are not multiples of the partitioning interval, this won't work well. For example, the 17-hour intervals in your query would straddle the boundaries of daily or weekly partitions, so stored per-partition aggregates could not be reused directly.
Edit: How is the plan affected by creating a temp table and an index on "timestamp"?
    create index X on market_trades ( timestamp );

    create temp table T ( time_range tsrange );

    -- Build one tsrange row per reporting interval within the frame.
    insert into T ( time_range )
    WITH vals AS (
        SELECT '2013-08-19 0:00'::timestamp AS frame_start,
               '2013-08-26 0:00'::timestamp AS frame_end,
               '17h'::interval              AS interval_length
    ),
    intervals AS (
        SELECT tsrange(start_time,
                       lead(start_time, 1, frame_end)
                           OVER (ORDER BY start_time NULLS FIRST)) AS time_range
        FROM (
            SELECT generate_series(frame_start, frame_end, interval_length) AS start_time,
                   frame_end
            FROM vals
        ) _
        WHERE start_time < frame_end
    )
    SELECT time_range FROM intervals i;

    create index x on T ( time_range );
    analyze T;

    -- Count trades per interval; the LEFT JOIN keeps empty intervals in the result.
    SELECT time_range, count(td.id) AS agg
    FROM T
    LEFT JOIN market_trades td ON td.timestamp <@ T.time_range
    GROUP BY T.time_range
    ORDER BY T.time_range;
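To see how the plan is actually affected, wrap the final query in EXPLAIN (ANALYZE, BUFFERS) and compare the output with the plan of your original query:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT time_range, count(td.id) AS agg
    FROM T
    LEFT JOIN market_trades td ON td.timestamp <@ T.time_range
    GROUP BY T.time_range
    ORDER BY T.time_range;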