
I have been running an online archive service for over a year now. Unfortunately, I never put in the infrastructure to keep statistics; all I have now are the archive access logs.

For every hour there are two audio files (0–30 min in one and 30–60 min in the other). I'm currently using MySQL to store the access counts. It looks something like this:

| DATE       | TIME  | COUNT |
|------------|-------|-------|
| 2012-06-12 | 20:00 | 39    |
| 2012-06-12 | 20:30 | 26    |
| 2012-06-12 | 21:00 | 16    |

and so on...
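
For reference, here is a minimal sketch of how the table is laid out; the table and column names are my own guesses based on the sample above, not the actual DDL:

```sql
-- Hypothetical layout matching the sample rows above;
-- the real table and column names may differ.
CREATE TABLE archive_counts (
    date    DATE         NOT NULL,
    time    TIME         NOT NULL,
    `count` INT UNSIGNED NOT NULL
);
```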

That comes to 365 days × 24 hours × 2 (two half-hour slots per hour) ≈ 17,500 rows a year. Reads and writes feel slow, and I feel a lot of space is wasted storing the data this way.

So do you know of any other database that would store this data more efficiently and read and write it faster?

  • 17500 is not a lot. And you can store the date and time in one column as a DATETIME. If it is slow to read, consider adding an index to the DATETIME. Commented Jun 13, 2012 at 0:16
  • Worry when you reach billions of rows. 17,500 is quite small, considering there are databases with literally trillions of rows. Commented Jun 13, 2012 at 0:18
  • @Michael Right. I could use indexes. But there isn't any way to shorten the write times, is there? Commented Jun 13, 2012 at 0:21
  • @Marc Oh haha. Clearly I had no idea. But just out of curiosity, are there other DBs that can store it in a better way? Commented Jun 13, 2012 at 0:22
  • @Ram if the write times are slow, it is likely because of the time it takes to seek to the row you need (hence the index) or because you have more throughput than you can deal with. If it's a question of throughput, then don't store the counts in real time. Instead, store the log as you always have and use a nightly script to compile the stats (see the sketch after these comments). Commented Jun 13, 2012 at 0:26
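
To make the comment suggestions concrete, here is a minimal sketch of both ideas: a single indexed DATETIME column instead of separate DATE and TIME columns, and a nightly job that compiles counts from a raw log table. The names used here (half_hour_counts, access_log, accessed_at) are assumptions for illustration, not from the original setup:

```sql
-- One DATETIME column keyed per half-hour slot, so lookups are a
-- simple indexed equality or range scan.
CREATE TABLE half_hour_counts (
    slot_start DATETIME     NOT NULL,
    `count`    INT UNSIGNED NOT NULL,
    PRIMARY KEY (slot_start)
);

-- Nightly aggregation from a raw access log (hypothetical table
-- access_log with one row per request and an accessed_at DATETIME).
INSERT INTO half_hour_counts (slot_start, `count`)
SELECT
    -- Round each access down to the start of its half-hour slot.
    DATE_FORMAT(accessed_at, '%Y-%m-%d %H:00:00')
        + INTERVAL IF(MINUTE(accessed_at) >= 30, 30, 0) MINUTE AS slot_start,
    COUNT(*)
FROM access_log
WHERE accessed_at >= CURDATE() - INTERVAL 1 DAY
  AND accessed_at <  CURDATE()
GROUP BY slot_start
ON DUPLICATE KEY UPDATE `count` = VALUES(`count`);
```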

1 Answer


That's not too many rows. If it's properly indexed, reads should be pretty fast (writes will be a little slower, but even with tables of up to about half a million rows I hardly notice).

If you are selecting items from the database using something like

```sql
select * from my_table where date = '2012-06-12'
```

Then you need to make sure that you have an index on the date column. You can also create multi-column (composite) indexes if your WHERE clause filters on more than one column. That will keep your reads very fast (as I said, up to on the order of a million rows).
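
For example (a sketch assuming the table and column names used above, not the asker's actual DDL):

```sql
-- Single-column index for queries that filter on date only.
CREATE INDEX idx_date ON my_table (date);

-- Composite index for queries that filter on both date and time,
-- e.g. WHERE date = '2012-06-12' AND time = '20:30:00'.
CREATE INDEX idx_date_time ON my_table (date, time);
```

Note that the composite index can also serve queries that filter on date alone, since date is its leftmost column, so you may not need both.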

If you're unacquainted with indexes, see here:

MySQL Indexes
