
I don't know how to explain this well, but I will try.

  • I use Google Cloud SQL second generation with a 20 GB disk.
  • I have several WordPress databases with a total size of 166.5 MB.

And right now my storage usage is 9.52 GB (with only 166.5 MB of SQL data), and it is still increasing, faster and faster.

What to do?!

(Screenshot: gcloud-sql-problem-increasing-size, showing the storage usage graph climbing.)

UPDATE:

I solved this by:

  • Making an export to a Cloud Storage bucket
  • Creating a new Cloud SQL instance
  • Importing from the bucket
  • Deleting the instance with the problem

(And changing the IP in my applications.)
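The same steps, roughly, with the gcloud CLI; the instance, bucket, and database names here are only placeholders, not my real values:

    # Export the data from the problematic instance to a Cloud Storage bucket
    # (the instance's service account needs write access to the bucket).
    gcloud sql export sql old-instance gs://my-bucket/dump.sql --database=wordpress

    # Create a fresh Cloud SQL instance and import the dump into it.
    gcloud sql instances create new-instance --tier=db-n1-standard-1 --region=us-central1
    gcloud sql import sql new-instance gs://my-bucket/dump.sql --database=wordpress

    # Once the new instance works, delete the old one (and update application IPs).
    gcloud sql instances delete old-instance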

I'm not sure where the problem came from, but it could be "storage overhead from binary logs".

Next time I will check the binary logs with: mysql> SHOW BINARY LOGS;
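For example, something like this lists the logs and roughly totals their size (the host and user are placeholders for your own connection details):

    # List the binary logs; the File_size column is in bytes.
    mysql -h 203.0.113.10 -u root -p -e "SHOW BINARY LOGS;"

    # Rough total in MB: -N drops the header row so awk can sum the second column.
    mysql -h 203.0.113.10 -u root -p -N -e "SHOW BINARY LOGS;" \
      | awk '{sum += $2} END {printf "total binlog size: %.1f MB\n", sum / 1024 / 1024}'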

What I think Google is missing is an easy way to purge the binary logs!

FINAL UPDATE:

With binary logs active, the storage of your Cloud SQL instance will grow continuously.

For anyone in the same situation: you can edit the instance and uncheck binary logging; after that, the existing binary logs will be purged.
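If you prefer the command line, the same switch should be reachable with something like this (the instance name is a placeholder, and the change restarts the instance):

    # Disable binary logging; Cloud SQL then purges the existing binlogs.
    gcloud sql instances patch my-instance --no-enable-bin-log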

Sorry for my noob problem! :D (I'm a beginner in Server administration.)

Thanks Vadim!

  • There is really no way at all we could help with this. We have no way at all of knowing what your code is doing. You probably should contact Google. Commented Jan 29, 2016 at 21:26
  • OK, I will do this, thanks. But what I don't understand is where the size comes from and how to see those files... Commented Jan 29, 2016 at 21:29
  • Do you have binary logs enabled? Can you show us what other flags you have enabled? For example, do you have the MySQL general log enabled? Commented Jan 29, 2016 at 21:45
  • You can use "SHOW BINARY LOGS;" to see how much space is used by the binary logs. Commented Jan 29, 2016 at 22:24
  • I don't know where to run "SHOW BINARY LOGS"; in a shell? I manage my database with HeidiSQL... After some research, it could be "storage overhead from binary logs". Commented Jan 30, 2016 at 0:27

2 Answers


If you have binary logs enabled, MySQL will keep a record of all changes, which is required for replication or point-in-time recovery.

If you have no need for these features, you can disable binary logs which will purge any existing logs from your instance.

If binary logs are enabled, they will not grow indefinitely. Binary logs older than the oldest automatic backup (7 days) are purged automatically.
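If you want to see how far back your automatic backups (and therefore your retained binary logs) reach, something along these lines should show it; the instance name is a placeholder:

    # List automatic backups; binlogs older than the oldest backup shown
    # here are the ones eligible for automatic purging.
    gcloud sql backups list --instance=my-instance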


7 Comments

In Google Cloud SQL second generation, point-in-time recovery is not active. And I don't know why the size suddenly started increasing without any changes in settings (maybe Google is still working on that). At some point the growth has to be purged, but when? In the screenshot you can see that it just keeps increasing in size. They should specify a minimum disk size when you enable binary logs, explain how it behaves as the size increases, and maybe purge after x days.
Binary log growth depends on your write queries. Did you start writing more data to the database around that time? Did you receive more traffic around that time?
Traffic increased around 40%, but nothing that big anyway... So the growth was too big and too sudden! I think it is something Google may still be working on (a bug, maybe). I think it is a bug because point-in-time recovery is not currently active in second generation, and maybe that part isn't ready yet.
I've updated my answer to be more specific. Binary logs are purged as older backups roll off.
@vadim, I'd also tell users how to disable the feature in your answer. For those interested, you have two options using the Cloud Console: 1. Go to console.cloud.google.com/sql/instances?project=<your-project> and select the database instance. 2a) Select Edit in the top action bar, expand "Enable auto backups", uncheck "Enable binary logging", and save your changes. 2b) Open the Backups tab, click "Manage automated backups", uncheck "Enable binary logging", and save. Keep in mind that these changes require a restart of your database instance!

I had the same issue with our GCP SQL instance. In my case the storage went from the 10 GB we first configured to 728 GB (with a corresponding increase in billing).

In my case the culprit was archived WAL Logs:

(Screenshot: Metrics Explorer graph of the archived WAL logs.)

Just viewing the current log size did not show this because the logs had already been cleared.

The issue is that GCP has a hidden option:

(Screenshot: the "Automatically increase storage" option for GCP SQL.)

This is checked by default (IIRC), so the storage will increase automatically during events of high log output (in my case this happened when upgrading the PostgreSQL server version).

The catch is that once increased, the storage cannot be decreased, ending up with increased monthly operational costs.
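If you would rather cap the disk size yourself, the auto-increase behaviour can be turned off with something like the following (the instance name is a placeholder), at the risk of the instance running out of space instead:

    # Stop Cloud SQL from growing the disk automatically on high log output.
    gcloud sql instances patch my-instance --no-storage-auto-increase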

