
I have a Java app on Openshift (Tomcat 7) and I need plenty of cheap storage (TBs). Openshift itself would obviously be too expensive for that, so I was thinking of Amazon S3.

  1. What would be the optimal way of getting access to plenty of storage while keeping the app on Openshift?

  2. Is it possible to somehow connect PostgreSQL running on Openshift to Amazon S3, so that PostgreSQL would run on Openshift but store everything on Amazon S3? Basically I am looking for whatever is cheaper to use, which is why I am not sure about setting up PostgreSQL on AWS directly instead of keeping it on Openshift.

Basically, the main issue is getting plenty of storage for an app on Openshift (or other cheap hosting for a Java/Tomcat project). Which DB, technology, or service is used does not matter, as long as it is free or cheap.

  • Possible duplicate of Mysql Data Directory on S3 Commented Jul 10, 2016 at 22:26
  • Not really, I just need a way of getting plenty of storage connected to Openshift one way or another - this is the main issue of this question Commented Jul 10, 2016 at 22:41
  • @NikitaVlasenko Are you planning to use AWS S3 as the storage disk for the DB in Openshift? Commented Jul 10, 2016 at 23:02
  • What is the 50GB of user data made up of? Files? Millions of rows of data? Commented Jul 10, 2016 at 23:31

1 Answer


What would be the optimal way of getting access to plenty of storage while keeping the app on Openshift?

AWS S3 is the best option for you as far as price goes, and also from a durability and reliability point of view.
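
For illustration, here is a minimal sketch of how the Tomcat app could write files straight to S3 with the AWS SDK for Java. The bucket name, key and file path are placeholders, and credentials are assumed to come from the SDK's default provider chain.

    // Hypothetical sketch: the Tomcat app writes files straight to S3 instead of
    // keeping them on Openshift's gear storage. Bucket name, key and file path
    // are placeholders; credentials come from the SDK's default provider chain.
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    import java.io.File;

    public class S3UploadExample {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withRegion(Regions.US_EAST_1)
                    .build();

            // Store the heavy bytes under a key; the app keeps only the key.
            s3.putObject("my-app-bucket", "uploads/report.pdf",
                    new File("/tmp/report.pdf"));
        }
    }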

Is it possible to somehow connect PostgreSQL running on Openshift to Amazon S3, so that PostgreSQL would run on Openshift but store everything on Amazon S3? Basically I am looking for whatever is cheaper to use, which is why I am not sure about setting up PostgreSQL on AWS directly instead of keeping it on Openshift.

Yes, it is definitely possible to use AWS S3 with PostgreSQL running on Openshift while saving the data to S3. Follow the steps in this blog post to configure an AWS S3 bucket store for Openshift:

https://blog.openshift.com/how-to-configure-an-aws-s3-bucket-store-for-openshift/
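
A common pattern here, shown below as a hedged sketch with hypothetical table, column and bucket names, is to keep only small metadata rows in PostgreSQL on Openshift and push the large payloads to S3, storing just the S3 key in the table.

    // Hypothetical sketch: PostgreSQL on Openshift keeps small metadata rows,
    // while the large payload itself lives in S3. Table, column and bucket
    // names are placeholders.
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    import java.io.File;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class S3BackedStore {
        private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        public void saveDocument(Connection db, long userId, File payload) throws Exception {
            String s3Key = "documents/" + userId + "/" + payload.getName();

            // 1. Put the heavy bytes in S3 (cheap, durable).
            s3.putObject("my-app-bucket", s3Key, payload);

            // 2. Keep only a small row in PostgreSQL on Openshift.
            try (PreparedStatement ps = db.prepareStatement(
                    "INSERT INTO documents (user_id, s3_key, size_bytes) VALUES (?, ?, ?)")) {
                ps.setLong(1, userId);
                ps.setString(2, s3Key);
                ps.setLong(3, payload.length());
                ps.executeUpdate();
            }
        }
    }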

Going by your comments, AWS DynamoDB would be a very good choice, and for huge data storage use S3 - even AWS best practices suggest using S3 for storing large items. Check this link: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForItems.html#GuidelinesForItems.StoringInS3
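
The pattern that guideline describes can be sketched roughly as follows (hypothetical table, attribute and bucket names): the large object goes to S3, and the DynamoDB item only carries a pointer to it.

    // Hypothetical sketch of the "large items in S3" pattern: the DynamoDB item
    // stays small and only carries the S3 key of the real payload.
    // Table, attribute and bucket names are placeholders.
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    public class DynamoWithS3Pointer {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

            // The heavy payload goes to S3...
            String s3Key = "items/item-42.bin";
            s3.putObject("my-app-bucket", s3Key, new File("/tmp/item-42.bin"));

            // ...and the DynamoDB item stays small: an id plus the S3 pointer.
            Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
            item.put("itemId", new AttributeValue("item-42"));
            item.put("s3Key", new AttributeValue(s3Key));
            dynamo.putItem("Items", item);
        }
    }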

But looking at your use case, cost is going to be the most important thing you need to take care of, or it will shoot up, so using S3 is the best option for keeping the cost under control.


29 Comments

  • I believe it's highly unlikely that Postgres will be happy with its data directory being on S3. Same reasons as stackoverflow.com/questions/33295619/mysql-data-directory-on-s3.
  • Not to mention latency would be horrendous.
  • @error2007s I downvoted because you are giving bad information. You can't run a database using S3 as the "disk".
  • It's not possible for a running database to store the "active" data on S3. That's just not going to work. S3 is great for backups though.
  • S3 is not suitable for a database's primary storage. It's fine for backups, but it won't work as primary storage for MySQL, Postgres, MongoDB, etc. That said, if the 50GB of data is files like movies or something, S3 is suitable for that, because you shouldn't be storing the raw file binaries in a DB anyways.