The app I'm developing displays a large-ish (about 1M vertices), static 2D image. Due to memory limitations, I have been refilling the VBOs with new data every time the user scrolls or zooms, giving the impression that the entire image "exists" even though it doesn't.
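For concreteness, the refill step I'm doing now looks roughly like this (a minimal sketch, assuming OpenGL ES 2.0; the `FloatBuffer` is filled elsewhere with only the currently visible vertices):

```java
import java.nio.FloatBuffer;
import android.opengl.GLES20;

class StreamingUploader {
    /** Re-upload the visible window of vertex data after a scroll/zoom. */
    void refillVbo(int vboId, FloatBuffer visibleVerts) {
        visibleVerts.position(0);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
        // Orphan the old storage so the driver need not stall, then upload
        // the new window of vertex data (4 bytes per float).
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
                visibleVerts.capacity() * 4,
                null, GLES20.GL_DYNAMIC_DRAW);
        GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0,
                visibleVerts.capacity() * 4, visibleVerts);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    }
}
```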
There are two problems with this approach: 1) although the responsiveness is "good enough", it would be better if scrolling and zooming were faster and less choppy; 2) I've been sticking to the 64k vertices that a single indexed draw call can address with 16-bit indices, which puts a hard limit on how much of the image can be shown at a time. It would be nice to see more of the image, or even all of it, at once. The performance is, again, good enough for now because we are at the prototype stage and have shaped the data around these limitations, but to reach product level we will have to remove them.
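To get past the 16-bit index limit I would split the data into chunks of at most 65,536 vertices and issue one draw call per chunk, roughly like this (again a sketch, assuming ES 2.0; the class and field names are mine):

```java
import android.opengl.GLES20;

class ChunkedMesh {
    int[] vertexVbos;   // one VBO per <=64k-vertex chunk
    int[] indexIbos;    // matching 16-bit index buffer per chunk
    int[] indexCounts;  // number of indices in each chunk

    /** Draw every chunk with one glDrawElements call each. */
    void drawAll(int positionAttrib) {
        GLES20.glEnableVertexAttribArray(positionAttrib);
        for (int i = 0; i < vertexVbos.length; i++) {
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexVbos[i]);
            GLES20.glVertexAttribPointer(positionAttrib, 2,
                    GLES20.GL_FLOAT, false, 0, 0);
            GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexIbos[i]);
            GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexCounts[i],
                    GLES20.GL_UNSIGNED_SHORT, 0);
        }
    }
}
```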
Recently I discovered that with the "android:largeHeap" option I can get 256 MB of heap space on a Motorola Xoom, which means I can store the entire image in VBOs. In my ideal world I would simply hand the VBOs to the OpenGL engine once, and then either tell it that the camera has moved or use glScale/glTranslate to zoom and scroll.
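What I picture is keeping every chunk resident and expressing scroll/zoom purely as a transform, something like this (a sketch, assuming ES 2.0 where the matrix goes to a shader uniform; on ES 1.x the equivalent would be glTranslatef/glScalef on the modelview matrix):

```java
import android.opengl.Matrix;

class Camera2D {
    /** Build an MVP matrix expressing the current scroll (panX/panY) and zoom. */
    static float[] buildMvp(float panX, float panY, float zoom,
                            float viewW, float viewH) {
        float[] proj = new float[16];
        float[] model = new float[16];
        float[] mvp = new float[16];
        Matrix.orthoM(proj, 0, 0, viewW, 0, viewH, -1, 1);
        Matrix.setIdentityM(model, 0);
        Matrix.scaleM(model, 0, zoom, zoom, 1f);        // zoom
        Matrix.translateM(model, 0, -panX, -panY, 0f);  // scroll
        Matrix.multiplyMM(mvp, 0, proj, 0, model, 0);
        return mvp;
    }
}
```

Then each frame would only upload the 16 floats of the matrix (e.g. via glUniformMatrix4fv) instead of megabytes of vertex data.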
My questions are these: am I on the right track? Should I always "draw" all of the chunks and let OpenGL figure out which will actually be seen, or figure out which chunks are visible myself? Is there any difference between using something like gluLookAt and glScale/glTranslate?
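On the second question, my understanding is that the GPU clips off-screen triangles but still has to process every vertex I submit, so the CPU-side alternative I have in mind is a simple bounding-box check per chunk before drawing it (a sketch; the Chunk type and its fields are hypothetical):

```java
class Chunk {
    float minX, minY, maxX, maxY; // world-space bounding box of the chunk
}

class ChunkCuller {
    /** True if the chunk's bounding box overlaps the current view rectangle. */
    static boolean isVisible(Chunk c, float viewLeft, float viewBottom,
                             float viewRight, float viewTop) {
        return c.maxX >= viewLeft && c.minX <= viewRight
            && c.maxY >= viewBottom && c.minY <= viewTop;
    }
}
```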
I don't care about aspect-ratio distortion (the image is mathematically generated, not a photo), it is much wider than it is high, and in the future the vertex count could get much, much larger (e.g. 60M). Thanks for your time.