I am trying to translate an old (2013) bit of code from Apple from Objective-C to Swift, with some modifications I need for my simulation.
The code I am translating can be found here.
To make the translation easier I have broken the simulation into parts. The part I am working on now (and am stuck on) is this: I simply need to draw your general full-screen texture-coordinate quad, except broken up into a grid. With a grid scale factor of 1, every pixel on the screen gets its own triangle in the mesh; with a scale factor of 10 there is a triangle only every 10 pixels. Essentially this factor is the resolution of my simulation.
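To put rough numbers on it (using the 568 x 320 screen from the init below): a scale factor of 10 means grid points every 10 pixels, so after the +1s in the init the vertex grid is 57 x 33 instead of one vertex per pixel.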
My problem is these images (the numbers indicate the scale factor of the pond):
For some reason, rather than drawing a full-screen texture-coordinate quad, it draws this weirdness (some of these honestly belong in a modern art museum). I really have no clue where to look in my code for this issue; my guess is that it is the indices.
You would be able to see the individual triangles at the top of the first and second images, but I reduced the resolution so the screenshots weren't difficult to upload to SO.
So first off, here is how I define all of the arrays that make up this simulation:
    var rSimWidth: Int = 0
    var rSimHeight: Int = 0
    var mesh: [GLfloat] = []
    var tex: [GLfloat] = []
    var ind: [Indicie] = []
    var realWidth: Int = 0
    var realHeight: Int = 0

    //In this case width = 568, height = 320, mpp varies (see the top of each picture)
    init(width: Int, height: Int, mpp: Int)
    {
        rSimWidth = width / mpp
        rSimHeight = height / mpp
        rSimWidth += 1
        rSimHeight += 1
        realWidth = width
        realHeight = height

        mesh = [GLfloat](count: rSimWidth * rSimHeight * 2, repeatedValue: 0)
        tex = [GLfloat](count: rSimWidth * rSimHeight * 2, repeatedValue: 0)
        ind = [Indicie](count: (rSimHeight - 1) * (rSimWidth * 2 + 2), repeatedValue: 0)

        print("Screen is (\(width) x \(height))\tPond is (\(rSimWidth) x \(rSimHeight))\tScreenSA: \(width * height)\tPond sa: \(rSimWidth * rSimHeight)\tMP: \(mpp) \((rSimWidth * rSimHeight) * mpp)")
    }

Then I fill up the indices. I had difficulties with this because of the data type: I believe indices are supposed to be GLushort, but on my system its maximum value is 65,535, and the index generation quickly overflowed that, so I had to switch to Int. There is a typealias "Indicie" so I can change the index data type easily. What I find odd is that when I ramped up the simulation resolution in Apple's code I never hit an overflow issue.
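For what it is worth, this is my current mental model of how the index element type has to line up with glDrawElements (just a sketch, not my actual code; as far as I know GL_UNSIGNED_INT element indices need OpenGL ES 3.0 or the OES_element_index_uint extension):

    // Sketch: the element type stored in ind and the type passed to glDrawElements
    // must describe the same thing, otherwise the GPU walks the index buffer wrongly.
    typealias Indicie = GLushort                  // 16-bit indices, max vertex index 65,535
    let elementType = GLenum(GL_UNSIGNED_SHORT)   // matches GLushort

    // For grids with more than 65,536 vertices, 32-bit indices would be:
    // typealias Indicie = GLuint
    // let elementType = GLenum(GL_UNSIGNED_INT)

With that noted, the actual index generation looks like this: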
    var index: Int = 0
    for i in 0..<rSimHeight - 1
    {
        for j in 0..<rSimWidth
        {
            if (i % 2 == 0) //i is even
            {
                if (j == 0)
                {
                    //DEGENERATE TRIANGLE
                    ind[index] = Indicie(i * rSimWidth + j)
                    index += 1
                }

                ind[index] = Indicie(i * rSimWidth + j)
                index += 1
                ind[index] = Indicie((i + 1) * rSimWidth + j)
                index += 1
                //(114 + 1) * 569 + 101

                if (j == rSimWidth - 1)
                {
                    //DEGENERATE TRIANGLE
                    ind[index] = Indicie((i + 1) * rSimWidth + j)
                    index += 1
                }
            }
            else
            {
                if (j == 0)
                {
                    //DEGENERATE TRIANGLE
                    ind[index] = Indicie((i + 1) * rSimWidth + j)
                    index += 1
                }

                ind[index] = Indicie((i + 1) * rSimWidth + j)
                index += 1
                ind[index] = Indicie(i * rSimWidth + j)
                index += 1

                if (j == rSimWidth - 1)
                {
                    //DEGENERATE TRIANGLE
                    ind[index] = Indicie(i * rSimWidth + j)
                    index += 1
                }
            }
        }
    }
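To convince myself the strip is at least internally consistent, this is the kind of CPU-side check I run right after that loop (a sketch only; nothing here is part of the real drawing path):

    // Sketch: every slot should be filled and every index should point at a real vertex.
    let vertexCount = rSimWidth * rSimHeight
    assert(index == ind.count, "filled \(index) of \(ind.count) index slots")
    assert(Int(ind.maxElement()!) < vertexCount, "an index points past the last vertex")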
In case you wish to see the position and texture-coordinate code:

    for yy in 0..<rSimHeight
    {
        let y = GLfloat(yy)
        for xx in 0..<rSimWidth
        {
            let x = GLfloat(xx)
            let index = (yy * rSimWidth + xx) * 2
            tex[index] = x / GLfloat(rSimWidth - 1)
            tex[index + 1] = y / GLfloat(rSimHeight - 1)
            mesh[index] = tex[index] * GLfloat(realWidth)
            mesh[index + 1] = tex[index + 1] * GLfloat(realHeight)
        }
    }

Here is the drawing code:
    glUseProgram(shade.progId)

    let posLoc = GLuint(glGetAttribLocation(shade.progId, "pos"))
    let texLoc = GLuint(glGetAttribLocation(shade.progId, "tc"))

    glBindBuffer(GLenum(GL_ARRAY_BUFFER), texVBO)
    glBufferData(GLenum(GL_ARRAY_BUFFER), sim.tex.count * sizeof(GLfloat), sim.tex, GLenum(GL_DYNAMIC_DRAW))
    glVertexAttribPointer(texLoc, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 2, BUFFER_OFFSET(0))
    glEnableVertexAttribArray(texLoc)

    glBindBuffer(GLenum(GL_ARRAY_BUFFER), posVBO)
    glVertexAttribPointer(posLoc, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 2, BUFFER_OFFSET(0))
    glEnableVertexAttribArray(posLoc)

    let uniOrtho = glGetUniformLocation(shade.progId, "matrix")
    glUniformMatrix4fv(uniOrtho, 1, GLboolean(GL_FALSE), &orthographicMatrix)

    glBindBuffer(GLenum(GL_ELEMENT_ARRAY_BUFFER), indVBO)
    glDrawElements(GLenum(GL_TRIANGLE_STRIP), GLsizei(sim.ind.count), GLenum(GL_UNSIGNED_SHORT), nil)

    glBindBuffer(GLenum(GL_ARRAY_BUFFER), 0)
    glBindBuffer(GLenum(GL_ELEMENT_ARRAY_BUFFER), 0)
    glDisableVertexAttribArray(posLoc)
    glDisableVertexAttribArray(texLoc)

I really have zero clue as to why something THIS weird draws rather than something that makes sense. Any suggestions as to what it might be, or any tips, are appreciated!
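One thing I keep second-guessing is the stride argument to glVertexAttribPointer. As far as I know it is measured in bytes, with 0 meaning tightly packed, so for two packed GLfloats per vertex my understanding is the call would look like this (a sketch of what I am comparing my code against, not necessarily the actual problem):

    // Sketch: stride is in bytes; 0 tells GL the 2-float attributes are tightly packed
    // (an explicit stride would be 2 * sizeof(GLfloat), i.e. 8 bytes, not 2).
    glVertexAttribPointer(texLoc, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, BUFFER_OFFSET(0))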
Extra note: I am so sorry, but Swift likes to think it is special, so it ditched C-style for loops. Basically, for all of these loops think:
    for (var i = <value before ..<>; i < <value after ..<>; <increment happens automatically>)
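Concretely, the first loop above maps like this (the C-style equivalent from the original Apple code is shown as a comment):

    // Swift 2 range loop used throughout this question...
    for i in 0..<rSimHeight - 1 {
        // loop body
    }
    // ...iterates exactly like the C-style loop:
    // for (int i = 0; i < rSimHeight - 1; i++) { ... }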
EDIT: Here is the matrix code:

    var orthographicMatrix: [GLfloat] = []

    func buildMatrix()
    {
        orthographicMatrix = glkitmatrixtoarray(GLKMatrix4MakeOrtho(0, GLfloat(width), 0, GLfloat(height), -100, 100))
        //Storage.upScaleFactor
    }

    func glkitmatrixtoarray(mat: GLKMatrix4) -> [GLfloat]
    {
        var buildme: [GLfloat] = []
        buildme.append(mat.m.0)
        buildme.append(mat.m.1)
        buildme.append(mat.m.2)
        buildme.append(mat.m.3)
        buildme.append(mat.m.4)
        buildme.append(mat.m.5)
        buildme.append(mat.m.6)
        buildme.append(mat.m.7)
        buildme.append(mat.m.8)
        buildme.append(mat.m.9)
        buildme.append(mat.m.10)
        buildme.append(mat.m.11)
        buildme.append(mat.m.12)
        buildme.append(mat.m.13)
        buildme.append(mat.m.14)
        buildme.append(mat.m.15)
        return buildme
    }

Now the issue is not the abstract art, but that the rectangle only goes halfway no matter the scale factor. Notice the jagged edge.
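To rule the projection in or out, this is the kind of CPU-side check I can do (a sketch; width and height are the same values buildMatrix uses):

    import GLKit

    // Sketch: for GLKMatrix4MakeOrtho(0, width, 0, height, -100, 100) I expect
    // scale terms of 2/width and 2/height and x/y translation terms of -1,
    // i.e. pixel (0, 0) -> clip (-1, -1) and pixel (width, height) -> clip (1, 1).
    let ortho = GLKMatrix4MakeOrtho(0, GLfloat(width), 0, GLfloat(height), -100, 100)
    print("m0 = \(ortho.m.0) (expect \(2 / GLfloat(width)))")
    print("m5 = \(ortho.m.5) (expect \(2 / GLfloat(height)))")
    print("m12 = \(ortho.m.12), m13 = \(ortho.m.13) (expect -1.0, -1.0)")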
In Apple's frame debugger you can see the outline of all the triangles it draws; it looks like this:
Could it be this line that is causing all the trouble: let index = (yy * rSimWidth + xx) * 2?
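Worked by hand for a small case, say rSimWidth = 4: vertex (xx, yy) = (2, 1) gives index = (1 * 4 + 2) * 2 = 12, so its texture coordinate goes into tex[12] and tex[13] and its position into mesh[12] and mesh[13], which is the two-floats-per-vertex layout the arrays were sized for.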
A test: I tried the code out by passing 3, 3, and 1 into the initializer. In theory this should make a 3x3-pixel square, however it just made a mess. Here is all the data on the CPU, and here (according to the frame capture, so take it with a grain of salt) is how the GPU understands it.
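For reference, plugging 3, 3, and 1 into the init above gives rSimWidth = rSimHeight = 3 / 1 + 1 = 4, so I expect 16 vertices, mesh and tex arrays of 32 floats each, and (4 - 1) * (4 * 2 + 2) = 30 indices for the strip.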