The other answers have good advice: you can consume less memory and time by keeping track of only the information you need, your naming could be improved, and you could refactor the problem into smaller methods.
I would add to that one small performance consideration. Think about what this does:
if (matrix[i][j] == 0) { flagArr[i][j] = true; }
or in the more efficient algorithm:
if (matrix[r][c] == 0) { rowzero[r] = true; colzero[c] = true; }
I am going to propose a micro-optimization; this might not be justified because (1) your program might already be fast enough overall, (2) this optimization might be in a portion of the program that is not significantly contributing to the total performance cost, and (3) the optimization might actually make things worse; there's no substitute for measuring carefully to find out. All that said, I would be inclined to write these as:
flagArr[i][j] = (matrix[i][j] == 0);
or
rowzero[r] |= (matrix[r][c] == 0); colzero[c] |= (matrix[r][c] == 0);
A common belief amongst programmers is that code does work, so if I can conditionally avoid the work, the program will be faster. Though this is often true, it is also true that conditions are themselves work, and any time the branch predictor on the chip predicts wrong, you potentially mess up the processor state. It can therefore be a good idea to do "unnecessary" work unconditionally if doing so is extremely cheap, thereby skipping both the cost of the condition and the risk of a branch misprediction. Setting bools to true or false is insanely fast on modern architectures; the work you are trying to save here is just about the fastest thing the machine can possibly do. In many cases it is therefore completely worthwhile to simply do the work unconditionally rather than spending two nanoseconds trying to avoid work that takes one nanosecond.
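To make that concrete, here is a minimal sketch of what the branchless flag-filling loop might look like; the method name and surrounding scaffolding are my own invention, and I am assuming the rowzero/colzero arrays from the improved algorithm above:

    static void markZeroRowsAndColumns(int[][] matrix) {
        int rows = matrix.length;
        int cols = matrix[0].length;
        boolean[] rowzero = new boolean[rows]; // Java initializes these to false
        boolean[] colzero = new boolean[cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Compute the comparison once and fold it into both flags
                // with |=, so the loop body contains no branch at all.
                boolean isZero = (matrix[r][c] == 0);
                rowzero[r] |= isZero;
                colzero[c] |= isZero;
            }
        }
        // ... then use rowzero and colzero to clear the marked rows and columns ...
    }

Note that hoisting the comparison into a local also avoids reading matrix[r][c] twice, which the two-statement version above would otherwise do.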