In my app I have to draw images on a view, in real time, at positions and scales that change frequently over time. These images are a subset of those contained in a dictionary. Here is the code, somewhat condensed:
```objc
- (void)drawObjects:(NSArray *)objects withImages:(NSDictionary *)images <etc> {
    // Get graphics context and save it
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // Display objects in reverse order
    for (int i = (int)[objects count] - 1; i >= 0; i--) {
        MyObject *object = [objects objectAtIndex:i];

        // Check whether the object should be visible
        if (<test to check whether the image shall be visible or not>) {
            // Get object image
            UIImage *image = <retrieve image from "images", based on a key stored in "object">;

            // Draw image, centered on (x, y)
            float x = <calculate image destination x>;
            float y = <calculate image destination y>;
            float imageWidth = <calculate image destination width>;
            float imageHeight = <calculate image destination height>;
            CGRect imageRect = CGRectMake(x - imageWidth / 2,
                                          y - imageHeight / 2,
                                          imageWidth,
                                          imageHeight);
            [image drawInRect:imageRect];
        }
    }

    // Restore graphics context
    CGContextRestoreGState(context);
}
```

My problem is that this code is slow: the loop takes about 700 to 800 ms when the number of images is around 15 (iPhone 4, iOS 4.3.5). I have experimented a bit, and the bottleneck seems to be the `drawInRect:` call, since everything speeds up dramatically if I exclude it.
Does anyone have suggestions to offer?