
I need to process (export subsections of) a large image (33600x19200) and I'm not sure how to start.

I've tried simply allocating an image using openframeworks but I got this error:

terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc 

I'm not experienced with processing images this large. Where should I start?

  • The image would need approximately 2GB of memory if you want to fully load it. Commented Jan 13, 2014 at 0:35
  • You don't mention what OS or bit depth the image is. If it is an RGB 8-bit-per-color file, it should fit in 1.8 GB of RAM. If it has alpha, 2.4; if it's 16 bits per color, double those. Any of these should fit in RAM on a modern machine, so chances are you have some sort of OS-imposed per-process limit. Commented Jan 13, 2014 at 2:12
  • @MichaelGraff good point, sorry, forgot to mention: I'm dealing with a .tif file on Windows: ibm/windows byte order, rgb colourspace, 8-bit per channel, 33600x19200 pixels, and the filesize is actually exactly 1.8 GB on disk Commented Jan 13, 2014 at 11:59

3 Answers


I maintain vips, an image processing library which is designed to work on large images. It will automatically load, process and write an image in sections using many CPU cores. You can write your algorithm in C, C++, Python, PHP and Ruby. It's fast, free and cross-platform.

There's a very simple benchmark on the vips website: load an image, crop 100 pixels off every edge, shrink by 10%, sharpen, and save. For a 5k x 5k RGB TIF on my laptop, ImageMagick takes 1.6s and 500MB of RAM, vips takes 0.4s and 25MB.

ImageMagick is great, of course, it's free, well-documented, produces high-quality output and it's easy to use. vips is faster on large images.


5 Comments

awesome! that looks like exactly what I was looking for back then. I'll give this a go. At that time I ended up writing a Photoshop Script to deal with it, but it was quite slow for 33600x19200 pixels. I'll try and revisit the tool, thanks for this and well done on vips(+1)
@user894763 I have a similar problem to the one above and I've decided to try vips. Unfortunately I can't find a x86 build of the DLLs, only a 64 bit version for Windows. Is 32-bit supported?
Hi @Alex, current Windows builds are only 64-bit. You'd have to make your own, or go back in time some way. It looks like 8.1 had the last official 32-bit Windows binary: vips.ecs.soton.ac.uk/supported/8.1/win32
Thank you, this is exactly what I ended up doing. The rest of the work was trying to figure out how to actually do this stuff with vips, and to be honest, I had a hard time figuring out how to process the images the way I wanted. It's a shame, really, because I LOVE this library - it's really, really fast, and has a good API when you get the hang of it. The documentation could REALLY use more sample code - googling most of the functions results in a reference to the docs and then a ghost town. Maybe too few people use the API? Still, a very good library once you figure out how to use it!
I'm glad you were able to get it working. I'd love to improve the docs, but I don't know what kind of examples would be helpful -- I've been looking at it so long it all seems obvious to me :-( If you could open an issue on the libvips tracker with a few example questions, I'll add sample code. github.com/jcupitt/libvips/issues

std::bad_alloc occurs because you don't have enough memory available to hold the whole image.

In order to work with something this big, you have to split it up, e.g. treat the picture as a set of subsections/subpictures with a well-defined size (e.g. 1000x1000) and process them one by one.

The other solution is simply to throw as much memory into your system as you can. If you have the money and the program only has to run on one specific machine, it's surely an option, but I think it's clear which of the two solutions is the better one ;)

4 Comments

so I should parse the file in chunks (header + a bunch of pixels first > process these, write results and remember where I left off, clean up, move to the next chunk and repeat)?
Exactly, this way only a small portion of the original image resides in memory.
Another solution I forgot to mention is to operate directly on the disk. This can be achieved either by normal I/O (complex for this task IMO) or by using OS-specific methods to create a memory-mapped file (en.wikipedia.org/wiki/Memory-mapped_file). The drawback, of course, is that you will take a big performance hit.
Thanks(+1), will try your first suggestion for now.

I once ran into a problem like this, and the GDAL library saved me. It provides a function, GDALDataset::RasterIO, which can read/write any part of the image at any resolution. I didn't find an equivalent in openFrameworks; maybe someone will provide one for openFrameworks.

Comments
