My Lambda function was taking about 120 ms with a 1024 MB memory size. When I checked the logs, it was using only 22 MB at most, so I tried to optimize it by reducing the memory to 128 MB.

But when I did this, the ~120 ms of processing went up to about ~350 ms, while still only 22 MB was being used.

I'm a bit confused: if I only use 22 MB, why does having 128 MB or 1024 MB available impact the processing time?
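
For context, the memory figure comes from the REPORT line that Lambda writes to CloudWatch Logs at the end of every invocation; an illustrative example (values chosen to mirror the question, request ID made up) looks like:

    REPORT RequestId: 3f8a-example  Duration: 120.41 ms  Billed Duration: 121 ms  Memory Size: 1024 MB  Max Memory Used: 22 MB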

1 Answer

The underlying CPU power is directly proportional to the memory size you select, so that memory setting effectively controls your CPU allocation as well.

That is why reducing the memory causes your Lambda function to take longer to execute.

Here is what the AWS docs for Lambda say:

Compute resources that you need – You only specify the amount of memory you want to allocate for your Lambda function. AWS Lambda allocates CPU power proportional to the memory by using the same ratio as a general purpose Amazon EC2 instance type, such as an M3 type. For example, if you allocate 256 MB memory, your Lambda function will receive twice the CPU share than if you allocated only 128 MB.
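
To see this effect in isolation, here is a minimal, hypothetical Python handler that is purely CPU-bound and uses almost no memory (the handler name and workload are illustrative, not from the original question). Deploying the same code at 128 MB and at 1024 MB and comparing the reported durations should reproduce the behavior described above:

    import time

    def handler(event, context):
        # Purely CPU-bound busy work; memory usage stays negligible,
        # so only the CPU share tied to the memory setting changes.
        start = time.perf_counter()
        total = 0
        for i in range(5_000_000):
            total += i * i
        elapsed_ms = (time.perf_counter() - start) * 1000
        return {"checksum": total, "elapsed_ms": elapsed_ms}

At 1024 MB the loop gets roughly eight times the CPU share it gets at 128 MB, so the elapsed time should shrink accordingly, even though Max Memory Used barely changes.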

3 Comments

Interesting, is there a table to compare against and optimize with? I checked but couldn't find anything of value.
You can have a look at engineering.opsgenie.com/…
@raphadko Look at the specs of the General Purpose M-family instances. It's a 4:1 ratio of GiB to vCPU (one vCPU = one hyperthread = half a core) at ~2.4 GHz, so allocating 1 GiB gets you 0.25 of a hyperthread, roughly 600 MHz; 128 MiB gets roughly 75 MHz. Only one invocation runs concurrently in each container, so this number, small as it sounds in isolation, is not shared across invocations. The difference in runtime was not 8x because part of your function's time is spent waiting rather than actively using the CPU.
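
As a quick sanity check on the arithmetic in the last comment, here is a sketch that computes the approximate effective clock share from the assumed 4 GiB-per-vCPU ratio and ~2.4 GHz clock (both figures are the commenter's assumptions, not official AWS numbers):

    # Assumptions from the comment above: ~4 GiB of memory per vCPU
    # (one hyperthread) on M-family instances, clocked around 2.4 GHz.
    GIB_PER_VCPU = 4.0
    CLOCK_MHZ = 2400.0

    def approx_cpu_mhz(memory_mib: float) -> float:
        """Rough effective CPU share, in MHz, for a given memory size."""
        vcpu_share = (memory_mib / 1024.0) / GIB_PER_VCPU
        return vcpu_share * CLOCK_MHZ

    for mib in (128, 256, 1024):
        print(f"{mib:>5} MiB ~ {approx_cpu_mhz(mib):.0f} MHz")
    # Prints: 128 MiB ~ 75 MHz, 256 MiB ~ 150 MHz, 1024 MiB ~ 600 MHz

This matches the ~8x CPU gap between 128 MB and 1024 MB, while the observed runtime gap (~3x) is smaller because part of each invocation is spent waiting rather than computing.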
