This seems counter-intuitive, but there's a logical explanation:
Reducing memory also reduces the available CPU cycles. You're paying for very short-term use of a fixed fraction of the resources of an EC2 instance, and that instance has a fixed ratio of CPU to memory.
Q: How are compute resources assigned to an AWS Lambda function?
In the AWS Lambda resource model, you choose the amount of memory you want for your function, and are allocated proportional CPU power and other resources. For example, choosing 256MB of memory allocates approximately twice as much CPU power to your Lambda function as requesting 128MB of memory and half as much CPU power as choosing 512MB of memory. You can set your memory in 64MB increments from 128MB to 1.5GB.
https://aws.amazon.com/lambda/faqs/
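If it helps to see that proportionality spelled out, here's a minimal Python sketch of what the FAQ describes; it's just arithmetic over the documented 128 MB to 1.5 GB range, not a call to any AWS API:

    # Relative CPU scales linearly with configured memory (per the FAQ),
    # over settings from 128 MB to 1536 MB in 64 MB increments.
    for memory_mb in range(128, 1536 + 1, 64):
        cpu_factor = memory_mb / 128.0  # CPU share relative to a 128 MB function
        print("%4d MB -> ~%.1fx the CPU of a 128 MB function" % (memory_mb, cpu_factor))

The point is simply that the scaling is linear: doubling the memory setting doubles the CPU share.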
So, how much CPU capacity are we talking about?
AWS Lambda allocates CPU power proportional to the memory by using the same ratio as a general purpose Amazon EC2 instance type, such as an M3 type.
http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction-function.html
We can extrapolate.
In the M3 class, the ratio of CPU to memory is essentially the same regardless of instance size, so we can work from one representative configuration:
CPU = Xeon E5-2670 v2 (Ivy Bridge) × 8 cores
Relative compute performance = 26 ECU
Memory = 30 GiB
An ECU is an EC2 Compute Unit, where 1.0 ECU is approximately equivalent to the compute capacity of a 1 GHz Opteron. It's a dimensionless quantity that simplifies comparison of the relative CPU capacity of different instance types.
So the provisioning ratios look like this:
8/30 cores per GiB
26/30 ECU per GiB
So at 512 MiB of memory, your Lambda container's share of such a machine would be...
8 ÷ 30 ÷ (1024/512) = 0.133 of one core (~13.3% CPU)
26 ÷ 30 ÷ (1024/512) = 0.433 ECU (~433 MHz equivalent)
At 128 MiB, it's only about 1/4 of that.
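If you want to play with the extrapolation, here's a rough Python sketch of the same back-of-the-envelope math; the 8/30 and 26/30 ratios and the ~1 GHz-per-ECU rule of thumb are assumptions carried over from above, not figures AWS publishes per function:

    # Back-of-the-envelope estimate of a Lambda container's CPU share,
    # assuming the M3 ratios above (8 cores and 26 ECU per 30 GiB)
    # and treating 1 ECU as roughly 1 GHz.
    CORES_PER_GIB = 8.0 / 30.0
    ECU_PER_GIB = 26.0 / 30.0

    def estimate_cpu_share(memory_mib):
        gib = memory_mib / 1024.0
        return CORES_PER_GIB * gib, ECU_PER_GIB * gib

    for memory_mib in (128, 256, 512, 1024, 1536):
        cores, ecu = estimate_cpu_share(memory_mib)
        print("%4d MiB -> ~%.1f%% of one core, ~%.0f MHz equivalent"
              % (memory_mib, cores * 100, ecu * 1000))

At 512 MiB this reproduces the ~13.3% / ~433 MHz figures above, and at 128 MiB roughly a quarter of that.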
These numbers seem really small, but they are not inappropriate for the typical Lambda use case: single-threaded, asynchronous actions that are not CPU-intensive.