75

I am developing a Lambda function that uses the ResumeParser library, which is written in Python 2.7. But when I deploy this function, including the library, to AWS, it throws the following error:

Unzipped size must be smaller than 262144000 bytes

8 Answers

49

Perhaps you did not exclude development packages, which made your package grow that big.

In my case (for NodeJS), I was missing the following in my serverless.yml:

    package:
      exclude:
        - node_modules/**
        - venv/**

See if there is something similar for Python in your case.
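For a Python project, the equivalent exclusion might look like the following sketch (the venv/ and __pycache__/ paths are assumptions about a typical virtualenv layout):

    package:
      exclude:
        - venv/**
        - '**/__pycache__/**'
        - '**/*.pyc'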


7 Comments

Wow, I spent all afternoon looking for a fix for this. Thank you so much!
Wouldn't removing "node_modules/**" result in runtime errors when your code tries to load/use the packages?
Excluding all dependencies for a Node function shouldn't really be necessary, especially with a tool like Serverless since it already excludes development dependencies. Making sure packages such as aws-sdk are installed as dev dependencies will mean that you get code completion without the bloat in your package.
@epetousis Are you sure we need to have aws-sdk as dev dependency but not as actual dependency?
@kittu it's not really documented anywhere, but aws-sdk is indeed preinstalled in the Lambda image. Installing aws-sdk as a production dependency will simply shadow the Lambda's preinstalled aws-sdk, which may or may not be desirable, for example if you need to lock a package version for some reason. There's a thread on SO that discusses the Node.js image's dependencies: stackoverflow.com/questions/53566478/…
18

The best solution to this problem is to deploy your Lambda function using a Docker container that you've built and pushed to AWS ECR. Lambda container images have a limit of 10 GB.

Here's an example using the Python-flavored AWS CDK:

    from aws_cdk import aws_lambda as _lambda

    self.lambda_from_image = _lambda.DockerImageFunction(
        scope=self,
        id="LambdaImageExample",
        function_name="LambdaImageExample",
        code=_lambda.DockerImageCode.from_image_asset(
            directory="lambda_funcs/LambdaImageExample"
        ),
    )

An example Dockerfile contained in the directory lambda_funcs/LambdaImageExample alongside my lambda_func.py and requirements.txt:

    FROM amazon/aws-lambda-python:latest
    LABEL maintainer="Wesley Cheek"
    # Note: on Amazon Linux the devel package is python3-devel (not python3-dev)
    RUN yum update -y && \
        yum install -y python3 python3-devel python3-pip gcc && \
        rm -Rf /var/cache/yum
    COPY requirements.txt ./
    RUN pip install -r requirements.txt
    COPY lambda_func.py ./
    CMD ["lambda_func.handler"]
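For completeness, a minimal lambda_func.py matching the CMD ["lambda_func.handler"] above might look like this (a sketch; the body is a placeholder for your actual logic):

    # lambda_func.py -- the entry point named in CMD ["lambda_func.handler"]
    def handler(event, context):
        # Heavy dependencies from requirements.txt can be imported at module
        # level so they are loaded once per container, not on every invocation.
        return {"statusCode": 200, "body": "Hello from a container-image Lambda"}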

Run cdk deploy and the Lambda function will be automagically bundled into an image along with its dependencies specified in requirements.txt, pushed to an AWS ECR repository, and deployed.

This Medium post was my main inspiration

Edit:
(More details about this solution can be found in my Dev.to post here)

6 Comments

cheers man, you're a lifesaver. I can confirm this does indeed work to get around the 250MB unzipped size limit for lambda. In my project I had a lot of dependencies that take up space, such as pandas and scipy; the max image size showed as 1.5 GB. I was able to deploy to AWS Lambda using the DockerImageFunction approach as outlined here.
Adding a complication like Docker is unlikely to be the best solution. There are many things to try first, including excluding development/env files.
@PhilAndrews I agree with you in many standard cases but in the case of OP and many others, their lambda function needs to hold a large library to function (like Tensorflow in my case). In this case, it's not going to be possible to remove enough development/env files to get under the size limitation and a Docker deployment will (probably) be necessary.
@WesleyCheek Yeah, you're right. Having just run into this issue trying to deploy a big model, containers are a good way to go if you truly need a large file package, such as for ML use cases. Though there may be better deployment options for ML endpoints in general.
Thanks Wesley, life saver. For those on an M1, make sure to specify platform=Platform.LINUX_ARM64 in the from_image_asset method for the cdk.
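A sketch of that last comment's suggestion, assuming CDK v2, where Platform lives in aws_cdk.aws_ecr_assets (you will likely also want the function's architecture set to ARM_64 to match):

    from aws_cdk import aws_lambda as _lambda
    from aws_cdk.aws_ecr_assets import Platform

    code = _lambda.DockerImageCode.from_image_asset(
        directory="lambda_funcs/LambdaImageExample",
        platform=Platform.LINUX_ARM64,  # build an arm64 image on Apple Silicon
    )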
13

This is a hard limit which cannot be changed:

AWS Lambda Limit Errors

Functions that exceed any of the limits listed in the previous limits tables will fail with an exceeded limits exception. These limits are fixed and cannot be changed at this time. For example, if you receive the exception CodeStorageExceededException or an error message similar to "Code storage limit exceeded" from AWS Lambda, you need to reduce the size of your code storage.

You need to reduce the size of your package. If you have large binaries, place them in S3 and download them on bootstrap. Likewise for dependencies: you can pip install or easy_install them from an S3 location, which will be faster than pulling from the pip repos.
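As an illustration, downloading a large artifact from S3 into /tmp on cold start might look like this (a sketch; the bucket and key names are hypothetical, and note that /tmp has its own size limit):

    import os
    import boto3

    s3 = boto3.client("s3")
    MODEL_PATH = "/tmp/model.bin"

    def _ensure_model():
        # Download once per container; warm invocations reuse the cached file.
        if not os.path.exists(MODEL_PATH):
            s3.download_file("my-artifacts-bucket", "models/model.bin", MODEL_PATH)

    def handler(event, context):
        _ensure_model()
        # ... load MODEL_PATH and do the real work ...
        return {"status": "ok"}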

5 Comments

Thanks, in my case, I don't think we have large binaries. The pip packages being installed are what add to the total zip size. If I were to install via pip from S3, wouldn't that still be downloaded when I package the project?
If it's not in the package, it will not be rejected by AWS; installing using pip is OK.
now I have Unzipped size must be smaller than 130091036 bytes :(
@Kennyhyun see Wesley's answer stackoverflow.com/a/71920166/3153152, the best solution nowadays is really to run docker images as lambdas, no need for workarounds.
@Raf thanks, I agree. I would like to try eks and k8s based serverless solutions as well
8

As stated by Greg Wozniak, you may just have included unnecessary directories like venv and node_modules.

package.exclude is deprecated and was removed in Serverless 4; you should use package.patterns instead:

    package:
      patterns:
        - '!node_modules/**'
        - '!venv/**'

3 Comments

Where does this file live?
@conor909 this would live wherever/however you are packaging the content for deploy. I use Travis, so this content lives in my travis.yml file.
I already had a package: exclude: and adding - "**/venv/**" totally worked, huzzah
6

Note that boto3 is included in the Lambda runtime by AWS, so you shouldn't include it explicitly in the requirements file. It is a common cause of a Lambda exceeding its size limit.
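A quick way to confirm the runtime already ships boto3 is to log its version from a function that does not package it (a minimal sketch):

    import boto3

    def handler(event, context):
        # boto3 resolves to the version preinstalled in the Lambda runtime.
        return {"boto3": boto3.__version__}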

1 Comment

Removing boto3 reduced the size of my zip archive by 120MB!
5

A workaround that worked for me: Install pyminifier:

 pip install pyminifier 

Go to the library folder that you want to zip. In my case, I wanted to zip the site-packages folder in my virtual env, so I created a site-packages-min folder at the same level as site-packages. Run the following shell script to minify the Python files and create an identical structure in the site-packages-min folder. Zip and upload these files to S3.

    #!/bin/bash
    for f in $(find site-packages -name '*.py')
    do
        ori=$f
        res=${f/site-packages/site-packages-min}
        filename=$(echo "$res" | awk -F"/" '{print $NF}')
        echo "$filename"
        path=${res%$filename}
        mkdir -p "$path"
        touch "$res"
        pyminifier --destdir="$path" "$ori" >> "$res" || cp "$ori" "$res"
    done

HTH

2 Comments

Is there a way to do this with serverless configurations? How did you upload the zip to S3?
0

In case you're using CloudFormation: in your template YAML file, make sure your CodeUri property includes only your necessary code files and does not contain things like the .aws-sam directory (which is big).
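For example, in an AWS SAM template the property might be scoped like this (a sketch; the src/ directory and handler name are assumptions about your layout):

    Resources:
      MyFunction:
        Type: AWS::Serverless::Function
        Properties:
          CodeUri: src/        # only the function code, not the project root
          Handler: app.handler
          Runtime: python3.9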


0

In case someone is still having this issue:

I was receiving this error when deploying a single function using Serverless 3.37.0 and the esbuild plugin.

I tried everything I found related to node_modules, but nothing worked. What worked for me was deleting the .zip files of the Lambda functions in my project, located in the .serverless and .esbuild/.serverless folders.
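In other words, clearing the cached build artifacts before redeploying, roughly like this (a sketch; the paths assume the default Serverless/esbuild output locations):

    rm -rf .serverless .esbuild
    npx serverless deploy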

