Building Docker Image from Existing Container

I was aware that you can build a new Docker image using a Dockerfile, and once you have the image you can spawn containers from it. However, I had never imagined a use case where you have a running container and build an image out of it.

Why is building an image out of a running container important? Let's say you downloaded an image and made changes to its configuration from inside the container using the shell. Those changes will vanish once the container is removed and recreated.

Let me show you what I am talking about.

Replicating the Problem

Let's execute the following command:

docker run --name alpine -it alpine

This downloads the alpine image and runs it in interactive mode, dropping you into a shell. Now create a file called Test.txt using the following command.

echo "Sample Text File" >> Test.txt

When you list the files using the ls command, you should see Test.txt.

Now exit the shell using the exit command. Your Docker container is now stopped. Remove the container with the following command.

docker rm alpine

This is what happens in most cloud applications: when you stop the application, the container is destroyed.

When you start the application again, it is like starting a fresh container with the docker run command. This time when you go into the shell and look for the Test.txt file you created, it is gone. You would need to recreate it.

Creating Image from Container

Now open a command prompt and execute the following command to run the alpine image.

docker run --name alpine -it alpine

Create the Test.txt file again with some content. While you are still in that shell, open another command prompt and execute the following Docker command.

docker commit alpine my-alpine

You will see a hash printed on your screen. Now list the images with the following command.

docker image ls
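You should now see my-alpine in the list alongside alpine. As a side note, docker commit can also record a message and an author with the new image (the values below are placeholders):

docker commit -m "Add Test.txt" -a "Your Name" alpine my-alpine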

Now exit the alpine shell running in the first command prompt. Also remove the alpine container.

Creating Container From My Alpine Image

Let's now create a container from the my-alpine image by running the following command.

docker run --name my-alpine -it my-alpine

Once you are in the shell, list the files. You will see Test.txt in the root folder. This file came from the new image.

What's the advantage, you ask? If you host this image as a cloud application and restart the application, your Test.txt file will persist, as it is now part of the image.

Ending Note

If you want to customize a containerized app and preserve the changes, you can do it using the docker commit command.

Closures in Python

Recently I came across Python closures. I knew closures from JavaScript; however, I didn't know closures exist in Python too.

A closure in Python is a function object that has access to variables in the outer (enclosing) function’s scope, even after the outer function has finished executing.
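Here is a minimal sketch of the idea (the names are just for illustration):

def make_greeter(name):
    greeting = "Hello, " + name  # a variable in the enclosing scope

    def greet():
        # greet closes over greeting, even after make_greeter returns
        print(greeting)

    return greet

greeter = make_greeter("Alice")
greeter()  # prints: Hello, Alice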

A concise video on closure is here on youtube.

What's the Use of a Closure?

One use case I could think of is caching. Let's say you make a call to an API and you want to cache the result and work off the cache; a closure is very useful here.

Following is an example of it.

from collections import namedtuple

def get_api_handler():
    cache = None

    if cache is None:
        print("Fetching from API and loading the cache")
        cache = "Hello World"

    def get_value():
        # get_value closes over cache from the enclosing scope
        print("Fetching from cache")
        return cache

    api_handler = namedtuple('api_handler', ['get_response'])
    return api_handler(get_value)

def main():
    api_handler = get_api_handler()
    api_handler.get_response()
    api_handler.get_response()

if __name__ == "__main__":
    main()

When the handler is created by get_api_handler(), the API is called and the cache is loaded. Every api_handler.get_response() call is then served from the cache.

I am also using another Python concept called namedtuple to treat the closure like an object. You can refer to this YouTube video for the same.

Ending Note

Closures are a very powerful feature offered by many languages, and different use cases can be implemented with their help. We explored caching as one use case. Other use cases I know of are function factories, encapsulation of private variables, and decorators.
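For instance, a function factory built on a closure could look like this (a small sketch, names of my own choosing):

def make_multiplier(factor):
    def multiply(value):
        # multiply remembers factor from the enclosing scope
        return value * factor
    return multiply

double = make_multiplier(2)
triple = make_multiplier(3)
print(double(5), triple(5))  # prints: 10 15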

Managing Command Line Arguments in Python

I recently came across a program where we had to manage command line arguments, and the easy way to do that was the argparse package.

argparse is a Python module in the standard library that provides a mechanism for parsing command-line arguments. It makes it easy to write user-friendly command-line interfaces for your Python programs.

What can you do with it? Let me show you some of the capabilities.

Command Line Help

Following is a simple program with argparse.

import argparse

def main() -> None:
    parser = argparse.ArgumentParser(
        prog='Sample Program',
        description='Demonstrated Argparse Functionality',
        epilog='Happy coding')
    parser.add_argument('-f', '--filename')  # option that takes a value
    parser.add_argument('-c', '--count')     # option that takes a value
    parser.add_argument('-v', '--verbose')   # also takes a value here; use action='store_true' for an on/off flag

    args = parser.parse_args()
    print(args.filename, args.count, args.verbose)

if __name__ == '__main__':
    main()

You can run this as a Python program. When you execute the following command, you will see:

python main.py --help
usage: Sample Program [-h] [-f FILENAME] [-c COUNT] [-v VERBOSE]

Demonstrated Argparse Functionality

options:
  -h, --help            show this help message and exit
  -f FILENAME, --filename FILENAME
  -c COUNT, --count COUNT
  -v VERBOSE, --verbose VERBOSE

Happy coding

Required Arguments

You can make certain arguments required, as shown below.

parser.add_argument('-f', '--filename', required=True)

When you execute the program without the filename argument, it will show you a message like the one below.

usage: Sample Program [-h] -f FILENAME [-c COUNT] [-v VERBOSE]
Sample Program: error: the following arguments are required: -f/--filename

Fixed Argument Values

Let's say the filename has to be "text1.txt" or "text1.csv" and cannot be any other value. You can then specify choices to restrict the values, as shown below.

parser.add_argument('-f', '--filename', required=True, choices=["text1.txt","text1.csv"]) 

If you try to run the program with an invalid choice, you will get the following error.

usage: Sample Program [-h] -f {text1.txt,text1.csv} [-c COUNT] [-v VERBOSE]
Sample Program: error: argument -f/--filename: invalid choice: 'somefile.txt' (choose from 'text1.txt', 'text1.csv')

Constant Values

Suppose I do not want to pass a value for count, and instead want it set to a constant 1 whenever the flag is present.

parser.add_argument('-c', '--count', const=1, action='store_const')

When I execute the program with the following command,

python main.py -f text1.txt -c -v "verbose"

I see the values as

text1.txt 1 verbose

Notice that I am not passing a value after -c. The constant is filled in internally.
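Related to this, argparse can also convert an argument to a specific type and fall back to a default when the option is omitted entirely. A small sketch (this variation is my own, not part of the program above):

parser.add_argument('-c', '--count', type=int, default=1)

With this, python main.py -c 5 gives you args.count as the integer 5, and leaving out -c gives you 1.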

Closing Note

Argparse has a lot of features around command line arguments. These are just a few I noticed on the surface; there are more, such as adding type safety to the arguments. I will be sure to post as I come across an interesting feature. Until then bye, see you around, and thanks for reading the blog.

SQS and Lambda on LocalStack

In my last post I showed how to install LocalStack on your local machine. The link is here.

In this post, let's create a queue which, on receiving a message, will trigger a lambda. All on LocalStack.

Creating Infrastructure

There are various ways to create the infrastructure. I am going to use the boto3 library in Python.

Copy the following code to create a queue.

import boto3

# Create a Boto3 client for SQS pointed at LocalStack.
sqs = boto3.client('sqs', endpoint_url='http://localhost:4566',
                   region_name='us-west-2',
                   aws_access_key_id="dummy",
                   aws_secret_access_key="dummy")

# Create a queue.
queue = sqs.create_queue(QueueName='input-queue')
print(queue)

Note the region is us-west-2. If you navigate into LocalStack Desktop, you should see the queue created under SQS.

Creating Lambda

Let's now create a lambda which we will eventually trigger from the queue. Create a Python file with the following content. In my case the file name is app.py.

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": "Lambda triggered from SQS"
    }

The lambda doesn't do much; it just returns a success status code. My folder structure is as shown below.

Create_infra.py is the file where I have kept the code to create the queue and the lambda, and where I bind them together.

Create a zip file from app.py.
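You can create the zip from your shell, or with a few lines of standard-library Python (a sketch, assuming app.py is in the current folder):

import zipfile

# Package app.py into app.zip for the lambda deployment
with zipfile.ZipFile("app.zip", "w") as z:
    z.write("app.py")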

Let's now write the code to create the lambda.

lambda_client = boto3.client('lambda', endpoint_url='http://localhost:4566',
                             region_name='us-west-2',
                             aws_access_key_id="dummy",
                             aws_secret_access_key="dummy")

lambda_client.create_function(
    FunctionName="test_lambda",
    Runtime='python3.8',
    Role='arn:aws:iam::000000000000:role/lambda-role',
    Handler="app.lambda_handler",
    Code=dict(ZipFile=open("app.zip", 'rb').read())
)

Note that you need to specify the zip file you created, and it needs to be in the same folder. Once the lambda is created you will see it in the Lambda section of LocalStack Desktop.

Binding Queue with Lambda

Let's now bind the queue to the lambda so that when a message arrives in the queue, the lambda is triggered. Execute the following code. Note that the function name (the name of the function we created) and the queue ARN have to be specified. You can get the ARN from the SQS section by navigating to the queue.

response = lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:sqs:us-west-2:000000000000:input-queue',
    FunctionName='test_lambda'
)
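As an aside, you can also push a test message from code instead of the desktop UI, reusing the sqs client created earlier (a quick sketch):

# Look up the queue URL and send a test message
queue_url = sqs.get_queue_url(QueueName='input-queue')['QueueUrl']
sqs.send_message(QueueUrl=queue_url, MessageBody='Hello from boto3')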

Validating the Setup

Let's now send a message into the queue from the UI. Navigate to SQS and select the input-queue.

Switch to the Messages tab.

In the bottom bar you should see the "Send Message" button. Click on it, select the Queue Url, enter the Message Body, then click Send.

The message shows up in the list; however, it will shortly be picked up by the lambda.

Now navigate to Lambda from the main dashboard and select test_lambda.

Switch to the Logs tab. You should see Start, End, and Report entries.

This means the lambda was triggered by the message we sent to the queue.

Closing Note

LocalStack is one of the easiest ways to build AWS workflows. Complex workflows can be built as POCs and then migrated to production systems.

Debugging AWS Lambda in Local

AWS Lambda is a serverless compute service provided by Amazon Web Services (AWS). It allows you to run code without provisioning or managing servers. With Lambda, you can upload your code (written in languages such as Node.js, Python, Java, Go, and more) and AWS Lambda takes care of provisioning and managing the servers, scaling automatically to handle requests.

This blog post is about running AWS Lambda locally. The following prerequisites need to be installed:

  1. sam cli – Download SAM CLI from here.
  2. VS Code
  3. Python
  4. Docker
  5. AWS Toolkit for VS Code

AWS SAM CLI

After installing run the following command to verify.

sam --version

VS Code

Create a folder and open it in VS Code. Add app.py with the following code.

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": "Hello from lambda"
    }
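Before wiring up the debugger, you can sanity-check the handler with plain Python by appending the following to app.py and running python app.py (a quick sketch):

if __name__ == "__main__":
    # Invoke the handler directly with an empty event and no context
    print(lambda_handler({}, None))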

Select Run and Debug, and click on the "create a launch.json" link. That will create a new launch.json file.

In the launch.json settings add the following content.

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "aws-sam",
            "request": "direct-invoke",
            "name": "Invoke Lambda",
            "invokeTarget": {
                "target": "code",
                "lambdaHandler": "app.lambda_handler",
                "projectRoot": "${workspaceFolder}"
            },
            "lambda": {
                "runtime": "python3.10",
                "payload": {
                    "json": {}
                }
            }
        },
        {
            "name": "Python Debugger: Current File",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal"
        }
    ]
}

Make sure to set your version of Python in the lambda runtime setting.

Validation

Now go to Run and Debug, select "Invoke Lambda" as the debug profile, and click Run.

If you don't have Docker running you will see the following error.

Error: Running AWS SAM projects locally requires Docker. Have you got it installed and running?

If everything is fine, a Docker image is built and the lambda is invoked.

You can even debug your code by setting breakpoints.

Closing Note

I started looking for ways to run my program locally without being dependent on AWS, and one of the things I came across is lambda functions. After doing some research on the internet I skimmed through several YouTube links and articles. I found the YouTube channel "Tech talk with Eric" very useful in achieving the result, and I have shared my notes from it above. I am sharing other resources below if you want to drill deeper.

Resources

Debugging AWS Lambda Locally: Step-by-Step Guide with AWS SAM CLI and VSCode (Part 1)

Locally Debug Lambda Functions with the AWS Toolkit for VS Code

AWS Links

Step-through debugging Lambda functions locally

Tutorial: Deploying a Hello World application

Hurdles in Containerizing Django API and MongoDB

In my previous post I provided the containerized Django API and MongoDB setup. In my closing note I mentioned that there were a few hurdles in getting the setup working. The link to that blog post is here.

Although I could get the API working in the development environment, I faced a few challenges when it came to containerizing it. The first and foremost was the database connection.

Problem with Db Connection

In the development environment I could connect to MongoDB via localhost because of port binding. When the API is hosted inside a container, it needs to connect to the MongoDB container via its IP, which means that every time the MongoDB container is destroyed and recreated, it gets assigned a new IP.

Docker Compose came in handy in this scenario.

Docker Compose to the Rescue

My Docker Compose file is shown below.

version: '3.7'
services:
  web:
    image: my-django-app
    ports:
      - "8000:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=user
    depends_on:
      - db
    deploy:
      resources:
        limits:
          memory: 100M
        reservations:
          memory: 20M
  db:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    deploy:
      resources:
        limits:
          memory: 100M
        reservations:
          memory: 20M

volumes:
  postgres_data:

Notice the environment section, where I set the DB_HOST and DB_NAME variables. Then there is a depends_on section listing db. db is the service name that creates the MongoDB container, and the web service will not be started until the db container is up. (Note that depends_on only waits for the container to start, not for MongoDB to be ready to accept connections.)

In order to pass the environment variables into the program, I had to change the settings.py file in the main Django project (i.e., my_django_project).

Environment Variable Settings

Following is the content I put in the settings file.

DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': os.getenv('DB_NAME'),
        'ENFORCE_SCHEMA': False,
        'CLIENT': {
            'host': 'mongodb://' + os.getenv('DB_HOST')
        }
    }
}

I installed the djongo package to interact with MongoDB. You can see the os.getenv calls that pick up the database name and host from the environment. os.getenv itself is part of the standard library; the python-dotenv package is installed so that these variables can also be loaded from a .env file in development.

At the top of settings.py there is a statement to load the environment variables.

from dotenv import load_dotenv 

load_dotenv()

I learnt this technique from a Stack Overflow thread by KrazyMax. The link to the thread is here.

I have created a .env file which contains the settings you need if you want to run the API in development mode.
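For reference, a minimal .env for development could look like this (assuming MongoDB is reachable on localhost through the compose port binding, and using the same database name as in the compose file):

DB_HOST=localhost
DB_NAME=user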

Package Related Errors

I encountered an exception as shown below.

Djongo NotImplementedError: Database objects do not implement truth value testing or bool()

For this I followed a MongoDB community thread where Chandrashekhar_Pattar suggested installing a specific version of the pymongo package.

pip install pymongo==3.12.3

Other Resources

Following are the resources I referred while implementing the integrations.

Post Method Request in Django

How to Use Django with MongoDB

MongoDB Compass Download

Stackoverflow Thread on Docker Compose

Python Django with MongoDB as Containers

The previous blog post shows how to create a Python Django API. The link is here.

I also created a Docker container hosting Python Django. The link is here.

In this blog post, using that previous knowledge, I build a Python Django app which exposes a user API: a simple API where a user can be added and listed.

You can download the code from github for your reference from here.

Background

I created a very simple API using Python Django. The API can be written in any language. It interacts with MongoDB. The setup looks something like what is shown below.

The API and MongoDB both run inside their respective containers.

How to Run the Application?

Download the code into a folder from this github link.

Once you have downloaded the code, navigate into the django_mongo_docker folder in the terminal.

Create a virtual environment and activate it.

python -m venv venv
.\venv\Scripts\activate
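The activate command above is for Windows. On Linux or macOS, the equivalent is:

source venv/bin/activate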

Install Dependencies

In order to install dependencies, run the following command.

pip install -r .\my_django_project\requirements.txt

Build Django Docker Image

Build the Docker image for Django with the following commands. You need to navigate into the folder containing the Dockerfile and then run the build command.

cd .\my_django_project\
docker build -t my-django-app .

Run the Docker Compose

Run both the MongoDB server and the Django app using the Docker Compose file with the following command.

docker-compose up

Validation

Open the browser and navigate to the URL http://localhost:8000/userapi/users

You should see the screen as shown below.

Adding User

Navigate to the URL http://localhost:8000/userapi/user

Enter the name and date of birth, and post the details. You should see the user created.

List Newly Created User

Navigate back again to the URL http://localhost:8000/userapi/users

You should see the newly created user in the list.

Conclusion

There are challenges with containerized applications. The database, when spawned as a container, runs on a different IP and port, and this configuration has to flow into the API application. I will share the challenges I faced while hosting these applications, along with some useful resources for overcoming them, in my next blog post.

Containerizing Django API

In my previous blog posts I created a Django API. The link is here.

In this post I will create a Docker container hosting the Django API from that post.

Creating the Dockerfile

Create the Dockerfile in the root directory of the application.

Put the following content into the Dockerfile.

# Dockerfile 

# The first instruction is what image we want to base our container on
# We use an official Python runtime as a parent image
FROM python:3.10

# Allows Docker to cache installed dependencies between builds
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copies the application code into the image
COPY . /code
WORKDIR /code

EXPOSE 8000

# runs the production server
ENTRYPOINT ["python", "manage.py"]
CMD ["runserver", "0.0.0.0:8000"]

If you read through the file, you'll notice we don't have a requirements.txt yet. So let's create one.

Create requirements.txt

In order to create the requirements.txt file, run the following command in the root folder.

pip freeze > requirements.txt

You should see the requirements.txt created in the root folder.

Building the Docker Image

Now let's build the Docker image with the following command.

docker build -t python-django-app .

Once the image is built, let's go ahead and spawn the container.

Running the App

Run the following command to run the Django API inside the container.

docker run -it -p 8000:8000 python-django-app

Validating the Setup

Now open the URL http://localhost:8000/sampleapi/ in the browser. You should see the API respond.

Conclusion

Containerization of apps has become popular with the advent of cloud platforms. Containers are an easy way to run applications without the need for heavy resources or the hassle of setup. In this blog we hosted an API inside a Docker container to cover the basics of hosting a Django app.

Dependency Injection in Python

Dependency Injection helps in keeping a project modular, composable, testable, and clean. Its real value shows as the project evolves: the cost of making code changes stays minimal, as opposed to a project that did not follow a modular design.

Following is Python code using the injector package. Before you run the program, add injector via pip install.

from injector import inject, Injector, Module, singleton

class Engine:
    def start(self):
        print("Engine started")

class Car:
    @inject
    def __init__(self, engine: Engine):
        self.engine = engine

    def start(self):
        print("Car starting...")
        self.engine.start()

class CarModule(Module):
    def configure(self, binder):
        binder.bind(Engine, to=Engine, scope=singleton)

# Using dependency injection with injector
injector = Injector(modules=[CarModule()])
car = injector.get(Car)
car.start()

Car with an Engine

The program above is simple: we have a Car, an Engine, and a CarModule which binds the engine into the car.

When you run the program you will see the following output.

Car starting... 
Engine started

New Feature Added

The car was fine with the given engine. However, the manufacturers decided to add a V8 engine to make the car faster. With dependency injection in place, the approach is to swap in the V8 engine in the car module, like this:

from injector import inject, Injector, Module, singleton

class Engine:
    def start(self):
        print("Engine started")

class V8Engine:
    def start(self):
        print("V8 Engine started")

class Car:
    @inject
    def __init__(self, engine: Engine):
        self.engine = engine

    def start(self):
        print("Car starting...")
        self.engine.start()

class CarModule(Module):
    def configure(self, binder):
        # Bind the Engine interface to the new V8 implementation
        binder.bind(Engine, to=V8Engine, scope=singleton)

# Using dependency injection with injector
injector = Injector(modules=[CarModule()])
car = injector.get(Car)
car.start()

If you look at the program, a new class called V8Engine is introduced, and the new engine is bound in CarModule. Note that the Car class itself did not change at all.

On running the program, following is the output.

Car starting... 
V8 Engine started

Conclusion

Dependency injection and the SOLID principles make code robust and lower the cost of development. As shown, replacing an existing component is far easier than it would be without DI. In the real world, mocks and stubs replace real components that talk to external dependencies during unit testing. This blog is a quick look at DI in action in Python.
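To illustrate that testing point, here is a minimal sketch (my own addition, reusing the Engine and Car classes and the same injector API from above) of swapping in a fake engine for a unit test:

from injector import Injector, Module, singleton

class FakeEngine:
    def __init__(self):
        self.started = False

    def start(self):
        # Record the call instead of starting a real engine
        self.started = True

class TestModule(Module):
    def configure(self, binder):
        binder.bind(Engine, to=FakeEngine, scope=singleton)

injector = Injector(modules=[TestModule()])
car = injector.get(Car)
car.start()
assert injector.get(Engine).started  # the car really started its engine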

Keycloak IAM with Python

In my last blog post I gave an introduction to Keycloak IAM. You can find the post here.

I came across a nice blog post which shows interactions with Keycloak IAM. It shares insights into how to connect to and work with Keycloak using Python.

I have created a Jupyter notebook in case you want to start working with Keycloak. The link to GitHub is here.

Gotchas

While I was trying to create a user, I sometimes got a 401 Unauthorized response. This was mainly due to token expiry. You can go to the "Get Access Token" step in the Jupyter notebook and execute it once; it will refresh the token, after which you can create the client or user.