Closures in Python

Recently I came across closures in Python. I knew closures from JavaScript, but I didn't know that closures exist in Python too.

A closure in Python is a function object that has access to variables in the outer (enclosing) function’s scope, even after the outer function has finished executing.
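A minimal sketch of that definition in action: the inner function keeps access to a variable from the enclosing scope even after the outer function has returned.

```python
def outer():
    message = "Hello from the enclosing scope"

    def inner():
        return message  # 'message' is captured from outer()'s scope

    return inner

greet = outer()       # outer() has finished executing here
print(greet())        # Hello from the enclosing scope
```

Calling `greet()` still works because the returned function carries its enclosing scope with it.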

A concise video on closures can be found here on YouTube.

What's the Use of a Closure?

One use case I could think of is caching. Let's say you make a call to an API and you want to cache the result and work off the cache; a closure is very useful here.

Following is an example of it.

from collections import namedtuple

def get_api_handler():
    cache = None

    def get_value():
        nonlocal cache
        if cache is None:
            print("Fetching from API and loading the cache")
            cache = "Hello World"  # stand-in for the real API response
        else:
            print("Fetching from cache")
        return cache

    api_handler = namedtuple('api_handler', ['get_response'])
    return api_handler(get_value)

def main():
    api_handler = get_api_handler()
    api_handler.get_response()
    api_handler.get_response()

if __name__ == "__main__":
    main()

The first time you call api_handler.get_response(), the API is called and the cache is loaded. Subsequent requests are served from the cache.

I am also using another Python concept called namedtuple to treat the closure like an object. You can refer to this YouTube video for the same.

Ending Note

Closures are a powerful feature offered by many languages, and different use cases can be implemented with them. We explored caching as one use case; others I know of are function factories, encapsulation of private variables, and decorators.
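As an illustration of one of those other use cases, a function factory is just a closure over the factory's argument; a small sketch:

```python
def make_multiplier(factor):
    # Each returned function closes over its own 'factor'.
    def multiply(x):
        return x * factor
    return multiply

double = make_multiplier(2)
triple = make_multiplier(3)
print(double(5))  # 10
print(triple(5))  # 15
```

Each call to the factory produces an independent function with its own captured value.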

Managing Command Line Arguments in Python

I recently came across a program where we had to manage command line arguments, and the easy way to do that was the argparse package.

argparse is a Python module in the standard library that provides a mechanism for parsing command-line arguments. It makes it easy to write user-friendly command-line interfaces for your Python programs.

What can you do with it? Let me show you some of the capabilities.

Command Line Help

Following is a simple program with argparse.

import argparse

def main() -> None:
    parser = argparse.ArgumentParser(
        prog='Sample Program',
        description='Demonstrated Argparse Functionality',
        epilog='Happy coding')
    parser.add_argument('-f', '--filename')  # option that takes a value
    parser.add_argument('-c', '--count')     # option that takes a value
    parser.add_argument('-v', '--verbose')   # option that takes a value

    args = parser.parse_args()
    print(args.filename, args.count, args.verbose)

if __name__ == '__main__':
    main()

You can run this as a normal Python program. When you execute the following command, you will see:

python main.py --help
usage: Sample Program [-h] [-f FILENAME] [-c COUNT] [-v VERBOSE]

Demonstrated Argparse Functionality

options:
  -h, --help            show this help message and exit
  -f FILENAME, --filename FILENAME
  -c COUNT, --count COUNT
  -v VERBOSE, --verbose VERBOSE

Happy coding

Required Arguments

You can mark certain arguments as required, as shown below.

parser.add_argument('-f', '--filename', required=True)

When you execute the program without a filename, it will show a message like the one below.

usage: Sample Program [-h] -f FILENAME [-c COUNT] [-v VERBOSE]
Sample Program: error: the following arguments are required: -f/--filename

Fixed Argument Values

Let's say the filename has to be "text1.txt" or "text1.csv" and cannot be any other value. Then you can specify choices to restrict the values, as shown below.

parser.add_argument('-f', '--filename', required=True, choices=["text1.txt","text1.csv"]) 

If you try to run the program with invalid choice, you will get the following error.

usage: Sample Program [-h] -f {text1.txt,text1.csv} [-c COUNT] [-v VERBOSE]
Sample Program: error: argument -f/--filename: invalid choice: 'somefile.txt' (choose from 'text1.txt', 'text1.csv')

Constant Values

Suppose I do not want to pass a value for count, and instead want a default count of 1.

parser.add_argument('-c', '--count', const=1, action='store_const')

When I execute the program with the following command

python main.py -f text1.txt -c -v "verbose"

I see the values as

text1.txt 1 verbose

Notice that I am not passing a value after -c; the constant is filled in internally.
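On a similar note, a true on/off flag such as --verbose can be declared with action='store_true' so it takes no value either; a small variation on the program above:

```python
import argparse

parser = argparse.ArgumentParser(prog='flag-example')
parser.add_argument('-v', '--verbose', action='store_true')

# Passing -v sets verbose to True; omitting it leaves it False.
print(parser.parse_args(['-v']).verbose)  # True
print(parser.parse_args([]).verbose)      # False
```

With this, `python main.py -v` works without the extra "verbose" string we passed earlier.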

Closing Note

Argparse has many more features around command line arguments and execution; these are just the ones I noticed on the surface. There are others, like adding type safety to the arguments. I will be sure to post as I come across interesting features. Until then, bye, see you around, and thanks for reading the blog.
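As a quick taste of that type safety, the type= parameter converts the raw string before it lands in the namespace (a small sketch; prog name is made up):

```python
import argparse

parser = argparse.ArgumentParser(prog='typed-example')
# type=int converts '5' into the integer 5; bad input becomes a parse error.
parser.add_argument('-c', '--count', type=int, default=1)

args = parser.parse_args(['-c', '5'])
print(args.count + 1)  # 6 -- count is an int, not the string '5'
```

Without type=int, `args.count + 1` would raise a TypeError because the value would be a string.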

Being Frugal with Containers

In my previous post I created an API with MongoDB as the database. Link in here. The API has two methods, GetUser and PostUser. With two containers, one for the API and one for MongoDB, I got interested in finding the minimal configuration with which they can function.

I tried a few memory and CPU settings and arrived at the following configuration. The containers take some time, around 3 to 4 minutes, before the API goes live because of the low CPU and memory.

API - CPU 0.01 and Memory 40M
MongoDB - CPU 0.03 and Memory 50M

For the API, CPU is limited to 1% of one CPU core and memory to 40 MB.

MongoDB is limited to 3% of one CPU core and 50 MB of memory. I had to disable logging for MongoDB in order to go from 4% CPU down to 3%.

Following is the output of docker stats.

Looking at memory, I am at ~99% usage while CPU usage is minimal. But if I lower the CPU limit further, the program may not start. I guess a minimum amount of CPU is needed to start the program, rather than to process requests.

With my configuration, following are the Docker console stats.

CPU max shows 1.18% of 1000%, and memory stayed at 89 MB.

Ending Note

I am not sure why CPU usage stays low despite my reducing its limit. This is something I am still looking into, and I will share the result as soon as I know what is happening.

Hurdles in Containerizing Django API and Mongodb

In my previous post I containerized the Django API and MongoDB, and in the closing note I mentioned that there were a few hurdles in getting the setup working. Link to my blog post is here.

Although I could get the API working in the development environment, I faced a few challenges when it came to containerizing it. The first and foremost was the database connection.

Problem with Db Connection

In the development environment I could connect to MongoDB via localhost because of port binding. Once the API is hosted inside a container, it needs to connect to the MongoDB container via its IP, which means that every time the MongoDB container is destroyed and recreated, it will be assigned a new IP.

Docker compose came handy in this scenario.

Docker Compose to the Rescue

My docker compose file looked like shown below.

version: '3.7'
services:
  web:
    image: my-django-app
    ports:
      - "8000:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=user
    depends_on:
      - db
    deploy:
      resources:
        limits:
          memory: 100M
        reservations:
          memory: 20M
  db:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    deploy:
      resources:
        limits:
          memory: 100M
        reservations:
          memory: 20M

volumes:
  postgres_data:

In the highlighted part, I have set the environment variables DB_HOST and DB_NAME. Then there is a depends_on section where I have mentioned db. db is the name of the service which creates the MongoDB container, and the web service will not be started until the db container has started.

In order to pass the environment variables into the program I had to change the settings.py file in the main django project (i.e., my_django_project).

Environment Variable Settings

Following is the content that is put in the settings file.

DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': os.getenv('DB_NAME'),
        'ENFORCE_SCHEMA': False,
        'CLIENT': {
            'host': 'mongodb://' + os.getenv('DB_HOST')
        }
    }
}

I installed the djongo package to interact with MongoDB. You can see the os.getenv calls setting the database name and the host. os.getenv itself comes from the standard library; the python-dotenv package is installed so that variables from a .env file can be loaded into the environment.
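A quick sketch of that environment-variable lookup in isolation (the DB_HOST value here simulates what docker compose, or load_dotenv, puts into the process environment):

```python
import os

# Simulate what docker compose sets for the web service.
os.environ["DB_HOST"] = "db"

# os.getenv reads the process environment; the second argument
# is a fallback for development runs outside the container.
host = "mongodb://" + os.getenv("DB_HOST", "localhost")
print(host)  # mongodb://db
```

With the fallback, the same settings code works both inside the container (host "db") and on a developer machine (host "localhost").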

At the top of settings.py there is a statement to load the environment variables.

from dotenv import load_dotenv 

load_dotenv()

I learnt this technique from a Stack Overflow answer by KrazyMax. Link to the thread is here.

I have created a .env file which contains the settings, in case you want to run the API in development mode.

Package Related Errors

I encountered an exception as shown below.

Djongo NotImplementedError: Database objects do not implement truth value testing or bool()

For this I followed this MongoDB community thread, where Chandrashekhar_Pattar suggested installing a specific version of the pymongo package.

pip install pymongo==3.12.3

Other Resources

Following are the resources I referred to while implementing the integration.

Post Method Request in Django

How to Use Django with MongoDB

MongoDB Compass Download

Stackoverflow Thread on Docker Compose

Python Django with MongoDb as Containers

My previous blog post shows how to create a Python Django API. Link in here.

I also created a docker container hosting Python Django. Link in here.

In this blog post, using the previous knowledge, I build a Python Django app which exposes a user API: a simple API where a user can be added and listed.

You can download the code from github for your reference from here.

Background

I created a very simple API using Python Django (the API could be written in any language). The API interacts with MongoDB. The setup looks something like what is shown below.

The API and MongoDB will both run inside their respective containers.

How to Run the Application?

Download the code into a folder from this github link.

Once you have downloaded the code, navigate into the django_mongo_docker folder in the terminal.

Create a virtual environment and activate it.

python -m venv venv
.\venv\Scripts\activate

Install Dependencies

In order to install dependencies, run the following command.

pip install -r .\my_django_project\requirements.txt

Build Django Docker Image

Build the Django docker image with the following command. You need to navigate into the folder containing the Dockerfile and execute the build command.

cd .\my_django_project\
docker build -t my-django-app .

Run the Docker Compose

Run both the MongoDB server and the Django app using the docker compose file with the following command.

docker-compose up

Validation

Open the browser and navigate to the URL http://localhost:8000/userapi/users

You should see the screen as shown below.

Adding User

Navigate to the url http://localhost:8000/userapi/user

Enter the name and date of birth, and post the details. You should see user created.

List Newly Created User

Navigate back again to the URL http://localhost:8000/userapi/users

You should see

Conclusion

There are challenges with containerized applications: the database container, when spawned, runs on a different IP and port, and this configuration has to go into the API application. I will share the challenges I faced while hosting these applications, and some useful resources for overcoming them, in my next blog post.

Containerizing Django API

In my previous blog post I created a Django API. The link is here.

In this post I will be creating a docker container hosting the django API from the earlier post.

Creating the Dockerfile

Create the Dockerfile in the root directory of the application.

Put the following content into the Dockerfile.

# Dockerfile

# The first instruction is what image we want to base our container on
# We use an official Python runtime as a parent image
FROM python:3.10

# Allows docker to cache installed dependencies between builds
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copies the application code into the image
COPY . /code
WORKDIR /code

EXPOSE 8000

# Runs the Django development server
ENTRYPOINT ["python", "manage.py"]
CMD ["runserver", "0.0.0.0:8000"]

If you read through the file, you will notice we don't have a requirements.txt yet. So let's create one.

Create requirements.txt

In order to create the requirements.txt file, run the following command in the root folder.

pip freeze > requirements.txt

You should see the requirements.txt created in the root folder.

Building the Docker Image

Now let's build the docker image with the following command.

docker build -t python-django-app .

Once the image is built, let's go ahead and spawn the container.

Running the App

Run the following command to run the django api inside the container.

docker run -it -p 8000:8000 python-django-app

Validating the Setup

Now open the URL http://localhost:8000/sampleapi/ in the browser. You should see the API.

Conclusion

Containerization of apps has become popular with the advent of cloud platforms. Containers are an easy way to run applications without the need for intense resources or the hassle of setup. In this blog we hosted an API inside a docker container to cover the basics of hosting a Django app.

Dependency Injection in Python

Dependency injection helps keep a project modular, composable, testable, and clean. The real value shows as the project evolves: the cost of making code changes stays minimal, compared to a design that did not follow modular principles.

Following is Python code using the injector package. Before you run the program, add injector via pip install.

from injector import inject, Module, singleton

class Engine:
    def start(self):
        print("Engine started")

class Car:
    @inject
    def __init__(self, engine: Engine):
        self.engine = engine

    def start(self):
        print("Car starting...")
        self.engine.start()

class CarModule(Module):
    def configure(self, binder):
        binder.bind(Engine, to=Engine, scope=singleton)

# Using Dependency Injection with injector
from injector import Injector

injector = Injector(modules=[CarModule()])
car = injector.get(Car)
car.start()

Car with an Engine

The program above is simple: we have a Car, an Engine, and a CarModule which binds the Engine into the Car.

When you run the program you will see the following output.

Car starting... 
Engine started

New Feature Added

The car was good with the given engine. However, the manufacturers decided to add a V8 engine to make the car faster. With dependency injection in place, the approach is to replace the engine with the V8 engine in the car module, something like what is shown below.

from injector import inject, Module, singleton

class Engine:
    def start(self):
        print("Engine started")

class V8Engine:
    def start(self):
        print("V8 Engine started")

class Car:
    @inject
    def __init__(self, engine: Engine):
        self.engine = engine

    def start(self):
        print("Car starting...")
        self.engine.start()

class CarModule(Module):
    def configure(self, binder):
        binder.bind(Engine, to=V8Engine, scope=singleton)

# Using Dependency Injection with injector
from injector import Injector

injector = Injector(modules=[CarModule()])
car = injector.get(Car)
car.start()

Looking at the program, a new class called V8Engine has been introduced, and the new engine is bound in the CarModule.

On running the program, following is the output.

Car starting... 
V8 Engine started

Conclusion

Dependency injection and the SOLID principles make code robust and lower the cost of development. As shown, replacing existing code is far easier than without DI. In the real world, mocks and stubs replace the real components that talk to external dependencies during unit testing. This blog was a quick look at DI in action in Python.
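As an illustration of that testing point, even without the injector package, plain constructor injection is enough to swap in a stub during a unit test (FakeEngine here is a hypothetical test double, not part of any library):

```python
class Car:
    def __init__(self, engine):
        self.engine = engine

    def start(self):
        self.engine.start()

class FakeEngine:
    # Test double: records the call instead of starting anything real.
    def __init__(self):
        self.started = False

    def start(self):
        self.started = True

fake = FakeEngine()
car = Car(fake)      # inject the stub instead of a real Engine
car.start()
print(fake.started)  # True
```

Because Car receives its engine from outside, the test never touches the real dependency.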

Keycloak IAM with Python

In my last blog post I gave an introduction to Keycloak IAM. You can find the post here.

I came across this nice blog post which shows interactions with Keycloak IAM. It shares insights into how to connect to and work with Keycloak using Python.

I have created a Jupyter file in case you want to start using Keycloak or want to work with it. Link to github is here.

Gotchas

While I was trying to create a user, I sometimes got 401 Unauthorized. This was mainly due to token expiry. You can go to the "Get Access Token" step in the Jupyter notebook and execute it once; it will set the token with which you can create the client or user.

Python Sentiment Analysis with Vader

I was researching sentiment analysis and stumbled upon the suggestion to use VADER, which stands for Valence Aware Dictionary and sEntiment Reasoner. It is a pre-built sentiment analysis tool specifically designed for social media text.

VADER is particularly useful for short and informal text, such as social media posts or reviews. For more sophisticated sentiment analysis tasks, one may consider using machine learning models trained on larger datasets.

Code Setup

Install nltk package using the following command.

pip install nltk

Create a file called sentiment_analysis.py and put the following code.

from nltk.sentiment.vader import SentimentIntensityAnalyzer
import nltk

nltk.download('vader_lexicon')

# Create a SentimentIntensityAnalyzer object
sid = SentimentIntensityAnalyzer()

# Example sentence
sentence = "VADER is a great tool for sentiment analysis!"

# Get sentiment scores
sentiment_scores = sid.polarity_scores(sentence)

# The compound score summarizes the overall sentiment
compound_score = sentiment_scores['compound']

if compound_score > 0.05:
    print('Positive sentiment')
elif compound_score < -0.05:
    print('Negative sentiment')
else:
    print('Neutral sentiment')

Validation

Run the program with the following command.

python sentiment_analysis.py

It will print the following output.

Positive sentiment

Now let's change the example sentence as shown below.

# Example sentence 
sentence = "VADER is not so great tool for sentiment analysis!"

Run the program again, and you will see

Negative sentiment

Now let's change the example sentence as shown below. It is a plain statement, neither positive nor negative.

# Example sentence 
sentence = "VADER is a tool for sentiment analysis!"

When you check the result, you will see:

Neutral sentiment

Conclusion

If you want to run sentiment analysis on short text, VADER is a good choice. If you want to fine-tune the results, you may at times need to override entries in the VADER lexicon. We will have some fun with the VADER tool in upcoming posts.

Celery Task Status via Django API

This blog is a follow-up to my previous post, Django with Celery. I am going to build on top of the existing project.

Sqlite3 Installation

For this setup I am going to use sqlite3. This YouTube link has instructions on installing sqlite3.

Install Sqlalchemy

This step is needed to interact with sqlite3.

pip install sqlalchemy

Celery Settings

Let us go update the settings under both django_celery and my_first_django_project (main project).

Go to settings.py in django_celery app, and add the following line.

CELERY_RESULT_BACKEND = "db+sqlite:///db1.sqlite3"

We are asking Celery to store the job status and details in the Celery result backend, which in this case is sqlite3.

Go to settings.py in my_first_django_project, and add or change the Celery result backend as shown below.

CELERY_RESULT_BACKEND = "db+sqlite:///db1.sqlite3"

Creating View to Fetch Status

Now let's create a view in the sampleapi app. Put the following code in views.py of the sampleapi app; the new or updated parts are the celery_app import and the check_task_status view.

from rest_framework.response import Response
from rest_framework.decorators import api_view
from django_celery.tasks import send_email
from .celery import app as celery_app

# Create your views here.

@api_view(['GET'])
def getData(request):
    result = send_email.delay("nitin@sampleemail.com")
    return Response(result.task_id)

@api_view(['GET'])
def check_task_status(request, task_id):
    # Check the status of the Celery task
    response = celery_app.AsyncResult(task_id)
    return Response({"status": response.state, "result": response.result})

Update the Routing

In urls.py of the sampleapi app, add the following to the urlpatterns.

from django.urls import path
from . import views
urlpatterns = [
    path('', views.getData),
    path('taskstatus/<str:task_id>/', views.check_task_status, name='check_task_status'),
]

Testing and Validation

Open a terminal and run the Celery worker with the following command.

celery -A django_celery.celery worker --loglevel=info --pool=solo

Open another terminal and run django server with the following command.

python manage.py runserver

Navigate to http://127.0.0.1:8000/sampleapi/

You should see the task id returned as shown in the screen below.

Validating the Status of the Task

Now we will open the Url http://127.0.0.1:8000/sampleapi/taskstatus/73cb1123-9edb-44ac-9ff5-77e8a7bd7b8c/

The GUID in the URL is the one we got during task creation. You can keep refreshing the URL; you will see the "PENDING" status initially and "SUCCESS" once the task is completed.

When the status is pending, you will get the result back as null.

On success, you should see the result and status as shown below.
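The refresh-until-done loop can also be scripted; below is a hypothetical stdlib-only poller (the base URL and route are the ones from this post; the helper names and timing values are assumptions, not part of the project):

```python
import json
import time
import urllib.request

def status_url(task_id, base="http://127.0.0.1:8000/sampleapi/taskstatus"):
    # Builds the task-status URL used in this post's urlpatterns.
    return f"{base}/{task_id}/"

def wait_for_task(task_id, interval=1.0, attempts=30):
    # Poll the status endpoint until the task leaves PENDING.
    for _ in range(attempts):
        with urllib.request.urlopen(status_url(task_id)) as resp:
            payload = json.loads(resp.read())
        if payload["status"] != "PENDING":
            return payload
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still pending")
```

With the server and worker running, `wait_for_task("<task-id>")` would return the same status/result dictionary the browser shows.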

Conclusion

For long-running tasks it is a common practice to hand over the execution to an offline process, and Celery does this well. The Django-Celery integration allows a client to poll for the status of the job and display the result on completion.

Code is checked into github under the branch task_status_revision.