
Consider a case where a Python module contains multiple functions, each of which takes an id.

```python
def f1(id):
    ...  # log into file f1/{id}.txt

def f2(id):
    ...  # log into file f2/{id}.txt
```

Assume the ids passed to each function are always unique. For example, if 1 is passed to f1, 1 can't be passed to f1 again. The same applies to the other functions.

I want logging per function, not per module, so that each function logs into its own unique file, function_name/id.txt.

After the function has executed there is no need to keep function_name/id.txt open for logging, because the next request will carry a different id. The file handlers for that file should therefore be closed once the function finishes.

How can such per-function logging be implemented in Python so that all exceptions are caught and logged properly per function?

I am trying this approach:

```python
import logging

def setup_logger(name, log_file, level=logging.DEBUG):
    handler = logging.FileHandler(log_file)
    handler.setFormatter(logging.Formatter('[%(asctime)s][%(levelname)s]%(message)s'))
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger

def f1(id):
    logger = setup_logger('f1_id_logger', f'f1/{id}.txt', level=logging.DEBUG)

def f2(id):
    logger = setup_logger('f2_id_logger', f'f2/{id}.txt', level=logging.DEBUG)
```

But my concerns are:

  • Is it really necessary to create so many loggers?
  • Will the logger be able to handle exceptions per function?
  • Will the file remain open after the function is done, or when an exception is caught?
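One way to guarantee the "close the handler after the function" requirement is a context manager that attaches a FileHandler on entry and always detaches and closes it on exit. The sketch below is not from the original post; `per_id_handler` is a hypothetical helper built only from the question's own formatter and file-naming scheme:

```python
import logging
import os
from contextlib import contextmanager

@contextmanager
def per_id_handler(logger_name, log_file, level=logging.DEBUG):
    # Attach a FileHandler for the duration of one call and always
    # close it afterwards, even if the body raises an exception.
    logger = logging.getLogger(logger_name)
    logger.setLevel(level)
    handler = logging.FileHandler(log_file)
    handler.setFormatter(logging.Formatter('[%(asctime)s][%(levelname)s]%(message)s'))
    logger.addHandler(handler)
    try:
        yield logger
    finally:
        logger.removeHandler(handler)
        handler.close()

os.makedirs('f1', exist_ok=True)

def f1(id):
    with per_id_handler('f1_id_logger', 'f1/{}.txt'.format(id)) as logger:
        logger.debug('handling id %s', id)

f1(1)
```

Because the cleanup lives in the `finally` clause, the handle to f1/1.txt is released no matter how the body exits.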
  • What have you tried so far and where did you have problems with your approach? Commented Apr 27, 2018 at 0:01
  • Update the question with approach and concerns Commented Apr 27, 2018 at 0:10

2 Answers


This is a great case for using decorators.

```python
import logging
from os import mkdir
from os.path import exists
from sys import exc_info                # for retrieving the exception
from traceback import format_exception  # for formatting the exception

def id_logger_setup(level=logging.DEBUG):
    def setup_logger(func):
        if not exists(func.__name__):  # makes the directory if it doesn't exist
            mkdir(func.__name__)
        logger = logging.getLogger("{}_id_logger".format(func.__name__))
        logger.setLevel(level)

        def _setup_logger(id, *args, **kwargs):
            # a unique handler for each id
            handler = logging.FileHandler("{}/{}.txt".format(func.__name__, id))
            handler.setFormatter(logging.Formatter("[%(asctime)s][%(levelname)s]%(message)s"))
            logger.addHandler(handler)
            try:
                rtn = func(id, logger=logger, *args, **kwargs)
            except Exception:  # if the function breaks, catch the exception and log it
                logger.critical("".join(format_exception(*exc_info())))
                rtn = None
            finally:
                logger.removeHandler(handler)  # remove ties between the logger and the soon-to-be-closed handler
                handler.close()                # closes the file handler
            return rtn

        return _setup_logger
    return setup_logger

@id_logger_setup(level=logging.DEBUG)  # set the level
def f1(id, *, logger):
    logger.debug("In f1 with id {}".format(id))

@id_logger_setup(level=logging.DEBUG)
def f2(id, *, logger):
    logger.debug("In f2 with id {}".format(id))

@id_logger_setup(level=logging.DEBUG)
def f3(id, *, logger):
    logger.debug("In f3 with id {}".format(id))
    logger.debug("Something's going wrong soon...")
    int('opps')  # raises an error

f1(1234)
f2(5678)
f1(4321)
f2(8765)
f3(345774)
```

From the code sample, you get the following:

```
f1/
    1234.txt
    4321.txt
f2/
    5678.txt
    8765.txt
f3/
    345774.txt
```

In each of the first four txt files you get something like this:

```
[2018-04-26 18:49:29,209][DEBUG]In f1 with id 1234
```

and in f3/345774.txt, you get:

```
[2018-04-26 18:49:29,213][DEBUG]In f3 with id 345774
[2018-04-26 18:49:29,213][DEBUG]Something's going wrong soon...
[2018-04-26 18:49:29,216][CRITICAL]Traceback (most recent call last):
  File "/path.py", line 20, in _setup_logger
    rtn = func(id, logger=logger, *args, **kwargs)
  File "/path.py", line 43, in f3
    int('opps')
ValueError: invalid literal for int() with base 10: 'opps'
```

Here are the answers to your questions:

  1. Is it really necessary to create so many loggers?

No. With the decorator approach you create only one logger per function, not one per id; `logging.getLogger` returns the same logger object every time it is called with the same name. Since the logger names follow the format "{func-name}_id_logger", there is exactly one logger for each distinct function, which is as few as this design allows.
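This follows from `logging.getLogger` acting as a registry, so repeated calls with the same name never multiply loggers:

```python
import logging

a = logging.getLogger("f1_id_logger")
b = logging.getLogger("f1_id_logger")
c = logging.getLogger("f2_id_logger")

print(a is b)  # True: same name, same logger object
print(a is c)  # False: distinct functions get distinct loggers
```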

  2. Will the logger be able to handle exceptions per function?

Yes. The decorator's try/except will catch and log any exception that is a subclass of Exception. Although exceptions will be caught there regardless, you should still try to catch and handle expected exceptions inside the function itself.
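For instance, a function can deal with an anticipated failure itself and let only genuinely unexpected errors bubble up to the decorator. This is a standalone sketch; `parse_id` is a hypothetical example, not part of the answer's code:

```python
import logging

logger = logging.getLogger("demo")

def parse_id(raw):
    # Expected failures are handled here; only truly unexpected
    # errors would propagate up to an outer except block.
    try:
        return int(raw)
    except ValueError:
        logger.error("could not parse id %r", raw)
        return None
```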

  3. Will the file remain open after the function is done, or when an exception is caught?

No. The finally block removes the handler and closes it, so the file is closed whether the function returns normally or raises.
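You can verify this directly: after `handler.close()` the underlying file object reports itself closed. A minimal check (logger and path names are arbitrary):

```python
import logging
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "123.txt")
logger = logging.getLogger("close_demo")
handler = logging.FileHandler(path)
logger.addHandler(handler)
logger.warning("one record")

stream = handler.stream          # keep a reference to the underlying file
logger.removeHandler(handler)
handler.close()

print(stream.closed)  # True: nothing holds the file open any more
```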




You shouldn't have to set up the loggers for each call separately. Set them up once so that you have two loggers, each writing to a different file, and then use the appropriate logger in each of the two functions.

For example, you can configure the loggers this way*:

```python
import logging.config

logging.config.dictConfig({
    'version': 1,
    'formatters': {
        'simple_formatter': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        }
    },
    'handlers': {
        'first_handler': {
            'class': 'logging.FileHandler',
            'formatter': 'simple_formatter',
            'filename': 'C:\\Temp\\log1.txt'
        },
        'second_handler': {
            'class': 'logging.FileHandler',
            'formatter': 'simple_formatter',
            'filename': 'C:\\Temp\\log2.txt'
        }
    },
    'loggers': {
        'first_logger': {'handlers': ['first_handler']},
        'second_logger': {'handlers': ['second_handler']}
    }
})
```

Then, simply use one or the other logger where you need them:

```python
def f1():
    logger = logging.getLogger('first_logger')
    logger.warning('Hello from f1')

def f2():
    logger = logging.getLogger('second_logger')
    logger.warning('Hello from f2')
```

*There are different ways to configure loggers, see https://docs.python.org/3.6/library/logging.config.html for other options.
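As a point of comparison, the same two loggers can also be built programmatically rather than declaratively. A minimal sketch of that alternative (the `make_logger` helper and the relative file names are illustrative, not from the answer):

```python
import logging

def make_logger(name, filename):
    # Programmatic equivalent of one 'loggers' entry in the dictConfig above.
    logger = logging.getLogger(name)
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(handler)
    return logger

first = make_logger('first_logger', 'log1.txt')
second = make_logger('second_logger', 'log2.txt')
first.warning('Hello from f1')
```

dictConfig keeps all the wiring in one declarative place, while the programmatic form is easier to parameterize at runtime.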

5 Comments

There is an id passed to the functions, so logging should go to f1/id.txt, not f1.txt. The id is dynamic, so we can't hardcode it in dictConfig.
@AnkitVallecha In that case, you are misusing/misunderstanding logging completely. Logging is meant for keeping track of what your code is doing, so it is organised the same way as the code (a file per application, module or function), not by data.
@AnkitVallecha If you want to just write to a file and not log, then do with open('f1/{}.txt'.format(id), 'w') as f: f.write(something).
Let me give you a practical example of what I am doing. Suppose the id identifies the user and the functions are REST calls. I want to track the errors encountered per user per REST call, so I would need separate files, because in a single file other users' data would also be there.
@AnkitVallecha I still would not separate the files by user. Logging is meant for finding out what went wrong with the application, i.e. it is consulted seldom, and you can filter the file by user id later on. For more-than-just-logging, I'd use a different solution, e.g. a database. Anyway, what you want could probably still be done, for instance if you implement your own handler class which uses information about the currently-logged-in user from some thread-local storage or something, but that seems quite over the top for your problem.
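The custom-handler idea from the last comment can be sketched as follows. Everything here is hypothetical (the `PerUserFileHandler` class, the thread-local `_current` slot, and the directory layout are assumptions, not code from the thread); it only illustrates routing records by a thread-local user id:

```python
import logging
import os
import tempfile
import threading

_current = threading.local()  # assumed to hold the currently-logged-in user's id

class PerUserFileHandler(logging.Handler):
    # Hypothetical handler: routes each record to <directory>/<user_id>.txt
    # based on thread-local state, as the comment suggests.
    def __init__(self, directory):
        super().__init__()
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def emit(self, record):
        user = getattr(_current, "user_id", "unknown")
        path = os.path.join(self.directory, "{}.txt".format(user))
        # open/append/close per record, so no file handle lingers between calls
        with open(path, "a") as f:
            f.write(self.format(record) + "\n")

log_dir = tempfile.mkdtemp()
logger = logging.getLogger("per_user")
logger.setLevel(logging.DEBUG)
logger.addHandler(PerUserFileHandler(log_dir))

_current.user_id = 42
logger.debug("record for user 42")
```

Opening the file inside `emit` trades a little per-record overhead for never keeping a handle open, which matches the original requirement.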
