Consider a case where a Python module contains multiple functions, each of which takes an `id`.
```python
def f1(id): ...  # logs into file f1/{id}.txt
def f2(id): ...  # logs into file f2/{id}.txt
```

Assume the ids passed to each function are always unique: if 1 is passed to f1, 1 can't be requested again with f1, and the same holds for the other functions.
I want logging per function, not per module, so that each function logs into a unique file like `function_name/{id}.txt`.
After the function has executed there is no need to keep `function_name/{id}.txt` open, because the next request will contain a different id. So the file handler for that file should be closed once the function finishes.
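To make the desired lifecycle concrete, here is a rough sketch of what I mean for f1 (the logger name, the directory handling, and the body are placeholders, not my real code):

```python
import logging
import os

def f1(id):
    os.makedirs('f1', exist_ok=True)          # hypothetical: ensure the per-function dir exists
    handler = logging.FileHandler(f'f1/{id}.txt')
    logger = logging.getLogger(f'f1_{id}')    # one logger name per (function, id) pair
    logger.addHandler(handler)
    try:
        logger.warning('work for id %s', id)  # placeholder for the real work
    finally:
        logger.removeHandler(handler)
        handler.close()                       # f1/{id}.txt is closed once f1 returns
```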
How can logging per function be implemented in Python so that all exceptions are caught properly per function?
I am trying this approach:
```python
import logging

def setup_logger(name, log_file, level=logging.DEBUG):
    handler = logging.FileHandler(log_file)
    handler.setFormatter(logging.Formatter('[%(asctime)s][%(levelname)s]%(message)s'))
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger

def f1(id):
    logger = setup_logger('f1_id_logger', f'f1/{id}.txt', level=logging.DEBUG)
    ...

def f2(id):
    logger = setup_logger('f2_id_logger', f'f2/{id}.txt', level=logging.DEBUG)
    ...
```

But my concerns are:
- Is it really necessary to create so many loggers?
- Will the logger be able to handle exceptions per function?
- Will the opened file remain open after the function is done, or when an exception is raised inside it?
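To address the last concern, I have also been considering a context-manager variant of `setup_logger` (the name `function_logger` is just my working name), so the handler is removed and closed even when the function raises, but I'm not sure whether this is the right pattern:

```python
import logging
import os
from contextlib import contextmanager

@contextmanager
def function_logger(name, log_file, level=logging.DEBUG):
    """Attach a per-call FileHandler to a named logger and always detach it."""
    handler = logging.FileHandler(log_file)
    handler.setFormatter(logging.Formatter('[%(asctime)s][%(levelname)s]%(message)s'))
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    try:
        yield logger
    except Exception:
        logger.exception('unhandled exception')  # writes the traceback to the per-id file
        raise
    finally:
        logger.removeHandler(handler)
        handler.close()  # the per-id file is closed even if the body raised

def f1(id):
    os.makedirs('f1', exist_ok=True)
    with function_logger('f1_id_logger', f'f1/{id}.txt') as logger:
        logger.debug('handling id %s', id)  # placeholder for the real work
```

With this, each call opens exactly one file handler and is guaranteed to release it, whether the function returns normally or raises, which is the behavior I described above.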