
I'm trying to use the standard library's logging module to debug my code:

This works fine:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info('message')
```

I can't get the logger to work for the lower levels:

```python
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.info('message')

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.debug('message')
```

I don't get any output for either of those.

5 Answers


What Python version? That works for me in 3.4. But note that basicConfig() won't affect the root handler if it's already set up:

This function does nothing if the root logger already has handlers configured for it.

To set the level on the root logger explicitly, do logging.getLogger().setLevel(logging.DEBUG). But ensure you've called basicConfig() beforehand so the root logger initially has some setup. I.e.:

```python
import logging

logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger('foo').debug('bah')
logging.getLogger().setLevel(logging.INFO)
logging.getLogger('foo').debug('bah')
```

Also note that "Loggers" and their "Handlers" both have distinct, independent log levels. So if you've previously explicitly loaded some complex logger config in your Python script, and that has messed with the root logger's handler(s), then this can have an effect, and just changing the logger's log level with logging.getLogger().setLevel(..) may not work. This is because the attached handler may have a log level set independently. This is unlikely to be the case and not something you'd normally have to worry about.


1 Comment

You can send basicConfig a force=True now to make it work even if it's already set up.

I use the following setup for logging.

YAML-based config

Create a YAML file called logging.yaml like this:

```yaml
version: 1
formatters:
  simple:
    format: "%(name)s - %(lineno)d - %(message)s"
  complex:
    format: "%(asctime)s - %(name)s - %(lineno)d - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
  file:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 5
    level: DEBUG
    formatter: simple
    filename: Thrift.log
loggers:
  qsoWidget:
    level: INFO
    handlers: [console, file]
    propagate: yes
  __main__:
    level: DEBUG
    handlers: [console]
    propagate: yes
```

Python - The main

The "main" module should look like this:

```python
import logging
import logging.config

import yaml

with open('logging.yaml', 'rt') as f:
    config = yaml.safe_load(f.read())

logging.config.dictConfig(config)
logger = logging.getLogger(__name__)
logger.info("Contest is starting")
```

Sub Modules/Classes

These should start like this:

```python
import logging

class locator(object):
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logger.debug('{} initialized'.format(self.__class__.__name__))
```

Hope that helps you...

3 Comments

Is qsoWidget just the name of your app?
Is it not best practice to include all logging configs in __init__.py? This is a question.
I would say - you should try and put your logging setup at the highest possible point in your project, and then get the dependent modules to use it. This way you have consistent logging. Doing it module by module will result in different levels and differently formatted log output, which makes downstream log processing (Humio, Splunk, PIG) much more difficult.

In my opinion, this is the best approach for the majority of cases.

Configuration via an INI file

Create a file named logging.ini in the project root directory as below:

```ini
[loggers]
keys=root

[logger_root]
level=DEBUG
handlers=screen,file

[formatters]
keys=simple,verbose

[formatter_simple]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s

[formatter_verbose]
format=[%(asctime)s] %(levelname)s [%(filename)s %(name)s %(funcName)s (%(lineno)d)]: %(message)s

[handlers]
keys=file,screen

[handler_file]
class=handlers.TimedRotatingFileHandler
formatter=verbose
level=WARNING
# (filename, when, interval, backupCount): rotate at midnight, keep 5 backups
args=('debug.log', 'midnight', 1, 5)

[handler_screen]
class=StreamHandler
formatter=simple
level=DEBUG
args=(sys.stdout,)
```

Then configure it as below:

```python
import logging
from logging.config import fileConfig

fileConfig('logging.ini')
logger = logging.getLogger('dev')

name = "stackoverflow"
logger.info(f"Hello {name}!")
logger.critical('This message should go to the log file.')
logger.error('So should this.')
logger.warning('And this, too.')
logger.debug('Bye!')
```

If you run the script, the console output will be:

```
2021-01-31 03:40:10,241 [INFO] dev: Hello stackoverflow!
2021-01-31 03:40:10,242 [CRITICAL] dev: This message should go to the log file.
2021-01-31 03:40:10,243 [ERROR] dev: So should this.
2021-01-31 03:40:10,243 [WARNING] dev: And this, too.
2021-01-31 03:40:10,243 [DEBUG] dev: Bye!
```

And the debug.log file should contain:

```
[2021-01-31 03:40:10,242] CRITICAL [my_loger.py dev <module> (12)]: This message should go to the log file.
[2021-01-31 03:40:10,243] ERROR [my_loger.py dev <module> (13)]: So should this.
[2021-01-31 03:40:10,243] WARNING [my_loger.py dev <module> (14)]: And this, too.
```

All done.



I wanted to leave the default logger at warning level but have detailed lower-level loggers for my code. But it wouldn't show anything. Building on the other answer, it's critical to run logging.basicConfig() beforehand.

```python
import logging

logging.basicConfig()
logging.getLogger('foo').setLevel(logging.INFO)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('info')
logging.getLogger('foo').setLevel(logging.DEBUG)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('debug')
```

Expected output:

```
INFO:foo:info
INFO:foo:info
DEBUG:foo:debug
```

For a logging solution across modules, I did this

```python
# cfg.py
import logging

logging.basicConfig()
logger = logging.getLogger('foo')
logger.setLevel(logging.INFO)
logger.info('active')
```

```python
# main.py
import cfg

cfg.logger.info('main')
```



This works for me, and also in IPython/Jupyter notebooks. Note: Python 3.10.

```python
import logging as log
from datetime import datetime

time_hash = str(datetime.now()).strip()
outfile = "./out_" + time_hash + ".log"  # if you need to log to file

log.basicConfig(
    level=log.INFO,
    # format="%(asctime)s [%(levelname)s] %(message)s",
    # format="[%(levelname)s] %(message)s",  # don't need timing
    handlers=[
        # log.FileHandler(outfile),  # don't need file logs
        log.StreamHandler()
    ],
    force=True
)
```

Note the force=True at the end, taken from https://stackoverflow.com/a/72292519/429476, which is needed for IPython notebooks.

