codeforester

If you are dealing with application logs, here are the three broad categories of information that a good log file should cover:

  • basic context (timestamp, log level, application name, source file name, source module/function, source line number, log message)
  • server context (host name, data center name unless host name has it encoded, cluster/pod info, container info)
  • user context (request URI, username or a hash of it, request ID, etc.)

If we are using JSON for logging, we should keep the keys short to reduce redundant per-record overhead and to make querying easier. Using all-lowercase keys with underscores is also a good idea, because some log aggregators, such as Splunk, are case sensitive.
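As a minimal sketch of the idea, here is a JSON formatter built on Python's standard `logging` module that emits the basic-context fields above with short, all-lowercase keys (the key names and the `"orders"` application name are illustrative choices, not a standard):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object with short, lowercase keys."""

    def format(self, record):
        entry = {
            "ts": self.formatTime(record),        # timestamp
            "lvl": record.levelname.lower(),      # log level
            "app": "orders",                      # application name (hypothetical)
            "src": record.pathname,               # source file name
            "func": record.funcName,              # source function
            "line": record.lineno,                # source line number
            "msg": record.getMessage(),           # log message
        }
        return json.dumps(entry)

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created")
```

Server and user context (host name, request ID, and so on) could be added to the same dict, e.g. from environment variables or a request-scoped context.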

In the structured format, stack traces become a little tricky to handle because they span multiple lines. One solution is to use a marker line to indicate the end of a long record, rather than relying on the newline alone, so that aggregators do not split a multi-line trace into separate events.
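An alternative that avoids end-of-record markers entirely is to capture the whole trace as a single JSON string field, letting JSON escaping turn the embedded newlines into `\n` so the record stays on one physical line. A hedged sketch (the `log_exception` helper and its key names are made up for illustration):

```python
import json
import traceback

def log_exception(msg):
    """Return a one-line JSON log record for the current exception.

    traceback.format_exc() yields the whole multi-line trace as one
    string; json.dumps() escapes its newlines, so the record cannot be
    mistaken for several separate log lines.
    """
    entry = {
        "lvl": "error",
        "msg": msg,
        "stack": traceback.format_exc(),
    }
    return json.dumps(entry)

try:
    1 / 0
except ZeroDivisionError:
    record = log_exception("division failed")
    print(record)  # a single line; the trace is inside the "stack" field
```

Most aggregators can then display the `stack` field with its newlines restored at query time.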
