Sessions
Many interactions with LLM applications span multiple traces and observations. Sessions in Langfuse group these observations across traces so you can view a simple session replay of the entire interaction. Get started by propagating the sessionId attribute across observations.
Propagate a sessionId across observations that span multiple traces. The sessionId can be any US-ASCII string of fewer than 200 characters that you use to identify the session. All observations with the same sessionId are grouped together, including their enclosing traces. If a session ID exceeds 200 characters, it is dropped.
When using the @observe() decorator:
```python
from langfuse import observe, propagate_attributes

@observe()
def process_request():
    # Propagate session_id to all child observations
    with propagate_attributes(session_id="your-session-id"):
        # All nested observations automatically inherit session_id
        result = process_chat_message()
    return result
```

When creating observations directly:
```python
from langfuse import get_client, propagate_attributes

langfuse = get_client()

with langfuse.start_as_current_observation(
    as_type="span", name="process-chat-message"
) as root_span:
    # Propagate session_id to all child observations
    with propagate_attributes(session_id="chat-session-123"):
        # All observations created here automatically have session_id
        with root_span.start_as_current_observation(
            as_type="generation", name="generate-response", model="gpt-4o"
        ) as gen:
            # This generation automatically has session_id
            pass
```

- Values must be strings ≤200 characters
- Call propagate_attributes early in your trace so that all observations are covered; otherwise session-level metrics in Langfuse may be incomplete
- Invalid values are dropped with a warning
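The constraints above can also be checked client-side before a session ID is sent. The helper below is an illustrative sketch (it is not part of the Langfuse SDK) that applies the documented rules: a US-ASCII string of at most 200 characters.

```python
def is_valid_session_id(session_id) -> bool:
    """Check the documented sessionId constraints:
    a US-ASCII string of at most 200 characters.
    Note: this is a hypothetical helper, not a Langfuse SDK function."""
    if not isinstance(session_id, str):
        return False
    if len(session_id) > 200:
        return False
    # str.isascii() is True only when every character is US-ASCII
    return session_id.isascii()
```

For example, `is_valid_session_id("chat-session-123")` passes, while a 201-character string or a string containing non-ASCII characters would be dropped by Langfuse with a warning.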
Example
Try this feature using the public example project.
Example session spanning multiple traces

Other features
- Publish a session to share with others as a public link (example)
- Bookmark a session to easily find it later
- Annotate sessions by adding scores via the Langfuse UI to record human-in-the-loop evaluations
- How to evaluate sessions in Langfuse?