VoiceOfUnreason

The infrastructure for the event sourcing is an event store which saves the events as documents inside a MongoDB collection and then publish a corresponding message to a service bus, so that with a classic pub-sub pattern all the interested projections are able to subscribe the message and do the proper work in response to it.

(Emphasis added)

The assumption here is the one that you want to push back on. Pub/Sub can work for consumers that only care about a single message in isolation. Consumers that need state should be consuming histories, not events.

In the irredeemably non-optimized case, a consumer reads the entire history of ordered events each time it runs, and then processes them all.
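As a minimal sketch of this (the `EventStore`, `CounterProjection`, and `rebuild` names are all hypothetical, chosen for illustration):

```python
class EventStore:
    """Toy in-memory store; events are kept in append order."""
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)

    def read_all(self):
        """The entire ordered history."""
        return list(self._events)


class CounterProjection:
    """Toy read model: counts events by type."""
    def __init__(self):
        self.counts = {}

    def reset(self):
        self.counts = {}

    def apply(self, event):
        self.counts[event] = self.counts.get(event, 0) + 1


def rebuild(store, projection):
    """Irredeemably non-optimized: discard the read model and replay
    the full ordered history on every run."""
    projection.reset()
    for event in store.read_all():
        projection.apply(event)
```

Note that because the projection is reset and rebuilt from scratch, running `rebuild` twice in a row produces the same read model: the approach is wasteful, but trivially correct.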

An optimized version of this has the consumer track where in the event history it left off, and run an "all events since event X" query to find out what has happened. The irredeemably non-optimized case is simply the degenerate case of this: "all events since there were no events".
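A sketch of the checkpoint optimization, assuming the store can answer an "all events since position X" query (`read_since` and the 1-based integer positions are illustrative assumptions):

```python
class EventStore:
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)

    def read_since(self, position):
        """Yield (position, event) pairs after `position`; position 0 is
        the degenerate "all events since there were no events" replay."""
        for i in range(position, len(self._events)):
            yield i + 1, self._events[i]


class CheckpointedConsumer:
    """Remembers where it left off and only asks for what's new."""
    def __init__(self, store):
        self.store = store
        self.position = 0     # checkpoint: last event processed
        self.processed = []

    def catch_up(self):
        for position, event in self.store.read_since(self.position):
            self.processed.append(event)
            self.position = position  # advance the checkpoint
```

Each run of `catch_up` processes only the events appended since the previous run; a run against an unchanged store processes nothing.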

You might still see the pub/sub pattern applied, not to rebuild the read model, but to wake up the consumer to pull history as described above (in effect, it becomes a latency reduction mechanism).
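In that arrangement the subscription callback ignores the message body entirely and just triggers a pull from the store; a rough sketch (all names here are illustrative):

```python
class PullingConsumer:
    def __init__(self, store):
        self.store = store    # shared ordered list of events
        self.position = 0
        self.seen = []

    def catch_up(self):
        """Pull everything not yet processed from the store."""
        new_events = self.store[self.position:]
        self.seen.extend(new_events)
        self.position = len(self.store)

    def on_notification(self, _message):
        """Pub/sub handler: the payload is irrelevant; the message only
        wakes the consumer up sooner than its next scheduled poll would."""
        self.catch_up()
```

Because the handler pulls rather than applying the payload, a lost or duplicated notification costs only latency, never correctness.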

There's nothing wrong with the consumer also having a bit of cleverness to recognize that the received event is the immediate successor to what is already known. This is normally accomplished with metadata attached to the event, indicating its position in the history.

So in your competing consumer scenario, you might see two different read behaviors at work. The receiver of E1 determines that E1 is the immediate successor of the previous state, and simply goes to work. The receiver of E2 sees from the metadata that at least one event is missing, so refreshes its copy of the event stream, receiving in return the sequence [E1, E2], which it then consumes.
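The two behaviors can be sketched together, assuming each event carries its position in the stream as metadata (a 1-based sequence number in this hypothetical `Consumer`):

```python
class Consumer:
    def __init__(self, stream):
        self.stream = stream    # shared ordered list of (seq, event) pairs
        self.last_seq = 0       # position of the last event applied
        self.applied = []

    def receive(self, seq, event):
        if seq == self.last_seq + 1:
            # Immediate successor of what is already known: just go to work.
            self.applied.append(event)
            self.last_seq = seq
        else:
            # Gap detected: refresh our copy of the event stream instead
            # of applying the received event out of order.
            for s, e in self.stream:
                if s > self.last_seq:
                    self.applied.append(e)
                    self.last_seq = s
```

A consumer handed E1 (seq 1) applies it directly; a consumer handed only E2 (seq 2) detects the gap and ends up consuming the sequence [E1, E2] from the stream.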

Some references

At the DDD Europe conference, I realized that the speakers I talked with were avoiding Pub/Sub whenever possible. -- Raymond Rutjes, 2016

Greg Young, Polyglot Data (2014); Greg talks a bit about the benefits of pull.
