Note: This answer was written assuming a distributed system with multiple hosts interacting with the same DB. While this is overkill for the OP's problem, I will leave it for others who may need to solve for that scenario. That said, the uniqueness constraint on the DB explained below is a good way to prevent corruption in case the ordering solution has a bug in it.

This is a good case for an optimistic locking scheme. Here's one approach that might work for you:

Keep a version number on each record, and create a unique index on that version number. When a new record comes in, look up the last record, add one to its version, and use the result as the version number of the new record.
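
For concreteness, here is a minimal sketch of that scheme using Python's sqlite3 module. The `events` table, its column names, and the `insert_next` helper are all illustrative choices, not something from the question:

```python
import sqlite3

conn = sqlite3.connect("example.db")

# Hypothetical table: the UNIQUE constraint on version is what makes
# the optimistic scheme safe.
conn.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id      INTEGER PRIMARY KEY,
        version INTEGER NOT NULL UNIQUE,
        payload TEXT NOT NULL
    )
""")

def insert_next(conn, payload):
    # Look up the last record's version (0 if the table is empty)...
    last = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM events"
    ).fetchone()[0]
    # ...and try to claim the next one. A concurrent writer that read
    # the same last version will violate the UNIQUE index here.
    conn.execute(
        "INSERT INTO events (version, payload) VALUES (?, ?)",
        (last + 1, payload),
    )
    conn.commit()
    return last + 1
```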

If another update comes in between retrieving the last record and inserting the new one, both competing updates will have retrieved the same last record and will attempt to insert new records with the same incremented version number. One of these inserts will fail due to the uniqueness constraint.
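
Continuing the sketch above, you can reproduce the race with two connections that read the same last version; the loser of the race hits the unique index:

```python
# Two writers read the same last version...
a = sqlite3.connect("example.db")
b = sqlite3.connect("example.db")
last_a = a.execute("SELECT COALESCE(MAX(version), 0) FROM events").fetchone()[0]
last_b = b.execute("SELECT COALESCE(MAX(version), 0) FROM events").fetchone()[0]

# ...the first insert wins...
a.execute("INSERT INTO events (version, payload) VALUES (?, ?)", (last_a + 1, "a"))
a.commit()

# ...and the second fails on the unique index.
try:
    b.execute("INSERT INTO events (version, payload) VALUES (?, ?)", (last_b + 1, "b"))
    b.commit()
except sqlite3.IntegrityError:
    print("lost the race: version", last_b + 1, "was already taken")
```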

You need to figure out what should happen next. One option is to try again on the failed insert: trap that specific error and loop back to the point of retrieving the last record. Note that this approach is not strictly guaranteed to complete. In practice it will, as long as the gaps between 'bursts' of updates are long enough to add all the new records one by one. Another option is to fail the update back to the client. It all depends on the overall requirements.
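
A retry loop might look like the following, again continuing the same sketch. The bounded attempt count is an illustrative choice that turns "may never complete" into a definite failure you can report back to the client:

```python
def insert_with_retry(conn, payload, max_attempts=5):
    # Trap the uniqueness violation and loop back to re-reading the
    # last record.
    for _ in range(max_attempts):
        try:
            return insert_next(conn, payload)
        except sqlite3.IntegrityError:
            conn.rollback()  # another writer won; re-read and try again
    raise RuntimeError(f"gave up after {max_attempts} attempts")
```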
