To handle concurrent access to data inside a SQL database, we can lock and retrieve rows in one of two ways: optimistically or pessimistically.
The key difference between the two methods is the access-control mechanism. In the pessimistic method, requests for shared data are queued and served one at a time.
But if, in the pessimistic method, access to the data is serialized through a queue, what happens to horizontal scaling?
The short answer: it suffers. At the moment shared data must be accessed under a lock, all requests queue up and take turns on the locked rows. It does not matter how many instances of the application you have deployed, because the lock is applied at the database level and the bottleneck stays in the database.
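To make this concrete, here is a minimal sketch of pessimistic locking in Python with pyodbc against SQL Server; the connection string and the inventory(item_id, qty) table are illustrative assumptions, not something defined in this article:

```python
# Pessimistic locking sketch: the UPDLOCK hint makes every competing
# transaction wait its turn on this row, regardless of which app
# instance issued the request.
import pyodbc

conn = pyodbc.connect("DSN=mydb")  # assumed connection details
cur = conn.cursor()

# Take an update lock on the row; other UPDLOCK readers now queue up.
cur.execute("SELECT qty FROM inventory WITH (UPDLOCK) WHERE item_id = ?", 42)
row = cur.fetchone()

# The lock is held until commit/rollback, so this write cannot race.
cur.execute("UPDATE inventory SET qty = ? WHERE item_id = ?", row.qty - 1, 42)
conn.commit()  # releases the lock; the next queued request proceeds
```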
Another point about the pessimistic method is that rows locked this way are typically not even available for reading. This makes it a good fit for scenarios such as event processing, where each event must not be handled more than once: a second job can automatically skip the locked rows and move on to the next ones, as sketched below. In this way you preserve both scalability and idempotency at the same time.
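Here is a sketch of that skip-the-locked-rows pattern using SQL Server's READPAST hint (PostgreSQL offers the equivalent FOR UPDATE SKIP LOCKED); the events table and its columns are again assumptions for illustration:

```python
# Queue-style event processing: UPDLOCK claims a row, READPAST tells the
# query to skip rows already locked by other workers instead of waiting,
# so parallel workers each pick up a different pending event.
import pyodbc

conn = pyodbc.connect("DSN=mydb")  # assumed connection details
cur = conn.cursor()

cur.execute(
    "SELECT TOP (1) id, payload "
    "FROM events WITH (UPDLOCK, READPAST) "
    "WHERE processed = 0 ORDER BY id"
)
event = cur.fetchone()
if event is not None:
    # ... process event.payload, at most once ...
    cur.execute("UPDATE events SET processed = 1 WHERE id = ?", event.id)
conn.commit()
```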
The optimistic method, on the other hand, suits data that is read constantly, for reports and UI-side forms. Here, concurrency control is applied only at the moment of change:
• Only one transaction at a time can successfully commit an update to a given row,
• But reading stays open: any thread can access the data at any time and read it freely.
In this method, an additional column, called rowversion (or timestamp in older SQL Server versions), must be read along with the row each time and sent back at update time. On a conflict (a change by another transaction in the meantime), the second and subsequent writers fail the version check, and the system may need to retry: re-read the row with its new rowversion and reapply the update on top of the fresh data.
Example: updating qty in the inventory table to record the item's new stock level.
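Below is a minimal sketch of that retry loop in Python with pyodbc; the connection string, the row_ver column name, and the decrement_qty helper are illustrative assumptions, not part of the original example:

```python
# Optimistic concurrency sketch: the UPDATE only matches when the stored
# rowversion still equals the one we read, so a concurrent writer makes
# rowcount come back 0 and we retry with fresh data.
import pyodbc

conn = pyodbc.connect("DSN=mydb")  # assumed connection details
cur = conn.cursor()

def decrement_qty(item_id, amount, max_retries=3):
    for _ in range(max_retries):
        # Read qty together with its version stamp.
        cur.execute(
            "SELECT qty, row_ver FROM inventory WHERE item_id = ?", item_id
        )
        row = cur.fetchone()

        # SQL Server bumps a rowversion column automatically on every
        # write, so this predicate fails if anyone changed the row.
        cur.execute(
            "UPDATE inventory SET qty = ? "
            "WHERE item_id = ? AND row_ver = ?",
            row.qty - amount, item_id, row.row_ver,
        )
        if cur.rowcount == 1:
            conn.commit()
            return True
        conn.rollback()  # lost the race; re-read and try again
    return False
```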