Saturday, July 26, 2014

How to deal with blocking operations

In a distributed environment, even simple things become complicated. One of the most trivial examples is a counter. If you have only one instance of an application running, then you can simply store the counter in a database table with the counter's id and value, for example:
| ID | VAL |
------------
| 0  |  1  |
and increase its value with a simple query:
UPDATE counters SET val=val+1 WHERE id=0
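The table itself could be created with something along these lines (a sketch; the actual schema shipped in counter.sql may differ slightly):
CREATE TABLE counters (
    id  INT NOT NULL PRIMARY KEY,
    val INT NOT NULL
);
INSERT INTO counters (id, val) VALUES (0, 0);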
However, when there are many clients trying to increase the counter simultaneously (for example, multiple frontends displaying the number of visitors to the website), you may experience serious slowdowns. This is because updating a row in a database is a blocking operation, which means that during the update the row is locked to prevent multiple processes from modifying it at the same time. There are two basic types of locks: write locks and read locks. A write lock is the basic and most often used type of lock, which does not allow modifying a value while another change is in progress. You can still read the row value, but in that case two subsequent reads can return two different values, because the change could have taken place just between them. When this becomes an issue, you can use a read lock, which prevents other processes from reading the row until the change is done. Depending on how your database is configured, locking the counter row will be more or less painful, but the updates still have to happen sequentially to keep the value correct. This way, the simple operation of increasing a visitor counter can become a serious bottleneck for the whole website.
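To illustrate the difference in MySQL/InnoDB terms (a rough sketch, not taken from the example code): a plain SELECT is a consistent read and is not blocked by an update in progress, while a locking read has to wait for it.
-- session A, inside an open transaction:
UPDATE counters SET val = val + 1 WHERE id = 0;            -- takes a write (exclusive) lock on the row
-- session B, before session A commits:
SELECT val FROM counters WHERE id = 0;                     -- plain read, returns immediately, may see the old value
SELECT val FROM counters WHERE id = 0 LOCK IN SHARE MODE;  -- read lock, waits until the update is committed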

I have prepared an example application which simulates such a case. You can download a zipped archive from here. It was prepared on Linux, but if you work on Windows you should also be able to compile it with Cygwin - just make sure to install the MySQL development libraries (you need the mysql_config.exe binary in your PATH).
First, prepare the database and set up the tables. All examples assume that you have a MySQL database running on localhost, with a test database called "test" and the password "root" for the root user. If your environment differs, modify the respective variables:
mysql -u root -proot test < counter.sql
Now, compile the example code which works on the counter:
make sync
This example simulates ten simultaneous blocking counter updates. Each transaction starts one second after the previous one, but it takes two seconds to complete, so the transactions overlap and block one another.
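Roughly speaking, each simulated client runs the SQL equivalent of the following (a sketch; the archive's C code drives this through the MySQL client library, so the details may differ):
BEGIN;
UPDATE counters SET val = val + 1 WHERE id = 0;  -- the row is now locked
SELECT SLEEP(2);                                 -- stands in for two seconds of other work inside the transaction
COMMIT;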
When you run the resulting binary, you should see output similar to the following:
Begin #1
Update #1
Begin #2
Update #2
Begin #3
Update #3
Commit #1
Begin #4
Update #4
Begin #5
Update #5
Commit #2
Begin #6
Update #6
Begin #7
Update #7
Commit #3
Begin #8
Update #8
Begin #9
Update #9
Commit #4
Begin #10
Update #10
Commit #5
Commit #6
Commit #7
Commit #8
Commit #9
Commit #10
Counter value: 10, time elapsed: 20.015499s
You can notice that the commits do not take place immediately after the updates. This is because each transaction must wait for all previous ones to complete. As a result the test takes around 20 seconds, which is roughly the sum of the time needed to finish all the transactions. In this scenario, the more clients try to modify the counter, the longer it takes to complete. Eventually, the system becomes overloaded, some of the operations time out, client processes fail to set the correct counter value, and the user experience is a disaster.

There are a couple of ways to solve the problem. Some of them involve spending time and money to build a NoSQL cluster based on some fashionable technology, like Hadoop or Cassandra. A cheaper and smarter way involves implementing so-called "optimistic locking". The main idea behind this technique is to reduce to a minimum the time you spend holding a lock. An operation based on optimistic locking takes three phases: read the current value, compute the new value without holding any lock, and finally write the new value back only if the current value still matches the one read in the first phase. The last operation is the only one that needs to be atomic. In short, it reads the value you want to modify and, if it didn't change since the first read, replaces the old value with the new one. If the comparison fails, it means that another process has already modified the value, and we need to roll back.
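In SQL this could look roughly like the following (a minimal sketch that compares the counter value itself; a real implementation often adds a dedicated version column instead):
SELECT val INTO @old FROM counters WHERE id = 0;   -- phase 1: read, no lock held afterwards
SET @new = @old + 1;                               -- phase 2: compute the new value
UPDATE counters SET val = @new
    WHERE id = 0 AND val = @old;                   -- phase 3: atomic compare-and-swap
-- if the UPDATE affected zero rows, another client won the race: retry or give up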
Optimistic locking is a good solution, but it can still lead to failures while updating the counter value, and it also requires clients to implement some strategy for dealing with update failures: "Should I repeat the operation?", "At what intervals?", "How many times before giving up?", etc.

Another way is to get rid of blocking operations altogether and replace them with non-blocking ones. This can be done easily by introducing queues. Putting a new value into a queue does not affect any existing values, so it doesn't need any read or write locks. With every new request you just insert a new row into the queue table, and a single, separate worker process updates the counter and removes the rows it has read from the queue. Because there are no concurrent processes working on the counter, there is no blocking problem. Also, the worker can read new inserts in batches instead of single records, which can give a huge performance boost with a fast-growing queue (if there are a hundred new tasks waiting in the queue, you can increase the counter by a hundred in one step). The queue is safe, because each insert has its own unique id, so deleting records from the queue does not interfere with inserting new ones. Also, when an update fails, the inserts are not lost but remain in the queue for later processing. The only drawback is that the counter value you read is always a little bit behind, but for purposes such as counting visitors to a website, this is usually acceptable.
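A sketch of the idea in SQL (the queue table name and its auto-incremented id column are assumptions, not necessarily what counter.sql uses):
CREATE TABLE counter_queue (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
-- every client simply appends a row; nothing existing is touched:
INSERT INTO counter_queue (id) VALUES (NULL);
-- a single worker drains the queue in batches:
BEGIN;
SELECT COUNT(*), MAX(id) INTO @cnt, @max FROM counter_queue;
UPDATE counters SET val = val + @cnt WHERE id = 0;     -- one update for the whole batch
DELETE FROM counter_queue WHERE id <= @max;            -- rows inserted in the meantime stay for the next run
COMMIT;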

Going back to the example, compile the code with:
make async
This will rebuild the code to use a queue instead of changing the counter directly. When you now run it, you should see output similar to the following:
Begin #1
Insert #1
Begin #2
Insert #2
Commit #1
Begin #3
Insert #3
Commit #2
Begin #4
Insert #4
Begin #5
Insert #5
Commit #3
Begin #6
Insert #6
Commit #4
Commit #5
Begin #7
Insert #7
Commit #6
Begin #8
Insert #8
Commit #7
Begin #9
Insert #9
Commit #8
Begin #10
Insert #10
Commit #9
Commit #10
Counter value: 20, time elapsed: 11.023697s
As you can see, the whole operation completed much faster, because new transactions did not have to wait for the previous ones to finish.
