Tuesday, September 21, 2010

Concurrency – ConcurrencyMode


This issue arises when multiple requests attempt to access the same Service object at runtime, i.e. in the PerSession or Singleton instancing modes.
For PerCall there is no concurrency issue, as a new Service object is instantiated for every request.
PerSession and Singleton Services, however, are candidates for concurrency problems.

Three Modes:

·         ConcurrencyMode.Single (Default) – A single thread has access to a Service object at a given time.

·         ConcurrencyMode.Reentrant – A single thread has access to a Service object at a given time, but that thread can exit the Service (e.g. to issue a callback) and re-enter without deadlock.

·         ConcurrencyMode.Multiple – Multiple request threads have access to the Service object; shared resources must be manually protected from concurrent access.
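For reference, the mode is selected per service with WCF's ServiceBehavior attribute. A minimal sketch (the contract and service names here are illustrative, not part of the figures):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void PlaceOrder(string orderId);
}

// ConcurrencyMode (and InstanceContextMode) are set on the implementation class.
[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.PerSession,
    ConcurrencyMode = ConcurrencyMode.Single)]
public class OrderService : IOrderService
{
    public void PlaceOrder(string orderId) { /* only one thread at a time in here */ }
}
```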

ConcurrencyMode.Single with PerSession/Singleton

·         In this case, a lock is acquired on the Service object while a request is being processed by that object.
·         Other calls to the same Service object are blocked and queued.
·         When the request holding the lock completes, the lock is released; the next request in the queue then acquires the lock and starts processing.

Fig 1 – Single – PerCall
Fig 2 – Single – PerSession
Fig 3 – Single – Singleton


Reentrant mode behaves similarly to Single mode, i.e. it does not allow concurrent calls from clients.
However, Reentrant mode is required when a Service issues callbacks (two-way). Single mode cannot handle these callbacks without deadlocking. (Fig 4)

Reentrant – PerCall (Fig 5)

·         With Reentrant, when the client callback is made (Label 2), the lock on the service instance is released so that another call (Label 3) is allowed to acquire it.
·         When the outgoing call returns (Fig 5 – Label 3), it is queued to reacquire the lock and complete its work.

This is not possible with Single – PerCall: in Single mode, deadlock is guaranteed, whereas in Reentrant mode the returning callback (Fig 5 – Label 3) simply reacquires the lock. Compare Figures 4 and 5.
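A service that calls back into its clients declares a duplex contract; marking it Reentrant lets the instance lock be released while the callback is in flight. A minimal sketch (the IPriceAlert contract and its names are hypothetical):

```csharp
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IPriceAlertCallback))]
public interface IPriceAlert
{
    [OperationContract]
    void Subscribe(string symbol);
}

public interface IPriceAlertCallback
{
    [OperationContract]
    void OnPriceChanged(string symbol, decimal price);
}

// Reentrant: the instance lock is released while the two-way callback is in
// progress, so the service does not deadlock on its own lock (Fig 5, Label 2).
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
public class PriceAlertService : IPriceAlert
{
    public void Subscribe(string symbol)
    {
        var callback = OperationContext.Current
            .GetCallbackChannel<IPriceAlertCallback>();
        callback.OnPriceChanged(symbol, 0m); // outgoing call releases the lock
    }
}
```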

Reentrant – PerSession (Fig 6)

·         Reentrant – PerSession gives more throughput than Single – PerSession (Fig 2).
·         The reason: while the callback is in progress, another call T2 (Label 2) from the client can acquire the lock, as the first thread (T1) released it to issue the callback.
·         T1 is queued on return, waiting for T2 to complete its work.

Reentrant – Singleton (Fig 7)

·         Reentrant – Singleton gives more throughput than Single – Singleton (Fig 3).
·         The reason: while the callback is in progress, another call from any client (T2/T3/T4) can acquire the lock, as the first thread (T1) released it to issue the callback.
·         Here, for example, T3 acquires the lock once T1 releases it to issue the callback.
·         T1 is then queued on return, waiting for T3 to complete its work.

Fig 4 – Single – PerCall with Callback
Fig 5 – Reentrant – PerCall
Fig 6 – Reentrant – PerSession
Fig 7 – Reentrant – Singleton

Reentrancy – Can highly affect 'Integrity'

·         Reentrancy can highly affect the state of the object in the PerSession and Singleton cases (Figures 6, 7).
·         The reason: a new thread (T2) will be using the same Service object while another thread (T1) may not have completed its work (it is busy with a callback).
·         Thus make sure the instance's state is always persisted safely with a resource manager, such as a database, before releasing the lock for the callback. The same persisted state can be retrieved upon the callback's return, if applicable.


Multiple mode provides high throughput, as multiple threads are allowed to access the same/shared Service object.
In this case, no locks are acquired on the Service object; all shared state and resources must be protected with manual synchronization techniques.

In the PerSession and Singleton cases, state is shared in the Service instance.
Here there are two possibilities:

·         If the state itself references thread-safe objects that encapsulate their own concurrency protection, then the first thread (T1) to access the state object acquires the lock for that object (Fig 9, 10). Other threads (e.g. T2) queue if they try to access the state object.
·         If not, the Service object should take synchronization locks itself before accessing those resources.
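The second case can be sketched with a plain lock around non-thread-safe session state. A minimal illustration (Cart is a hypothetical state object, not from the figures):

```csharp
using System.Collections.Generic;

// Session-scoped state that is NOT thread-safe by itself, so every access
// goes through one private gate object.
public class Cart
{
    private readonly List<string> _items = new List<string>();
    private readonly object _gate = new object();

    public void Add(string item)
    {
        lock (_gate)            // T1 holds the gate; T2 queues here
        {
            _items.Add(item);
        }
    }

    public int Count
    {
        get { lock (_gate) { return _items.Count; } }
    }
}
```

The C# `lock` statement is shorthand for Monitor.Enter/Monitor.Exit, the first of the synchronization techniques listed later in this post.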

Fig 8 – Multiple – PerCall
Fig 9 – Multiple – PerSession
Fig 10 – Multiple – Singleton

Note: With Multiple, no threads are blocked by WCF itself (see the green circles), i.e. this is the case where the most attention to concurrency is required.

Multiple – PerCall (Fig 8)

PerCall services are not immune to synchronization issues. Although there is no possibility of shared state in the service instance itself, individual Service instances may still access a shared resource. The shared resource could be:

·         Global object – relies on manual synchronization techniques.
·         Shared cache – relies on manual synchronization techniques.
·         Database – relies on resource managers and transactions to manage consistency and concurrency.

As shown in Fig 8, locks have been applied manually on the cache object. Once T1 acquires the lock, another thread T2 will wait, avoiding the concurrency issue.
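The shared-cache case in Fig 8 can be sketched as a static cache, reachable from every PerCall instance, whose entries are guarded by one lock (names here are illustrative):

```csharp
using System.Collections.Generic;

// One process-wide cache shared by all PerCall service instances.
// Every read and write goes through the same gate, so T1 locks and T2 waits.
public static class SharedCache
{
    private static readonly Dictionary<string, string> s_entries =
        new Dictionary<string, string>();
    private static readonly object s_gate = new object();

    public static void Put(string key, string value)
    {
        lock (s_gate) { s_entries[key] = value; }
    }

    public static bool TryGet(string key, out string value)
    {
        lock (s_gate) { return s_entries.TryGetValue(key, out value); }
    }
}
```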

Multiple – PerSession (Fig 9)

PerSession services can also face the PerCall scenario (shared cache).
Additionally, they are also likely to have shared state in the Service instance.

Thus one should implement manual/custom synchronization to control access to the shared state object. The synchronization should work such that if T1 holds the lock on the state object and another thread (T2) tries to access it, T2 is queued.
Multiple – Singleton (Fig 10)

Same as PerSession.

Additionally, it is entirely possible that multiple types of shared state exist in the same Service instance, to increase throughput.
In such cases, a different lock should be acquired for each type of state object.
This way, multiple threads (T1 and T3) can complete work concurrently, since they don't share state; T2 and T4 will queue, waiting for the respective locks to free.

Manual/Custom Synchronization:

There are many ways to implement custom synchronization. A few are:

·         Monitor
·         Mutex
·         Semaphore
·         ReaderWriterLock
·         Interlocked
·         .NET attributes (e.g. [Synchronization]) – used to synchronize access to a type.
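For instance, a simple shared counter can be protected without taking a full lock by using Interlocked (a minimal sketch; HitCounter is an illustrative name):

```csharp
using System.Threading;

public class HitCounter
{
    private int _count;

    public void Increment()
    {
        // Atomic increment; safe even under ConcurrencyMode.Multiple.
        Interlocked.Increment(ref _count);
    }

    public int Count
    {
        // Atomic read via a no-op compare-exchange.
        get { return Interlocked.CompareExchange(ref _count, 0, 0); }
    }
}
```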

[ServiceContract(Namespace = "")]
    public interface IMessagingService
    {
        [OperationContract] void SendMessage1(string message);
        [OperationContract] void SendMessage2(string message);
        [OperationContract] void SendMessage3(string message);
    }

    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class MessagingService : IMessagingService
    {
        // First state set – shared by SendMessage1/SendMessage2, guarded by m_mutex.
        int m_messageCounter;
        Dictionary<int, string> m_messages = new Dictionary<int, string>();
        Mutex m_mutex = new Mutex(false);

        // Second state set – used only by SendMessage3, guarded by its own m_mutex2,
        // so it can be updated concurrently with the first set.
        int m_messageCounter2;
        Dictionary<int, string> m_messages2 = new Dictionary<int, string>();
        Mutex m_mutex2 = new Mutex(false);

        #region IMessagingService Members

        public void SendMessage1(string message)
        {
            m_mutex.WaitOne();
            try
            {
                m_messages.Add(m_messageCounter, message);
                Trace.WriteLine(string.Format("Message {0}: {1}", m_messageCounter, message));
                m_messageCounter++;
            }
            finally { m_mutex.ReleaseMutex(); }
        }

        public void SendMessage2(string message)
        {
            m_mutex.WaitOne();
            try
            {
                m_messages.Add(m_messageCounter, message);
                Trace.WriteLine(string.Format("Message {0}: {1}", m_messageCounter, message));
                m_messageCounter++;
            }
            finally { m_mutex.ReleaseMutex(); }
        }

        public void SendMessage3(string message)
        {
            m_mutex2.WaitOne();
            try
            {
                m_messages2.Add(m_messageCounter2, message);
                Trace.WriteLine(string.Format("Message {0}: {1}", m_messageCounter2, message));
                m_messageCounter2++;
            }
            finally { m_mutex2.ReleaseMutex(); }
        }

        #endregion
    }

Hope this helps.

Thanks & Regards,
Arun Manglick
