
Monday, June 15, 2009

Threading || Volatile

Volatile

 

Common usage of the volatile modifier is when a particular field is accessed by many threads without using the lock statement to serialize access.

So, in essence, the volatile modifier guarantees that a thread reading the field retrieves the most recent value written by any other thread.

 

Whenever a volatile field is read, the system returns its current value at the moment of the request.

Similarly, every assignment to a volatile field is written to memory immediately.

 

You are not allowed to use volatile on just any type. The following is a list of types to which this modifier can be applied:

 

  • Any reference type.
  • Any pointer type in an unsafe context.
  • sbyte, byte, short, ushort, int, uint, char, float, bool.
  • An enum type with one of the following base types: byte, sbyte, short, ushort, int, uint.

 

 

 

public volatile int iAge;

 

 

 

Another Important Example

 

 

using System;
using System.Threading;

class Unsafe

    {

        static bool endIsNigh = false;

        static bool repented = false;

 

        static void Main()

        {

            new Thread(Wait).Start(); // Start up the spinning waiter

            Thread.Sleep(1000); // Give it a second to warm up!

 

            repented = true;

            endIsNigh = true;

            Console.WriteLine("Going...");

        }

 

        static void Wait()

        {

            while (!endIsNigh)

            {

                 // Keep Spinning

            }

            Console.WriteLine("Gone, " + repented);

        }

    }

 

Let’s understand the usage of ‘volatile’ through this example. Consider two possibilities in the code above.

 

  1. Is it possible for the Wait method to continue spinning in its while loop, even after the endIsNigh flag has been set to true?
  2. Furthermore, is it possible for the Wait method to write "Gone, false"?

 

The answer to both questions is, theoretically, yes, on a multi-processor machine, if the thread scheduler assigns the two threads different CPUs.

The repented and endIsNigh fields can be cached in CPU registers to improve performance, with a potential delay before their updated values are written back to memory.

And when the CPU registers are written back to memory, it is not necessarily in the order they were originally updated. This is what makes both scenarios above possible.

 

This caching can be circumvented by using the static methods Thread.VolatileRead and Thread.VolatileWrite to read and write to the fields. VolatileRead means “read the latest value”; VolatileWrite means “write immediately to memory”. The same functionality can be achieved more elegantly by declaring the field with the volatile modifier.
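As a minimal sketch of the first approach, the same spin-wait pattern can be made safe with these static methods (the class and field names here are illustrative, not from the original post):

```csharp
using System;
using System.Threading;

class VolatileDemo
{
    static int flag; // deliberately NOT marked volatile

    static void Main()
    {
        new Thread(() =>
        {
            Thread.Sleep(500);
            // Write straight through to memory, bypassing register caching.
            Thread.VolatileWrite(ref flag, 1);
        }).Start();

        // Read the latest value from memory rather than a cached copy.
        while (Thread.VolatileRead(ref flag) == 0)
            Thread.Sleep(10);

        Console.WriteLine("Flag observed: " + flag);
    }
}
```

Note that every access must go through VolatileRead/VolatileWrite for the guarantee to hold, which is why the volatile modifier on the field itself is the more robust choice.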

 

 

 

class Unsafe

    {

        volatile static bool endIsNigh = false;

        volatile static bool repented = false;

 

       // Same code as above

    }

 

 

 

 

Hope this helps.

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead

 

 

Threading || Interlocked

Atomicity & Interlocked

 

We know that the need for synchronization arises even in the simple case of assigning or incrementing a field. Although locking can always satisfy this need, a contended lock means that a thread must block, suffering the overhead and latency of being temporarily descheduled.

 

The .NET framework's Non-Blocking Synchronization constructs can perform simple operations without ever blocking, pausing, or waiting.

These involve using instructions that are Strictly Atomic, and instructing the compiler to use "volatile" read and write semantics.

 

A statement is Atomic if it executes as a single indivisible instruction. Strict atomicity means - no possibility of preemption.

In C#, a simple read or assignment on a field of 32 bits or less is atomic (assuming a 32-bit CPU). Operations on larger fields (64 bits) are non-atomic, as are statements that combine more than one read/write operation.

 

Reading and writing 64-bit fields is non-atomic on 32-bit CPUs in the sense that two separate 32-bit memory locations are involved. If thread A reads a 64-bit value while thread B is updating it, thread A may end up with a bitwise combination of the old and new values.

 

Note - Unary operators of the kind x++ are non-atomic operations – As it requires first reading a variable, then processing it, then writing it back.

 

 

class Atomicity

    {

        static int x, y;

        static long z;

        static void Test()

        {

            long myLocal;

            x = 3;                // Atomic

            z = 3;                // Non-atomic (z is 64 bits)

            myLocal = z;      // Non-atomic (z is 64 bits)

            y += x;              // Non-atomic (read AND write operation)

            x++;                  // Non-atomic (read AND write operation)

        }

    }

 

 

One way to solve these problems is to wrap the non-atomic operations in a lock statement. Locking, in fact, simulates atomicity.

The Interlocked class, however, provides a simpler and faster solution for simple atomic operations.
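A minimal sketch of the locking approach first (the LockedCounter class and its members are illustrative names, not from the original post):

```csharp
using System;

class LockedCounter
{
    static readonly object locker = new object();
    static long total; // 64-bit: plain reads/writes are non-atomic on a 32-bit CPU

    // Wrapping the read-modify-write in a lock makes it effectively atomic:
    // no other thread holding the same lock can observe a torn or stale value.
    public static void Add(long amount)
    {
        lock (locker) { total += amount; }
    }

    public static long Read()
    {
        lock (locker) { return total; }
    }

    static void Main()
    {
        Add(3);
        Console.WriteLine(Read()); // 3
    }
}
```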

 

Using Interlocked is generally more efficient than obtaining a lock, because it can never block and suffer the overhead of its thread being temporarily descheduled.

Interlocked is also valid across multiple processes – in contrast to the lock statement, which is effective only across threads in the current process.

 

 

 

using System;
using System.Threading;

class Program

    {

        static long sum;

        static void Main()

        {

            Interlocked.Increment(ref sum); // 1

            Interlocked.Decrement(ref sum); // 0

         

            Interlocked.Add(ref sum, 3); // 3

           

            Console.WriteLine(Interlocked.Read(ref sum)); // 3

           

            Console.WriteLine(Interlocked.Exchange(ref sum, 10)); // Prints 3 (the old value); sum is now 10

           

            Interlocked.CompareExchange(ref sum, 123, 10); // sum matched 10, so it is now 123

        }

    }
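CompareExchange also enables custom lock-free read-modify-write operations: read a snapshot, compute the new value, and commit it only if no other thread changed the field in the meantime, retrying otherwise. A sketch (the MultiplyBy helper is an illustrative name):

```csharp
using System;
using System.Threading;

class LockFreeMultiply
{
    static long value = 5;

    // Multiply 'value' by 'factor' atomically, without taking a lock.
    static void MultiplyBy(long factor)
    {
        long snapshot, newValue;
        do
        {
            snapshot = Interlocked.Read(ref value);   // atomic 64-bit read
            newValue = snapshot * factor;
        }
        // Commit only if 'value' still equals our snapshot; otherwise retry.
        while (Interlocked.CompareExchange(ref value, newValue, snapshot) != snapshot);
    }

    static void Main()
    {
        MultiplyBy(3);
        Console.WriteLine(Interlocked.Read(ref value)); // 15
    }
}
```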

 

 

 

Hope this helps.

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead

 

 

Threading || ThreadState

 

ThreadState

 

 

 

 

One can query a thread's execution status via its ThreadState property. Figure 1 shows one "layer" of the ThreadState enumeration.

ThreadState is horribly designed, in that it combines three "layers" of state using bitwise flags, the members within each layer being themselves mutually exclusive.

 

Here are all three layers:

 

  • The Running / Blocking / Aborting status (as shown in Figure 1)
  • The Background/Foreground status (ThreadState.Background)
  • The progress towards suspension via the deprecated Suspend method (ThreadState.SuspendRequested and ThreadState.Suspended)

 

In total, then, ThreadState is a bitwise combination of zero or one members from each layer! Here are some sample ThreadStates:

 

  • Unstarted
  • Running
  • WaitSleepJoin
  • Background, Unstarted
  • SuspendRequested, Background, WaitSleepJoin

 

Note - The enumeration has two members that are never used, at least in the current CLR implementation: StopRequested and Aborted.

 

 

using System;
using System.Threading;

class Terminator

    {

        static void Main()

        {

            Thread t = new Thread(Work);

            Console.WriteLine(t.ThreadState); // Unstarted

 

            t.Start();

            Thread.Sleep(1000);

            Console.WriteLine(t.ThreadState); // Running

            t.Abort();

            Console.WriteLine(t.ThreadState); // AbortRequested

            t.Join();

            Console.WriteLine(t.ThreadState); // Stopped

        }

 

        static void Work()

        {

            while (true)

            {

                try

                {

                    while (true);

                }

                catch (ThreadAbortException)

                {} // The exception is automatically re-thrown at the end of the catch block

            }

        }

    }

 

 

 

 

ThreadState is invaluable for debugging or profiling. It's poorly suited, however, to coordinating multiple threads, because no mechanism exists by which one can test a ThreadState and then act upon that information, without the ThreadState potentially changing in the interim.
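One practical consequence of the layered design: to inspect just the Running / Blocking / Aborting layer during debugging, mask off the flags from the other two layers. A sketch (the SimpleThreadState helper is an illustrative name, not a framework member):

```csharp
using System;
using System.Threading;

class ThreadStateDemo
{
    // Keep only the Running / Blocking / Aborting layer, stripping the
    // Background flag and the suspension-related flags.
    public static ThreadState SimpleThreadState(ThreadState ts)
    {
        return ts & (ThreadState.Unstarted |
                     ThreadState.WaitSleepJoin |
                     ThreadState.Stopped |
                     ThreadState.AbortRequested);
    }

    static void Main()
    {
        Thread t = new Thread(() => Thread.Sleep(2000)) { IsBackground = true };
        t.Start();
        Thread.Sleep(100);

        Console.WriteLine(t.ThreadState);                    // e.g. "Background, WaitSleepJoin"
        Console.WriteLine(SimpleThreadState(t.ThreadState)); // just the blocking layer
    }
}
```

Even so, as noted above, this is only fit for diagnostics: the state may change between the read and any action taken on it.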

 

Hope this helps.

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead

 

 

Threading || Timers

Timers

 

The easiest way to execute a method periodically is using a timer.

Three different Timer classes are provided:

 

  • System.Threading.Timer
  • System.Timers.Timer
  • System.Windows.Forms.Timer

 

 

System.Threading.Timer

 

The threading timer takes advantage of the thread pool, Allowing Many Timers To Be Created without the overhead of many threads.

Timer is a fairly simple class, with a constructor and just two methods (a delight for minimalists, as well as book authors!).

 

 

 

using System;
using System.Threading;

class Program

    {

        static void Main()

        {

            Timer tmr = new Timer(Tick, "tick...", 5000, 1000);

            Console.ReadLine();

            tmr.Dispose();

        }

        static void Tick(object data)

        {

            Console.WriteLine(data); // Writes "tick..."

        }

    }

 

Output

 

Here the timer calls the Tick method, which writes "tick..." after 5 seconds

have elapsed, and then every second after that – until the user presses Enter.
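The two methods referred to above are Change and Dispose. A sketch extending the example to reschedule the timer on the fly (the intervals chosen here are arbitrary):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Fire first after 5 seconds, then every second.
        Timer tmr = new Timer(Tick, "tick...", 5000, 1000);
        Console.ReadLine();

        // Change reschedules the timer: fire after 10 seconds,
        // then every 2 seconds.
        tmr.Change(10000, 2000);
        Console.ReadLine();

        // Dispose stops the timer and releases its resources.
        tmr.Dispose();
    }

    static void Tick(object data)
    {
        Console.WriteLine(data); // Writes "tick..."
    }
}
```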

 

 

 

System.Timers.Timer

 

This simply wraps System.Threading.Timer, providing additional convenience while Using The Same Thread Pool – and the identical underlying engine.

Here's a summary of its added features:

 

  • A Component implementation, allowing it to be sited in the Visual Studio Designer
  • An Interval property instead of a Change method
  • An Elapsed event instead of a callback delegate
  • An Enabled property to start and pause the timer (its default value being false)
  • Start and Stop methods in case you're confused by Enabled
  • An AutoReset flag for indicating a recurring event (default value true)

 

 

using System;
using System.Data.SqlClient;
using System.Timers;
using System.Windows.Forms;

public partial class Form1 : Form

{

    private System.Timers.Timer timer = null;

    public Form1()

    {

        InitializeComponent();

 

        int timerPeriod = 30000;

        timer = new System.Timers.Timer(timerPeriod);

        timer.Elapsed += new ElapsedEventHandler(this.ticker_Elapsed);

 

        label1.Text = "Not Started ...";

    }

 

    private void button1_Click(object sender, EventArgs e)

    {

        timer.Start();

        label1.Text = "Started ...";

    }

 

    private void ticker_Elapsed(object sender, ElapsedEventArgs e)

    {

           using (SqlConnection cnx = ConnectionPool.GetConnection())

            {

               // Code Here

            }

    }

 

    private void button2_Click(object sender, EventArgs e)

    {

        timer.Stop();

        label1.Text = "Stopped ...";

    }

}

 

 

 

System.Windows.Forms.Timer

 

  • While similar to System.Timers.Timer in its interface, it's radically different in the functional sense.
  • A Windows Forms timer Does Not Use The Thread Pool, instead firing its "Tick" event always on the same thread that originally created the timer.
  • Assuming this is the main thread – also responsible for instantiating all the forms and controls in the Windows Forms application – the timer's event handler is then able to interact with the forms and controls without violating thread-safety – or the impositions of apartment-threading.
  • Control.Invoke is not required.
  • The Windows timer is, in effect, a single-threaded timer.
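A minimal sketch of a Windows Forms timer updating a control directly from its Tick handler (the form and label here are illustrative, not from the original post):

```csharp
using System;
using System.Windows.Forms;

class ClockForm : Form
{
    readonly Timer timer = new Timer(); // System.Windows.Forms.Timer
    readonly Label clock = new Label { Dock = DockStyle.Fill };

    public ClockForm()
    {
        Controls.Add(clock);
        timer.Interval = 1000; // milliseconds

        // Tick fires on the UI thread, so the handler may touch
        // controls directly - no Control.Invoke required.
        timer.Tick += (sender, e) => clock.Text = DateTime.Now.ToString();
        timer.Start();
    }

    [STAThread]
    static void Main() { Application.Run(new ClockForm()); }
}
```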

 

Note –

 

There's an equivalent single-threaded timer for WPF, called DispatcherTimer.

 

 

Hope this helps.

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead

 

 

Threading || Asynchronous Events

Asynchronous Events

 

 

 

Hope this helps.

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead

 

 

Threading || Asynchronous Delegates Over Asynchronous Methods

Asynchronous Methods

 

Asynchronous delegates are one of the three approaches to thread pooling. Similar to asynchronous delegates, there is another related concept – asynchronous methods.

 

Some types in the .NET Framework offer asynchronous versions of their methods, with names starting with "Begin" and "End". These are called asynchronous methods and have signatures similar to those of asynchronous delegates.

 

Asynchronous Methods exist to solve a much harder problem: To Allow More Concurrent Activities Than You Have Threads.

For example, a web or TCP sockets server can process several hundred concurrent requests on just a handful of pooled threads if written using NetworkStream.BeginRead and NetworkStream.BeginWrite.
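A sketch of the Begin/End pattern over NetworkStream, using a loopback connection so the example is self-contained (the class, method, and port setup are illustrative, not from the original post):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class AsyncReadDemo
{
    static NetworkStream stream;
    static byte[] buffer = new byte[1024];

    static void Main()
    {
        // Loopback server that sends one message, so the demo runs standalone.
        TcpListener listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        new Thread(() =>
        {
            using (TcpClient server = listener.AcceptTcpClient())
            {
                byte[] msg = Encoding.ASCII.GetBytes("hello");
                server.GetStream().Write(msg, 0, msg.Length);
            }
        }).Start();

        TcpClient client = new TcpClient("127.0.0.1", port);
        stream = client.GetStream();

        // BeginRead returns immediately; a pooled thread runs the callback
        // when data arrives - no thread is tied up waiting.
        stream.BeginRead(buffer, 0, buffer.Length, ReadComplete, null);

        Console.ReadLine(); // keep the process alive while the read completes
    }

    static void ReadComplete(IAsyncResult ar)
    {
        int bytesRead = stream.EndRead(ar); // always pair Begin with End
        Console.WriteLine("Received: " + Encoding.ASCII.GetString(buffer, 0, bytesRead));
    }
}
```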

 

Despite this advantage, unless you're writing a high-concurrency application, you should avoid asynchronous methods for a number of reasons:

 

  • Unlike asynchronous delegates, asynchronous methods may not actually execute in parallel with the caller.
  • The benefits of asynchronous methods erode or disappear if you fail to follow the pattern carefully.
  • Things can get complex pretty quickly even when you do follow the pattern correctly.

 

However, if you really require parallel execution, you are better off with the alternatives below –

 

  • Calling the synchronous version of the method (e.g. NetworkStream.Read) via an Asynchronous Delegate.
  • Another option is to use ThreadPool.QueueUserWorkItem or BackgroundWorker—or simply create a new thread.
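A sketch of the first alternative, wrapping a synchronous Read in an asynchronous delegate (a MemoryStream stands in for a NetworkStream so the example runs standalone; the ReadMethod delegate is an illustrative name, and delegate BeginInvoke is a .NET Framework feature):

```csharp
using System;
using System.IO;

class DelegateRead
{
    // Matches the signature of Stream.Read.
    delegate int ReadMethod(byte[] buffer, int offset, int count);

    static void Main()
    {
        Stream s = new MemoryStream(new byte[] { 1, 2, 3 }); // stand-in for NetworkStream
        byte[] buffer = new byte[16];

        // Wrap the synchronous Read in a delegate and invoke it on a
        // pooled thread; the caller is free to do other work meanwhile.
        ReadMethod read = s.Read;
        IAsyncResult cookie = read.BeginInvoke(buffer, 0, buffer.Length, null, null);

        // ... other work here ...

        int bytesRead = read.EndInvoke(cookie); // rendezvous: blocks until done
        Console.WriteLine("Read " + bytesRead + " bytes");
    }
}
```

The trade-off is that this ties up a pooled thread for the duration of the read, which is exactly what high-concurrency servers use BeginRead/EndRead to avoid.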

 

 

Hope this helps.

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead