Chapter 20

Working with Threads


A key feature of the Java programming environment and runtime system is the multithreaded architecture shared by both. Multithreading, which is a fairly recent construct in the programming community, is a very powerful means of enhancing and controlling program execution. This chapter takes a look at how the Java language supports multithreading through the use of threads. You learn all about the different classes that enable Java to be a threaded language, along with many of the issues surrounding the effective use of threads.

Thread Basics

The multithreading support in Java revolves around the concept of a thread. The question is, what is a thread? Put simply, a thread is a single stream of execution within a process. OK, maybe that wasn't so simple. It might be better to start off by exploring exactly what a process is.

A process is a program executing within its own address space. Java is a multiprocessing system, meaning that it supports many processes running concurrently in their own address spaces. You may be more familiar with the term multitasking, which describes a scenario very similar to multiprocessing. As an example, consider the variety of applications typically running at once in a graphical environment. As I write this, I am running Microsoft Word along with Internet Explorer, Windows Explorer, Inbox, CD Player, and Volume Control. These applications are all processes executing within the Windows 95 environment. In this way, you can think of processes as being analogous to applications, or stand-alone programs; each process in a system is given its own room to execute.

A thread is a sequence of code executing within the context of a process. As a matter of fact, threads cannot execute on their own; they require the overhead of a parent process to run. Within each of the processes I mentioned running on my machine, there are no doubt a variety of threads executing. For example, Word may have a thread in the background automatically checking the spelling of what I'm writing, while another thread may be automatically saving changes to the document I'm working on. Like Word, each application (process) can be running many threads that are performing any number of tasks. The significance here is that threads are always associated with a particular process. Figure 20.1 shows the relationship between threads and processes.

Figure 20.1: The relationship between threads and processes.

Note
Threads are sometimes referred to as lightweight processes, implying that they are a limited form of a process. A thread is in fact very similar to a full-blown process, with the major difference being that a thread always runs within the context of another program. Unlike processes, which maintain their own address space and operating environment, threads rely on a parent program for execution resources.

I've described threads and processes using Windows 95 as an example, so you've probably guessed that Java isn't the first system to employ the use of threads. That's true, but Java is the first major programming language to incorporate threads at the heart of the language itself. Typically, threads are implemented at the system level, requiring a platform-specific programming interface separate from the core programming language. This is the case with C/C++ Windows programming, because you have to use the Win32 programming interface to develop multithreaded Windows applications.

Java is presented as both a language and a runtime system, so the Sun architects were able to integrate threads into both. The end result is that you are able to make use of Java threads in a standard, cross-platform fashion. Trust me, this is no small feat, especially considering the fact that some systems, like Windows 3.1, don't have any native support for threads!

The Thread Classes

The Java programming language provides support for threads through a single interface and a handful of classes. The Java interface and classes that include thread functionality follow:

Thread
Runnable
ThreadDeath
ThreadGroup
Object

All of these classes are part of the java.lang package, and they are covered in great detail in Chapter 32, "Package java.lang." For now, take a brief look at what each offers in the way of thread support.

Thread

The Thread class is the primary class responsible for providing thread functionality to other classes. To add thread functionality to a class, you simply derive the class from Thread and override the run method. The run method is where the processing for a thread takes place, and it is often referred to as the thread body. The Thread class also defines start and stop methods that allow you to start and stop the execution of the thread, along with a host of other useful methods.

Runnable

Java does not directly support multiple inheritance, which involves deriving a class from multiple parent classes. This brings up a pretty big question in regard to adding thread functionality to a class: how can you derive from the Thread class if you are already deriving from another class? The answer is: you can't! This is where the Runnable interface comes into play.

The Runnable interface provides the overhead for adding thread functionality to a class simply by implementing the interface, rather than deriving from Thread. Classes that implement the Runnable interface simply provide a run method that is executed by an associated thread object that is created separately. This is a very useful feature and is often the only means you have of incorporating multithreading into existing classes.

ThreadDeath

The ThreadDeath error class provides a mechanism for allowing you to clean up after a thread is asynchronously terminated. I'm calling ThreadDeath an error class because it is derived from the Error class, which provides a means of handling and reporting errors. When the stop method is called on a thread, an instance of ThreadDeath is thrown by the dying thread as an error. You should only catch the ThreadDeath object if you need to perform cleanup specific to the asynchronous termination, which is a pretty rare situation. If you do catch the object, you must rethrow it so the thread will actually die.
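To make the catch-and-rethrow pattern concrete, here is a minimal sketch. The class name CleanupThread is hypothetical, and the ThreadDeath is thrown directly inside run to stand in for an asynchronous stop call; the important part is the catch block that cleans up and then rethrows:

```java
public class CleanupThread extends Thread {
    static volatile boolean cleanedUp = false;

    public void run() {
        try {
            // simulate the ThreadDeath error a stopped thread receives
            throw new ThreadDeath();
        } catch (ThreadDeath death) {
            cleanedUp = true;   // termination-specific cleanup goes here
            throw death;        // rethrow so the thread actually dies
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CleanupThread t = new CleanupThread();
        t.start();
        t.join();
        System.out.println("cleaned up: " + cleanedUp);
    }
}
```

Notice that the rethrown ThreadDeath is silently ignored by the runtime system, so the thread dies without an error report.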

ThreadGroup

The ThreadGroup class is used to manage a group of threads as a single unit. This provides you with a means to finely control thread execution for a series of threads. For example, the ThreadGroup class provides stop, suspend, and resume methods for controlling the execution of all the threads in the group. Thread groups can also contain other thread groups, allowing for a nested hierarchy of threads. Individual threads have access to their immediate thread group, but not to the parent of the thread group.

Object

Although not strictly a thread support class, the Object class does provide a few methods that are crucial to the Java thread architecture. These methods are wait, notify, and notifyAll. The wait method causes a thread to wait in a sleep state until it is notified to continue. Likewise, the notify method informs a waiting thread to continue along with its processing. The notifyAll method is similar to notify except it applies to all waiting threads. These three methods can only be called from a synchronized method; don't worry, you'll learn more about synchronized methods a little later in this chapter.

Typically, these methods are used with multiple threads, where one method waits for another to finish some processing before it can continue. The first thread waits for the other thread to notify it so it can continue. Just in case you're in the dark here, the Object class rests at the top of the Java class hierarchy, meaning that it is the parent of all classes. In other words, every Java class inherits the functionality provided by Object, including the wait, notify, and notifyAll methods. For more information on the Object class, check out Chapter 32.
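Here's a brief sketch of the wait/notify handshake between two threads. The Messenger class and its method names are illustrative inventions; the key points are that both methods are synchronized and that the receiver loops on a condition around its wait call:

```java
public class Messenger {
    private String message;
    private boolean ready = false;

    // Called by the consuming thread; waits until a message arrives.
    public synchronized String receive() throws InterruptedException {
        while (!ready)
            wait();            // sleep until notified
        ready = false;
        return message;
    }

    // Called by the producing thread; wakes up a waiting consumer.
    public synchronized void send(String msg) {
        message = msg;
        ready = true;
        notify();              // tell a waiting thread to continue
    }

    public static void main(String[] args) throws InterruptedException {
        final Messenger m = new Messenger();
        Thread producer = new Thread() {
            public void run() {
                m.send("hello");
            }
        };
        producer.start();
        System.out.println(m.receive());  // prints "hello"
    }
}
```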

Creating Threads

Threads aren't much use if you don't know how to create them. Fortunately, you have two options for creating and using threads in your own programs, which have already been alluded to when discussing the thread classes:

Deriving your class from the Thread class and overriding its run method
Implementing the Runnable interface in your class and providing a run method
Both of these approaches revolve around providing a run method, which is where all the actual processing takes place. After a thread has been created and initialized, its run method is called and given the opportunity to perform whatever processing the thread is designed to provide. Because it provides all the processing power, the run method is the heart of a thread. It's not uncommon for the run method to consist of an infinite loop that performs some repetitive action like updating the frames of an animation. But enough about run for now; go ahead and create some threads!

Deriving from the Thread Class

If your class isn't derived from a specific class, you can easily make it threaded by deriving it from the Thread class. The following source code shows how to add thread functionality to a class by deriving from the Thread class:

public class ThreadMe extends Thread {
  public void run() {
    // do some busy work
  }
}

It's as easy as that! To use this class in a real program and set the thread in motion, you simply create a ThreadMe object and call the start method inherited from Thread, like this:

ThreadMe me = new ThreadMe();
me.start();

The start method automatically calls run and gets the thread busy performing its processing. The thread will then run until the run method exits or the thread is stopped, suspended, or killed. If for some reason you want to stop the thread's execution, just call the stop method, like this:

me.stop();

If stopping the thread entirely is a little too abrupt, you can also temporarily pause it by calling the suspend method, like this:

me.suspend();

The suspend method puts the thread in a wait state, very similar to the state a thread enters when you call the wait method. When you decide the thread has waited long enough, just call resume to get things rolling again, like this:

me.resume();

Implementing the Runnable Interface

If your class needs to derive from a class other than Thread, you are forced to implement the Runnable interface to make it threaded. A very common situation where you have to do this is when an applet class needs to be threaded. Because applet classes must be derived from the Applet class, and Java doesn't provide a mechanism for multiple inheritance, you have no other option but to implement the Runnable interface to add threading support. You implement the Runnable interface in a class like this:

public class ThreadYou implements Runnable {
  public void run() {
    // do some busy work
  }
}

As you can see, the only syntactic difference in this approach is that you use the implements keyword instead of extends. However, notice that you can still use the extends keyword to derive from another class, like this:

public class ThreadedApp extends Applet implements Runnable {
  public void run() {
    // do some busy work
  }
}

This is a very practical scenario involving an applet class with a run method that performs some type of threaded processing. Even though the definition of a class implementing the Runnable interface differs little from that of a class derived directly from Thread, there is a big difference when it comes to creating the thread and getting it running. Creating and running a threaded class implementing the Runnable interface is a three-part process, as the following code shows:

ThreadYou you = new ThreadYou();
Thread t = new Thread(you);
t.start();

Unlike the previous approach involving creating a thread object and calling its start method, this approach requires you to create both an instance of your class and a separate instance of the Thread class. You pass your object into the Thread class's constructor, which gives it access to your run method. You then set the thread running by calling the thread object's start method. The thread in turn executes your object's run method. The thread knows your class has a run method because it implements the Runnable interface.

If you have no need to access the thread after you get it started, the creation and starting of the thread can be combined into one statement, like this:

ThreadYou you = new ThreadYou();
new Thread(you).start();

This approach eliminates the creation of the local variable t, which makes the code a little more compact. Of course, you may think the original code is easier to understand, in which case you should by all means stick with the clearer technique.

Scheduling and Thread Priority

You may be wondering how any software system can be truly threaded when running on a machine with a single CPU. If there is only one physical CPU in a computer system, it's impossible for more than one machine code instruction to be executed at a time. This means that no matter how hard you try to rationalize the behavior of a multithreaded system, only one thread is really being executed at a particular time. The reality is that multithreading on a single CPU system, like the systems most of us use, is at best a good illusion. The good news is that the illusion works so well most of the time that we feel pretty comfortable in the thought that multiple threads are really running in parallel.

Note
Incidentally, this same rule applies to multiprocessing systems involving a single CPU. Even though it may look as though you're downloading a file and playing a game in parallel, under the hood the CPU is busy juggling the execution of each process.

How It Works

The illusion of parallel thread execution on a system with a single CPU is often managed by giving each thread an opportunity to execute a little bit of code at regular intervals. This approach is known as timeslicing, which refers to the way each thread gets a little of the CPU's time to execute code. When you speed up this whole scenario to millions of instructions per second, the whole effect of parallel execution comes across pretty well. The general task of managing and executing multiple threads in an environment such as this is known as scheduling. So far, I've described the most basic form of timesliced scheduling, where every thread is given equal access to the processor in small increments. In reality, this turns out not to be the best approach to managing thread execution.

Undoubtedly, there are going to be situations where you would like some threads to get more of the CPU's attention than others. To accommodate this reality, most threaded systems employ some type of prioritization to allow threads to execute at different priority levels. Java employs a type of scheduling known as fixed priority scheduling, which schedules thread execution based on the relative priorities between threads. It works like this: each thread is assigned a relative priority level, which determines the order in which it receives access to the CPU. High-priority threads are given first rights to the CPU, while low-priority threads are left to execute when the CPU is idle.

One interesting thing about Java's approach to scheduling is that it doesn't employ timeslicing. In other words, the currently executing thread gets to enjoy the complete control of the CPU until it yields control to other threads. Lower-priority threads must simply wait until high-priority threads give them a chance to execute. Threads with the same priority level are given access to the CPU one after the next. Figure 20.2 shows how priority impacts the order in which threads are executed.

Figure 20.2: The relationship between thread priority and execution.

Note
The Java runtime system itself could be merely a single process within a timesliced multiprocessing system on a particular platform. In this way, the fixed priority scheduling employed in Java applies only to Java programs executing within the Java runtime environment.

A good example of a low-priority thread is the garbage collection thread in the Java runtime system. Even though garbage collection is a very important function, it is not something you want hogging the CPU. Because the garbage collection thread is a low-priority thread, it chugs along in the background freeing up memory as the processor allows it. This may result in memory being freed a little more slowly, but it allows more time-critical threads, such as the user input handling thread, full access to the CPU. You may be wondering what happens if the CPU stays busy and the garbage collector never gets to clean up memory. Does the runtime system run out of memory and crash? No; this brings up one of the neat aspects of threads and how they work. If a high-priority thread can't access a resource it needs, such as memory, it enters a wait state until memory becomes available. When all memory is gone, all the threads running will eventually go into a wait state, thereby freeing up the CPU to execute the garbage collection thread, which in turn frees up memory. And the circle of threaded life continues!

Establishing Thread Priority

When a new thread is created, it inherits its priority from the thread that created it. The Thread class defines three constants representing the relative priority levels for threads, which follow:

MIN_PRIORITY
NORM_PRIORITY
MAX_PRIORITY

The Java garbage collection thread has a priority of MIN_PRIORITY, whereas the system thread that manages user input events has a priority of MAX_PRIORITY. Knowing this, it's a good idea to take the middle road for most of your own threads and declare them as NORM_PRIORITY. Generally speaking, this should happen without your having to do anything special because the parent thread you are creating threads from will likely be set to NORM_PRIORITY. If, however, you want to explicitly set a thread's priority, it's pretty easy; the Thread class provides a method called setPriority that allows you to directly set a thread's priority.

Incidentally, the thread priority constants are actually integers that define a range of priorities. MIN_PRIORITY and MAX_PRIORITY are the lower and upper limits of the range of acceptable priority values, while NORM_PRIORITY rests squarely in the middle of the range. This means that you can offset these values to get varying priority levels. If you pass a priority value outside the legal range (MIN_PRIORITY to MAX_PRIORITY), the thread will throw an exception of type IllegalArgumentException.

Getting back to the setPriority method, you can use it to set a thread's priority like this:

t.setPriority(Thread.MAX_PRIORITY);

Likewise, you can use the getPriority method to retrieve a thread's priority like this:

int priority = t.getPriority();

Knowing that thread priority is a relative integer value, you can fine-tune a thread's priority by incrementing or decrementing its priority value, like this:

t.setPriority(t.getPriority() + 1);

This statement moves the thread up a little in the priority list. Of course, the extents of the priority range determine the effectiveness of this statement; in the current release of Java, MIN_PRIORITY is set to 1 and MAX_PRIORITY is set to 10.
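Putting the priority methods together, here is a small hypothetical example. It nudges a thread's priority within the legal range and then demonstrates the IllegalArgumentException you get for stepping outside it:

```java
public class PriorityCheck {
    public static void main(String[] args) {
        Thread t = new Thread();

        // a legal adjustment within the range
        t.setPriority(Thread.NORM_PRIORITY + 1);
        System.out.println("priority: " + t.getPriority());

        try {
            // out of range; this throws IllegalArgumentException
            t.setPriority(Thread.MAX_PRIORITY + 1);
        } catch (IllegalArgumentException e) {
            System.out.println("illegal priority rejected");
        }
    }
}
```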

Daemons

So far, you've been learning about threads that operate within the context of a parent program. Java provides support for another type of thread, called a daemon thread, that acts in many ways like an independent process. Unlike traditional threads, daemon threads belong to the runtime system itself, rather than a particular program. Daemon threads are typically used to manage some type of background service available to all programs. The garbage collection thread is a perfect example of a daemon thread; it chugs along without regard to any particular program performing a very useful service that benefits all programs.

You can set a thread as a daemon thread simply by calling the setDaemon method, which is defined in the Thread class, and passing true:

thread.setDaemon(true);

You can query a thread to see if it is a daemon thread simply by calling the isDaemon method, which is also defined in the Thread class:

boolean b = thread.isDaemon();

The Java runtime interpreter typically stays around until all threads in the system have finished executing. However, it makes an exception when it comes to daemon threads. Because daemon threads are specifically designed to provide some type of service for full-blown programs, it makes no sense to continue to run them when there are no programs running. So, when all the remaining threads in the system are daemon threads, the interpreter exits. This is still the same familiar situation of the interpreter exiting when your program finishes executing; you just may not have realized there were daemon threads out there as well.
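A quick sketch ties this together. The thread below runs an endless background loop, which would normally keep the interpreter alive forever; because it is marked as a daemon before being started, the interpreter exits as soon as main finishes:

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread service = new Thread() {
            public void run() {
                // a background service that never exits on its own
                while (true) {
                    try {
                        sleep(1000);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        };
        service.setDaemon(true);   // must be called before start
        service.start();
        System.out.println("daemon: " + service.isDaemon());
        // main now exits; because the only remaining thread is a
        // daemon, the interpreter exits along with it
    }
}
```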

Grouping Threads

Earlier you learned a little about the ThreadGroup class, which is used to group threads together. Grouping threads is sometimes useful because it allows you to control multiple threads as a single entity. For example, the ThreadGroup class has suspend and resume methods, which can be used to suspend and resume the entire group of threads. What you haven't learned is how to actually manage thread groups, which is the focus of this section.

Every thread in the Java runtime system belongs to a thread group. You may be wondering how this is possible, considering the fact that you saw earlier how to create threads with no mention of a thread group. If you create a thread without specifying a thread group, the thread is added to the group that the current thread belongs to. The current thread is the thread from which the new thread is created. In some cases, there may not be a current thread, in which case the Java runtime system adds the thread to a default thread group called main.

You associate threads with a particular group upon thread creation. There is no way to alter the group membership of a thread once it has been created; in other words, you get one opportunity to specify the permanent group for a thread when you create the thread. The Thread class includes constructors for specifying the thread group for the thread:

Thread(ThreadGroup, String)
Thread(ThreadGroup, Runnable)
Thread(ThreadGroup, Runnable, String)

Each constructor takes a ThreadGroup object as the first parameter. The first constructor also takes a string parameter, allowing you to give the thread a name. The last two constructors take a Runnable object as the second parameter, which is typically an object of your own concoction that implements the Runnable interface. Finally, the last constructor also takes a string parameter, allowing you to name the thread.

Before you can create any threads, you need to create a ThreadGroup object. The ThreadGroup class defines two constructors, which follow:

ThreadGroup(String name)
ThreadGroup(ThreadGroup parent, String name)

The first constructor simply creates an empty thread group with the specified name. The second constructor does the same thing, but places the new thread group within the thread group specified in the parent parameter. This constructor allows you to nest thread groups.

Take a look at a quick example of creating a thread group and adding a few threads to it. The following code shows how to create and manage a thread group and a couple of member threads:

ThreadGroup group = new ThreadGroup("Queen bee");
Thread t1 = new Thread(group, "Worker bee 1");
Thread t2 = new Thread(group, "Worker bee 2");
t1.start();
t2.start();
...
group.suspend();
...
group.resume();

After the thread group is created, each thread is created and passed the ThreadGroup object. This makes them members of the thread group. Each thread is then started by calling the start method of each; the ThreadGroup class doesn't provide a means to start all the thread members at once. Sometime later the thread group suspends both threads with a call to the suspend method. It then gets them running again by calling resume.

You can find out what group a thread belongs to by calling the getThreadGroup method. This method returns the ThreadGroup object that the thread belongs to. You can then find out the name of the thread group by calling the getName method defined in the ThreadGroup class. The following code shows how to print the name of the thread group for a particular thread.

ThreadGroup group = t.getThreadGroup();
System.out.println(group.getName());

Thread States

Thread behavior is completely dependent on the state a thread is in. The state of a thread defines its current mode of operation, such as whether it is running or not. Following is a list of the Java thread states:

New
Runnable
Not running
Dead

New

A thread is in the "new" state from the time it is created until its start method is called. New threads are already initialized and ready to get to work, but they haven't been given the cue to take off and get busy.

Runnable

When the start method is called on a new thread, the run method is in turn called and the thread enters the "runnable" state. You may be thinking this state should just be called "running," because the execution of the run method means a thread is running. However, you have to take into consideration the whole priority issue of threads having to potentially share a single CPU. Even though every thread may be running from an end-user perspective, in actuality all but the one currently accessing the CPU are in a "runnable" wait state at any particular instant. You can still conceptually think of the "runnable" state as the "running" state; just remember that all threads have to share system resources.

Not Running

The "not running" state applies to all threads that are temporarily halted for some reason. When a thread is in this state, it is still available for use and is capable of re-entering the "runnable" state at some point. Threads can enter the "not running" state through a variety of means. Following is a list of the different possible events that can cause a thread to be temporarily halted:

For each of these actions causing a thread to enter the "not running" state, there is an equivalent response to get the thread running again. Following is a list of the corresponding events that can put a thread back in the "runnable" state:

Dead

A thread enters the "dead" state when it is no longer needed. Dead threads cannot be revived and executed again. A thread can enter the "dead" state through one of two approaches, which follow:

The run method finishes executing
The stop method is called on the thread
The first approach is the natural way for a thread to die; you can think of a thread dying when its run method finishes executing as death by natural causes. In contrast to this is a thread dying by way of the stop method; calling the stop method kills a thread in an asynchronous fashion.

Even though the latter approach sounds kind of abrupt, it is often very useful. For example, it's common for applets to kill their threads using stop when their own stop method is called. The reason for this is that an applet's stop method is usually called in response to a user leaving the Web page containing the applet. You don't want threads out there executing for an applet that isn't even active, so killing the threads makes perfect sense.
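Death by natural causes is easy to observe with the isAlive method, which returns true only while a thread is in the "runnable" or "not running" states. The class name in this sketch is illustrative:

```java
public class ShortLived extends Thread {
    public void run() {
        // run returns immediately, so the thread dies naturally
    }

    public static void main(String[] args) throws InterruptedException {
        ShortLived t = new ShortLived();
        System.out.println("alive before start: " + t.isAlive());
        t.start();
        t.join();   // wait for run to finish
        System.out.println("alive after death: " + t.isAlive());
    }
}
```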

Synchronization

Throughout the discussion of threads thus far, you've really only learned about threads from an asynchronous perspective. In other words, you've only been concerned with getting threads up and running and not worrying too much about how they actually execute. You can only think in these terms when you are dealing with a single thread or with threads that don't interact with the same data. In reality, there are many instances where it is useful to have multiple threads running and accessing the same data. In this type of scenario, the asynchronous programming approach just won't work; you must take extra steps to synchronize the threads so they don't step on each other's toes.

The problem of thread synchronization occurs when multiple threads attempt to access the same resources or data. As an example, imagine the situation where two threads are accessing the same data file; one thread may be writing to the file while the other thread is simultaneously reading from it. This type of situation can create some very unpredictable, and therefore undesirable, results.

Note
When data objects are shared between competing threads in this manner, they are sometimes referred to as condition variables.

When you are dealing with threads that are competing for limited resources, you simply must take control of the situation to ensure that each thread gets equal access to the resources in a predictable manner. A system where each thread is given a reasonable degree of access to resources is called a fair system. The two situations you must try to avoid when implementing a fair system are starvation and deadlock. Starvation occurs when a thread is completely cut off from the resources and can't make any progress; the thread is effectively frozen. Where starvation can apply to a number of threads individually, deadlock occurs when two or more threads are waiting for a mutual condition that can never be satisfied; they are starving each other.

A Hypothetical Example

A popular hypothetical example that more clearly demonstrates the problem of deadlock is the dining philosophers. The story goes that there are five hungry philosophers sitting around a table preparing to eat. In front of each philosopher is a bowl of rice, while between each pair of philosophers there is a single chopstick. To take a bite of rice, a philosopher needs two chopsticks: one from the left and one from the right. In this situation, the philosophers are equivalent to threads, with the chopsticks representing the limited, shared resources they all need access to. Their desired function is to eat the rice, which requires access to a pair of chopsticks.

The philosophers are only allowed to pick up one chopstick at a time, and they must always pick up the left chopstick and then the right. When a philosopher gets both chopsticks, he can take a bite of rice and then put down both chopsticks. This sounds like a pretty reasonable system of sharing the limited resources so everyone can eat. But consider what happens when each philosopher goes after the chopsticks with equal access to them. Each philosopher immediately grabs the chopstick to his left, resulting in every philosopher having a single chopstick. They all then reach for the chopstick on their right, which is now being held by the philosopher to their right. They are all waiting for another chopstick, so they each just sit holding a single chopstick indefinitely. Both figuratively and literally, they are starving each other!

This is a very good example of how a seemingly fair system can easily go awry. One potential solution to this problem is to force each philosopher to wait a varying amount of time before attempting to grab each chopstick. This approach definitely helps, and the philosophers will probably get to eat some rice, but the potential for deadlock, and therefore starvation, is still there. You are counting on blind luck to save the day and keep the philosophers well fed. In case you didn't guess, this isn't the ideal approach to solving deadlock problems.

You have two approaches to solving deadlock in a situation like this: prevention or detection. Prevention means designing the system so that deadlock is impossible. Detection, on the other hand, means allowing for deadlock but detecting it and dealing with its consequences when they arise. As with a medical illness, it doesn't take a huge mental leap to realize that prevention usually involves much less pain than detection, which results in sort of a chemotherapy for deadlock. My vote is clearly for avoiding deadlock in the first place. Besides, trying to detect deadlock can often be a daunting task in and of itself.

Getting back to the famished philosophers, the root of the problem is the fact that there is no order imposed on the selection of chopsticks. By assigning a priority order to the chopsticks, you can easily solve the deadlock problem; just assign increasing numbers to the chopsticks. Then force the philosophers to always pick up the chopstick with the lower number first. This results in the philosopher sitting between chopsticks 1 and 2 and the philosopher sitting between chopsticks 1 and 5 both going for chopstick 1. Whoever gets it first is then able to get the remaining chopstick, while the other philosopher is left waiting. When the lucky philosopher with two chopsticks finishes his bite and returns the chopsticks, the process repeats itself, allowing all the philosophers to eat. Deadlock has been successfully avoided!
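The numbered-chopstick solution can be sketched in Java using nested synchronized blocks, always locking the lower-numbered chopstick first. The class below is a hypothetical illustration; because every philosopher acquires chopsticks in the same global order, the program always runs to completion instead of deadlocking:

```java
public class Philosophers {
    static final Object[] chopsticks = new Object[5];
    static {
        for (int i = 0; i < 5; i++)
            chopsticks[i] = new Object();
    }

    static void eat(int seat) {
        int left = seat;
        int right = (seat + 1) % 5;
        // always lock the lower-numbered chopstick first
        Object first  = chopsticks[Math.min(left, right)];
        Object second = chopsticks[Math.max(left, right)];
        synchronized (first) {
            synchronized (second) {
                // take a bite of rice
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] diners = new Thread[5];
        for (int i = 0; i < 5; i++) {
            final int seat = i;
            diners[i] = new Thread() {
                public void run() {
                    for (int bite = 0; bite < 1000; bite++)
                        eat(seat);
                }
            };
            diners[i].start();
        }
        for (int i = 0; i < 5; i++)
            diners[i].join();
        System.out.println("all philosophers finished eating");
    }
}
```

If you change eat to always lock left before right instead, the program can hang forever; the ordering is what prevents the circular wait.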

Synchronizing Threads

If you're thinking the dining philosophers example seems fairly simple, you're right. But don't get too confident yet; real-world thread synchronization situations can get extremely messy. Fortunately, Java provides a very powerful solution to the whole issue: the synchronized modifier. The synchronized modifier flags certain parts of your code as synchronized, resulting in limited, predictable access for threads. More specifically, only one thread at a time is allowed to execute a synchronized section of code.

For synchronized methods, it works like this: every object has a lock, and a synchronized method must acquire the lock of the object it is invoked on before it can run. When a thread calls the method, it first tries to acquire the lock; if another thread already holds it, the calling thread blocks and waits. When the lock becomes free, the thread acquires it, executes the method, and releases the lock on the way out so that waiting threads can proceed. Pretty simple, right?

Note
A lock is always associated with an object or a class, never with the code itself. A synchronized instance method acquires the lock of the object it is invoked on, and a synchronized block explicitly names the object (or class) whose lock it acquires; the block of code itself isn't what is locked.

Synchronized sections of code are called critical sections, implying that access to them is critical to the successful threaded execution of the program. Critical sections are also sometimes referred to as atomic operations, meaning that they appear to other threads to occur as a single, indivisible step. Just as an atom is a discrete unit of matter, an atomic operation acts like one discrete operation to other threads, even though it may really contain many operations inside.
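To see why atomicity matters, consider that even a simple increment such as count++ is really three steps (read, add, write) and can therefore be interleaved by the scheduler. The following sketch, with hypothetical class and method names, makes the increment atomic by synchronizing it:

```java
// Contrast an unprotected increment with a synchronized, atomic one.
// Class and method names here are invented for illustration.
public class AtomicDemo {
  private int count;

  public void unsafeInc() {
    count++;               // three steps; another thread can slip in between
  }

  public synchronized void safeInc() {
    count++;               // the whole read-add-write runs as one atomic unit
  }

  public synchronized int getCount() {
    return count;
  }

  public static void main(String[] args) throws InterruptedException {
    final AtomicDemo demo = new AtomicDemo();
    Thread[] workers = new Thread[4];
    for (int i = 0; i < workers.length; i++) {
      workers[i] = new Thread(new Runnable() {
        public void run() {
          for (int j = 0; j < 10000; j++) {
            demo.safeInc();
          }
        }
      });
      workers[i].start();
    }
    for (int i = 0; i < workers.length; i++) {
      workers[i].join();
    }
    System.out.println("Final count: " + demo.getCount());  // always 40000
  }
}
```

With safeInc, the final count is always 40000; if the workers called unsafeInc instead, lost updates could leave the count short.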

You can use the synchronized modifier to mark critical sections in your code and make them threadsafe. Following are some examples of using the synchronized modifier:

private float a, b;   // shared instance data in need of protection

public synchronized void addEmUp() {
  a += b;
  b += a;
}

private Rectangle rect = new Rectangle(0, 0, 100, 100);

public void moveEmOut() {
  synchronized (rect) {
    rect.width -= 2;
  }
  rect.height -= 2;
}

The first example shows how to secure an entire method and make it synchronized; only one thread is allowed to execute the addEmUp method at a time. The moveEmOut method, on the other hand, contains a synchronized block of code within it. The synchronized block protects the width of the rectangle from being modified by multiple threads at once. Notice that the rect object itself is used as the lock for the block of code. Also notice that the modification of the height of the rectangle isn't included in the synchronized block, and therefore is subject to access by multiple threads at once.

Note
It's important to note that even though there are legitimate situations where you will need to make a block of code synchronized, in general it is better to apply synchronization at the method level. Employing method synchronization as opposed to block synchronization facilitates a more object-oriented design and results in code that is easier to debug and maintain.

There is a subtle problem when using synchronized methods that you may not have thought about. Check out the following code sample:

public class countEngine {
  private static int count;
  public synchronized void decCount() {
    count--;
  }
}

The decCount method is synchronized, so it appears that the count member variable is protected from misuse. However, count is a class variable, not an instance variable, because it is declared as being static. The lock for a synchronized instance method is the instance object itself, not the class, so the class data isn't really protected; threads calling decCount through two different instances can still modify count at the same time. The solution is to synchronize the block using the class as the locked object, like this:

public class countEngine {
  private static int count;
  public void decCount() {
    synchronized (getClass()) {
      count--;
    }
  }
}

Notice that the getClass method is called to retrieve the Class object, which serves as the lock for the synchronized block. (Keep in mind that getClass returns the runtime class of the object, so the same block executed in a subclass would lock a different Class object.) This is a good example of a situation where you have to use block synchronization over method synchronization to get a desired result.
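An alternative worth knowing about, shown here as my own sketch with a renamed CountEngine class: declaring the method both static and synchronized makes the lock the Class object itself, which protects static data without an explicit synchronized block.

```java
// Sketch of the static synchronized alternative; the class name is a
// renamed stand-in for the countEngine class discussed in the text.
public class CountEngine {
  private static int count = 200;

  public static synchronized void decCount() {
    count--;                    // guarded by the lock on the Class object
  }

  public static synchronized int getCount() {
    return count;
  }

  public static void main(String[] args) throws InterruptedException {
    Thread[] workers = new Thread[2];
    for (int i = 0; i < workers.length; i++) {
      workers[i] = new Thread(new Runnable() {
        public void run() {
          for (int j = 0; j < 100; j++) {
            decCount();
          }
        }
      });
      workers[i].start();
    }
    for (int i = 0; i < workers.length; i++) {
      workers[i].join();
    }
    System.out.println("count = " + getCount());  // 200 - 200 = 0
  }
}
```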

Volatile Variables

In rare cases where you don't mind threads modifying a variable whenever they please, Java provides a means of maintaining the variable's integrity. The volatile modifier lets you specify that a variable will be modified asynchronously by threads. Its purpose is to guard against stale values caused by the variable being cached in CPU registers: in an asynchronous environment, one thread can update a variable in memory while another thread keeps working from an out-of-date copy held in a register. The volatile modifier tells the runtime system to read the variable directly from memory before each use and to write it back to memory after each modification, instead of relying on a registered copy. It's fairly rare that you'll need the volatile modifier, but if you feel like living on the edge, it's there for your enjoyment!
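The classic use of volatile is a stop flag that one thread writes and another polls. The sketch below, with hypothetical class and member names, relies on volatile to guarantee that the worker thread sees the updated flag rather than a stale registered copy:

```java
// Sketch of a volatile stop flag; class and member names are invented.
public class StopDemo {
  private volatile boolean running = true;

  public void stop() {
    running = false;            // becomes visible to the worker thread
  }

  public static void main(String[] args) throws InterruptedException {
    final StopDemo demo = new StopDemo();
    Thread worker = new Thread(new Runnable() {
      public void run() {
        long spins = 0;
        while (demo.running) {  // re-reads the flag from memory on every pass
          spins++;
        }
        System.out.println("Worker stopped after " + spins + " spins");
      }
    });
    worker.start();
    Thread.sleep(50);           // let the worker spin briefly
    demo.stop();
    worker.join();              // returns once the worker sees the change
    System.out.println("Done");
  }
}
```

Without volatile, the runtime would be free to cache running in a register inside the loop, and the worker could keep spinning even after stop is called.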

Summary

In this chapter, you learned about multithreading from both a conceptual level and a practical programming level, while seeing exactly what facilities Java provides to support multithreaded programming. You saw the two techniques for creating threads using the Thread class and the Runnable interface. You then took a look at the Java scheduler and how it affects thread priority. You moved on to daemons and thread groups, and why they are important. You finished up by tackling the issue of thread synchronization, which is something you will often have to contend with in multithreaded programming. Fortunately, you saw how Java provides plenty of support for handling the most common types of synchronization problems.