http://www.javaworld.com/javaworld/jw-02-2001/jw-0209-double.html

 

Double-checked locking: Clever, but broken

Do you know what synchronized really means?

Summary
Many Java programmers are familiar with the double-checked locking idiom, which allows you to perform lazy initialization with reduced synchronization overhead. Though many Java books and articles recommend double-checked locking, unfortunately, it is not guaranteed to work. In this article I explore some of the issues underlying that odd discovery and dive into the murky waters of the Java Memory Model. (2,400 words)
By Brian Goetz





From the highly regarded Elements of Java Style to the pages of JavaWorld (see Java Tip 67), many well-meaning Java gurus encourage the use of the double-checked locking (DCL) idiom. There's only one problem with it -- this clever-seeming idiom may not work.

  Double-checked locking can be hazardous to your code!  

This week JavaWorld focuses on the dangers of the double-checked locking idiom. Read more about how this seemingly harmless shortcut can wreak havoc on your code:

  • "Warning! Threading in a multiprocessor world," Allen Holub

  • "Double-checked locking: Clever, but broken," Brian Goetz

  • To talk more about double-checked locking, go to Allen Holub's Programming Theory & Practice discussion

    What is DCL?
    The DCL idiom was designed to support lazy initialization, which occurs when a class defers initialization of an owned object until it is actually needed:


    class SomeClass {
      private Resource resource = null;

      public Resource getResource() {
        // Create the Resource only on the first request (lazy initialization).
        if (resource == null)
          resource = new Resource();
        return resource;
      }
    }

    Why would you want to defer initialization? Perhaps creating a Resource is an expensive operation, and users of SomeClass might not actually call getResource() in any given run. In that case, you can avoid creating the Resource entirely. Regardless, the SomeClass object can be created faster if it doesn't have to also create a Resource at construction time. Delaying some initialization operations until a user actually needs their results can help programs start up faster.

    What if you try to use SomeClass in a multithreaded application? Then a race condition results: two threads could simultaneously execute the test to see if resource is null and, as a result, initialize resource twice. In a multithreaded environment, you should declare getResource() to be synchronized.
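
    As a point of reference, here is a minimal sketch of the fully synchronized variant the text describes, reusing the hypothetical SomeClass and Resource names from the listing above; note that the lock is acquired on every call, even once resource has already been initialized:


    class SomeClass {
      private Resource resource = null;

      // Synchronizing the whole method removes the race condition:
      // only one thread at a time can test and initialize resource.
      public synchronized Resource getResource() {
        if (resource == null)
          resource = new Resource();
        return resource;
      }
    }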

    Unfortunately, synchronized methods run much slower -- as much as 100 times slower -- than ordinary unsynchronized methods. One of the motivations for lazy initialization is efficiency, but it appears that in order to achieve faster program startup, you have to accept slower execution time once the program starts. That doesn't sound like a great trade-off.

    DCL purports to give us the best of both worlds. Using DCL, the getResource() method would look like this:


    class SomeClass {
      private Resource resource = null;

      public Resource getResource() {
        if (resource == null) {               // first check, without locking
          synchronized (this) {               // take the lock only on the slow path
            if (resource == null)             // second check, under the lock
              resource = new Resource();
          }
        }
        return resource;
      }
    }

    After the first call to getResource(), resource is already initialized, which avoids the synchronization hit in the most common code path. DCL also averts the race condition by checking resource a second time inside the synchronized block; that ensures that only one thread will try to initialize resource. DCL seems like a clever optimization -- but it doesn't work.

    Meet the Java Memory Model
    More accurately, DCL is not guaranteed to work. To understand why, we need to look at the relationship between the JVM and the computer environment on which it runs. In particular, we need to look at the Java Memory Model (JMM), defined in Chapter 17 of the Java Language Specification, by Bill Joy, Guy Steele, James Gosling, and Gilad Bracha (Addison-Wesley, 2000), which details how Java handles the interaction between threads and memory.

    Unlike most other languages, Java defines its relationship to the underlying hardware through a formal memory model that is expected to hold on all Java platforms, enabling Java's promise of "Write Once, Run Anywhere." By comparison, other languages like C and C++ lack a formal memory model; in such languages, programs inherit the memory model of the hardware platform on which the program runs.

    When running in a synchronous (single-threaded) environment, a program's interaction with memory is quite simple, or at least it appears so. Programs store items into memory locations and expect that they will still be there the next time those memory locations are examined.

    Actually, the truth is quite different, but a complicated illusion maintained by the compiler, the JVM, and the hardware hides it from us. Though we think of programs as executing sequentially -- in the order specified by the program code -- that doesn't always happen. Compilers, processors, and caches are free to take all sorts of liberties with our programs and data, as long as they don't affect the result of the computation. For example, compilers can generate instructions in a different order from the obvious interpretation the program suggests and store variables in registers instead of memory; processors may execute instructions in parallel or out of order; and caches may vary the order in which writes commit to main memory. The JMM says that all of these various reorderings and optimizations are acceptable, so long as the environment maintains as-if-serial semantics -- that is, so long as you achieve the same result as you would have if the instructions were executed in a strictly sequential environment.

    Compilers, processors, and caches rearrange the sequence of program operations in order to achieve higher performance. In recent years, we've seen tremendous improvements in computing performance. While increased processor clock rates have contributed substantially to higher performance, increased parallelism (in the form of pipelined and superscalar execution units, dynamic instruction scheduling and speculative execution, and sophisticated multilevel memory caches) has also been a major contributor. At the same time, the task of writing compilers has grown much more complicated, as the compiler must shield the programmer from these complexities.

    When writing single-threaded programs, you cannot see the effects of these various instruction or memory operation reorderings. However, with multithreaded programs, the situation is quite different -- one thread can read memory locations that another thread has written. If thread A modifies some variables in a certain order, in the absence of synchronization, thread B may not see them in the same order -- or may not see them at all, for that matter. That could result because the compiler reordered the instructions or temporarily stored a variable in a register and wrote it out to memory later; or because the processor executed the instructions in parallel or in a different order than the compiler specified; or because the instructions were in different regions of memory, and the cache updated the corresponding main memory locations in a different order than the one in which they were written. Whatever the circumstances, multithreaded programs are inherently less predictable, unless you explicitly ensure that threads have a consistent view of memory by using synchronization.
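
    As a contrived illustration of the visibility problem just described (the class and field names here are invented for this sketch, and the surprising outcome is merely permitted, not guaranteed, since it depends on the compiler, JVM, and hardware):


    class VisibilityExample {
      private int value = 0;
      private boolean ready = false;

      // Thread A runs this.
      void writer() {
        value = 42;    // write 1
        ready = true;  // write 2 -- nothing forces thread B to see these in order
      }

      // Thread B runs this.
      void reader() {
        if (ready) {
          // Without synchronization, thread B may legally observe the writes
          // out of order and print 0 instead of 42.
          System.out.println(value);
        }
      }
    }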


    Next page: What does synchronized really mean?

