The Java programming language and the Java virtual machine (JVM) are designed to support concurrent programming. All execution takes place in the context of threads. Objects and resources can be accessed by many separate threads. Each thread has its own path of execution, but can potentially access any object in the program. The programmer must ensure that read and write access to objects is properly coordinated (or "synchronized") between threads.[1][2] Thread synchronization ensures that objects are modified by only one thread at a time and prevents threads from accessing partially updated objects during modification by another thread.[2] The Java language has built-in constructs to support this coordination.
Most implementations of the Java virtual machine run as a single process. In the Java programming language, concurrent programming is primarily concerned with threads (also called lightweight processes). Multiple processes can only be realized with multiple JVMs.
Threads share the process's resources, including memory and open files. This makes for efficient, but potentially problematic, communication.[2] Every application has at least one thread, called the main thread. The main thread has the ability to create additional threads as Runnable or Callable objects. The Callable interface is similar to Runnable in that both are designed for classes whose instances are potentially executed by another thread.[3] A Runnable, however, does not return a result and cannot throw a checked exception.[4]
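For illustration, a minimal sketch contrasting the two: a Callable that returns a result (and may call a method that throws a checked exception) is submitted to an ExecutorService, and its result is retrieved through a Future. The class and task names here are invented for the example.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableExample {
    public static void main(String[] args) throws Exception {
        // A Callable returns a result and may throw a checked exception.
        Callable<Integer> task = () -> {
            Thread.sleep(100);   // may throw InterruptedException (a checked exception)
            return 6 * 7;
        };

        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Integer> future = executor.submit(task);
        System.out.println("Result: " + future.get());   // blocks until the result is ready
        executor.shutdown();
    }
}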
Each thread can be scheduled[5] on a different CPU core,[6] or threads can share a single hardware processor (or several processors) through time-slicing. There is no general solution to how Java threads are mapped to native OS threads; every JVM implementation can do this differently.
Each thread is associated with an instance of the class Thread. Threads can be managed either by directly using the Thread objects, or indirectly by using abstract mechanisms such as Executors or Tasks.[7]
Two ways to start a Thread:
public class HelloRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("Hello from thread!");
    }

    public static void main(String[] args) {
        (new Thread(new HelloRunnable())).start();
    }
}
public class HelloThread extends Thread {
    @Override
    public void run() {
        System.out.println("Hello from thread!");
    }

    public static void main(String[] args) {
        (new HelloThread()).start();
    }
}
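As a sketch of the indirect, Executor-based approach mentioned above, the same kind of task can be handed to an ExecutorService instead of a manually created Thread; the pool size and task are arbitrary choices for the example.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HelloExecutor {
    public static void main(String[] args) {
        // The executor owns and reuses its worker threads.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> System.out.println("Hello from a pooled thread!"));
        pool.submit(() -> System.out.println("Hello from another task!"));
        pool.shutdown();   // stop accepting new tasks; let queued ones finish
    }
}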
An interrupt tells a thread that it should stop what it is doing and do something else. A thread sends an interrupt by invoking interrupt() on the Thread object for the thread to be interrupted. The interrupt mechanism is implemented using an internal boolean flag known as the "interrupted status".[8] Invoking interrupt() sets this flag.[9] By convention, any method that exits by throwing an InterruptedException clears the interrupted status when it does so. However, it's always possible that the interrupted status will immediately be set again, by another thread invoking interrupt().
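A sketch of this convention in practice: the worker loop below checks its interrupted status, and when a blocking call throws InterruptedException (clearing the flag), it restores the flag so the loop condition can see the interrupt. The timings and the task itself are invented for illustration.

public class InterruptExample {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(200);   // blocking call: throws InterruptedException
                                         // and clears the interrupted status
                } catch (InterruptedException e) {
                    // Restore the flag so the loop condition observes the interrupt.
                    Thread.currentThread().interrupt();
                }
            }
            System.out.println("Worker observed the interrupt and is exiting.");
        });

        worker.start();
        Thread.sleep(500);
        worker.interrupt();   // sets the worker's interrupted status
    }
}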
The join() method allows one Thread to wait for the completion of another.
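For example, a small sketch of waiting for another thread with join(); the simulated work and timing are arbitrary.

public class JoinExample {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(300);   // simulate some work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("Worker finished.");
        });

        worker.start();
        worker.join();   // the main thread blocks here until the worker completes
        System.out.println("Main continues after the worker has completed.");
    }
}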
Uncaught exceptions thrown by code will terminate the thread. The main thread prints exceptions to the console, but user-created threads need a handler registered to do so.[10][11]
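A minimal sketch of registering such a handler on a user-created thread; the handler body and the exception thrown are invented for the example, and what a handler does with the Throwable is up to the application.

public class HandlerExample {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            throw new IllegalStateException("boom");   // uncaught: terminates this thread
        });

        // Register a handler so this thread's uncaught exceptions are reported explicitly.
        worker.setUncaughtExceptionHandler((thread, throwable) ->
                System.err.println(thread.getName() + " died with: " + throwable));

        worker.start();
    }
}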
The Java memory model describes how threads in the Java programming language interact through memory. On modern platforms, code is frequently not executed in the order it was written. It is reordered by the compiler, the processor and the memory subsystem to achieve maximum performance. The Java programming language does not guarantee linearizability, or even sequential consistency,[12] when reading or writing fields of shared objects, and this is to allow for compiler optimizations (such as register allocation, common subexpression elimination, and redundant read elimination) all of which work by reordering memory reads and writes.[13]
Threads communicate primarily by sharing access to fields and the objects that reference fields refer to. This form of communication is extremely efficient, but makes two kinds of errors possible: thread interference and memory consistency errors. The tool needed to prevent these errors is synchronization.
Reorderings can come into play in incorrectly synchronized multithreaded programs, where one thread is able to observe the effects of other threads, and may be able to detect that variable accesses become visible to other threads in a different order than executed or specified in the program. Most of the time, one thread doesn't care what the other is doing. But when it does, that's what synchronization is for.
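A classic illustrative sketch of such an incorrectly synchronized program (not guaranteed to show the effect on any particular run): with no synchronization, the reader thread may observe the two writes in an order other than program order, or may never observe the write to ready at all. The field names are arbitrary.

public class ReorderingExample {
    static int data = 0;
    static boolean ready = false;   // neither field is volatile or lock-protected

    public static void main(String[] args) {
        new Thread(() -> {
            data = 42;
            ready = true;           // may become visible to the reader before data = 42
        }).start();

        new Thread(() -> {
            while (!ready) { }      // may spin forever, or exit and still read data == 0
            System.out.println(data);
        }).start();
    }
}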
To synchronize threads, Java uses monitors, which are a high-level mechanism for allowing only one thread at a time to execute a region of code protected by the monitor. The behavior of monitors is explained in terms of locks; there is a lock associated with each object.
Synchronization has several aspects. The most well-understood is mutual exclusion—only one thread can hold a monitor at once, so synchronizing on a monitor means that once one thread enters a synchronized block protected by a monitor, no other thread can enter a block protected by that monitor until the first thread exits the synchronized block.[2]
But there is more to synchronization than mutual exclusion. Synchronization ensures that memory writes by a thread before or during a synchronized block are made visible in a predictable manner to other threads which synchronize on the same monitor. After we exit a synchronized block, we release the monitor, which has the effect of flushing the cache to main memory, so that writes made by this thread can be visible to other threads. Before we can enter a synchronized block, we acquire the monitor, which has the effect of invalidating the local processor cache so that variables will be reloaded from main memory. We will then be able to see all of the writes made visible by the previous release.
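For instance, a sketch of a counter whose operations are guarded by a synchronized block; because every access uses the same monitor, both the mutual exclusion and the visibility guarantees described above apply. The class is invented for illustration.

public class Counter {
    private final Object lock = new Object();   // the monitor object
    private int count = 0;

    public void increment() {
        synchronized (lock) {   // acquire the monitor; see prior writes by other threads
            count++;
        }                       // release the monitor; publish this write
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }
}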
Reads and writes to fields are linearizable if either the field is volatile, or the field is protected by a unique lock which is acquired by all readers and writers.
A thread can achieve mutual exclusion either by entering a synchronized block or method, which acquires an implicit lock,[14][2] or by acquiring an explicit lock (such as the ReentrantLock from the java.util.concurrent.locks package[15]). Both approaches have the same implications for memory behavior. If all accesses to a particular field are protected by the same lock, then reads and writes to that field are linearizable (atomic).
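An equivalent sketch of the counter above using an explicit ReentrantLock; the lock()/unlock() pair wrapped in try/finally plays the role of the synchronized block.

import java.util.concurrent.locks.ReentrantLock;

public class LockedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();            // acquire the explicit lock
        try {
            count++;
        } finally {
            lock.unlock();      // always release, even if the body throws
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}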
When applied to a field, the Java volatile keyword guarantees that:

- There is a global ordering on the reads and writes to a volatile variable. This implies that every thread accessing a volatile field will read its current value before continuing, instead of (potentially) using a cached value. (However, there is no guarantee about the relative ordering of volatile reads and writes with regular reads and writes, meaning that it's generally not a useful threading construct.)
- Accesses to volatile fields are linearizable. Reading a volatile field is like acquiring a lock: the working memory is invalidated and the volatile field's current value is reread from memory. Writing a volatile field is like releasing a lock: the volatile field is immediately written back to memory.
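A common illustrative use, sketched here with invented names and timings: a volatile flag used to stop a worker thread. The write in the main thread is guaranteed to become visible to the reading loop in the worker.

public class VolatileFlagExample {
    private static volatile boolean running = true;   // writes are visible across threads

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // do some work; each iteration re-reads the current value of running
            }
            System.out.println("Worker saw running == false and stopped.");
        });

        worker.start();
        Thread.sleep(200);
        running = false;   // the worker is guaranteed to eventually observe this write
    }
}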
A field declared to be final cannot be modified once it has been initialized.[17] An object's final fields are initialized in its constructor. As long as the this reference is not released from the constructor before the constructor returns, then the correct value of any final fields will be visible to other threads without synchronization.[18]
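A sketch of an immutable holder whose final fields are safely published under this rule, provided the constructor does not leak the this reference; the class is invented for illustration.

public final class Point {
    private final int x;   // final fields: fully initialized before the constructor returns
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
        // 'this' is not handed to any other thread here, so once another thread
        // obtains a reference to this Point, it sees the correct x and y.
    }

    public int x() { return x; }
    public int y() { return y; }
}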
Since JDK 1.2, Java has included a standard set of collection classes, the Java collections framework.
Doug Lea, who also participated in the Java collections framework implementation, developed a concurrency package, comprising several concurrency primitives and a large battery of collection-related classes.[19] This work was continued and updated as part of JSR 166 which was chaired by Doug Lea.
JDK 5.0 incorporated many additions and clarifications to the Java concurrency model. The concurrency APIs developed by JSR 166 were also included as part of the JDK for the first time. JSR 133 provided support for well-defined atomic operations in a multithreaded/multiprocessor environment.
Both the Java SE 6 and Java SE 7 releases introduced updated versions of the JSR 166 APIs as well as several new additional APIs.
Original source: https://en.wikipedia.org/wiki/Java_concurrency