The Java programming language and the Java virtual machine (JVM) have been designed to support concurrent programming, and all execution takes place in the context of threads. Objects and resources can be accessed by many separate threads; each thread has its own path of execution but can potentially access any object in the program. The programmer must ensure read and write access to objects is properly coordinated (or "synchronized") between threads. Thread synchronization ensures that objects are modified by only one thread at a time and that threads are prevented from accessing partially updated objects during modification by another thread. The Java language has built-in constructs to support this coordination.
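As a minimal sketch of these built-in constructs, the hypothetical Counter class below guards its mutable state with the object's intrinsic lock via the synchronized keyword, so only one thread at a time can modify or read the count:

import java.util.ArrayList;
import java.util.List;

public class Counter {
    private int count = 0;

    // Only one thread at a time can hold this object's lock,
    // so the read-modify-write of count cannot interleave.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        List<Thread> threads = new ArrayList<>();
        // Several threads increment the shared counter concurrently.
        for (int i = 0; i < 4; i++) {
            Thread t = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.increment();
            });
            t.start();
            threads.add(t);
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // prints 4000
    }
}

Without the synchronized keyword, the increments from different threads could interleave and some updates would be lost.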
Most implementations of the Java virtual machine run as a single process, and in the Java programming language concurrent programming is mostly concerned with threads (also called lightweight processes). Multiple processes can only be realized with multiple JVMs.
Threads share the process's resources, including memory and open files. This makes for efficient, but potentially problematic, communication. Every application has at least one thread, called the main thread. The main thread has the ability to create additional threads as Runnable or Callable objects. (The Callable interface is similar to Runnable, in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception.)
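The following sketch illustrates the difference between the two interfaces; the task bodies and values are illustrative only. The Runnable simply performs an action, while the Callable returns a value (here it is invoked directly for brevity; in practice it would typically be submitted to an executor, which returns a Future):

import java.util.concurrent.Callable;

public class TaskExamples {
    public static void main(String[] args) throws Exception {
        // A Runnable performs an action, returns no result,
        // and cannot throw a checked exception.
        Runnable printer = () -> System.out.println("Hello from a Runnable");
        Thread t = new Thread(printer);
        t.start();
        t.join();

        // A Callable returns a result and may throw a checked exception.
        Callable<Integer> answer = () -> 42;
        System.out.println("Callable result: " + answer.call());
    }
}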
Each thread can be scheduled on a different CPU core, or multiple threads can share one or more hardware processors through time-slicing. There is no generic solution for how Java threads are mapped to native OS threads; every JVM implementation can do it in a different way.
Each thread is associated with an instance of the class Thread. Threads can be managed either directly using Thread objects or using abstract mechanisms such as Executors and java.util.concurrent collections.
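The sketch below contrasts the two approaches under simple assumptions: a task is first run by constructing and starting a Thread directly, and then submitted to an ExecutorService, which owns its worker threads and hands the result back through a Future:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadManagement {
    public static void main(String[] args) throws Exception {
        // Direct management: create, start, and join a Thread object.
        Thread worker = new Thread(() -> System.out.println("run by a dedicated Thread"));
        worker.start();
        worker.join();

        // Executor-based management: the pool owns the threads;
        // the caller deals only with tasks and Futures.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> result = pool.submit(() -> "run by the thread pool");
        System.out.println(result.get());
        pool.shutdown();
    }
}

The executor approach decouples task submission from thread lifecycle management, which is why it is generally preferred over creating Thread objects by hand for anything beyond trivial cases.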