In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive that prevents state from being modified or accessed by multiple threads of execution at once. Locks enforce mutual exclusion concurrency control policies, and because there are several possible implementation methods, a variety of lock implementations exist for different applications.
Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement mandatory locks, where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access.
The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade.
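For instance, a readers-writer lock grants shared access to any number of concurrent readers while giving writers exclusive access. The fragment below is a minimal sketch using the POSIX pthread_rwlock_t type; the shared variable and function names are invented for illustration:

#include <pthread.h>

static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_data;                 /* illustrative shared state */

int read_data(void)                     /* many readers may hold the lock at once */
{
    int value;
    pthread_rwlock_rdlock(&rwlock);     /* shared (read) mode */
    value = shared_data;
    pthread_rwlock_unlock(&rwlock);
    return value;
}

void write_data(int value)              /* a writer gets exclusive access */
{
    pthread_rwlock_wrlock(&rwlock);     /* exclusive (write) mode */
    shared_data = value;
    pthread_rwlock_unlock(&rwlock);
}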
Another way to classify locks is by what happens when the lock strategy prevents the progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. With a spinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process re-scheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread.
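As an illustrative sketch, a spinlock can be built from an atomic test-and-set flag; the version below uses C11's atomic_flag and simply loops until the flag is acquired (the function names are invented):

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock(void)
{
    /* atomically set the flag and test its previous value;
       keep spinning while another thread already holds it */
    while (atomic_flag_test_and_set(&lock_flag))
        ;   /* busy-wait */
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock_flag);
}

A production spinlock would usually add a CPU pause hint or back off between retries rather than spinning at full speed.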
Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.
Uniprocessor architectures have the option of using uninterruptible sequences of instructions—using special instructions or instruction prefixes to disable interrupts temporarily—but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.
An atomic operation is required because of concurrency, where more than one task may execute the same logic. For example, consider the following C code:
if (lock == 0) {
    // lock free, set it
    lock = myPID;
}
The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both tasks will attempt to set the lock, not knowing that the other task is also setting the lock. Dekker's or Peterson's algorithm is a possible substitute if atomic locking operations are not available.
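With an atomic compare-and-swap, the test and the set happen as a single indivisible step, so at most one task can succeed. The following is a sketch using C11 atomics, reusing the lock and myPID names from the fragment above (the function name is invented):

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic int lock = 0;            /* 0 means the lock is free */

bool try_acquire(int myPID)
{
    int expected = 0;
    /* atomically: if lock == 0, set it to myPID; otherwise fail */
    return atomic_compare_exchange_strong(&lock, &expected, myPID);
}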
Careless use of locks can result in deadlock or livelock. A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and at run-time. (The most common strategy is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired in a specifically defined "cascade" order.)
Some languages support locks syntactically. An example in C# follows:
public class Account // This is a monitor of an account
{
    private decimal _balance = 0;
    private object _balanceLock = new object();

    public void Deposit(decimal amount)
    {
        // Only one thread at a time may execute this statement.
        lock (_balanceLock)
        {
            _balance += amount;
        }
    }

    public void Withdraw(decimal amount)
    {
        // Only one thread at a time may execute this statement.
        lock (_balanceLock)
        {
            _balance -= amount;
        }
    }
}
The code lock(this) can lead to problems if the instance can be accessed publicly.[1]
Similar to Java, C# can also synchronize entire methods, by using the MethodImplOptions.Synchronized attribute.[2][3]
[MethodImpl(MethodImplOptions.Synchronized)]
public void SomeMethod()
{
    // do stuff
}
Before being introduced to lock granularity, one needs to understand three concepts about locks: lock overhead, the extra resources for using locks, such as the memory allocated for locks, the CPU time to initialize and destroy them, and the time spent acquiring and releasing them; lock contention, which occurs whenever one process or thread attempts to acquire a lock held by another; and deadlock, the situation in which each of at least two tasks is waiting for a lock that the other task holds.
There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.
An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased lock contention. The more coarse the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock.[citation needed]
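To make the tradeoff concrete, the following sketch (the bucket array, stripe count, and function names are invented) protects the same data either with one coarse lock or with a small array of finer-grained "stripe" locks, assuming POSIX threads:

#include <pthread.h>

#define NBUCKETS 1024
#define NSTRIPES 16                                 /* fine granularity: 16 locks */

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;  /* coarse: one lock */
static pthread_mutex_t stripe_lock[NSTRIPES];       /* fine: one lock per stripe */
static int buckets[NBUCKETS];

void init_stripes(void)                             /* called once at startup */
{
    for (int i = 0; i < NSTRIPES; i++)
        pthread_mutex_init(&stripe_lock[i], NULL);
}

void update_coarse(int i, int value)                /* every update contends on one lock */
{
    pthread_mutex_lock(&table_lock);
    buckets[i] = value;
    pthread_mutex_unlock(&table_lock);
}

void update_fine(int i, int value)                  /* updates to different stripes proceed in parallel */
{
    pthread_mutex_t *m = &stripe_lock[i % NSTRIPES];
    pthread_mutex_lock(m);
    buckets[i] = value;
    pthread_mutex_unlock(m);
}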
In a database management system, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users.
Database locks can be used as a means of ensuring transaction synchronicity; i.e., when making transaction processing concurrent (interleaving transactions), using two-phase locking ensures that the concurrent execution of the transactions turns out equivalent to some serial ordering of the transactions. However, deadlocks become an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions or are detected using waits-for graphs. An alternative to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps.
There are mechanisms employed to manage the actions of multiple concurrent users on a database; the purpose is to prevent lost updates and dirty reads. The two types of locking are pessimistic locking and optimistic locking. With pessimistic locking, a record is locked as soon as it is read with the intent to update it, so no other user can modify it until the update completes. With optimistic locking, the record is not locked while it is being worked on; instead, the system checks at update time whether another user has changed the record since it was read, and rejects the update if so.
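A minimal sketch of the optimistic approach in C, assuming POSIX threads; the record structure, the version field, and the function name are invented for illustration. The caller reads value and version first, works without holding any lock, and then attempts the update:

#include <pthread.h>
#include <stdbool.h>

struct record {
    pthread_mutex_t mutex;   /* assumed initialized with pthread_mutex_init() */
    unsigned version;        /* incremented on every successful update */
    int value;
};

/* Optimistic update: the record is not locked while the caller works on it.
   At commit time a brief lock is taken only to check whether another user
   changed the record since it was read; if so, the caller must retry. */
bool optimistic_update(struct record *r, unsigned version_read, int new_value)
{
    bool ok;
    pthread_mutex_lock(&r->mutex);
    ok = (r->version == version_read);
    if (ok) {
        r->value = new_value;
        r->version++;
    }
    pthread_mutex_unlock(&r->mutex);
    return ok;               /* false: conflict detected, re-read and retry */
}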
Several variations and refinements of these major lock types exist, with respective variations of blocking behavior. If a first lock blocks another lock, the two locks are called incompatible; otherwise the locks are compatible. Often, the blocking interactions between lock types are presented in the technical literature as a lock compatibility table. The following is an example with the common, major lock types:
| Lock type  | read-lock | write-lock |
|------------|-----------|------------|
| read-lock  | ✔         | X          |
| write-lock | X         | X          |
Comment: In some publications, the table entries are simply marked "compatible" or "incompatible", or respectively "yes" or "no".[5]
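Read as a predicate, the table says two lock requests are compatible only when both are read locks. A trivial sketch in C (the names are invented):

enum lock_mode { READ_LOCK, WRITE_LOCK };

/* Compatible only when both the held lock and the requested lock
   are read locks, matching the table above. */
int compatible(enum lock_mode held, enum lock_mode requested)
{
    return held == READ_LOCK && requested == READ_LOCK;
}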
Lock-based resource protection and thread/process synchronization have many disadvantages: locks add overhead on every access even when the chance of collision is low, they are a source of contention that limits parallelism, they can cause deadlock, livelock, and priority inversion, and errors in their use (such as forgetting to release a lock or acquiring locks in the wrong order) are hard to find and debug.
Some concurrency control strategies avoid some or all of these problems. For example, a funnel or serializing tokens can avoid the biggest problem: deadlocks. Alternatives to locking include non-blocking synchronization methods, like lock-free programming techniques and transactional memory. However, such alternative methods often require that the actual lock mechanisms be implemented at a more fundamental level of the operating software. Therefore, they may only relieve the application level from the details of implementing locks, with the problems listed above still needing to be dealt with beneath the application.
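As a trivial sketch of one non-blocking technique, a shared counter can be maintained with an atomic fetch-and-add instead of a lock, so no thread ever blocks waiting for another (C11 atomics; the counter and function names are invented):

#include <stdatomic.h>

static atomic_long counter = 0;

void increment(void)
{
    /* the hardware performs the read-modify-write as one atomic step */
    atomic_fetch_add(&counter, 1);
}

long current(void)
{
    return atomic_load(&counter);
}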
In most cases, proper locking depends on the CPU providing a method of atomic instruction stream synchronization (for example, the addition or deletion of an item into a pipeline requires that all contemporaneous operations needing to add or delete other items in the pipe be suspended during the manipulation of the memory content required to add or delete the specific item). Therefore, an application can often be more robust when it recognizes the burdens it places upon an operating system and is capable of graciously recognizing the reporting of impossible demands.[citation needed]
One of lock-based programming's biggest problems is that "locks don't compose": it is hard to combine small, correct lock-based modules into equally correct larger programs without modifying the modules or at least knowing about their internals. Simon Peyton Jones (an advocate of software transactional memory) gives the following example of a banking application:[6] design a class Account that allows multiple concurrent clients to deposit or withdraw money to an account, and give an algorithm to transfer money from one account to another.
The lock-based solution to the first part of the problem is:
class Account:
    member balance: Integer
    member mutex: Lock

    method deposit(n: Integer)
        mutex.lock()
        balance ← balance + n
        mutex.unlock()

    method withdraw(n: Integer)
        deposit(−n)
The second part of the problem is much more complicated. A transfer routine that is correct for sequential programs would be
function transfer(from: Account, to: Account, amount: Integer)
    from.withdraw(amount)
    to.deposit(amount)
In a concurrent program, this algorithm is incorrect because when one thread is halfway through transfer, another might observe a state where amount has been withdrawn from the first account, but not yet deposited into the other account: money has gone missing from the system. This problem can only be fixed completely by putting locks on both accounts prior to changing either one, but then the locks have to be placed according to some arbitrary, global ordering to prevent deadlock:
function transfer(from: Account, to: Account, amount: Integer)
    if from < to    // arbitrary ordering on the locks
        from.lock()
        to.lock()
    else
        to.lock()
        from.lock()
    from.withdraw(amount)
    to.deposit(amount)
    from.unlock()
    to.unlock()
This solution gets more complicated when more locks are involved, and the transfer function needs to know about all of the locks, so they cannot be hidden.
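The same ordering idea can be rendered with real mutexes. The sketch below is an illustrative C version assuming POSIX threads, using the accounts' addresses as the arbitrary global order; the structure and names are invented, and from and to are assumed to be distinct:

#include <pthread.h>
#include <stdint.h>

struct account {
    long balance;
    pthread_mutex_t mutex;      /* assumed initialized with pthread_mutex_init() */
};

/* Locks are always taken in a fixed global order (here, by account address),
   so two concurrent transfers between the same accounts cannot deadlock. */
void transfer(struct account *from, struct account *to, long amount)
{
    struct account *first  = ((uintptr_t)from < (uintptr_t)to) ? from : to;
    struct account *second = ((uintptr_t)from < (uintptr_t)to) ? to : from;

    pthread_mutex_lock(&first->mutex);
    pthread_mutex_lock(&second->mutex);

    from->balance -= amount;
    to->balance   += amount;

    pthread_mutex_unlock(&second->mutex);
    pthread_mutex_unlock(&first->mutex);
}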
Programming languages vary in their support for synchronization:
- Visual C++ provides the synchronize attribute of methods to be synchronized, but this is specific to COM objects in the Windows architecture and Visual C++ compiler.[10] C and C++ can easily access any native operating system locking features.
- C# provides the lock keyword on a thread to ensure its exclusive access to a resource.
- Visual Basic (.NET) provides a SyncLock keyword like C#'s lock keyword.
- Java provides the keyword synchronized to lock code blocks, methods or objects[11] and libraries featuring concurrency-safe data structures.
- Objective-C provides the keyword @synchronized[12] to put locks on blocks of code and also provides the classes NSLock,[13] NSRecursiveLock,[14] and NSConditionLock[15] along with the NSLocking protocol[16] for locking as well.
- PHP provides a Mutex class in the pthreads extension.[18]
- Python provides a low-level mutex mechanism with a Lock class from the threading module.[19]
- Fortran provides the lock_type derived type in the intrinsic module iso_fortran_env and the lock/unlock statements since Fortran 2008.[20]
- Rust provides the Mutex<T>[22] struct.[23]
- x86 assembly provides the LOCK prefix on certain operations to guarantee their atomicity.
- Haskell implements locking via a mutable data structure called an MVar, which can either be empty or contain a value, typically a reference to a resource. A thread that wants to use the resource 'takes' the value of the MVar, leaving it empty, and puts it back when it is finished. Attempting to take a resource from an empty MVar results in the thread blocking until the resource is available.[24] As an alternative to locking, an implementation of software transactional memory also exists.[25]

A mutex is a locking mechanism that sometimes uses the same basic implementation as the binary semaphore. However, they differ in how they are used. While a binary semaphore may be colloquially referred to as a mutex, a true mutex has a more specific use-case and definition, in that only the task that locked the mutex is supposed to unlock it. This constraint aims to handle some potential problems of using semaphores:
In Ada, a protected object provides coordinated access to shared data, through calls on its visible protected operations, which can be protected subprograms or protected entries.