3.1 Active object
The Active Object pattern decouples method execution from method invocation: a method runs on a thread of its own rather than on the caller's thread. The purpose of the pattern is to introduce concurrency by combining asynchronous method calls with a scheduler that processes the pending requests.
The pattern exists in a simplified and a classic variant. The classic variant has six elements:
- A proxy object that presents the public interface to clients.
- An interface that defines the access methods of the active object.
- A queue of pending requests from clients.
- A scheduler that decides the order in which requests are executed.
- The implementation of the active object's methods.
- A callback procedure or variable through which the client receives the result.
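The elements above can be sketched in Java with a single-threaded executor standing in for the scheduler and its request queue, and a Future playing the callback role. The class and method names here are illustrative, not part of any standard API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A minimal Active Object sketch: the public method is the proxy, the
// single-threaded executor is the scheduler plus request queue, and the
// returned Future lets the client pick up the result asynchronously.
public class ActiveCounter {
    private final ExecutorService scheduler = Executors.newSingleThreadExecutor();
    private int count = 0; // servant state, touched only by the scheduler thread

    // Proxy method: callable from any thread, executed on the scheduler thread.
    public Future<Integer> increment() {
        return scheduler.submit(() -> ++count);
    }

    public void shutdown() {
        scheduler.shutdown();
    }

    public static void main(String[] args) throws Exception {
        ActiveCounter c = new ActiveCounter();
        Future<Integer> f1 = c.increment();
        Future<Integer> f2 = c.increment();
        System.out.println(f1.get() + " " + f2.get()); // prints "1 2"
        c.shutdown();
    }
}
```

Because all requests funnel through one scheduler thread, `count` needs no additional locking.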
3.2 Lock
The Lock pattern is a synchronization mechanism that enforces exclusive access to a shared resource among multiple threads. Locks are one way of enforcing a concurrency-control policy.
Most locks are advisory ("soft"): each thread is expected to cooperate by acquiring the lock before accessing the corresponding shared resource.
Some systems, however, provide mandatory locking, where an unauthorized attempt to access a locked resource is aborted by raising an exception in the thread that attempted the access.
The simplest type of lock is a binary semaphore, which makes no distinction between access modes to the data: there is no separate shared (read-only) versus exclusive (read-write) mode. More elaborate schemes add a shared mode, in which several threads can hold the lock simultaneously for read-only access, while exclusive mode is required by algorithms that update or delete the data.
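The shared versus exclusive modes just described can be sketched with the JDK's ReentrantReadWriteLock; the wrapper class here is illustrative:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Shared (read-only) vs. exclusive (read-write) access to one value.
public class SharedValue {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int value = 0;

    public int read() {            // shared mode: many readers may hold it at once
        lock.readLock().lock();
        try { return value; } finally { lock.readLock().unlock(); }
    }

    public void write(int v) {     // exclusive mode: a single writer at a time
        lock.writeLock().lock();
        try { value = v; } finally { lock.writeLock().unlock(); }
    }
}
```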
Types of locks are distinguished by their strategy for blocking the continuation of the thread. In most implementations, a lock request suspends the thread until the locked resource becomes available.
A spinlock instead waits in a loop ("spins") until access is granted. Spinning is very efficient when threads wait only briefly, since it avoids the cost of rescheduling threads; the cost of waiting becomes significant if one of the threads holds the lock for a long time.
An efficient lock implementation requires hardware support, usually in the form of one or more atomic instructions such as "test-and-set", "fetch-and-add", or "compare-and-swap". These instructions make it possible to test whether the lock is free and, if so, acquire it, in a single uninterruptible step.
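A spinlock built on one of these atomic operations can be sketched as follows; compareAndSet on AtomicBoolean is the JDK's compare-and-swap, and the class name is illustrative:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal spinlock: acquire by atomically flipping the flag from
// false to true, busy-waiting until the compare-and-swap succeeds.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are spinning (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```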
3.3 Monitor
The Monitor pattern is a high-level mechanism for process interaction and synchronization that mediates access to shared resources: an approach to synchronizing two or more tasks that use a common resource, usually a hardware device or a set of variables.
In monitor-based multitasking, the compiler or interpreter inserts the lock/unlock code into appropriately marked routines transparently, freeing the programmer from calling synchronization primitives explicitly.
The monitor consists of:
- a set of procedures that interact with a shared resource
- a mutex
- variables associated with this resource
- an invariant that defines conditions to avoid a race condition
A monitor procedure acquires the mutex before starting its work and holds it until the procedure either returns or waits on a condition. If every procedure guarantees that the invariant holds before it releases the mutex, no task can ever observe the resource in a state that admits a race condition.
This is how the synchronized keyword works in Java, together with the wait() and notify() methods.
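The monitor structure above can be sketched in Java with synchronized, wait() and notifyAll(); the one-slot buffer here is an illustrative example, with the "slot is full/empty" flag playing the role of the invariant's condition:

```java
// A one-slot buffer as a monitor: synchronized methods acquire the object's
// mutex; wait() suspends a thread until the condition holds, notifyAll()
// wakes the waiters after the state changes.
public class OneSlotBuffer<T> {
    private T item;          // the shared resource
    private boolean full;    // condition variable guarded by the monitor

    public synchronized void put(T value) throws InterruptedException {
        while (full) wait();          // wait until the slot is empty
        item = value;
        full = true;
        notifyAll();                  // wake consumers
    }

    public synchronized T take() throws InterruptedException {
        while (!full) wait();         // wait until the slot is filled
        T value = item;
        full = false;
        notifyAll();                  // wake producers
        return value;
    }
}
```

Note the condition is re-checked in a while loop after each wait(), which is what preserves the invariant against spurious wakeups.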
3.4 Double check locking
Double-checked locking is a concurrency design pattern intended to reduce the overhead of acquiring a lock.
The locking criterion is first tested without any synchronization; a thread attempts to acquire the lock only if that check indicates the lock is actually needed.
//Double-Checked Locking
public final class Singleton {
    private static volatile Singleton instance; // volatile is required for safe publication
    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
How to create a singleton object in a thread-safe environment?
public static Singleton getInstance() {
    if (instance == null)
        instance = new Singleton();
    return instance;
}
If the Singleton object is requested from different threads, several instances may end up being created at the same time, which is unacceptable. It is therefore reasonable to wrap the creation in a synchronized block.
public static Singleton getInstance() {
    synchronized (Singleton.class) {
        if (instance == null)
            instance = new Singleton();
        return instance;
    }
}
This approach works, but it has a drawback: after the object has been created, every subsequent call still enters the synchronized block, so threads serialize on the lock for no benefit. The code can therefore be optimized a little:
public static Singleton getInstance() {
    if (instance != null)      // fast path, without the lock
        return instance;
    synchronized (Singleton.class) {
        if (instance == null)
            instance = new Singleton();
        return instance;       // instance must be volatile for this to be safe
    }
}
In some languages and/or on some hardware it is impossible to implement this pattern safely, which is why it is sometimes called an anti-pattern. These issues led to the introduction of the strict "happens-before" ordering relation in the Java Memory Model and the C++ memory model.
The pattern is typically used to reduce the overhead of lazy initialization in multi-threaded programs, for example in the Singleton design pattern. With lazy initialization, initializing a variable is deferred until its value is first needed in a computation.
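As a sketch of a Java alternative that sidesteps these pitfalls entirely, the initialization-on-demand holder idiom gets safe lazy initialization from the class loader, which guarantees the nested class is initialized exactly once, with no explicit locking:

```java
// Lazy, thread-safe Singleton without locks: the JVM initializes Holder
// (and thus INSTANCE) only on the first call to getInstance().
public final class Singleton {
    private Singleton() {}

    private static final class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE; // triggers Holder's initialization on first use
    }
}
```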
3.5 Scheduler
The Scheduler is a concurrency design pattern that provides a mechanism for implementing a scheduling policy while remaining independent of any particular policy. It controls the order in which threads execute sequential code, using an object that explicitly sequences the waiting threads.
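One way to sketch this idea in Java is with a fair ReentrantLock: the lock is the object that explicitly sequences the waiting threads (a FIFO policy here), while the sequential code itself stays independent of that policy. The wrapper class is illustrative:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sequences access to a critical section; swapping the fairness flag (or the
// lock implementation) changes the scheduling policy without touching callers.
public class SequencedSection {
    private final ReentrantLock scheduler = new ReentrantLock(true); // fair = FIFO order

    public void run(Runnable criticalSection) {
        scheduler.lock();          // threads are admitted in arrival order
        try {
            criticalSection.run(); // sequential code, one thread at a time
        } finally {
            scheduler.unlock();
        }
    }
}
```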