synchronized solves the atomicity problem

What is atomicity?

An operation is atomic if it executes as a single, indivisible step. If multiple threads perform the same compound operation (such as i++) on shared data at the same time, the steps of one thread can interleave with another's and updates can be lost.

1. How to reproduce the problem

public class Demo {
    private int i = 0;

    private void incr() {
        i++;
    }

    public static void main(String[] args) throws InterruptedException {
        Demo demo = new Demo();
        Thread[] threads = new Thread[2];
        for (int i = 0; i < 2; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    demo.incr();
                }
            });
            threads[i].start(); // start the thread
        }
        // join: the main thread waits for both worker threads to finish
        threads[0].join();
        threads[1].join();
        System.out.println("The result of the calculation is------>" + demo.i);
    }
}

Result: because i++ is not atomic, the final value is usually less than the expected 2000.

Why? Let's look at how this runs at the bytecode level.
Compile the .java file into a .class file, then inspect the bytecode with javap:
javap -v xxx.class
Process:
How many steps does i++ take in the bytecode?
  1. First, getfield reads the current value of i and pushes it onto the operand stack
  2. iconst_1 then pushes the constant 1 onto the operand stack
  3. iadd adds the two values
  4. putfield writes the calculated value back to the field i
Now suppose the scheduler switches to thread 2 between any of these four steps, and thread 2 runs its own i++ to completion. When thread 1 resumes, it still works with the stale value of i it read earlier, so thread 2's update is overwritten as if it had never happened. This causes the problem.
This is the atomicity problem in a multithreaded environment. So how do we solve it?
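The four steps can be seen directly in the javap output. As a hedged illustration, the bytecode of an incr() method containing i++ looks roughly like this (the constant-pool index #2 is an assumption and varies by class):

```
aload_0        // push `this`
dup            // duplicate `this` (one copy to read the field, one to write it back)
getfield #2    // Field i:I  -- read the current value of i onto the operand stack
iconst_1       // push the constant 1
iadd           // add the two values
putfield #2    // Field i:I  -- write the result back to i
```

A thread switch between the getfield and the putfield is exactly where an update can be lost.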

2. How to solve it?

On the surface this looks like a problem of multiple threads operating on the same variable; in fact, the root cause is that the single line i++ is not atomic. That is why the problem appears only in a multithreaded environment.
In other words, we just need to ensure that while one thread is executing i++, no other thread can execute it at the same time. The synchronized lock, the focus of what follows, does exactly that.
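Applying this to the demo above: making incr() a synchronized method guarantees that only one thread at a time executes i++, so the result is always 2000. A minimal sketch (the class name SyncDemo is made up; incr() and value() are package-private here for easy testing):

```java
public class SyncDemo {
    private int i = 0;

    // synchronized: only one thread at a time can execute this method on
    // the same instance, so the read-modify-write of i++ becomes atomic
    synchronized void incr() {
        i++;
    }

    int value() {
        return i;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncDemo demo = new SyncDemo();
        Thread[] threads = new Thread[2];
        for (int t = 0; t < 2; t++) {
            threads[t] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    demo.incr();
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join(); // wait for both workers before reading the result
        }
        System.out.println("The result of the calculation is------>" + demo.value());
    }
}
```

Because join() establishes a happens-before edge, the main thread is also guaranteed to see the final value after both workers finish.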


synchronized ultimately achieves mutual exclusion: it prevents two threads from entering a critical section guarded by the same monitor object at the same time

1. Scope of action

synchronized can be applied in three ways; each placement determines the granularity of the lock:
1. On an instance method: locks the current instance. The lock on the instance must be acquired before entering the synchronized code
public synchronized void one() { }
2. On a static method: locks the Class object of the current class. The lock on the Class object must be acquired before entering the synchronized code
public static synchronized void two() { }
3. On a code block with an explicit lock object: locks the given object. The lock on that object must be acquired before entering the synchronized block
public void three() { synchronized (ThreadOneDemo.class) { } }
Whichever object is passed in becomes the monitor. If it is the Class object (or a static field), the lock is effectively global across all instances
The scope of the lock can therefore be controlled, and it is tied to the life cycle of the lock object
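The difference in granularity can be checked at runtime with Thread.holdsLock. A minimal sketch (the class LockScopeDemo and its methods are made up for illustration):

```java
public class LockScopeDemo {
    // A synchronized instance method locks `this`, not the Class object.
    synchronized boolean[] instanceLocked() {
        return new boolean[] {
            Thread.holdsLock(this),                // true: instance lock is held
            Thread.holdsLock(LockScopeDemo.class)  // false: class lock is not held
        };
    }

    // A static synchronized method locks the Class object instead.
    static synchronized boolean classLocked() {
        return Thread.holdsLock(LockScopeDemo.class); // true
    }

    public static void main(String[] args) {
        boolean[] r = new LockScopeDemo().instanceLocked();
        System.out.println("holds this=" + r[0] + ", holds class=" + r[1]);
        System.out.println("static method holds class=" + classLocked());
    }
}
```

Because the two forms lock different monitors, a thread inside the instance method does not block another thread inside the static method.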

2. Implementation model of lock

3. How is it implemented?

How does synchronized implement the lock, and where is the lock information stored? Take the scenario analyzed above: thread A grabs the lock, so how does thread B know the lock is already taken? There must be a mark, and that mark must be stored somewhere.

In fact, the lock information is stored in the object header

4. Mark word (object header)

This brings us to the mark word, part of the object header. Simply put, the object header describes the layout and storage form of an object in JVM memory.
In the HotSpot virtual machine, the in-memory layout of an object is divided into three areas: the object header, instance data, and alignment padding.



That is, every time a thread tries to enter a synchronized block, it reads the header of the lock object, learns the current lock mark, and acts on it accordingly.
The ClassLayout tool (from OpenJDK's JOL library) can be used to print the object header.
Add the dependency:
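The dependency snippet is missing from the original; the object-header printing below uses OpenJDK's JOL library, whose Maven coordinates are as follows (the version number is an assumption, pick a current release):

```xml
<dependency>
    <groupId>org.openjdk.jol</groupId>
    <artifactId>jol-core</artifactId>
    <version>0.16</version>
</dependency>
```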


What does the object header look like after locking?

import org.openjdk.jol.info.ClassLayout;

public class Demo {
    public static void main(String[] args) {
        Demo demo = new Demo(); // how this object is stored and laid out in memory
        synchronized (demo) {
            System.out.println(ClassLayout.parseInstance(demo).toPrintable());
        }
    }
}


From the above we know that the lock type is stored in the object header. What are the specific lock types? Read on.
Synchronized lock upgrade
JDK 6 introduced many optimizations to the lock implementation, such as spin locks, adaptive spinning, lock elimination, lock coarsening, biased locks and lightweight locks, to reduce the overhead of lock operations

1. Synchronized lock classification

  • No lock
  • Biased lock (enabled after a default delay of 4s)
When a thread enters a synchronized block and no other thread competes, the lock is biased toward that thread. The next time the same thread enters, it does not need to acquire the lock again; it can enter directly
  • Lightweight lock
When the lock is biased toward thread A and thread B starts competing for it, the lock is upgraded to a lightweight lock: the competing thread spins instead of blocking, trying to acquire the lock as soon as possible
  • Heavyweight lock
    1. Acquiring it requires a switch from user mode to kernel mode, since the lock is backed by the operating system
    2. Threads that fail to acquire the lock block until they are woken up
The purpose of this design is to reduce the performance overhead of heavyweight locks and to handle low-contention cases as cheaply as possible. Biased and lightweight locks are built on spinning and CAS, which is a lock-free approach compared with a heavyweight lock
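The spinning idea behind lightweight locks can be sketched with a minimal CAS-based spin lock. This is a made-up illustration (SpinLockDemo is not the JVM's actual implementation), assuming JDK 9+ for Thread.onSpinWait:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockDemo {
    static final AtomicBoolean locked = new AtomicBoolean(false);
    static int count = 0;

    static void lock() {
        // spin until we flip false -> true; no kernel call, no blocking
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint while busy-waiting (JDK 9+)
        }
    }

    static void unlock() {
        locked.set(false); // volatile write: releases the lock and publishes updates
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int j = 0; j < 1000; j++) {
                lock();
                count++; // protected by the spin lock
                unlock();
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count); // 2000
    }
}
```

Unlike a heavyweight lock, no system call is made; a waiting thread simply burns CPU retrying the CAS, which is cheap when critical sections are short.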

2. Lock upgrade process

1. A thread tries to acquire the lock and checks whether biased locking is enabled
2. If it is enabled, the lock is marked as biased to that thread. If it is disabled, or contention has already occurred, the lock is upgraded to a lightweight lock
  • If two threads compete for the lock at the same time, the lock is not marked as biased at all; it is marked as lightweight directly (because biased locking is enabled with a delay)
  • If one thread holds the lock, a competing thread does not block immediately while the holder executes; it retries again and again, roughly 10 times. This busy retrying is also called a spin lock
3. If the spinning thread still cannot get the lock, the lock is marked as heavyweight and the waiting thread enters the blocking queue until it is woken up

A more detailed explanation of the lock upgrade process:

1. By default, biased locking is enabled and the biased thread ID is 0 (anonymously biased)
2. When a thread first acquires the lock, it takes the biased lock: the thread ID in the mark word is changed to the ID of the acquiring thread
3. If another thread competes, the biased lock is revoked and upgraded to a lightweight lock. Each competing thread creates a LockRecord in its own stack frame and uses a CAS operation to point the mark word at its own LockRecord; the thread whose CAS succeeds holds the lock
4. If contention intensifies, for example a thread spins more than 10 times (configurable with the -XX:PreBlockSpin parameter) or the number of spinning threads exceeds the number of CPU cores (since JDK 6, adaptive spinning lets the JVM tune the spin time automatically based on the previous round of contention), the lock is upgraded to a heavyweight lock: the JVM requests a mutex from the operating system (a Linux mutex), and waiting threads are suspended and placed in a wait queue
Using the lock mark bits discussed above as a reference, let's verify this with code
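For reference, the commonly cited 64-bit HotSpot mark word states can be summarized as follows (a simplified sketch; exact bit-field widths vary by JVM version):

```
| lock state  | mark word contents                      | lock bits |
| unlocked    | identity hash | age | biased=0          | 01        |
| biased      | thread ID | epoch | age | biased=1      | 01        |
| lightweight | pointer to LockRecord in thread stack   | 00        |
| heavyweight | pointer to monitor (ObjectMonitor)      | 10        |
| GC marked   | forwarding information                  | 11        |
```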

2. Lightweight lock

import org.openjdk.jol.info.ClassLayout;

public class ThreadTwoDemo3 {
    public static void main(String[] args) {
        Person person = new Person();
        synchronized (person) {
            System.out.println("--------------------After locking-------------------");
            System.out.println(ClassLayout.parseInstance(person).toPrintable());
        }
    }

    public static class Person {
    }
}



3. Biased lock

By default, biased locking is enabled with a delay of 4 seconds. Why is it designed this way? Because the JVM itself starts several threads during startup, and these threads run plenty of synchronized code that is contended as soon as it executes. If biased locking were active immediately, those locks would keep being biased, revoked, and upgraded, which is inefficient. The delay can be set to 0 with the JVM parameter -XX:BiasedLockingStartupDelay=0
If the delay is disabled this way, new objects are biased by default
At this point, main has obtained the biased lock
Here both the first and the second object show a biased lock, because by default objects start out anonymously biased

4. Heavyweight lock

The heavyweight lock is implemented with a monitor. Under fierce contention, when a thread repeatedly fails to acquire the lock, the lock is upgraded to a heavyweight lock.
import org.openjdk.jol.info.ClassLayout;

public class ClassLayoutWeightDemo {
    public static void main(String[] args) throws InterruptedException {
        ClassLayoutWeightDemo testDemo = new ClassLayoutWeightDemo();
        Thread t1 = new Thread(() -> {
            synchronized (testDemo) {
                System.out.println("t1 lock ing");
                System.out.println(ClassLayout.parseInstance(testDemo).toPrintable());
            }
        });
        t1.start();
        synchronized (testDemo) {
            System.out.println("main lock ing");
            System.out.println(ClassLayout.parseInstance(testDemo).toPrintable());
        }
    }
}

The output shows that under contention the lock bits of the mark word are [10], which marks a heavyweight lock

      main lock ing
      com.example.gupao_thread_v1.synchron02.ClassLayoutWeightDemo object internals:
       OFFSET  SIZE   TYPE DESCRIPTION                               VALUE
            0     4        (object header)                           ca c9 e4 02 (11001010 11001001 11100100 00000010) (48548298)
            4     4        (object header)                           00 00 00 00 (00000000 00000000 00000000 00000000) (0)
            8     4        (object header)                           05 c1 00 f8 (00000101 11000001 00000000 11111000) (-134168315)
           12     4        (loss due to the next object alignment)
      Instance size: 16 bytes
      Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
      t1 lock ing
      com.example.gupao_thread_v1.synchron02.ClassLayoutWeightDemo object internals:
       OFFSET  SIZE   TYPE DESCRIPTION                               VALUE
            0     4        (object header)                           ca c9 e4 02 (11001010 11001001 11100100 00000010) (48548298)
            4     4        (object header)                           00 00 00 00 (00000000 00000000 00000000 00000000) (0)
            8     4        (object header)                           05 c1 00 f8 (00000101 11000001 00000000 11111000) (-134168315)
           12     4        (loss due to the next object alignment)
      Instance size: 16 bytes
      Space losses: 0 bytes internal + 4 bytes external = 4 bytes total



      Posted by Davidc316 on Sat, 14 May 2022 12:08:53 +0300