Hongmeng kernel source code analysis (Task chapter) | What are the internal organs of a task?

Hongmeng kernel source code, Chinese annotated edition < Gitee repo | CSDN repo | Github repo | Coding repo >: an intensive reading of the kernel source with Chinese annotations, digging deep into the foundations and mapping out the low-level architecture; the four code repositories are updated in sync every day

Hongmeng source code analysis series < CSDN | oschina | weharmony | official account >: Q&A guides, everyday-life metaphors, tables and diagrams; the mainstream sites are updated in sync every day

 
This article answers one question
What exactly is a task?
In the Hongmeng kernel, a task is a thread; you can also call it a piece of homework. Before reading this article, it is recommended to first read Hongmeng kernel source code analysis (a required story), the unfinished story of Simon and Jinlian < CSDN | OSCHINA | WeHarmony | official account >, or follow the official account Hongmeng kernel source code analysis, which tells the life of a task through everyday metaphors

Hongmeng kernel source code analysis is positioned to dig deep into the kernel's foundations and map out its low-level architecture. That means looking at the real thing and dissecting it. The original definition of LosTaskCB is shown below; this article goes through it piece by piece to see what its internal organs are

typedef struct {
    VOID            *stackPointer;      /**< Task stack pointer */ //Stack pointer in non user mode
    UINT16          taskStatus;         /**< Task status */   //Status flags; a task can carry several at once, each identified by a bit
    UINT16          priority;           /**< Task priority */  //Task priority [0:31], the default is level 31
    UINT16          policy;    //Task scheduling policy (one of three, e.g. LOS_SCHED_RR)
    UINT16          timeSlice;          /**< Remaining time slice *///Remaining time slice
    UINT32          stackSize;          /**< Task stack size */  //Stack size in non user mode
    UINTPTR         topOfStack;         /**< Task stack top */  //Stack top bottom = top + size in non user mode
    UINT32          taskID;             /**< Task ID */    //Task ID: the task pool is essentially a large array. ID is the index of the array. The default value is < 128
    TSK_ENTRY_FUNC  taskEntry;          /**< Task entrance function */ //Task execution entry function
    VOID            *joinRetval;        /**< pthread adaption */ //Used to store the return value of the join thread
    VOID            *taskSem;           /**< Task-held semaphore */ //Which semaphore is task waiting for
    VOID            *taskMux;           /**< Task-held mutex */  //Which lock is task waiting for
    VOID            *taskEvent;         /**< Task-held event */  //Which event is task waiting for
    UINTPTR         args[4];            /**< Parameter, of which the maximum number is 4 */ //Parameters of the entry function, such as main (int argc,char *argv [])
    CHAR            taskName[OS_TCB_NAME_LEN]; /**< Task name */ //Name of the task
    LOS_DL_LIST     pendList;           /**< Task pend node */  //If the task is blocked, this node is linked into the list of whatever it is blocking on, e.g. in OsTaskWait
    LOS_DL_LIST     threadList;         /**< thread list */   //Hang to the thread linked list of the process
    SortLinkList    sortList;           /**< Task sortlink node */ //Linked to the task execution linked list of cpu core
    UINT32          eventMask;          /**< Event mask */   //Event masking
    UINT32          eventMode;          /**< Event mode */   //Event mode
    UINT32          priBitMap;          /**< BitMap for recording the change of task priority, //A task's priority changes frequently while it runs; this bitmap records every priority it has held
                                             the priority can not be greater than 31 */   //e.g. 01001011 means it once held priorities 0, 1, 3 and 6
    INT32           errorNo;            /**< Error Num */
    UINT32          signal;             /**< Task signal */ //Task signal type, (SIGNAL_NONE,SIGNAL_KILL,SIGNAL_SUSPEND,SIGNAL_AFFI)
    sig_cb          sig;    //Signal control block; signals are used here for inter-process communication, similar to the Linux signal module
#if (LOSCFG_KERNEL_SMP == YES)
    UINT16          currCpu;            /**< CPU core number of this task is running on */ //The CPU core this task is currently running on
    UINT16          lastCpu;            /**< CPU core number of this task is running on last time */ //The CPU core this task ran on last time
    UINT16          cpuAffiMask;        /**< CPU affinity mask, support up to 16 cores */ //CPU affinity mask, supporting up to 16 cores; affinity matters: with multiple cores, keeping a task on one core improves efficiency
    UINT32          timerCpu;           /**< CPU core number of this task is delayed or pended */ //The CPU core on which this task is delayed or pended
#if (LOSCFG_KERNEL_SMP_TASK_SYNC == YES)
    UINT32          syncSignal;         /**< Synchronization for signal handling */ //Used for signal synchronization between CPUs
#endif
#if (LOSCFG_KERNEL_SMP_LOCKDEP == YES) //Deadlock detection switch
    LockDep         lockDep;
#endif
#if (LOSCFG_KERNEL_SCHED_STATISTICS == YES) //Scheduling statistics switch; turning it on obviously costs performance, so Hongmeng keeps it off by default
    SchedStat       schedStat;          /**< Schedule statistics */ //Scheduling statistics
#endif
#endif
    UINTPTR         userArea;   //User-space area; set at run time and differs depending on the running state
    UINTPTR         userMapBase;  //Stack bottom position in user mode
    UINT32          userMapSize;        /**< user thread stack size ,real size : userMapSize + USER_STACK_MIN_SIZE */
    UINT32          processID;          /**< Which belong process *///Process ID
    FutexNode       futex;    //Implements fast user-space locking (futex)
    LOS_DL_LIST     joinList;           /**< join list */ //Join list: lets tasks wait for each other to finish and be released
    LOS_DL_LIST     lockList;           /**< Hold the lock list */ //List of locks this task currently holds
    UINT32          waitID;             /**< Wait for the PID or GID of the child process */ //PID or GID of the child process being waited for
    UINT16          waitFlag;           /**< The type of child process that is waiting, belonging to a group or parent,
                                             a specific child process, or any child process */
#if (LOSCFG_KERNEL_LITEIPC == YES)
    UINT32          ipcStatus;   //IPC status
    LOS_DL_LIST     msgListHead;  //Head of the message queue; messages waiting to be read by this task hang here
    BOOL            accessMap[LOSCFG_BASE_CORE_TSK_LIMIT];//Access map: one flag per task indicating whether that task can be accessed; LOSCFG_BASE_CORE_TSK_LIMIT is the size of the task pool
#endif
} LosTaskCB;

  
The structure is fairly complex. Although every member above has been annotated, it is still not clear enough: there is no grouping and no hierarchy, so it needs to be sorted once more. The author breaks it down into the following six blocks and analyzes them one by one:

The first block: multi-core CPU support

#if (LOSCFG_KERNEL_SMP == YES) //Multi-core (SMP) support
    UINT16          currCpu;            /**< CPU core number of this task is running on */ //The CPU core this task is currently running on
    UINT16          lastCpu;            /**< CPU core number of this task is running on last time */ //The CPU core this task ran on last time
    UINT16          cpuAffiMask;        /**< CPU affinity mask, support up to 16 cores */ //CPU affinity mask, supporting up to 16 cores; affinity matters: with multiple cores, keeping a task on one core improves efficiency
    UINT32          timerCpu;           /**< CPU core number of this task is delayed or pended */ //The CPU core on which this task is delayed or pended
#if (LOSCFG_KERNEL_SMP_TASK_SYNC == YES)
    UINT32          syncSignal;         /**< Synchronization for signal handling */ //Used for signal synchronization between CPUs
#endif
#if (LOSCFG_KERNEL_SMP_LOCKDEP == YES) //Deadlock detection switch
    LockDep         lockDep;
#endif
#if (LOSCFG_KERNEL_SCHED_STATISTICS == YES) //Scheduling statistics switch; turning it on obviously costs performance, so Hongmeng keeps it off by default
    SchedStat       schedStat;          /**< Schedule statistics */ //Scheduling statistics
#endif
#endif

  
The Hongmeng kernel supports multiple CPU cores. Everyone knows that more cores mean more speed, but everything has two sides: while enjoying the benefits of a thing, you also have to bear the troubles and risks it brings. The benefits and troubles of multi-core are not discussed here; just keep them in mind for now, there will be dedicated articles and videos later. A task can also be called a thread, or a piece of homework. CPUs do the homework, and several CPUs can do homework at once. Can one piece of homework be finished in a single sitting? Usually not, because reality does not allow it: there can be far more pieces of homework than CPUs, so a CPU is often doing homework A when the boss interrupts it to do homework B. That is the scheduling algorithm at work. When A's homework is picked up again, will it be the same CPU that was helping with it before? Not necessarily. The variable cpuAffiMask is the CPU affinity mask. Its purpose is to specify that A's homework is always completed by the same CPU core; alternatively you can leave it to the scheduling algorithm and let whichever core gets assigned take it. Either way, this part can safely be skimmed for now.
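
Below is a minimal sketch, not kernel code, of how such an affinity bit mask behaves: bit n set means the task may run on core n. The helper names are invented for illustration; the kernel itself exposes an interface along the lines of LOS_TaskCpuAffiSet for setting the mask, but the bit semantics are the point here.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration of a 16-bit CPU affinity mask:
 * bit n set => the task may run on CPU core n. */
typedef uint16_t CpuAffiMask;

/* Allow the task to run only on the given core. */
static CpuAffiMask AffiBindToCore(unsigned core)
{
    return (CpuAffiMask)(1U << core);
}

/* Check whether a core is allowed by the mask. */
static int AffiAllows(CpuAffiMask mask, unsigned core)
{
    return (mask >> core) & 1U;
}

int main(void)
{
    CpuAffiMask mask = AffiBindToCore(1);                 /* pin to core 1 */
    printf("core 0 allowed: %d\n", AffiAllows(mask, 0));  /* prints 0 */
    printf("core 1 allowed: %d\n", AffiAllows(mask, 1));  /* prints 1 */
    return 0;
}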

The second block: stack space

    VOID            *stackPointer;      /**< Task stack pointer */ //Stack pointer in non user mode
    UINT32          stackSize;          /**< Task stack size */  //Stack size in non user mode
    UINTPTR         topOfStack;         /**< Task stack top */  //Stack top bottom = top + size in non user mode

    UINTPTR         userArea;   //User-space area; set at run time and differs depending on the running state
    UINTPTR         userMapBase;  //Stack bottom position in user mode
    UINT32          userMapSize;        /**< user thread stack size ,real size : userMapSize + USER_STACK_MIN_SIZE */

I've read many articles about stack space that say each task has its own independent user stack space and kernel stack space. That statement is not rigorous; at least it cannot be read that way in the Hongmeng kernel, otherwise many things stop making sense. The CPU needs space to do its work, and the stack space is the CPU's workspace. The company rules say that a user's work must be done on the site that user provides: the sites provided by A and B are the user stack spaces. But some operations are sensitive and cannot conveniently be carried out on the user's site; they have to be done back at a site designated by the company, and the company's site is the kernel stack space. So the more careful statement is that each task has its own independent user stack space while sharing kernel stack space. Each CPU has its own office space inside the company; there will not be N separate offices, which would obviously waste kernel resources and simply would not work. And those sensitive jobs are what we call system calls.
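
To make the "bottom = top + size" remark in the comments concrete, here is a small illustrative sketch (plain C, not kernel code) of how topOfStack and stackSize describe a descending stack, the way ARM stacks grow; the addresses used are made up.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: how topOfStack and stackSize describe the kernel-mode
 * stack, assuming a descending stack. bottom = top + size; the stack
 * pointer starts near the bottom and grows downward toward the top. */
typedef struct {
    uintptr_t topOfStack;   /* lowest address of the stack area */
    uint32_t  stackSize;    /* size of the stack area in bytes  */
} StackInfo;

static uintptr_t StackBottom(const StackInfo *s)
{
    return s->topOfStack + s->stackSize;
}

/* A stack pointer is sane while it stays inside [top, bottom]. */
static int StackPointerInRange(const StackInfo *s, uintptr_t sp)
{
    return sp >= s->topOfStack && sp <= StackBottom(s);
}

int main(void)
{
    StackInfo s = { .topOfStack = 0x80100000, .stackSize = 0x1000 };
    printf("bottom = 0x%lx\n", (unsigned long)StackBottom(&s)); /* 0x80101000 */
    printf("sp ok: %d\n", StackPointerInRange(&s, 0x80100800)); /* 1 */
    return 0;
}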

The third block: resource competition / synchronization

    VOID            *taskSem;           /**< Task-held semaphore */ //Which semaphore is task waiting for
    VOID            *taskMux;           /**< Task-held mutex */  //Which lock is task waiting for
    VOID            *taskEvent;         /**< Task-held event */  //Which event is task waiting for
    UINT32          eventMask;          /**< Event mask */   //Event masking
    UINT32          eventMode;          /**< Event mode */   //Event mode
    FutexNode       futex;    //Implements fast user-space locking (futex)
    LOS_DL_LIST     joinList;           /**< join list */ //Join list: lets tasks wait for each other to finish and be released
    LOS_DL_LIST     lockList;           /**< Hold the lock list */ //List of locks this task currently holds
    UINT32          signal;             /**< Task signal */ //Task signal type, (SIGNAL_NONE,SIGNAL_KILL,SIGNAL_SUSPEND,SIGNAL_AFFI)
    sig_cb          sig;

The company's resources are limited. The CPU itself is a company resource, and besides it there is other equipment, for example the blackboard used for doing homework. Users A, B and C may all want it; there are more wolves than meat, so what do you do? Mutual exclusion (taskMux, futex) solves this problem: take the lock before doing business. Whoever gets the lock is happy, of course; whoever does not get it queues up and is hung on the lockList. Note that lockList is a doubly linked list, the most important structure in the kernel. It was mentioned at the very beginning; if that rings no bell, see Hongmeng kernel source code analysis (doubly linked list) | Who is the most important structure of the kernel? < CSDN | OSCHINA | WeHarmony | official account >. Hanging on it are the VIPs waiting for the lock so they can get into the room. That is the principle of mutual exclusion, which solves the competition between tasks for scarce resources. The other mechanism is the semaphore (taskSem), used for task synchronization. Tasks are related to one another; in real life the company's users A and B have normal business dealings. When the CPU is helping B with his homework, it may find that the precondition is a piece of work A has to finish first; at that point B has to actively let the CPU go back and finish A's part before continuing. That is the principle of the semaphore, which solves the synchronization problem between tasks.
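
A minimal sketch of the semaphore pattern described above, assuming the kernel's public semaphore API (LOS_SemCreate / LOS_SemPend / LOS_SemPost from los_sem.h); the task bodies and the creation step are placeholders. While TaskB is blocked in LOS_SemPend, the taskSem field of its control block records the semaphore it is waiting on.

#include "los_sem.h"    /* LiteOS semaphore API (assumed available) */
#include "los_task.h"

static UINT32 g_semHandle;

/* Task A: produces the precondition, then wakes B. */
VOID TaskA(VOID)
{
    /* ... do the work that B depends on ... */
    (VOID)LOS_SemPost(g_semHandle);       /* A is done, release B */
}

/* Task B: blocks until A has finished its part. */
VOID TaskB(VOID)
{
    (VOID)LOS_SemPend(g_semHandle, LOS_WAIT_FOREVER);
    /* ... continue now that A's precondition holds ... */
}

VOID Example(VOID)
{
    (VOID)LOS_SemCreate(0, &g_semHandle); /* initial count 0: B must wait */
    /* create TaskA and TaskB with LOS_TaskCreate(), omitted here */
}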

The fourth block: task scheduling
As mentioned earlier, there is plenty of homework but only a few people to do it, and a single-core CPU means only one person is working. Deciding how to allocate the CPU is the job of the scheduling algorithm.

    UINT16          taskStatus;         /**< Task status */   //Status flags; a task can carry several at once, each identified by a bit
    UINT16          priority;           /**< Task priority */  //Task priority [0:31], the default is level 31
    UINT16          policy;    //Task scheduling policy (one of three, e.g. LOS_SCHED_RR)
    UINT16          timeSlice;          /**< Remaining time slice *///Remaining time slice
    CHAR            taskName[OS_TCB_NAME_LEN]; /**< Task name */ //Name of the task
    LOS_DL_LIST     pendList;           /**< Task pend node */  //If the task is blocked, this node is linked into the list of whatever it is blocking on, e.g. in OsTaskWait
    LOS_DL_LIST     threadList;         /**< thread list */   //Hang to the thread linked list of the process
    SortLinkList    sortList;           /**< Task sortlink node */ //Linked to the task execution linked list of cpu core  

Is it simply first come, first served? Of course that approach is supported too. The Hongmeng kernel uses a preemptive scheduling policy, which means queue-jumping is allowed and is decided by priority in the range [0,31]: the larger the number, the lower the priority. It is like an exam where first place is best, and in Hongmeng 0 is first place. Also note that kernel tasks have very high priorities; for example, the resource-recycling task sits at priority 5 and the timer task at priority 0, which tells you enough. What do ordinary tasks get? Level 28 by default; miserable!!! In addition, every task has a time budget, timeSlice, called the time slice; the default is 20ms. When it runs out, the slice is reset and the scheduler runs again to find the highest-priority task to execute. Blocked tasks (for example those that failed to grab a lock, or are waiting on a semaphore for synchronization) are hung on pendList so they are easy to manage.
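
For reference, a hedged sketch of where the priority in [0,31] comes from: the creator fills it into the task init parameters and the kernel copies it into LosTaskCB. This assumes the public LOS_TaskCreate / TSK_INIT_PARAM_S API from los_task.h; the field names follow the common LiteOS convention and may differ slightly between kernel versions.

#include "los_task.h"   /* task creation API (assumed available) */

/* Placeholder task body; the real entry signature depends on the kernel
 * version, so it is cast to TSK_ENTRY_FUNC below. */
static VOID *WorkerEntry(UINTPTR arg)
{
    (void)arg;
    /* ... the task's job ... */
    return NULL;
}

UINT32 CreateWorker(VOID)
{
    UINT32 taskID;
    TSK_INIT_PARAM_S param = {0};

    param.pfnTaskEntry = (TSK_ENTRY_FUNC)WorkerEntry;
    param.pcName       = "worker";
    param.uwStackSize  = 0x1000;   /* kernel stack size in bytes */
    param.usTaskPrio   = 20;       /* 0 is highest, 31 is lowest */

    return LOS_TaskCreate(&taskID, &param); /* LOS_OK on success */
}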

The fifth block: inter-task communication

#if (LOSCFG_KERNEL_LITEIPC == YES)
    UINT32          ipcStatus;   //IPC status
    LOS_DL_LIST     msgListHead;  //Head of the message queue; messages waiting to be read by this task hang here
    BOOL            accessMap[LOSCFG_BASE_CORE_TSK_LIMIT];//Access map: one flag per task indicating whether that task can be accessed; LOSCFG_BASE_CORE_TSK_LIMIT is the size of the task pool
#endif

This block is very important: it solves the problem of communication between tasks. You should know that the process plays the role of resource manager. What does that mean? It is not responsible for producing or consuming the content; it only manages, guaranteeing that the content arrives and arrives intact. The producers and consumers are always tasks. So what exactly does the process manage? The series has dedicated articles on that; read them yourself. liteipc is Hongmeng's own lightweight IPC message-queue implementation. In short, it is file based, whereas the traditional IPC message queue is memory based. The differences are not discussed here; dedicated articles analyze them.
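
To make the accessMap idea concrete, here is an illustrative sketch (not the kernel's actual LiteIPC code): one boolean per task ID, recording whether this task may exchange IPC messages with that task. The helper names and the fixed pool size are assumptions; only the indexing-by-task-ID idea comes from the structure above.

#include <stdbool.h>

#define LOSCFG_BASE_CORE_TSK_LIMIT 128   /* assumed task-pool size */

typedef struct {
    bool accessMap[LOSCFG_BASE_CORE_TSK_LIMIT]; /* indexed by peer task ID */
} IpcAccess;

/* Grant or revoke access to a peer task. */
static void IpcSetAccess(IpcAccess *a, unsigned peerTaskID, bool allowed)
{
    if (peerTaskID < LOSCFG_BASE_CORE_TSK_LIMIT) {
        a->accessMap[peerTaskID] = allowed;
    }
}

/* Check before hanging a message on the peer's msgListHead. */
static bool IpcMayAccess(const IpcAccess *a, unsigned peerTaskID)
{
    return peerTaskID < LOSCFG_BASE_CORE_TSK_LIMIT && a->accessMap[peerTaskID];
}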

The sixth block: auxiliary tools
Tasks matter enormously to the kernel: it is tasks that keep the CPU busy. What if something goes wrong halfway through the job, and how do you diagnose where the problem lies? You need some tools for that, such as deadlock detection, CPU usage and memory monitoring, as shown below:

#if (LOSCFG_KERNEL_SMP_LOCKDEP == YES) //Deadlock detection switch
    LockDep         lockDep;
#endif
#if (LOSCFG_KERNEL_SCHED_STATISTICS == YES) //Scheduling statistics switch; turning it on obviously costs performance, so Hongmeng keeps it off by default
    SchedStat       schedStat;          /**< Schedule statistics */ //Scheduling statistics
#endif

These are the internal organs of a task. Once you see them clearly, your picture of the Hongmeng kernel will be much clearer!

Author: weharmony

For more, please visit the Hongmeng technology community jointly built by 51CTO and Huawei as official strategic partners: https://harmonyos.51cto.com/

