Android interview questions "think and answer" — November issue

Here's another update: the November issue of the Android interview "think and answer" series.

Talk about the drawing process of View/ViewGroup

The drawing of a View starts from performTraversals in ViewRootImpl. It goes through three phases — measure, layout, and draw — after which the View appears on screen.
performTraversals calls performMeasure, performLayout, and performDraw in turn; these call measure, layout, and draw on the root View, which in turn dispatch to onMeasure, onLayout, and dispatchDraw.

  • Measure:

For a custom single View, measurement only needs to compute the View's size from the MeasureSpec passed down by the parent.

For a ViewGroup, you generally need to override onMeasure. In onMeasure, the parent container measures all of its child Views, and each child that is itself a container measures its own children in the same way. The measurement process is thus passed down level by level from the DecorView: the ViewGroup traverses the sizes of all its children and finally computes its own total size. Layout and draw work the same way.

  • Layout: place each child View in the appropriate position according to the size obtained from measure and the layout parameters.

For a custom single View, it only needs to compute its own position.

For a ViewGroup, you need to override onLayout. Besides computing its own position, it must determine the position of each child within the parent container using the child's measured width and height (getMeasuredWidth and getMeasuredHeight), and finally call layout on every child to set the child's position.

  • Draw: draw the View object onto the screen.

draw() calls four methods in turn:

1) drawBackground(): draws the background, with its bounds set from the position parameters obtained during layout.
2) onDraw(): draws the View's own content. A custom single View usually overrides this method to implement its drawing logic.
3) dispatchDraw(): draws the child Views.
4) onDrawScrollBars(canvas): draws decorations such as scroll indicators, scroll bars, and the foreground.

Tell me about your understanding of MeasureSpec

A MeasureSpec is the measurement requirement imposed on a child View. It is computed with a simple calculation from the parent View's MeasureSpec and the child View's LayoutParams.

  • First, a MeasureSpec is a combination of a size and a mode, packed into a single 32-bit int: the top 2 bits hold the mode and the remaining 30 bits hold the size.
    // Get the measurement mode
    int specMode = MeasureSpec.getMode(measureSpec);

    // Get the measurement size
    int specSize = MeasureSpec.getSize(measureSpec);

    // Generate a new MeasureSpec from a mode and a size
    int measureSpec = MeasureSpec.makeMeasureSpec(size, mode);
  • Second, each child View's MeasureSpec is computed from the child's layout parameters and the parent container's MeasureSpec; the relationship between the parent's measurement mode, the child's layout parameters, and the child's resulting MeasureSpec is usually summarized in a table.

This is exactly what the source of the getChildMeasureSpec method does: it computes a single child's MeasureSpec from the child's layout parameters and the parent container's MeasureSpec.
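To make the 2-bit/30-bit packing concrete, here is a plain-Java sketch that re-implements the packing logic. MeasureSpecDemo is a hypothetical class written for illustration; the real implementation lives in android.view.View.MeasureSpec and behaves the same way for the common cases.

```java
// Simplified re-implementation of MeasureSpec's bit packing (for illustration only).
public class MeasureSpecDemo {
    static final int MODE_SHIFT = 30;
    static final int MODE_MASK = 0x3 << MODE_SHIFT;      // top 2 bits hold the mode
    static final int UNSPECIFIED = 0 << MODE_SHIFT;
    static final int EXACTLY = 1 << MODE_SHIFT;
    static final int AT_MOST = 2 << MODE_SHIFT;

    // Pack size (low 30 bits) and mode (high 2 bits) into one int.
    static int makeMeasureSpec(int size, int mode) {
        return (size & ~MODE_MASK) | (mode & MODE_MASK);
    }

    static int getMode(int measureSpec) {
        return measureSpec & MODE_MASK;
    }

    static int getSize(int measureSpec) {
        return measureSpec & ~MODE_MASK;
    }

    public static void main(String[] args) {
        int spec = makeMeasureSpec(500, EXACTLY);
        System.out.println(getMode(spec) == EXACTLY); // true
        System.out.println(getSize(spec));            // 500
    }
}
```

Because the mode occupies the high bits and the size the low bits, both values survive a round trip through one int, which is why a single `measureSpec` parameter is enough to carry the parent's full requirement.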

  • Finally, in practical application:

For a custom single View, onMeasure can usually be left alone. If you want custom width/height behavior, override onMeasure and pass the computed width and height to setMeasuredDimension.
For a custom ViewGroup, you generally need to override onMeasure, call measureChildren to traverse and measure all child Views (or measureChild for one specific child), obtain each child's width and height through getMeasuredWidth/getMeasuredHeight, and finally store the total width and height via setMeasuredDimension.

How does Scroller implement elastic (smooth) sliding of a View?

  • When the MotionEvent.ACTION_UP event is triggered, startScroll() is called. This method does not perform any actual sliding; it only records the parameters of the slide (distance and duration).
  • Then invalidate()/postInvalidate() is called to request a redraw of the View, which causes the View's draw method to run.
  • During the redraw, draw calls the computeScroll method, which asks the Scroller for the current scrollX and scrollY, applies them with scrollTo, and then calls postInvalidate to trigger the next redraw. This cycle repeats, producing many small slides that together form one smooth elastic slide, until the whole scroll completes.
mScroller = new Scroller(context);

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_UP:
                // start X, start Y, horizontal scroll distance, vertical scroll distance
                mScroller.startScroll(getScrollX(), 0, dx, 0);
                invalidate();
                break;
        }
        return super.onTouchEvent(event);
    }

    @Override
    public void computeScroll() {
        // Override computeScroll() and implement the smooth-scrolling logic here
        if (mScroller.computeScrollOffset()) {
            scrollTo(mScroller.getCurrX(), mScroller.getCurrY());
            postInvalidate();
        }
    }

What interceptors does OkHttp have, and what does each do?

OkHttp keeps all interceptors in a list and executes them in turn for each request. Each interceptor's work splits into three parts:

  • preprocessing before the request is passed on;
  • passing the request to the next interceptor via the proceed method;
  • post-processing after the next interceptor returns its response.

This forms a chain of calls. Here is the source that assembles the concrete interceptors:

  Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    List<Interceptor> interceptors = new ArrayList<>();
    interceptors.addAll(client.interceptors());
    interceptors.add(retryAndFollowUpInterceptor);
    interceptors.add(new BridgeInterceptor(client.cookieJar()));
    interceptors.add(new CacheInterceptor(client.internalCache()));
    interceptors.add(new ConnectInterceptor(client));
    if (!forWebSocket) {
      interceptors.addAll(client.networkInterceptors());
    }
    interceptors.add(new CallServerInterceptor(forWebSocket));

    Interceptor.Chain chain = new RealInterceptorChain(
        interceptors, null, null, null, 0, originalRequest);
    return chain.proceed(originalRequest);
  }

From the source, there are seven kinds of interceptors:

  • Application interceptors added with addInterceptor(Interceptor): set by the developer and run first, before all built-in interceptors. For example, common parameters can be added to every request here.
  • RetryAndFollowUpInterceptor: does some connection initialization, retries failed requests, and follows up redirects. As its name says, it handles retry and follow-up work, plus some connection tracking.
  • BridgeInterceptor: builds the actual network request from the user's request (adding headers such as content type and content length), and afterwards converts the network Response back into a Response usable by the caller, for example unpacking a gzip body.
  • CacheInterceptor: handles caching. It caches responses according to the OkHttpClient configuration and the cache policy, and if a valid local cache exists it can return the cached result without any network interaction.
  • ConnectInterceptor: responsible for establishing connections — the TCP or TLS connection — and for creating the HttpCodec used for encoding and decoding.
  • Network interceptors: also set by the developer, essentially similar to application interceptors, but positioned after the connection is established, so they can see the raw network request and response. Useful for network debugging.
  • CallServerInterceptor: performs the actual network I/O, writing the request and reading the response through the socket.
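The chain dispatch described above can be sketched in a few lines of plain Java. TinyChain, its nested Interceptor, and Chain are hypothetical simplified types invented for illustration, not OkHttp's real API, but the index-advancing proceed mechanics mirror what RealInterceptorChain does.

```java
import java.util.Arrays;
import java.util.List;

// Minimal chain-of-responsibility dispatch in the spirit of OkHttp's interceptor chain.
public class TinyChain {
    interface Interceptor {
        String intercept(Chain chain);
    }

    static class Chain {
        private final List<Interceptor> interceptors;
        private final int index;
        private final String request;

        Chain(List<Interceptor> interceptors, int index, String request) {
            this.interceptors = interceptors;
            this.index = index;
            this.request = request;
        }

        String request() { return request; }

        // Hand the (possibly modified) request to the next interceptor in the list.
        String proceed(String request) {
            return interceptors.get(index)
                    .intercept(new Chain(interceptors, index + 1, request));
        }
    }

    public static void main(String[] args) {
        // First interceptor preprocesses, then delegates; last one "calls the server".
        Interceptor addHeader = chain -> chain.proceed(chain.request() + "+header");
        Interceptor callServer = chain -> "response(" + chain.request() + ")";
        List<Interceptor> interceptors = Arrays.asList(addHeader, callServer);
        String response = new Chain(interceptors, 0, "GET /").proceed("GET /");
        System.out.println(response); // response(GET /+header)
    }
}
```

Each interceptor sees the request on the way in and the response on the way out, which is exactly why the position of an interceptor in the list determines what it can observe.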

How does OkHttp implement its connection pool?

  • Why do we need a connection pool?

Frequently establishing and tearing down socket connections wastes network resources and time, so the HTTP keep-alive connection plays an important role in reducing latency and improving speed. The keep-alive mechanism simply means that multiple requests can be sent over one TCP connection without disconnecting. Reusing connections therefore becomes important, and reuse requires managing the connections — hence the concept of a connection pool.

OkHttp uses ConnectionPool to implement the pool. By default it keeps up to five idle keep-alive connections, and the default connection lifetime is 5 minutes.

  • How does it work?

1) ConnectionPool maintains a double-ended queue, a Deque — a queue that can be accessed from both ends — to store the connections.
2) In ConnectInterceptor, the interceptor responsible for establishing a connection, OkHttp first looks for a reusable connection, i.e. it fetches one from the pool by calling ConnectionPool's get method.

RealConnection get(Address address, StreamAllocation streamAllocation, Route route) {
    assert (Thread.holdsLock(this));
    for (RealConnection connection : connections) {
      if (connection.isEligible(address, route)) {
        streamAllocation.acquire(connection, true);
        return connection;
      }
    }
    return null;
}

That is, it traverses the double-ended queue; if an eligible connection is found, the acquire method is called to count the reference and the connection is returned.

3) If no reusable connection is found, a new one is created and added to the double-ended queue, and at the same time the cleanup thread in the pool's executor is started. This is ConnectionPool's put method.

public final class ConnectionPool {
    void put(RealConnection connection) {
        assert (Thread.holdsLock(this));
        if (!cleanupRunning) {
            // Start the cleanup task when it is not already running
            cleanupRunning = true;
            executor.execute(cleanupRunnable);
        }
        connections.add(connection);
    }
}

4) The executor actually runs only a single thread, used to clean up connections — the cleanupRunnable above.

private final Runnable cleanupRunnable = new Runnable() {
        @Override public void run() {
            while (true) {
                // Perform cleanup and get the time until the next cleanup is needed.
                long waitNanos = cleanup(System.nanoTime());
                if (waitNanos == -1) return;
                if (waitNanos > 0) {
                    long waitMillis = waitNanos / 1000000L;
                    waitNanos -= (waitMillis * 1000000L);
                    synchronized (ConnectionPool.this) {
                        // Release the lock and wait until the next cleanup is due
                        try {
                            ConnectionPool.this.wait(waitMillis, (int) waitNanos);
                        } catch (InterruptedException ignored) {
                        }
                    }
                }
            }
        }
};

This runnable repeatedly calls the cleanup method, gets back the interval until the next cleanup is needed, and then waits for that long.

How does the cleanup work? Look at the source:

long cleanup(long now) {
    int inUseConnectionCount = 0;
    int idleConnectionCount = 0;
    RealConnection longestIdleConnection = null;
    long longestIdleDurationNs = Long.MIN_VALUE;

    synchronized (this) {
      // Traverse the connections
      for (Iterator<RealConnection> i = connections.iterator(); i.hasNext(); ) {
        RealConnection connection = i.next();

        // Check whether the connection is idle:
        // in use  -> inUseConnectionCount + 1
        // idle    -> idleConnectionCount + 1
        if (pruneAndGetAllocationCount(connection, now) > 0) {
          inUseConnectionCount++;
          continue;
        }

        idleConnectionCount++;

        // Track the connection that has been idle the longest
        long idleDurationNs = now - connection.idleAtNanos;
        if (idleDurationNs > longestIdleDurationNs) {
          longestIdleDurationNs = idleDurationNs;
          longestIdleConnection = connection;
        }
      }

      // If keepAliveDurationNs or maxIdleConnections is exceeded,
      // remove the connection from the double-ended queue
      if (longestIdleDurationNs >= this.keepAliveDurationNs
          || idleConnectionCount > this.maxIdleConnections) {
        connections.remove(longestIdleConnection);
      } else if (idleConnectionCount > 0) {
        // An idle connection will be ready to evict soon; return the time until then
        return keepAliveDurationNs - longestIdleDurationNs;
      } else if (inUseConnectionCount > 0) {
        // All connections are still in use; check again after the 5-minute keep-alive period
        return keepAliveDurationNs;
      } else {
        // No connections, idle or in use.
        cleanupRunning = false;
        return -1;
      }
    }

    closeQuietly(longestIdleConnection.socket());

    // Cleanup again immediately.
    return 0;
}

That is, if there are more idle connections than maxIdleConnections (5), or a connection's keep-alive time exceeds 5 minutes, the connection is cleaned up.

5) One question remains: what counts as an idle connection?

This comes back to the acquire counting method mentioned above:

  public void acquire(RealConnection connection, boolean reportedAcquired) {
    assert (Thread.holdsLock(connectionPool));
    if (this.connection != null) throw new IllegalStateException();

    this.connection = connection;
    this.reportedAcquired = reportedAcquired;
    connection.allocations.add(new StreamAllocationReference(this, callStackTrace));
  }

RealConnection keeps a list of weak references to StreamAllocation objects, called allocations. Each time a stream is allocated on a connection, the corresponding StreamAllocationReference is added to the list; when the stream is finished, the reference is removed. A connection with no remaining allocations is idle.

6) To sum up, the connection pool's job is not that complex: it manages the double-ended queue Deque<RealConnection>, reuses eligible connections directly, cleans up connections periodically, and achieves automatic recycling through the StreamAllocation reference counting.
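The whole mechanism can be condensed into a plain-Java sketch. TinyConnectionPool and Conn are hypothetical simplified types invented for illustration — the real ConnectionPool also handles locking, routes, and stream reference counting — but the deque plus idle-eviction structure is the same.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Minimal keep-alive pool: idle connections live in a deque and are evicted
// once they exceed a maximum idle count or a keep-alive duration.
public class TinyConnectionPool {
    static class Conn {
        final String host;
        long idleAtNanos;
        Conn(String host, long idleAtNanos) { this.host = host; this.idleAtNanos = idleAtNanos; }
    }

    private final int maxIdleConnections;
    private final long keepAliveDurationNs;
    private final Deque<Conn> connections = new ArrayDeque<>();

    TinyConnectionPool(int maxIdleConnections, long keepAliveDurationNs) {
        this.maxIdleConnections = maxIdleConnections;
        this.keepAliveDurationNs = keepAliveDurationNs;
    }

    // Reuse an idle connection to the same host if one exists, else create one.
    Conn get(String host, long now) {
        for (Conn c : connections) {
            if (c.host.equals(host)) return c;
        }
        Conn c = new Conn(host, now);
        connections.add(c);
        cleanup(now);
        return c;
    }

    // Evict connections that exceed the idle limit or the keep-alive duration.
    void cleanup(long now) {
        while (connections.size() > maxIdleConnections) connections.removeFirst();
        for (Iterator<Conn> it = connections.iterator(); it.hasNext(); ) {
            if (now - it.next().idleAtNanos >= keepAliveDurationNs) it.remove();
        }
    }

    int idleCount() { return connections.size(); }
}
```

The point of the sketch is the shape of the data, not the policy details: reuse is a linear scan of the deque, and cleanup is a periodic sweep, just as in OkHttp.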

What design patterns are used in OkHttp

  • Chain of Responsibility pattern

This one is obvious — it could be called the essence of OkHttp, embodied mainly in the interceptors. For the code, see the interceptor discussion above.

  • Builder pattern

OkHttp also uses the Builder pattern heavily. Its main purpose is to separate the construction of an object from its representation, assembling the various configuration options with a Builder.
For example, Request:

public class Request {
  public static class Builder {
    @Nullable HttpUrl url;
    String method;
    Headers.Builder headers;
    @Nullable RequestBody body;

    public Request build() {
      return new Request(this);
    }
  }
}
  • Factory pattern

The Factory pattern is similar to the Builder pattern. The difference is that the Factory pattern focuses on the process of producing the object, while the Builder pattern focuses on configuring the object's various parameters.
An example is the CacheStrategy object in the CacheInterceptor:

    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();

    public Factory(long nowMillis, Request request, Response cacheResponse) {
      this.nowMillis = nowMillis;
      this.request = request;
      this.cacheResponse = cacheResponse;

      if (cacheResponse != null) {
        this.sentRequestMillis = cacheResponse.sentRequestAtMillis();
        this.receivedResponseMillis = cacheResponse.receivedResponseAtMillis();
        Headers headers = cacheResponse.headers();
        for (int i = 0, size = headers.size(); i < size; i++) {
          String fieldName = headers.name(i);
          String value = headers.value(i);
          if ("Date".equalsIgnoreCase(fieldName)) {
            servedDate = HttpDate.parse(value);
            servedDateString = value;
          } else if ("Expires".equalsIgnoreCase(fieldName)) {
            expires = HttpDate.parse(value);
          } else if ("Last-Modified".equalsIgnoreCase(fieldName)) {
            lastModified = HttpDate.parse(value);
            lastModifiedString = value;
          } else if ("ETag".equalsIgnoreCase(fieldName)) {
            etag = value;
          } else if ("Age".equalsIgnoreCase(fieldName)) {
            ageSeconds = HttpHeaders.parseSeconds(value, -1);
          }
        }
      }
    }

  • Observer pattern

I wrote an article before about WebSocket in OkHttp. Because a WebSocket is a long-lived connection, it needs to be listened to, and that is done with the observer pattern:

  final WebSocketListener listener;

  @Override public void onReadMessage(String text) throws IOException {
    listener.onMessage(this, text);
  }

  • Singleton pattern

No example needed here — every project has one.

  • In addition, some blogs also mention the strategy pattern, the facade pattern, and so on; you can search for them online. Everyone reads the source differently, and if you look carefully you may spot more.

Introduce the structure of your previous project

Just answer this question truthfully. The key is to offer your own view of your project's architecture — you should have your own thoughts and ideas about it.

Differences between MVC, MVP, and MVVM


MVC

  • Architecture introduction

Model: the data layer, e.g. data obtained from the database or the network.
View: the view layer, i.e. our XML layout files.
Controller: the controller layer, i.e. our Activity.

  • How the layers connect

View --> Controller: user events on the View (clicks, touches) are delivered to the Activity.
Controller --> Model: the Activity reads and writes the data we need.
Controller --> View: after obtaining the data, the Activity reflects the updated content back onto the View.

That gives a complete project architecture, and it was the common structure in early Android development.

  • Advantages and disadvantages

The disadvantage is obvious: the Activity gets far too heavy, often growing to well over a thousand lines.
The cause is that the Controller layer and the View layer are too tightly coupled — too much View-manipulating code lives in the Activity.

But! In fact Android is not a traditional MVC structure, because the Activity acts as both the View layer and the Controller layer. So I don't think Android's default development structure can really be called an MVC architecture: it is simply the default form Android started with, where everything lands in the Activity and the layers cannot truly be separated. That is my personal view, of course, and open to discussion.


MVP

  • Architecture introduction

The problem before was precisely that the Activity mixed View operations with Controller logic.
So MVP separates the View from the Controller inside the original Activity layer, splitting off a Presenter layer in the position of the original Controller.
The View layer is written as an interface, the Activity implements that View interface, and the scheduling methods are implemented in the Presenter class.

Model: the data layer, e.g. data obtained from the database or the network.
View: the view layer, i.e. our XML layout files and the Activity.
Presenter: the host — a separate class that only does scheduling work.

  • How the layers connect

View --> Presenter: user events on the View are delivered to the Presenter.
Presenter --> Model: the Presenter reads and writes the data we need.
Presenter --> View: after obtaining the data, the Presenter feeds the updated content back to the Activity to update the view.

  • Advantages and disadvantages

The advantage is that it greatly lightens the Activity: the Activity mainly updates the View, while interaction with the Model moves into the Presenter, which mediates between the Model side and the View side. The project becomes clearer and development more straightforward.

The disadvantages are also obvious:
First, the amount of code increases a lot. Every page or feature needs its own Presenter class, and because it is interface-oriented programming, a large number of interfaces and tedious callbacks get added.
Second, because the Presenter holds a reference to the Activity, it can cause memory leaks or View null-pointer exceptions, which also needs attention.
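A minimal sketch of the MVP contract, with plain Java standing in for the Android classes. LoginView, LoginPresenter, and LoginModel are hypothetical names invented for illustration; in a real app the Activity would implement LoginView.

```java
// MVP in miniature: the Presenter talks to the View only through an interface,
// so it stays free of Android references and is trivially unit-testable.
public class MvpDemo {
    interface LoginView {
        void showResult(String message);
    }

    static class LoginModel {
        boolean check(String user, String password) {
            return "admin".equals(user) && "secret".equals(password);
        }
    }

    static class LoginPresenter {
        private final LoginView view;   // in real code, release this reference
        private final LoginModel model; // in onDestroy to avoid memory leaks

        LoginPresenter(LoginView view, LoginModel model) {
            this.view = view;
            this.model = model;
        }

        void login(String user, String password) {
            boolean ok = model.check(user, password);
            view.showResult(ok ? "welcome" : "denied");
        }
    }

    public static void main(String[] args) {
        // An Activity would implement LoginView; here a lambda stands in for it.
        LoginView view = message -> System.out.println(message);
        new LoginPresenter(view, new LoginModel()).login("admin", "secret");
    }
}
```

This also shows where both drawbacks come from: each feature needs its own interface plus Presenter class, and the Presenter holds a View reference for the lifetime of the screen.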


MVVM

  • Architecture introduction

MVVM is characterized by two-way binding and has Google's official backing: Jetpack keeps updating architecture components for it, such as ViewModel, LiveData, and DataBinding, so it is now the mainstream, officially recommended architecture.

Model: the data layer, e.g. data obtained from the database or the network.
View: the view layer, i.e. our XML layout files and the Activity.
ViewModel: the association layer that binds the Model and the View so that they stay bound to each other and update in real time.

  • How the layers connect

View --> ViewModel --> View: two-way binding; data changes are reflected in the interface, and interface changes are reflected in the data.
ViewModel --> Model: operates on the data we need.

  • Advantages and disadvantages

The advantage is strong official support: many related libraries keep being updated, making the MVVM architecture more powerful and practical. The two-way binding also saves a lot of View–Model interaction code, and it basically solves the problems of the two architectures above.

Specifically, what do you understand about MVVM?

1) First, how MVVM solves the defects of the other two architectures:

  • It solves the problem of tight coupling between layers — i.e. it achieves better decoupling. In MVP the Presenter still holds a reference to the View, but in MVVM the View and the Model are bound in both directions, so the ViewModel basically only handles business logic and does not need to deal with interface elements.

  • It solves the problem of too much boilerplate code. Thanks to two-way binding, there is much less UI-related code, which is the key to the smaller code size. The key component is DataBinding, which hands all UI changes over to the observed data model.

  • It solves potential memory-leak problems. One of the MVVM architecture components is LiveData, which is lifecycle-aware: it can sense the Activity's lifecycle and clean itself up when its associated lifecycle is destroyed, greatly reducing memory leaks.

  • It solves View null-pointer exceptions caused by a stopped Activity. With LiveData in MVVM, when the View needs updating but the observer's lifecycle is inactive (e.g. an Activity in the back stack), the observer receives no LiveData events; it only responds while the interface is visible, which avoids the null-pointer problem.

  • It solves lifecycle management. This is mainly thanks to the Lifecycle component, which lets other components observe the lifecycle and react to lifecycle events anywhere.

2) Next, reactive programming.

Reactive programming, to put it plainly, means first establishing the relationships between things and then not worrying about them: they drive each other through those relationships.
It is what we usually call the observer pattern, or the subscribe/publish pattern.

Why mention this? Because the essential idea of MVVM is exactly that: whether it is two-way binding or lifecycle awareness, it is the observer pattern making everything observable. We only need to keep those observation relationships stable, and the project stays stable.
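The observation relationship at the heart of this can be boiled down to a few lines of plain Java. ObservableField here is a hypothetical class written for illustration (not the DataBinding class of the same name): once the binding is set up, writing the value drives the "UI" update automatically.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A value holder that notifies its observers on every change —
// the observer pattern that underlies MVVM-style reactive programming.
public class ObservableField<T> {
    private T value;
    private final List<Consumer<T>> observers = new ArrayList<>();

    public void observe(Consumer<T> observer) {
        observers.add(observer);
    }

    // Setting the value notifies every observer; the "binding" does the rest.
    public void set(T newValue) {
        value = newValue;
        for (Consumer<T> o : observers) o.accept(newValue);
    }

    public T get() { return value; }

    public static void main(String[] args) {
        ObservableField<String> title = new ObservableField<>();
        title.observe(v -> System.out.println("UI updated to: " + v));
        title.set("Hello MVVM"); // prints "UI updated to: Hello MVVM"
    }
}
```

LiveData adds lifecycle awareness on top of exactly this idea: it simply skips notifying observers whose lifecycle is not active.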

3) Finally, why is MVVM so strong?

I personally think MVVM is strong not because of the architecture itself, but because of the great advantages of reactive programming, combined with Google's strong official support: so many components back the MVVM architecture. In effect, Google wants to unify project architecture.

Excellent architectural idea + official support = strong.

What is ViewModel? What do you understand about ViewModel?

If you read my last article, you know that ViewModel is a layer of the MVVM architecture, used to connect the View and the Model. What we discuss here is the official component of the same name — ViewModel.

The ViewModel class is designed to store and manage UI-related data in a lifecycle-conscious way.

This official introduction carries two messages:

  • Lifecycle-conscious.
    Since a ViewModel's lifecycle spans the whole Activity, it saves some state-maintenance work. The clearest case is screen rotation: previously the data had to be saved and restored, but a ViewModel keeps its data automatically.

Second, since the ViewModel is effectively a local singleton within its scope, it makes communication between multiple Fragments of one Activity convenient: they can obtain the same ViewModel instance and thus share data state.

  • Store and manage UI-related data.

The fundamental responsibility of the ViewModel layer is to maintain the UI state on screen, which in practice means maintaining the corresponding data, since the data is ultimately reflected in the UI. So the ViewModel layer manages and stores interface-related data.

Why was ViewModel designed and what problems did it solve?

  • Before the ViewModel component existed, how did MVVM implement the ViewModel layer?

You wrote your own classes, and realized the two-way binding of View and data through interfaces and internal dependencies.
Google created the ViewModel component to standardize the MVVM implementation: the ViewModel layer should touch only business code and hold no reference to the View layer. Together with other components such as LiveData and DataBinding, it makes the MVVM architecture more complete, standardized, and robust.

  • What problems have been solved?

Some of this was mentioned above, for example:

1) It is not destroyed by screen rotation, which reduces state-maintenance work.
2) Because it is a single instance within its scope, multiple Fragments can communicate easily and share the same data state.
3) It improves the MVVM architecture, making the decoupling cleaner.

Talk about the principle of ViewModel.

  • First, how does it preserve itself across the lifecycle?

Before ViewModel 2.0, the trick was to add a HolderFragment to the Activity and call setRetainInstance(true), so that the Fragment survives Activity recreation. This keeps the ViewModel's state independent of the Activity's state.

From 2.0 on, it actually uses the Activity's onRetainNonConfigurationInstance() and getLastNonConfigurationInstance() methods, which save the ViewModel instances across a configuration change such as rotation and restore them afterwards, so the ViewModel's data is preserved.

  • Second, how is a single instance per scope guaranteed?

The ViewModel instance is created through reflection, and an AndroidViewModel receives only the Application context, so it never holds a reference to a View such as an Activity or Fragment. The created instance is stored in a ViewModelStore container — essentially a collection class held by the interface (Activity/Fragment) — and our ViewModel is an element of that collection.

So every time we request one, we first check whether the collection already contains our ViewModel: if not, it is instantiated; if so, the existing instance is returned. That guarantees a unique instance per scope. Finally, when the interface is destroyed for good, ViewModelStore's clear method runs and clears the ViewModels in the collection. A short code sketch:

public <T extends ViewModel> T get(Class<T> modelClass) {
      // First look up an existing ViewModel instance in the ViewModelStore
      ViewModel viewModel = mViewModelStore.get(key);
      // If the ViewModel already exists, return it directly
      if (modelClass.isInstance(viewModel)) {
            return (T) viewModel;
      }
      // Otherwise instantiate the ViewModel through reflection and store it in the ViewModelStore
      viewModel = modelClass.getConstructor(Application.class).newInstance(mApplication);
      mViewModelStore.put(key, viewModel);
      return (T) viewModel;
}

public class ViewModelStore {
    private final HashMap<String, ViewModel> mMap = new HashMap<>();

    public final void clear() {
        for (ViewModel vm : mMap.values()) {
            vm.clear();
        }
        mMap.clear();
    }
}

protected void onDestroy() {
    super.onDestroy();
    // Clear the ViewModels only when the Activity is really finishing,
    // not when it is destroyed for a configuration change
    if (mViewModelStore != null && !isChangingConfigurations()) {
        mViewModelStore.clear();
    }
}

How does ViewModel follow the lifecycle automatically? Why is state not lost after rotating the screen? Why can ViewModel follow the Activity/Fragment lifecycle without causing memory leaks?

These three questions are really the same lifecycle question: why can a ViewModel manage its lifecycle and survive recreation?

  • Before ViewModel 2.0

A view-less HolderFragment maintained the lifecycle. We know the ViewModel instances are stored in a ViewModelStore container, and this empty Fragment manages that container. As long as the Activity is alive, the HolderFragment is not destroyed, which preserves the ViewModel's lifecycle.

Moreover, calling setRetainInstance(true) ensures the Fragment survives configuration changes, so it lives through Activity recreation. In short: an empty Fragment manages and maintains the ViewModelStore, and the ViewModel mapping is removed when the corresponding Activity is really destroyed, keeping the ViewModel's lifecycle consistent with the Activity's. This trick of creating an empty Fragment for management is used by many third-party libraries — Glide, for example.

  • From 2.0 on, with AndroidX support

It uses ComponentActivity, a subclass of Activity, which overrides onRetainNonConfigurationInstance() to save the ViewModelStore, and, when needed — i.e. in the recreated Activity — retrieves the ViewModelStore instance through getLastNonConfigurationInstance(). This guarantees that the ViewModels in the ViewModelStore do not change when the Activity is recreated.

At the same time, because ComponentActivity implements the LifecycleOwner interface, the Lifecycle component can observe each page's lifecycle. When the Activity is destroyed and the destruction is not caused by a configuration change, the ViewModels are cleared by calling ViewModelStore's clear method.

getLifecycle().addObserver(new LifecycleEventObserver() {
        @Override
        public void onStateChanged(@NonNull LifecycleOwner source,
                @NonNull Lifecycle.Event event) {
            if (event == Lifecycle.Event.ON_DESTROY) {
                // Determine whether the destruction is caused by a configuration change
                if (!isChangingConfigurations()) {
                    getViewModelStore().clear();
                }
            }
        }
    });

The onRetainNonConfigurationInstance method is called when the Activity is destroyed because of a configuration change; its timing is similar to that of onSaveInstanceState. The difference is that onSaveInstanceState saves a Bundle, which has type and size restrictions and must be serialized on the main thread, while onRetainNonConfigurationInstance has no such restrictions, so it is preferred here.

So the third question is answered as well: before 2.0, an empty fragment was created and the ViewModel followed that fragment's lifecycle; after 2.0, because both Activity and Fragment implement the LifecycleOwner interface, ViewModel can observe their lifecycles through the Lifecycle component and manage its instances accordingly.
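The retention mechanism can be sketched on a plain JVM, outside Android. All names here (`RetainedStore`, `FakeActivity`) are hypothetical stand-ins for ViewModelStore and ComponentActivity; the point is that the store object survives the "recreation" while the activity instance does not:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for ViewModelStore: a map of retained objects
class RetainedStore {
    final Map<String, Object> viewModels = new HashMap<>();
}

// Hypothetical stand-in for an Activity that supports retention across recreation
class FakeActivity {
    final RetainedStore store;

    // On first creation there is no retained state; after a configuration
    // change the framework hands back the previously retained store
    FakeActivity(RetainedStore lastNonConfigurationInstance) {
        this.store = (lastNonConfigurationInstance != null)
                ? lastNonConfigurationInstance
                : new RetainedStore();
    }

    // Analogous to onRetainNonConfigurationInstance(): called before destruction
    RetainedStore onRetainNonConfigurationInstance() {
        return store;
    }
}

public class RetentionDemo {
    public static void main(String[] args) {
        FakeActivity first = new FakeActivity(null);
        first.store.viewModels.put("myViewModel", "some state");

        // Simulate a configuration change: destroy and recreate the activity,
        // passing the retained store through
        FakeActivity recreated = new FakeActivity(first.onRetainNonConfigurationInstance());

        // A new activity instance, but the same store and state
        System.out.println(recreated != first);
        System.out.println(recreated.store.viewModels.get("myViewModel"));
    }
}
```

This is only a conceptual model: the real framework also handles process death (which this mechanism does not survive) and clears the store on true destruction.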

What do you know about viewModelScope?

This is mainly about how ViewModel cooperates with other components. As for coroutines, I covered them before; they are mainly used for thread switching. If you need to stop a number of running coroutines, you must manage them: generally you attach them to a CoroutineScope. Cancelling the CoroutineScope cancels every coroutine it tracks.

GlobalScope.launch {
    // this coroutine lives as long as the whole application
}

However, this global usage is not recommended. If you want to limit the scope, viewModelScope is generally recommended.

viewModelScope is a Kotlin extension property on ViewModel. It is cancelled when the ViewModel is destroyed (when the onCleared() method is called). So as long as you use a ViewModel, you can use viewModelScope to launch coroutines inside it without worrying about leaking tasks.

class MyViewModel : ViewModel() {

    fun initialize() {
        viewModelScope.launch {
            processBitmap()
        }
    }

    suspend fun processBitmap() = withContext(Dispatchers.Default) {
        // Do time-consuming operations here
    }
}

What is LiveData?

LiveData is an observable data holder class. Unlike a regular observable, LiveData is lifecycle-aware: it respects the lifecycle of other app components, such as Activities, Fragments, or Services. This awareness ensures LiveData only updates component observers that are in an active lifecycle state.

That is the official introduction, and it is fairly clear. The main characteristics:

  • Data storage class. That is, a class used to store data.

  • Observable. The data holder can be observed; compared with an ordinary data holder, it can notify about data changes.

The main idea is the observer pattern: decouple the observer from the observed while still sensing data changes. It is therefore usually used with ViewModel: the ViewModel triggers the data update, the update is set on the LiveData, and the LiveData notifies its active observers.

        var liveData = MutableLiveData<String>()

        liveData.observe(this, object : Observer<String> {
            override fun onChanged(t: String?) {
                // respond to the new value
            }
        })

        // Call from a background thread
        liveData.postValue("value")
        // Call from the main thread
        liveData.value = "value"

Why was LiveData designed and what problems did it solve?

As a design based on the observer pattern, LiveData is often compared with RxJava. The biggest advantage of the observer pattern is that the upstream producing events and the downstream receiving them do not interfere with each other, which greatly reduces coupling.

Secondly, LiveData fits seamlessly into the MVVM architecture, mainly because it can sense the lifecycle of Activities and the like, which brings several benefits:

  • No memory leaks
    Observers are bound to Lifecycle objects and clean themselves up after their associated lifecycles are destroyed.

  • No crashes due to stopped Activities
    If the observer's lifecycle is inactive (for example an Activity in the back stack), it does not receive any LiveData events.

  • Automatic lifecycle handling and callbacks
    If an observer's lifecycle is in the STARTED or RESUMED state, LiveData considers it active. When LiveData gains its first active observer it calls onActive(); when it no longer has any active observers it calls onInactive().

Talk about the principle of LiveData.

When it comes to the principle, there are really two key methods:

  • The subscription method, observe(). It associates the observer with the observed data, forming the observer pattern.

Simply look at the source code:

    public void observe(@NonNull LifecycleOwner owner, @NonNull Observer<? super T> observer) {
        LifecycleBoundObserver wrapper = new LifecycleBoundObserver(owner, observer);
        ObserverWrapper existing = mObservers.putIfAbsent(observer, wrapper);
        if (existing != null && !existing.isAttachedTo(owner)) {
            throw new IllegalArgumentException("Cannot add the same observer"
                    + " with different lifecycles");
        }
        if (existing != null) {
            return;
        }
        owner.getLifecycle().addObserver(wrapper);
    }

    public V putIfAbsent(@NonNull K key, @NonNull V v) {
        Entry<K, V> entry = get(key);
        if (entry != null) {
            return entry.mValue;
        }
        put(key, v);
        return null;
    }

Here putIfAbsent stores the observer as the key and the lifecycle-aware wrapper as the value in mObservers.

  • The callback method, onChanged(). Changing the stored value notifies the observers by calling their onChanged method. Start from setValue, the method that changes the stored value:

@MainThread
protected void setValue(T value) {
    assertMainThread("setValue");
    mVersion++;
    mData = value;
    dispatchingValue(null);
}
private void dispatchingValue(@Nullable ObserverWrapper initiator) {
    if (mDispatchingValue) {
        mDispatchInvalidated = true;
        return;
    }
    mDispatchingValue = true;
    do {
        mDispatchInvalidated = false;

        if (initiator != null) {
            considerNotify(initiator);
            initiator = null;
        } else {
            for (Iterator<Map.Entry<Observer<T>, ObserverWrapper>> iterator =
                    mObservers.iteratorWithAdditions(); iterator.hasNext(); ) {
                considerNotify(iterator.next().getValue());
                if (mDispatchInvalidated) {
                    break;
                }
            }
        }
    } while (mDispatchInvalidated);
    mDispatchingValue = false;
}

private void considerNotify(ObserverWrapper observer) {
    if (!observer.mActive) {
        return;
    }
    // Check latest state b4 dispatch. Maybe it changed state but we didn't get the event yet.
    //
    // we still first check observer.mActive to keep it as the entrance for events. So even if
    // the observer moved to an active state, if we've not received that event, we better not
    // notify for a more predictable notification order.
    if (!observer.shouldBeActive()) {
        observer.activeStateChanged(false);
        return;
    }
    if (observer.mLastVersion >= mVersion) {
        return;
    }
    observer.mLastVersion = mVersion;
    //noinspection unchecked
    observer.mObserver.onChanged((T) mData);
}

This logic is fairly simple: traverse the map mObservers we just saw and, for each observer, check whether it is active (its lifecycle is STARTED or RESUMED). If it is not active, return directly without notifying; otherwise notify the observer's onChanged method as normal.

Of course, if you want to listen at all times and always receive the callback, you can call the observeForever method.
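The version-based dispatch above can be distilled into a minimal, non-Android sketch. The class names here (`TinyLiveData`, `Wrapper`) are made up; it only imitates the two checks in considerNotify — skip inactive observers, and skip observers that have already seen the current version:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A minimal, single-threaded imitation of LiveData's version-based dispatch
class TinyLiveData<T> {
    private static final int START_VERSION = -1;

    private T data;
    private int version = START_VERSION;
    private final List<Wrapper> observers = new ArrayList<>();

    class Wrapper {
        final Consumer<T> observer;
        boolean active = true;           // stands in for STARTED/RESUMED
        int lastVersion = START_VERSION; // last version this observer saw

        Wrapper(Consumer<T> observer) { this.observer = observer; }
    }

    Wrapper observe(Consumer<T> observer) {
        Wrapper w = new Wrapper(observer);
        observers.add(w);
        return w;
    }

    void setValue(T value) {
        data = value;
        version++;
        for (Wrapper w : observers) {
            considerNotify(w);
        }
    }

    private void considerNotify(Wrapper w) {
        if (!w.active) return;                 // inactive observers are skipped
        if (w.lastVersion >= version) return;  // already saw this value
        w.lastVersion = version;
        w.observer.accept(data);
    }
}

public class TinyLiveDataDemo {
    public static void main(String[] args) {
        TinyLiveData<String> live = new TinyLiveData<>();
        List<String> received = new ArrayList<>();
        TinyLiveData<String>.Wrapper w = live.observe(received::add);

        live.setValue("a");  // delivered
        w.active = false;
        live.setValue("b");  // skipped: observer inactive
        System.out.println(received);
    }
}
```

The real LiveData additionally re-delivers the latest value when an observer becomes active again, which is exactly what the stored lastVersion enables.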

What is dependency injection? Why do we need it?

In short, dependency injection means that the objects a class uses internally are instantiated externally. The class does not instantiate them itself; an external container creates them and injects them into the caller, forming the dependency.

For example:
There is a User field in an Activity. Normally, to use this user you must instantiate it, otherwise it is null. With dependency injection you don't instantiate it inside the Activity, yet you can use it directly.

class MainActivity : BaseActivity() {
    lateinit var user: User
}

This user can be used directly. Doesn't it feel a bit magical that no manual wiring is needed? Of course, the injection code hasn't been written yet; we'll complete it later. This just conveys what dependency injection means.

So what are the benefits of having an external container instantiate objects? The biggest is reducing manual dependency wiring. The main points:

  • A dependency injection library automatically releases objects that are no longer used, reducing excessive resource consumption.
  • Within a configured scope, dependencies and created instances can be reused, improving code reuse and removing a lot of boilerplate.
  • The code becomes more readable.
  • Easy to build objects.
  • Writing low coupling code makes it easier to test.
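The idea can be shown without any framework. In this hand-rolled sketch (all class names hypothetical), the consumer declares what it needs through its constructor and never calls `new` on its dependency; a tiny container builds the object graph:

```java
// Hand-rolled dependency injection, no framework involved
class User {
    final String name;
    User(String name) { this.name = name; }
}

// The consumer: it declares what it needs but never constructs it
class Screen {
    private final User user;
    Screen(User user) { this.user = user; }  // constructor injection
    String greet() { return "hello, " + user.name; }
}

// A tiny "container" that knows how to build the object graph
class Container {
    User provideUser() { return new User("bob"); }
    Screen provideScreen() { return new Screen(provideUser()); }
}

public class DiDemo {
    public static void main(String[] args) {
        Screen screen = new Container().provideScreen();
        System.out.println(screen.greet());
    }
}
```

Frameworks like Dagger/Hilt generate essentially this Container code for you from annotations, which is also why testing becomes easier: a test can pass a fake User into Screen directly.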

What is Hilt and how to use it?

Obviously, Hilt is a dependency injection library: it encapsulates Dagger and is built on top of it. We all know Dagger is an early dependency injection library, but it is genuinely hard to use and needs a lot of configuration. How simple is Hilt? Let's complete the example above:

@HiltAndroidApp
class MainApplication : Application()

@AndroidEntryPoint
class HiltActivity : AppCompatActivity() {

    @Inject
    lateinit var user: UserData

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // user has been injected by Hilt and can be used here
    }
}

data class UserData(var name: String) {
    @Inject
    constructor() : this("bob")
}

The meaning of these annotations:

  • @HiltAndroidApp. Every app using Hilt must contain an Application annotated with @HiltAndroidApp. It amounts to Hilt's initialization and triggers Hilt's code generation.
  • @AndroidEntryPoint. Marks a class whose dependencies Hilt provides, i.e. the class will use injected instances.
  • @Inject. Tells Hilt how to provide instances of a class. It is commonly used on constructors, non-private fields, and methods.

Which classes does Hilt support injecting into?

1) Android components supported by Hilt can use the @AndroidEntryPoint annotation directly, such as Activity, Fragment, Service, etc.

  • Subclasses of ComponentActivity can use @AndroidEntryPoint directly, as in the example above.
  • For other Android classes, the same annotation must also be added to any Android class they depend on: e.g. if a Fragment is annotated with @AndroidEntryPoint, the Activity that hosts it must be annotated with @AndroidEntryPoint as well.

2) To inject a third-party dependency, use the @Module annotation: an ordinary class annotated with @Module creates the third-party objects. For example, providing an OkHttpClient instance:

@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {

    @Provides
    @Singleton  // provide a single instance
    fun provideOkHttpClient(): OkHttpClient {
        return OkHttpClient.Builder()
            .build()
    }
}

Some new annotations appear here:

  • @Module. Marks a class used to create dependency objects.
  • @InstallIn. A class annotated with @Module must use @InstallIn to specify the module's scope; e.g. a module annotated with @InstallIn(ActivityComponent::class) is bound to the Activity lifecycle.
  • @Provides. Used on methods inside a @Module class; such methods provide dependency objects.
  • @Singleton. Provides a single instance.

3) Special annotations for ViewModel

@ViewModelInject: using the @ViewModelInject annotation on a ViewModel's constructor provides the ViewModel. (Newer Hilt versions replace this with @HiltViewModel plus an @Inject constructor.)

class HiltViewModel @ViewModelInject constructor() : ViewModel() {}

private val mHiltViewModel: HiltViewModel by viewModels()

Talk about DNS and the existing problems

After reading my earlier networking articles, you should know that DNS performs domain name resolution: after you enter a domain name, it must be converted into an IP address, and that conversion is DNS resolution.

However, traditional DNS resolution has several problems, such as:

  • Domain name caching
    The local resolver may cache results and return them directly. Global load balancing can then fail, because the cached answer may no longer be the server nearest to the client, forcing the request to travel a long way.

  • Domain name forwarding
    If carrier A forwards the resolution request to carrier B, and B queries the authoritative DNS server, the authoritative server thinks the client belongs to carrier B and returns an address inside B's network. As a result, every access crosses carriers.

  • Egress NAT
    After network address translation, the authoritative DNS server cannot tell from the address which carrier the client belongs to. It may well guess wrong, again causing cross-carrier access.

  • Domain name updates
    Local DNS servers are deployed independently by different regions and carriers, and they handle resolution caches differently. Some lazily ignore the TTL of resolution results, so the server's new IP is never picked up and traffic keeps going to the old IP.

  • Resolution latency
    A DNS query may recursively traverse multiple DNS servers before obtaining the final result, which introduces delay.

  • Domain hijacking
    A DNS resolver may be hijacked or forged, resolving normal requests to a wrong address.

  • Unreliability
    DNS resolution runs over UDP which, as I said before, is an unreliable protocol: its advantage is low latency, but packets may be lost.

These problems not only slow down access; they can also cause access failures, pages being replaced, and so on.

How to optimize DNS resolution

  • Security optimization

In short, traditional DNS still has all these problems. How do we solve them? Use HTTPDNS.

HTTPDNS replaces traditional DNS resolution: it bypasses the carrier's DNS servers and instead requests a DNS server cluster over HTTP, obtaining the address directly through the HTTP protocol.

  • Because it bypasses the carrier, domain hijacking can be avoided.
  • It resolves based on the client's real source IP, so the results are more accurate.
  • It provides pre-resolution, resolution caching and similar features, so resolution latency is very small.

So the first optimization, for security, is to switch DNS resolution to an HTTPDNS service such as Alibaba Cloud's or Tencent Cloud's, but those services are not free. Is there a free one? Yes: Qiniu's HappyDNS. Add the dependency library, then implement okhttp's Dns interface. A simple example:

//Import library
    implementation 'com.qiniu:happy-dns:0.2.13'
    implementation 'com.qiniu.pili:pili-android-qos:0.8'

// Implement okhttp's Dns interface
public class HttpDns implements Dns {

    private DnsManager dnsManager;

    public HttpDns() {
        IResolver[] resolvers = new IResolver[1];
        try {
            // The resolver's DNS server address is omitted here
            resolvers[0] = new Resolver(InetAddress.getByName(""));
            dnsManager = new DnsManager(NetworkInfo.normal, resolvers);
        } catch (UnknownHostException e) {
            e.printStackTrace();
        }
    }

    @Override
    public List<InetAddress> lookup(String hostname) throws UnknownHostException {
        if (dnsManager == null) {  // fall back to the default resolver if construction failed
            return Dns.SYSTEM.lookup(hostname);
        }

        try {
            String[] ips = dnsManager.query(hostname);  // get the HTTPDNS resolution result
            if (ips == null || ips.length == 0) {
                return Dns.SYSTEM.lookup(hostname);
            }

            List<InetAddress> result = new ArrayList<>();
            for (String ip : ips) {  // convert the ip string array into the required object list
                result.addAll(Arrays.asList(InetAddress.getAllByName(ip)));
            }
            // Other known IP addresses can be added here before returning the result
            return result;
        } catch (IOException e) {
            e.printStackTrace();
        }
        // Use the default resolution when an exception occurs
        return Dns.SYSTEM.lookup(hostname);
    }
}
//Replace dns resolution of okhttp
OkHttpClient okHttpClient = new OkHttpClient.Builder().dns(new HttpDns()).build();

  • Speed optimization

In a test environment, we can actually configure an IP whitelist directly, skip the DNS resolution process, and return the IP address immediately. For example:

    private static class TestDNS implements Dns {
        @Override
        public List<InetAddress> lookup(@NotNull String hostname) throws UnknownHostException {
            // The whitelisted host name is omitted here
            if ("".equalsIgnoreCase(hostname)) {
                InetAddress byAddress = InetAddress.getByAddress(hostname,
                        new byte[]{(byte) 192, (byte) 168, 1, 1});
                return Collections.singletonList(byAddress);
            } else {
                return Dns.SYSTEM.lookup(hostname);
            }
        }
    }
What about DNS resolution timeouts?

When we make a network request with okhttp, if the network device switches routes and the network stops responding, an UnknownHostException is thrown only after a long wait. The connectTimeout we set on okhttp does not apply to DNS resolution.

In this case, we need to enforce the timeout ourselves in a custom Dns class:

public class TimeDns implements Dns {
    private long timeout;

    public TimeDns(long timeout) {
        this.timeout = timeout;
    }

    @Override
    public List<InetAddress> lookup(final String hostname) throws UnknownHostException {
        if (hostname == null) {
            throw new UnknownHostException("hostname == null");
        } else {
            try {
                FutureTask<List<InetAddress>> task = new FutureTask<>(
                        new Callable<List<InetAddress>>() {
                            @Override
                            public List<InetAddress> call() throws Exception {
                                return Arrays.asList(InetAddress.getAllByName(hostname));
                            }
                        });
                new Thread(task).start();
                // Wait at most `timeout` milliseconds for the lookup to finish
                return task.get(timeout, TimeUnit.MILLISECONDS);
            } catch (Exception var4) {
                UnknownHostException unknownHostException =
                        new UnknownHostException("Broken system behaviour for dns lookup of " + hostname);
                unknownHostException.initCause(var4);
                throw unknownHostException;
            }
        }
    }
}
//Replace dns resolution of okhttp
OkHttpClient okHttpClient = new OkHttpClient.Builder().dns(new TimeDns(5000)).build();
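The FutureTask pattern used in TimeDns works for any blocking call, not just DNS. A self-contained sketch (the slow `Callable` here simulates a hung lookup; `withTimeout` is a hypothetical helper name):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    // Run a blocking call on a worker thread; give up after `timeoutMs`
    static <T> T withTimeout(Callable<T> call, long timeoutMs) throws Exception {
        FutureTask<T> task = new FutureTask<>(call);
        Thread t = new Thread(task);
        t.setDaemon(true);  // don't keep the JVM alive for an abandoned lookup
        t.start();
        return task.get(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        // Fast call: completes well within the timeout
        System.out.println(withTimeout(() -> "resolved", 1000));

        // Slow call: simulates a DNS lookup that hangs
        try {
            withTimeout(() -> { Thread.sleep(5000); return "never"; }, 200);
        } catch (TimeoutException e) {
            System.out.println("lookup timed out");
        }
    }
}
```

Note the caveat of this approach: the worker thread is abandoned, not interrupted, so the underlying lookup may still be running in the background after the timeout fires.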

What is an annotation? What meta-annotations are there?

An annotation, in my view, is descriptive information that does not affect code execution but can be used to configure code or features.

Take a common annotation such as @Override, which marks a method as overriding. Look at how it is declared:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.SOURCE)
public @interface Override {
}

You can see that Override is declared with @interface, marking it as an annotation, and that two annotations, @Target and @Retention, sit above it. Annotations that modify other annotations like this are called meta-annotations, the most basic annotations. Java has four meta-annotations:

  • @Target: indicates where the annotation can be applied.
  • @Retention: indicates how long the annotation is retained.
  • @Inherited: indicates that the annotation type can be automatically inherited by subclasses.
  • @Documented: indicates that elements annotated with this type will be documented by javadoc or similar tools.

Specifically, how are these meta-annotations used?

  • @Target

@Target indicates where an annotation may appear. For example, the Override annotation specifies ElementType.METHOD, meaning its scope is methods: the annotation can only be placed on a method. The full set of target values:

  • TYPE: class, interface, enum, or annotation type.
  • FIELD: field (member variable, including enum constants).
  • METHOD: method.
  • PARAMETER: method parameter.
  • CONSTRUCTOR: constructor.
  • LOCAL_VARIABLE: local variable.
  • ANNOTATION_TYPE: annotation type.
  • PACKAGE: package declaration.
  • TYPE_PARAMETER: type parameter.
  • TYPE_USE: any use of a type.

For example, ANNOTATION_TYPE means the annotation's scope is annotations themselves. Haha, a bit circular. Look at the code of the Target annotation:

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.ANNOTATION_TYPE)
public @interface Target {
    /**
     * @return an array of the kinds of elements an annotation type
     * can be applied to
     */
    ElementType[] value();
}

It takes an array of ElementType, the scope parameter mentioned above. Target is itself annotated with @Target(ElementType.ANNOTATION_TYPE): it annotates itself and sets its own scope to annotations. Nicely self-referential.

  • @Retention

Indicates how long the annotation is retained, i.e. its lifetime. The optional values:

  • SOURCE: exists only in the Java source file; the compiler discards it. Suitable for check-only annotations such as @Override.

  • CLASS: takes effect in the compiled class file. The annotation exists in the source and in the bytecode the compiler generates, but the VM no longer keeps it at runtime. This is the default. Suitable for compile-time preprocessing, such as ButterKnife's @BindView, which generates helper code during compilation.

  • RUNTIME: exists in the source and the compiled class bytecode, and is retained by the VM at runtime, so the annotation can be read via reflection. Suitable for annotations that must be obtained dynamically at runtime.

  • @Inherited

Indicates that the annotation type can be automatically inherited by the class. There are two points to note:

  • Class. In other words, only in the class integration relationship can the subclass integrate the annotation modified by @ Inherited in the annotation used by the parent class. Other interface integration relationships and class implementation interface relationships will not have automatic inheritance annotations.

  • Automatic inheritance. In other words, if the parent class has the annotation modified by @ Inherited, the child class does not need to write this annotation and will have this annotation automatically.

Let's take an example:

@Inherited
public @interface MyInheritedAnnotation {
    // annotation 1, modified with @Inherited
}

public @interface MyAnnotation {
    // annotation 2, not modified with @Inherited
}

@MyInheritedAnnotation
@MyAnnotation
public class BaseClass {
    // the parent class carries both annotations
}

public class ExtendClass extends BaseClass {
    // the subclass inherits the parent's MyInheritedAnnotation,
    // but not MyAnnotation
}
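A runnable version of this example. Note that both annotations need RUNTIME retention, otherwise reflection cannot see them at all:

```java
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Inherited
@Retention(RetentionPolicy.RUNTIME)
@interface MyInheritedAnnotation {}

@Retention(RetentionPolicy.RUNTIME)
@interface MyAnnotation {}

@MyInheritedAnnotation
@MyAnnotation
class BaseClass {}

class ExtendClass extends BaseClass {}

public class InheritedDemo {
    public static void main(String[] args) {
        // Inherited from BaseClass because MyInheritedAnnotation has @Inherited
        System.out.println(ExtendClass.class.isAnnotationPresent(MyInheritedAnnotation.class));
        // Not inherited: MyAnnotation lacks @Inherited
        System.out.println(ExtendClass.class.isAnnotationPresent(MyAnnotation.class));
    }
}
```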

  • @Documented

Indicates that the elements with this annotation can be documented through tools such as javadoc, that is, when generating Java API documents, they will be written into the documents.

What can annotations be used for

It is mainly used for the following purposes:

  • Reduce the coupling degree of the project.
  • Automatically complete some regular code.
  • Automatically generate java code to reduce the workload of developers.

What does serialization mean? What's the usage?

Serialization turns an object into an ordered byte stream so it can be transmitted, stored, and so on.
Deserialization is the reverse operation: restoring the serialized bytes into an in-memory object.

Introduce two serialization interfaces in Android

  • Serializable

It is a serialization interface provided by Java. It is an empty interface that provides serialization and deserialization operations for objects. The specific use is as follows:

public class User implements Serializable {
    private static final long serialVersionUID = 519067123721561165L;
    private int id;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }
}

Implement the Serializable interface and declare a serialVersionUID.

Someone may ask: wait, we usually don't write a serialVersionUID. True — serialVersionUID is not mandatory, because if you omit it the system generates one automatically. What is it for? During serialization the system writes the current class's serialVersionUID into the serialized file; during deserialization it checks whether that value matches the serialVersionUID of the current class. If they match, deserialization proceeds normally; if not, an error is thrown.

So serialVersionUID is an identifier of consistency between serialization and deserialization. What happens if you don't set it? If we change some member variables of the class after serializing, the auto-generated serialVersionUID changes too, and deserializing the previously serialized data then fails. By specifying serialVersionUID manually, we maximize the chance of recovering the data.
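A minimal round trip on a plain JVM, showing serialization to bytes and back:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerialDemo {
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;  // explicit version identifier
        int id;
        User(int id) { this.id = id; }
    }

    public static void main(String[] args) throws Exception {
        // Serialize to a byte array
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new User(42));
        }

        // Deserialize; this succeeds because the serialVersionUID matches
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            User restored = (User) ois.readObject();
            System.out.println(restored.id);
        }
    }
}
```

If the class definition changed between writing and reading and serialVersionUID were auto-generated, the read side would throw an InvalidClassException instead.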

  • Parcelable

Parcelable is Android's own interface and is considerably more troublesome to use: implement the Parcelable interface, override describeContents() and writeToParcel(Parcel dest, int flags), and add a static CREATOR field that implements the Parcelable.Creator interface:

public class User implements Parcelable {
    private int id;

    protected User(Parcel in) {
        id = in.readInt();
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeInt(id);
    }

    @Override
    public int describeContents() {
        return 0;
    }

    public static final Creator<User> CREATOR = new Creator<User>() {
        @Override
        public User createFromParcel(Parcel in) {
            return new User(in);
        }

        @Override
        public User[] newArray(int size) {
            return new User[size];
        }
    };

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }
}

  • createFromParcel, together with the User(Parcel in) constructor, recreates the original object from the serialized data.
  • newArray creates an array of the original type with the specified length.
  • writeToParcel writes the current object into the serialization structure.
  • describeContents returns a content description of the current object: 1 if it contains a file descriptor, otherwise 0.

What is the difference between the two, and how should we choose?

Serializable is a serialization interface provided by Java. It is simple to use but expensive. Both serialization and deserialization require a lot of I/O operations.
Parcelable is provided in Android and is also the recommended serialization method in Android. Although it is troublesome to use, it is very efficient.

Therefore, if it is the memory serialization level, it is recommended to be Parcelable, because it will be more efficient.
In the case of network transmission and disk storage, Serializable is recommended, because the serialization method is relatively simple, and Parcelable cannot guarantee the continuity of data when external conditions change.

  • Serializable

The essence of Serializable is serializing a Java object into binary data, which can be passed between processes, sent over the network, or stored locally, because what it ultimately produces is a byte stream. Look at the source:

private void writeObject0(Object obj, boolean unshared)
        throws IOException {
    try {
        Object orig = obj;
        Class<?> cl = obj.getClass();
        ObjectStreamClass desc;
        desc = ObjectStreamClass.lookup(cl, true);
        if (obj instanceof Class) {
            writeClass((Class) obj, unshared);
        } else if (obj instanceof ObjectStreamClass) {
            writeClassDesc((ObjectStreamClass) obj, unshared);
        // END Android-changed: Make Class and ObjectStreamClass replaceable.
        } else if (obj instanceof String) {
            writeString((String) obj, unshared);
        } else if (cl.isArray()) {
            writeArray(obj, desc, unshared);
        } else if (obj instanceof Enum) {
            writeEnum((Enum<?>) obj, desc, unshared);
        } else if (obj instanceof Serializable) {
            writeOrdinaryObject(obj, desc, unshared);
        } else {
            if (extendedDebugInfo) {
                throw new NotSerializableException(
                        cl.getName() + "\n" + debugInfoStack.toString());
            } else {
                throw new NotSerializableException(cl.getName());
            }
        }
    } finally {
        // ...
    }
}

private void writeOrdinaryObject(Object obj,
                                 ObjectStreamClass desc,
                                 boolean unshared)
        throws IOException {
    try {
        // Write the binary data; ordinary objects start with the magic number 0x73 (TC_OBJECT)
        bout.writeByte(TC_OBJECT);
        // Write the descriptor of the corresponding class; see the source below
        writeClassDesc(desc, false);
        handles.assign(unshared ? null : obj);
        if (desc.isExternalizable() && !desc.isProxy()) {
            writeExternalData((Externalizable) obj);
        } else {
            writeSerialData(obj, desc);
        }
    } finally {
        // ...
    }
}

    public long getSerialVersionUID() {
        // If serialVersionUID is not defined, the serialization mechanism
        // computes a hash value from the members inside the class
        if (suid == null) {
            suid = AccessController.doPrivileged(
                new PrivilegedAction<Long>() {
                    public Long run() {
                        return computeDefaultSUID(cl);
                    }
                }
            );
        }
        return suid.longValue();
    }

You can see that reflection obtains the information about the object and its fields, then the data is written out as binary, along with the serialization protocol version and so on.
The logic for obtaining serialVersionUID is also visible: if it is not defined, a hash value is computed from the class's members.

  • Parcelable

Parcelable stores data in memory through Parcel. In short, Parcel provides a mechanism for writing serialized data into a piece of shared memory; other processes can read the byte stream from that shared memory through Parcel and deserialize it back into objects.

This is implemented via native methods; I haven't analyzed the details — if you have, feel free to share in the comments.

Of course, Parcelable can also be persisted, which involves Parcel's marshall() and unmarshall() methods. A simple example:

protected void saveParce() {
        FileOutputStream fos;
        try {
            fos = getApplicationContext().openFileOutput(TAG, Context.MODE_PRIVATE);
            BufferedOutputStream bos = new BufferedOutputStream(fos);
            Parcel parcel = Parcel.obtain();
            parcel.writeParcelable(new ParceData(), 0);
            // marshall() turns the Parcel's contents into a byte array for storage
            bos.write(parcel.marshall());
            bos.flush();
            bos.close();
            parcel.recycle();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    protected void loadParce() {
        FileInputStream fis;
        try {
            fis = getApplicationContext().openFileInput(TAG);
            byte[] bytes = new byte[fis.available()];
            fis.read(bytes);
            fis.close();
            Parcel parcel = Parcel.obtain();
            // unmarshall() restores the Parcel from the stored byte array
            parcel.unmarshall(bytes, 0, bytes.length);
            parcel.setDataPosition(0);
            ParceData data = parcel.readParcelable(ParceData.class.getClassLoader());
            parcel.recycle();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

Serialization summary

1) For memory serialization, it is recommended to use Parcelable. Why?

  • Because Serializable stores a binary file, it will have frequent IO operations, consume a lot, and use a lot of reflection, which is also time-consuming. In contrast, Parcelable is much more efficient.

2) Serializable is recommended for data persistence. Why?

  • First of all, Serializable itself is stored in binary files, so it is more convenient for persistence. The Parcelable serialization operates in memory. If the data in memory will disappear when the process is shut down or restarted, the Parcelable serialization may fail to persist, that is, the data will not be continuous and complete. Moreover, another problem with Parcelable is compatibility. The internal implementation of each Android version may be different. If knowledge is used in memory, that is, to transfer data, it will not be affected. However, there may be a problem if it is persisted, and there may be a compatibility problem when the data of the lower version gets the higher version. Therefore, it is recommended to use Serializable for persistence.

3) Must parcel be faster than Serializable?

  • An interesting example is: when a super large object chart is serialized (indicating that through an object, many other objects can be accessed through a certain path), and each object has more than 10 attributes, and Serializable implements writeObject() and readObject(). On average, the Serializable serialization speed is 3.6 times faster than Parcelable and the deserialization speed is 1.6 times faster on each Android device

The specific reason is that there is a concept of caching in the implementation of Serilazable. When an object is parsed, it will be cached in the HandleTable. After the object of the same type is parsed next time, you can write the corresponding cache index to the binary stream. But for Parcel, there is no such concept. Every serialization is independent. Each object is treated as a new object and a new type.

Introduction to LruCache

LruCache is Android 3 1 provides a cache class for data cache, which is generally used for image memory cache. Lru's English is Least Recently Used, that is, the Least Recently Used algorithm. The core idea is that when the cache is full, it will give priority to those cache objects that have been Least Recently Used.

When we load pictures on the network, we must cache the pictures, so that the next time we load pictures, we can get them directly from the cache. Level 3 cache should be familiar to everyone, including memory, hard disk and network. Therefore, memory cache and hard disk cache are generally used, and the memory cache is LruCache.

LruCache usage

public class MyImageLoader {
    private LruCache<String, Bitmap> mLruCache;

    public MyImageLoader() {
        int maxMemory = (int) (Runtime.getRuntime().maxMemory())/1024;
        int cacheSize = maxMemory / 8;
        mLruCache = new LruCache<String, Bitmap>(cacheSize) {
            protected int sizeOf(String key, Bitmap value) {
                return value.getRowBytes()*value.getHeight()/1024;


     * Add picture cache
    public void addBitmap(String key, Bitmap bitmap) {
            mLruCache.put(key, bitmap);

     * Get pictures from cache
    public Bitmap getBitmap(String key) {
        return mLruCache.get(key);


Using the above method, you only need to provide the total capacity of the cache and rewrite the sizeOf method to calculate the size of the cache object. Here, the size of the total capacity is also a general method, that is, 1 / 8 of the available memory of the process, in kb. Then you can use the put method to add the cache object and the get method to get the cache object.

LruCache principle

In fact, the principle is also very simple, that is, the LRU algorithm is used and the LinkedHashMap is used internally for storage. When the cache is full, the least recently used element is removed. How do you guarantee to find the least recent element? Every time you use the get method to access an element or add an element, move the element to the tail of the LinkedHashMap, so that the first element is the least frequently used element, which can be removed after the capacity is full.

Simply look at the source code:

 public LruCache(int maxSize) {
       if (maxSize <= 0) {
           throw new IllegalArgumentException("maxSize <= 0");
       this.maxSize = maxSize; = new LinkedHashMap<K, V>(0, 0.75f, true);

   public final V put(K key, V value) {
       if (key == null || value == null) {
           throw new NullPointerException("key == null || value == null");

       V previous; //Find out whether the element corresponding to the key already exists
       synchronized (this) {
           //Calculate the size of the entry
           size += safeSizeOf(key, value); 
           previous = map.put(key, value);
           if (previous != null) {
             //If the previous entry exists, the memory occupied by the previous entry is subtracted first
               size -= safeSizeOf(key, previous);

       if (previous != null) {
       //If it exists before, call this method overridden by the entryRemoved callback subclass to do some processing
           entryRemoved(false, key, previous, value);
       //According to the maximum capacity, calculate whether the least frequently used entry needs to be eliminated
       return previous;

    public final V get(K key) {
      if (key == null) {
          throw new NullPointerException("key == null");

      V mapValue;
      //Query the qualified etnry according to the key
      synchronized (this) {
          mapValue = map.get(key);
          if (mapValue != null) {
              return mapValue;

       * Attempt to create a value. This may take a long time, and the map
       * may be different when create() returns. If a conflicting value was
       * added to the map while create() was working, we leave that value in
       * the map and release the created value.

      V createdValue = create(key);
      if (createdValue == null) {
          return null;

      synchronized (this) {
          //mapValue returns an entry that already has the same key
          mapValue = map.put(key, createdValue);

          if (mapValue != null) {
              // There was a conflict so undo that last put
              map.put(key, mapValue);
          } else {
              size += safeSizeOf(key, createdValue);

      if (mapValue != null) {
          entryRemoved(false, key, createdValue, mapValue);
          return mapValue;
      } else {
          return createdValue;

In fact, it can be seen that the LruCache class itself does not do much, limits the size of the cache map, and then uses LinkHashMap to complete the LRU cache strategy. Therefore, the main implementation of LRU logic is still in LinkHashMap. LinkedHashMap is a combination of HashMap and linked list. It records the order and link relationship of elements through linked list, and stores data through HashMap. It can control the output order of elements when they are traversed. It is a two-way linked list. As mentioned above, it will put the recently accessed elements at the end of the queue. If you are interested, you can see the source code of LinkHashMap.

What happened from the creation of Activity to the interface we saw

  • Firstly, the layout is loaded through setContentView, in which a DecorView is created, then different root layout files are loaded according to the theme or Feature set by the activity, and finally the layoutResID resource file is loaded through the insert method. In fact, the xml file is parsed and the View object is generated according to the node. flow chart:

  • The second is to draw the view to the interface. This process takes place in the handleResumeActivity method, that is, the method that triggers onResume. Here, a ViewRootImpl object will be created as the parent of the DecorView, and then the DecorView will be measured, laid out and drawn. flow chart:

Activity, PhoneWindow, DecorView and ViewRootImpl?

  • PhoneWindow is the only subclass of Window. Each Activity will create a PhoneWindow object. You can understand it as a Window, but it is not a real visual Window, but a management class. It is the interface between Activity and the whole View system and the middle layer of Activity and View interaction system.

  • DecorView is an internal class of PhoneWindow. It is the top level of the whole View level. It generally includes two parts: title bar and content bar. Different layouts will be adjusted according to different theme characteristics. It is created in the setContentView method, specifically in the installDecor method of PhoneWindow.

  • ViewRootImpl is the parent of DecorView. It is used to control various events of the View and is created in the handleResumeActivity method.

requestLayout and invalidate

  • The requestLayout method is used to trigger the drawing process. It will call the requestLayout of the parent layer by layer until the top layer, that is, the requestLayout of ViewRootImpl, where the thread is judged. Finally, it will execute the three drawing processes of performmeasure - > performlayout - > performdraw, that is, measurement layout drawing.
    public void requestLayout() {
        if (!mHandlingLayoutInLayoutRequest) {
            mLayoutRequested = true;
            scheduleTraversals();//Execute drawing process

The performMeasure method will execute to the measure method of view to measure the size. The performLayout method will execute to the layout method of the view to calculate the location. The performDraw method needs to be noted that it will execute the draw method of view, but it does not necessarily draw. Only flag is set to PFLAG_DIRTY_OPAQUE will draw.

  • The invalidate method is also used to trigger the drawing process, mainly by calling the draw() method. Although it will also go to the scheduleTraversals method, that is, it will go to the three processes, the View will judge whether to carry out onMeasure and onLayout operations through mPrivateFlags. When using the invalidate method, mPrivateFlags is updated, so measure and layout will not be performed. At the same time, he will also set the Flag to PFLAG_DIRTY_OPAQUE, so the onDraw method will be executed.
private void invalidateRectOnScreen(Rect dirty) {
        final Rect localDirty = mDirty;
        if (!mWillDrawSoon && (intersected || mIsAnimating)) {
            scheduleTraversals();//Execute drawing process

Finally, let's take a look at the three drawing process logic in the scheduleTraversals method. Is it as we said before, FORCE_LAYOUT flags are onMeasure and onLayout, PFLAG_DIRTY_OPAQUE flag onDraw:

  public final void measure(int widthMeasureSpec, int heightMeasureSpec) {
    final boolean forceLayout = (mPrivateFlags & PFLAG_FORCE_LAYOUT) == PFLAG_FORCE_LAYOUT;
    // Only mPrivateFlags are pflag_ FORCE_ The onMeasure method is used only when you are in layout
    if (forceLayout || needsLayout) {
      onMeasure(widthMeasureSpec, heightMeasureSpec);

    // Set LAYOUT_REQUIRED flag
    mPrivateFlags |= PFLAG_LAYOUT_REQUIRED;

  public void layout(int l, int t, int r, int b) {
    //The judgment flag bit is pflag_ LAYOUT_ onLayout method only when required
    if (changed || (mPrivateFlags & PFLAG_LAYOUT_REQUIRED) == PFLAG_LAYOUT_REQUIRED) {
        onLayout(changed, l, t, r, b);

public void draw(Canvas canvas) {
    final int privateFlags = mPrivateFlags;
    // flag is PFLAG_DIRTY_OPAQUE needs to be drawn
    final boolean dirtyOpaque = (privateFlags & PFLAG_DIRTY_MASK) == PFLAG_DIRTY_OPAQUE &&
            (mAttachInfo == null || !mAttachInfo.mIgnoreDirtyState);
    mPrivateFlags = (privateFlags & ~PFLAG_DIRTY_MASK) | PFLAG_DRAWN;
    if (!dirtyOpaque) {
    if (!dirtyOpaque) onDraw(canvas);
    // Draw Child
    // foreground is drawn every time regardless of the dirtyOpaque flag

There is a good summary in the reference article:

Although both are used to trigger the drawing process, in the process of measure and layout, only the flag will be set to force_ The layout will be re measured and re laid out, while the draw method will only redraw the area with flag as dirty. requestLayout is used to set FORCE_LAYOUT flag, invalid is used to set the dirty flag. Therefore, requestLayout will only trigger measure and layout, and invalidate will only trigger draw.

Why does the system provide a Handler

  • We should all know that this is to switch threads, mainly to solve the problem that sub threads cannot access the UI.

So why doesn't the system allow access to the UI in child threads?

  • Because the UI control of Android is not thread safe, the single thread model is used to process UI operations. You can switch the threads of UI access through the Handler.

So why not lock the UI control?

  • Because locking will complicate the logic of UI access, reduce the efficiency of UI access and block the execution of threads.

How does the Handler get the Looper of the current thread

  • Everyone should know that Looper is bound to threads. Its scope is threads, and different threads have different loopers, that is, to take out the Looper object from different threads. ThreadLocal is used here.

Suppose we don't know this class. If we want to complete such a requirement and obtain the Looper in the thread from different threads, can we use a global object, such as hashmap, to store the thread and the corresponding Looper? Therefore, we need a class to manage Looper. However, there is not only the data to be stored and obtained in the thread, but also other requirements, which are also bound to the thread. Therefore, our system has designed a tool class such as ThreadLocal.

The workflow of ThreadLocal is as follows: we can access the get method of the same ThreadLocal from different threads, and then ThreadLocal will take an array from their respective threads, and then find the corresponding value value through the ThreadLocal index in the array. For the specific logic, let's take a look at the code, which are ThreadLocal's get method and set method:

    public void set(T value) {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = getMap(t);
        if (map != null)
            map.set(this, value);
            createMap(t, value);
    ThreadLocalMap getMap(Thread t) {
        return t.threadLocals;
 	public T get() {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = getMap(t);
        if (map != null) {
            ThreadLocalMap.Entry e = map.getEntry(this);
            if (e != null) {
                T result = (T)e.value;
                return result;
        return setInitialValue();

First look at the set method, get the current thread, then take out the threadLocals variable in the thread, which is a ThreadLocalMap class, and then save the current ThreadLocal as the key and the value to be set as the value in this map.

The same is true for the get method. It still obtains the current thread, then takes out the ThreadLocalMap instance in the thread, and then gets the value corresponding to the current ThreadLocal.

In fact, we can see that the objects of operation are ThreadLocalMap instances in the thread, that is, the reading and writing operations are only limited to the thread, which is the subtlety of ThreadLocal's deliberate design. It can read and write data in different threads without interference between threads.

Draw a picture to facilitate understanding and memory:

What are you doing when there is no message in the MessageQueue? Will it occupy CPU resources.

  • When there is no message in the MessageQueue, it will be blocked in the queue of the loop The next () method is here. Specifically, it will be called into the nativePollOnce method and finally into epoll_wait() to block and wait.

At this time, the main thread will sleep, so it will not consume CPU resources. When the next message arrives, it will write data through the pipe pipe, and then wake up the main thread to work.

The mechanism involved in blocking and wake-up is called epoll mechanism.

Let's start with file descriptors and I/O multiplexing:

In the Linux operating system, everything can be regarded as a file, and the file descriptor is abbreviated as fd. When the program opens an existing file or creates a new file, the kernel returns a file descriptor to the process, which can be understood as an index value.

I/O multiplexing is a mechanism that allows a single process to monitor multiple file descriptors. Once a descriptor is ready (generally read ready or write ready), it can notify the program to perform the corresponding read-write operation

Therefore, I/O multiplexing is actually a notification mechanism for listening, reading and writing, and the three IO multiplexing methods provided by Linux are: select, poll and epoll. Among them, epoll is the best multi-channel I/O ready notification method.

Therefore, epoll used here is actually an I/O multiplexing method, which is used to monitor the I/O events of multiple file descriptors. Through epoll_ The wait method waits for I/O events and blocks the calling thread if no events are currently available.

What is Binder

First borrow a passage from the Divine Book "exploration of Android development art":

Intuitively, Binder Is a class that implements IBinder Interface.

from IPC(Inter-Process Communication,In terms of interprocess communication, Binder yes Android A cross process communication mode in.

It can also be understood as a virtual physical device whose device driver is/dev/binder. 

from Android FrameWork From a perspective, Binder yes ServiceManager Connect various Manager(ActivityManager,WindowManager Etc.) and response ManagerService Bridges.

from Android For the application layer, Binder It is the medium of communication between client and server.

There are many concepts, right? In fact, Binder is used for inter process communication. It is an IPC method. All the following explanations are related to Binder's practical application.

Whether it is to obtain other system services or the communication between the server and the client, it comes from Binder's inter process communication capability.

Binder communication process and principle

First, let's look at a picture. The original picture is also from the book of God:

First of all, it should be clear that the client process cannot directly operate the classes and methods in the server, because different processes do not share resources directly. The server-side object of Binder is just a reference of the server-side process.

The overall communication process is:

  • The client sends a request to the server through a proxy object.
  • The proxy object is sent to the server process through the Binder driver
  • The server process processes the request and returns the processing result to the proxy object through the Binder driver
  • The proxy object returns the result to the client.

Let's take another look at the working model commonly used in our application, as shown in the figure above:

This is our common working model at the application level. We obtain various system process services through service manager. The communication process here is as follows:

  • All classes across processes on the server must inherit the Binder class, so it is the Binder entity corresponding to the server. This class is not an actual remote Binder object, but a Binder reference (i.e. the class reference of the server), which will be mapped in the Binder driver.
  • When the client wants to call the remote object function, it only needs to write the data to Parcel and call the transact() function of the Binder reference it holds
  • During the execution of transact function, data such as parameters and identifiers (marking remote objects and their functions) will be put into the shared memory of the Client. Binder driver reads data from the shared memory of the Client and finds the shared memory of the corresponding remote process according to these data.
  • Then copy the data to the shared memory of the remote process and notify the remote process to execute the onTransact() function, which also belongs to Binder class.
  • After the Binder object of the remote process is executed, it writes the obtained into its own shared memory. The Binder driver copies the shared memory data of the remote process to the shared memory of the client and wakes up the client thread.

Therefore, the more important thing in the communication process is the Binder reference of the server to find and communicate with the server.

Some people may be confused when they see here. Why didn't Cheng Chi in the middle line of the picture be used?

  • As can be seen from the first figure, the Binder thread pool is located on the server side. Its main function is to uniformly forward the Binder requests of each business module to the remote Servie for execution, so as to avoid the repeated process of creating services. That is, there is only one server, but it can process Binder requests from multiple different clients.

Application in Android

What else can you think of about Binder's application in Android besides the ServiceManager just now?

  • System services are services obtained through getSystemService, that is, through ServiceManager. For example, the start-up scheduling of the four components is passed to the ActivityManagerService through the Binder mechanism, and then fed back to Zygote. In our daily application, we obtain services through getSystemService (getapplication()) WINDOW_ Service code acquisition.
  • AIDL(Android Interface definition language). For example, we define an iserver Aidl file, Aidl tool will automatically generate an iserver java interface class of java (including Stub, Proxy and other internal classes).
  • When the foreground process binds the background service process through bindService, onserviceconnected (componentname, IBinder service) returns the IBinder object, which can be accessed through IServer Stub. Asinterface (service) gets the object of the internal class Proxy of IServer, which implements the IServer interface.

Binder advantage

In Linux, Binder is definitely not the only way of process communication, but also the following:

Pipeline( Pipe)
Signal( Signal)
Message queue( Message)
Shared memory( Share Memory)
Socket( Socket)

After that, Binder has the following advantages:

  • High performance and efficiency: traditional IPC (socket, pipeline and message queue) needs to copy memory twice, Binder only needs to copy memory once, and shared memory does not need to copy memory.
  • Good security: the receiver can obtain the process Id and user Id of the sending from the data packet to facilitate the verification of the identity of the sender. Other IPC can only actively store the experiment, but this may be modified during the sending process.

Friends familiar with Zygote may know that when the fork() process, that is, when sending a message to Zygote process to create a process, the inter process communication method used is not Binder, but Socket. This is mainly because fork does not allow multithreading, and Binder communication is multithreading.

Therefore, the specific situation still needs to choose the appropriate IPC mode.

Let's talk about the caching mechanism of RecyclerView. Slide 10 and then slide back. Several will execute onBindView. What is cached? Will cachedView execute onBindView?

RecyclerView prefetching mechanism

These two questions are about caching, so I'll talk about them together.

1) First, let's talk about the cache structure of RecycleView:

Recycleview has four levels of cache: attachedsrap (on-screen), mcacheviews (off-screen), mviewcacheextension (custom cache), and mrecyclerpool (buffer pool)

  • Attachedsrap (on-screen), which is used for rapid reuse of itemview on-screen without re creating createView and bindView
  • Mcacheviews (off screen) saves the ViewHolder that has recently moved out of the screen, including data and position information. Only viewholders in the same position can be reused when reusing. Application scenarios are in those lists that need to slide back and forth. When sliding back, ViewHolder data can be reused directly without rebinding view.
  • Mviewcacheextension (user-defined cache) is not used directly and needs to be customized by the user. It is not implemented by default.
  • Mrecyclerpool (buffer Pool). When the cacheView is full or the adapter is replaced, put the ViewHolder removed from the cacheView into the Pool. Before putting it, the ViewHolder data will be cleared. Therefore, it is necessary to rebind the bindView during reuse.

2) The level 4 cache is read sequentially as needed. Therefore, the complete caching process is:

  1. Save cache process:
  • When inserting or deleting itemView, first save the ViewHolder in the screen to attachedsrap
  • When sliding the screen, the first disappeared itemviews will be saved to the CacheView. The default size of the CacheView is 2. If the number exceeds the first in first out principle, the itemviews removed from the header will be saved to the RecyclerPool cache pool (if there is a custom cache, it will be saved to the custom cache). The RecyclerPool cache pool will be saved according to the itemtype of the itemview. The number of caches for each itemtype is 5, and if it exceeds the number, it will be recycled.
  1. Get cache process:
  • Get from attachedsrap, match holder through pos - > get failed, get from CacheView, also get holder cache through POS
    ——>Failed to get cache from custom cache - > failed to get cache from mrecyclepool
    ——>Failed to get. Recreate viewholder - createViewHolder and bindview.

3) After understanding the cache structure and cache process, let's take a look at the specific problems
Slide 10 and then slide back. How many will execute onBindView?

  • According to the previous cache structure, there is only one cache that needs to re execute onBindView, that is, the cache pool mrecyclepool.

So let's assume that the disk starts from loading RecyclView (the page can accommodate 7 pieces of data):

  • First, the seven pieces of data will call onCreateViewHolder and onBindViewHolder in turn.
  • Slide down one (position=7), and the data with position=0 will be put into mCacheViews. At this time, the number of mCacheViews caches is 1 and the number of mrecyclerpoles is 0. Then, for the new data with position=7, the corresponding ViewHolder cannot be found in mCacheViews through position, and the corresponding data cannot be found in mrecyclepool through itemtype, so onCreateViewHolder and onBindViewHolder methods will be called.
  • Slide down another piece of data (position=8), as above.
  • Slide down another piece of data (position=9). The data with position=2 will be put into mCacheViews. However, since the default capacity of mCacheViews cache is 2, the data with position=0 will be emptied and put into the mrecyclepool cache pool. For the new position=9 data, since the ViewHolder of the corresponding type cannot be found in the mRecyclerPool, the onCreateViewHolder and onBindViewHolder methods will still be used. Therefore, at this time, the number of mCacheViews caches is 2 and the number of mrecyclerpoles is 1.
  • Slide down another piece of data (position=10). At this time, the ViewHolder of the same viewtype can be found in mRecyclerPool. Therefore, it is reused directly, and the onBindViewHolder method is called to bind data.
  • And so on. The two pieces of data that have just disappeared will be put into mCacheViews. When they reappear, the onBindViewHolder method will not be called, while the reused third piece of data is obtained from mrecyclepool, the onBindViewHolder method will be called.

4) Therefore, this problem comes to a conclusion (assuming that the capacity of mCacheViews is the default value of 2):

  • If new data is slid at the beginning, then 10 bindview methods will be used when sliding 10. Then slide back and take 10-2 bindview methods. A total of 18 calls.

  • If the old data is slid at the beginning, if you slide 10-2, you will go through 8 bindview methods. Then slide back and take 10-2 bindview methods. A total of 16 calls.

However, the actual situation is a little different. Because Recycleview introduces a new mechanism in v25, prefetching mechanism.

The prefetching mechanism is that during the sliding process, an element to be displayed will be cached in mCachedViews in advance. Therefore, when sliding 10 elements, the 11th element will also be created, and the bindview method will be used once more. However, it will not be affected when sliding back, because even if a cached data is fetched in advance, the bindview method is only advanced, which does not affect the total number of bound item s.

Therefore, when new data is sliding, the bindview method will be called again.

5) Summary, how to answer the question?

  • Level 4 cache and process.
  • Slide 10 and then slide back. bindview can be called 19 times or 16 times.
  • The cached view is actually the view of the cached item. In Recycleview, it is viewholder.
  • cachedView is the view in the mCacheViews cache. There is no need to rebind the data.

How to realize the local update of RecyclerView? Have you used payload and the parameters in notifyItemChange method?

There are several methods to update RecycleView data:

  • notifyDataSetChanged(), refresh all visible item s.
    *notifyItemChanged(int), refresh the specified item.
  • notifyItemRangeChanged(int,int), refresh the specified item from the specified position.
  • notifyItemInserted(int),notifyItemMoved(int),notifyItemRemoved(int). Insert, move, and refresh automatically.
  • notifyItemChanged(int, Object), local refresh.

It can be seen that the local refresh of view is the notifyItemChanged(int, Object) method, which is described in detail below:

notifyItemChange has two construction methods:

  • notifyItemChanged(int position, @Nullable Object payload)
  • notifyItemChanged(int position)

Among them, the payload parameter can be considered as a sign you want to refresh. For example, sometimes I just want to refresh the textview in itemView, and sometimes I just want to refresh the imageview? Or do I just want to highlight the text color of a view? Then I can mark this special requirement through the payload parameter.

How to do it? For example, if I call notifyItemChanged (14,"changeColor"), make a judgment in the onBindViewHolder callback method:

    public void onBindViewHolder(ViewHolderholder, int position, List<Object> payloads) {
        if (payloads.isEmpty()) {
            // payloads is empty, indicating that the entire ViewHolder is updated
            onBindViewHolder(holder, position);
        } else {
            // payloads is not empty. Only the View that needs to be updated can be updated.
            String payload = payloads.get(0).toString();
            if ("changeColor".equals(payload)) {

RecyclerView nested RecyclerView sliding conflict, NestScrollView nested RecyclerView.

1) When RecyclerView is nested with RecyclerView, if both of them need to slide up and down, it will cause sliding conflict. By default, the RecycleView of the outer layer is slidable and the inner layer is not.

As mentioned before, there are two ways to solve sliding conflict: internal interception method and external interception method.
Here I provide an internal interception method, and there are some other methods that you can think about yourself.

   holder.recyclerView.setOnTouchListener { v, event ->
                //When the operation is pressed, the parent view will be notified not to intercept. When the operation is picked up, it will be set to intercept and slide the parent view normally.
                MotionEvent.ACTION_DOWN,MotionEvent.ACTION_MOVE -> v.parent.requestDisallowInterceptTouchEvent(true)
                MotionEvent.ACTION_UP -> v.parent.requestDisallowInterceptTouchEvent(false)

2) For the sliding conflict of ScrclerView, the same solution is to intercept events.
Another way is to use nestedscrollview instead of ScrollView. Nestedscrollview is a new View officially designed to solve the sliding conflict problem. It is defined as a ScrollView that supports nested sliding.

Therefore, replacing Nestedscrollview directly can ensure that both can slide normally. However, pay attention to setting RecyclerView Setnestedscrollingenabled (false) is used to cancel the sliding effect of RecyclerView itself.

This is because RecyclerView defaults to setNestedScrollingEnabled(true). This method supports nested scrolling. In other words, when it is nested in NestedScrollView, it will scroll with NestedScrollView by default and give up its own scrolling. So it gives us the feeling of being stranded and stuck. So we set it to false to solve the Caton problem and make it slide normally without external influence.

reference resources


For the convenience of new and old friends, I sorted the interview question "thinking and solutions" into PDF. Let's go to work Numerous The download link can be obtained by replying to the message "111" on the homepage No.
There are small partners who can study together ❤️ My official account - building blocks on the code, analyze a knowledge point every day, and we will accumulate knowledge together.

Posted by tphgangster on Wed, 04 May 2022 14:51:42 +0300