Front-end high-frequency interview questions (with answers)

Handwritten pubsub

class EventListener {
    listeners = {};
    on(name, fn) {
        (this.listeners[name] || (this.listeners[name] = [])).push(fn)
    }
    once(name, fn) {
        let tem = (...args) => {
            this.removeListener(name, fn)
            fn(...args)
        }
        fn.fn = tem
        this.on(name, tem)
    }
    removeListener(name, fn) {
        if (this.listeners[name]) {
            this.listeners[name] = this.listeners[name].filter(listener => (listener != fn && listener != fn.fn))
        }
    }
    removeAllListeners(name) {
        // When a name is given, remove only that event's listeners;
        // otherwise clear everything
        if (name && this.listeners[name]) {
            delete this.listeners[name]
            return
        }
        this.listeners = {}
    }
    emit(name, ...args) {
        if (this.listeners[name]) {
            this.listeners[name].forEach(fn => fn.call(this, ...args))
        }
    }
}
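
A quick usage check (the class is repeated in condensed form so the snippet runs on its own): a once listener fires a single time, while a persistent on listener keeps firing.

```javascript
// Condensed copy of the EventListener class above, repeated so this demo is self-contained
class EventListener {
  listeners = {};
  on(name, fn) {
    (this.listeners[name] || (this.listeners[name] = [])).push(fn);
  }
  once(name, fn) {
    const tem = (...args) => {
      this.removeListener(name, fn);
      fn(...args);
    };
    fn.fn = tem;
    this.on(name, tem);
  }
  removeListener(name, fn) {
    if (this.listeners[name]) {
      this.listeners[name] = this.listeners[name].filter(
        (listener) => listener !== fn && listener !== fn.fn
      );
    }
  }
  emit(name, ...args) {
    if (this.listeners[name]) {
      this.listeners[name].forEach((fn) => fn.call(this, ...args));
    }
  }
}

const bus = new EventListener();
const seen = [];
bus.on('msg', (x) => seen.push('on:' + x));
bus.once('msg', (x) => seen.push('once:' + x));
bus.emit('msg', 'a'); // both listeners fire
bus.emit('msg', 'b'); // only the persistent listener fires
// seen is now ['on:a', 'once:a', 'on:b']
```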

Implement an add method

Topic description: Implement an add method so that the calculation results can meet the following expectations:
add(1)(2)(3)()=6
add(1,2,3)(4)()=10

This is really a test of function currying.

The implementation code is as follows:

function add(...args) {
  let allArgs = [...args];
  function fn(...newArgs) {
    // Calling with no arguments ends the chain and returns the sum
    if (!newArgs.length) {
      return allArgs.reduce((sum, cur) => sum + cur, 0);
    }
    // Otherwise accumulate the new arguments and keep the chain going
    allArgs = [...allArgs, ...newArgs];
    return fn;
  }
  return fn;
}

Difference Between Synchronous and Asynchronous

  • Synchronous means that when a process issues a request that takes some time to return, the process waits for the result to come back before continuing to execute.
  • Asynchronous means that when a process issues such a request, it continues executing instead of blocking on the result; when the message returns, the system notifies the process so it can handle it.
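
This distinction shows up directly in JavaScript's event loop: synchronous statements run to completion in order, while an asynchronous result is delivered later via a callback. A minimal sketch:

```javascript
// Synchronous code runs to completion; the async callback is queued for later
const order = [];
order.push('start');
Promise.resolve().then(() => order.push('async result'));
order.push('end');
// At this point only the synchronous pushes have happened:
// order is ['start', 'end'], and 'async result' arrives afterwards
```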

Tell me why data is a function and not an object?

Objects in JavaScript are reference types. When multiple instances reference the same object, an operation on that object through one instance changes the data seen by all the others. In Vue we want components to be reusable, which requires each component instance to have its own data so that instances do not interfere with each other. Therefore a component's data cannot be written as an object; it is written as a function, and the data is defined as the function's return value. Every time the component is reused, a new data object is returned, so each instance has its own private data space, maintains its own data, and cannot disturb the data of other instances.
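
The problem is easy to reproduce without Vue at all. The sketch below mimics how component options are handed to instances (the `create*` factories are hypothetical, not Vue's real API):

```javascript
// Hypothetical factory: every instance receives the SAME options object
function createWithObjectData(options) {
  return { data: options.data };
}
const sharedOptions = { data: { count: 0 } };
const a = createWithObjectData(sharedOptions);
const b = createWithObjectData(sharedOptions);
a.data.count++;
// b.data.count is now 1 too: the object is shared by reference

// With a function, each instance gets a fresh object from the return value
function createWithFunctionData(options) {
  return { data: options.data() };
}
const fnOptions = { data: () => ({ count: 0 }) };
const c = createWithFunctionData(fnOptions);
const d = createWithFunctionData(fnOptions);
c.data.count++;
// d.data.count stays 0: each instance owns its data
```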

The concept of CDN

A CDN (Content Delivery Network) is a system of computers networked together across the Internet that uses the server closest to each user to deliver music, pictures, videos, applications and other files faster and more reliably, providing high-performance, scalable and low-cost content delivery.

A typical CDN system consists of the following three parts:

  • Distribution service system: its most basic working unit is the Cache device. The edge cache responds directly to end users' access requests and quickly serves locally cached content. The cache is also responsible for synchronizing content with the origin site, fetching updated content and content not available locally and saving it. The number, scale, and total service capability of Cache devices are the most basic indicators of a CDN system's service capability.
  • Load balancing system: its main function is to schedule all users who initiate service requests and to determine the final actual access address provided to each user. The two-level scheduling system is divided into global load balancing (GSLB) and local load balancing (SLB). Global load balancing decides, mainly on the principle of proximity to the user, which service node's cache should serve a given user. Local load balancing is responsible for balancing load across the devices inside a node.
  • Operation management system: divided into operation management and network management subsystems, it handles the collection, sorting, and delivery work required for business-level interaction with external systems, covering functions such as customer management, product management, billing management, and statistical analysis.

How to improve the packaging speed of webpack?

(1) Optimize Loader

Among Loaders, Babel is the first to affect packaging efficiency, because Babel parses the code into an AST, transforms the AST, and finally generates new code. The larger the project, the more code there is to transform and the lower the efficiency. Of course, this can be optimized.

First, we optimize the file search range of Loader

module.exports = {
  module: {
    rules: [
      {
        // js files only use babel
        test: /\.js$/,
        loader: 'babel-loader',
        // Only look in the src folder
        include: [resolve('src')],
        // path not to look for
        exclude: /node_modules/
      }
    ]
  }
}

Babel only needs to work on our own JS code; the code in node_modules has already been compiled when it was published, so there is no need to process it again.

Of course, this is not enough. You can also cache the Babel-compiled files. Next time, you only need to compile the changed code files, which can greatly speed up the packaging time.

loader: 'babel-loader?cacheDirectory=true'

(2)HappyPack

Because Node runs on a single thread, Webpack is also single-threaded while packaging. In particular, when Loaders execute there are many long compilation tasks, which leads to waiting.

HappyPack can turn the synchronous execution of Loaders into parallel execution, making full use of system resources to speed up packaging.

module: {
  loaders: [
    {
      test: /\.js$/,
      include: [resolve('src')],
      exclude: /node_modules/,
      // The content after the id corresponds to the following
      loader: 'happypack/loader?id=happybabel'
    }
  ]
},
plugins: [
  new HappyPack({
    id: 'happybabel',
    loaders: ['babel-loader?cacheDirectory'],
    // open 4 threads
    threads: 4
  })
]

(3)DllPlugin

DllPlugin packages specific libraries ahead of time so they can simply be referenced later. This greatly reduces how often those libraries are packaged (they only need repackaging when the library itself is upgraded) and also implements the optimization of splitting common code into separate files. DllPlugin is used as follows:

// Configured separately in a file
// webpack.dll.conf.js
const path = require('path')
const webpack = require('webpack')
module.exports = {
  entry: {
    // Class libraries that want to be packaged uniformly
    vendor: ['react']
  },
  output: {
    path: path.join(__dirname, 'dist'),
    filename: '[name].dll.js',
    library: '[name]-[hash]'
  },
  plugins: [
    new webpack.DllPlugin({
      // name must be the same as output.library
      name: '[name]-[hash]',
      // This property needs to be consistent with DllReferencePlugin
      context: __dirname,
      path: path.join(__dirname, 'dist', '[name]-manifest.json')
    })
  ]
}

Then execute this configuration file to generate the dependency files, and use DllReferencePlugin to reference them in the project:

// webpack.conf.js
module.exports = {
  // ...omit other configuration
  plugins: [
    new webpack.DllReferencePlugin({
      context: __dirname,
      // manifest is the json file packaged before
      manifest: require('./dist/vendor-manifest.json'),
    })
  ]
}

(4) Code compression

In Webpack 3, UglifyJS is generally used to compress code, but it runs on a single thread. To speed things up, webpack-parallel-uglify-plugin can run UglifyJS in parallel.

In Webpack 4 none of that is needed: setting mode to production enables it by default. Code compression is a performance optimization we must do, and besides JS we can also compress HTML and CSS. When compressing JS we can also configure options such as removing console.log calls.

(5) Others

Packaging can also be sped up with some small optimizations:

  • resolve.extensions: a list of file suffixes, ['.js', '.json'] by default. If an import omits the suffix, files are searched in this order. Keep the list as short as possible and put the most frequently used suffixes first
  • resolve.alias: You can map a path through an alias, allowing Webpack to find the path faster
  • module.noParse: If you are sure that there are no other dependencies under a file, you can use this property to make Webpack not scan the file, which is helpful for large class libraries

What exactly is await waiting for?

What is await waiting for? It is generally said that await waits for an async function to finish. Syntactically, however, await waits for an expression whose value is a Promise object or any other value; in other words, there is no special restriction.

Since an async function returns a Promise, await can wait for an async function's return value. This can loosely be described as awaiting the async function itself, though strictly speaking it is waiting for the return value. Note that await is not only for Promise objects: it can wait for the result of any expression, so it can just as well be followed by an ordinary function call or a literal. The following example therefore works perfectly fine:

function getSomething() {
    return "something";
}
async function testAsync() {
    return Promise.resolve("hello async");
}
async function test() {
    const v1 = await getSomething();
    const v2 = await testAsync();
    console.log(v1, v2);
}
test();

The result of an await expression depends on what it is waiting for.

  • If it is not waiting for a Promise object, the result of the await expression is the awaited value itself.
  • If it is waiting for a Promise object, await blocks the code that follows, waits for the Promise to resolve, and then takes the resolved value as the result of the await expression.

Let's see an example:

function testAsy(x){
   return new Promise(resolve=>{setTimeout(() => {
       resolve(x);
     }, 3000)
    }
   )
}
async function testAwt(){    
  let result =  await testAsy('hello world');
  console.log(result);    // 'hello world' appears after 3 seconds
  console.log('cuger')   // 'cuger' appears after 3 seconds
}
testAwt();
console.log('cug')  //output cug immediately

This is why await must be used inside an async function. Calling an async function does not block, because all of its internal waiting is wrapped in a Promise and executed asynchronously. await only pauses the current async function, so 'cug' is printed first, and 'hello world' and 'cuger' appear together 3 seconds later.

deep copy (considering copying Symbol type)

Topic description: handwrite a deep copy that also handles Symbol-typed keys.

The implementation code is as follows:

function isObject(val) {
  return typeof val === "object" && val !== null;
}

function deepClone(obj, hash = new WeakMap()) {
  if (!isObject(obj)) return obj;
  if (hash.has(obj)) {
    return hash.get(obj);
  }
  let target = Array.isArray(obj) ? [] : {};
  hash.set(obj, target);
  Reflect.ownKeys(obj).forEach((item) => {
    if (isObject(obj[item])) {
      target[item] = deepClone(obj[item], hash);
    } else {
      target[item] = obj[item];
    }
  });

  return target;
}

// Example: Symbol keys and circular references are handled
const sym = Symbol('s');
const obj1 = { a: 1, b: { a: 2 }, [sym]: 3 };
obj1.self = obj1;
const obj2 = deepClone(obj1);
console.log(obj2.b !== obj1.b);  // true, nested objects are copied
console.log(obj2[sym]);          // 3, Symbol keys are preserved
console.log(obj2.self === obj2); // true, the cycle points at the clone

Common image formats and usage scenarios

(1) BMP is a lossless bitmap that supports both indexed and direct colors. This picture format compresses data very little, so pictures in BMP format are usually larger files.

(2) GIF is a lossless, indexed color bitmap. The LZW compression algorithm is used for encoding. The small file size is the advantage of the GIF format. At the same time, the GIF format also has the advantages of supporting animation and transparency. However, the GIF format only supports 8-bit indexed colors, so the GIF format is suitable for scenes that do not require high color and require a smaller file size.

(3) JPEG is a lossy, direct-color bitmap. JPEG's advantage is its use of direct color; thanks to the richer palette, JPEG is very suitable for storing photos. Compared with GIF, JPEG is not suitable for storing corporate logos or wireframes, because lossy compression blurs sharp edges, and the choice of direct color makes the file larger than a GIF.

(4) PNG-8 is a lossless bitmap that uses indexed colors. PNG is a relatively new image format, and PNG-8 is a very good substitute for GIF. Where possible, PNG-8 should be used instead of GIF, because for the same image quality PNG-8 has a smaller file size. In addition, PNG-8 supports alpha transparency, which GIF does not. Unless animation support is required, there is no reason to prefer GIF over PNG-8.

(5) PNG-24 is a lossless bitmap that uses direct color. The advantage of PNG-24 is that it compresses the image data, so for the same visual result a PNG-24 file is much smaller than a BMP. Of course, PNG-24 images are still much larger than JPEG, GIF, and PNG-8.

(6) SVG is a lossless vector format. Being a vector format means an SVG image consists of lines and curves plus the instructions for drawing them. When you zoom into an SVG image you still see lines and curves rather than pixels, so SVG does not distort when enlarged, which makes it very suitable for drawing logos, icons, and the like.

(7) WebP is an image format developed by Google. It is a bitmap using direct color that supports both lossy and lossless compression. As the name suggests, it was born for the Web: for the same quality, a WebP image has a smaller file size. Websites today are full of images; if every image's file size can be reduced, the amount of data transferred between browser and server drops sharply, reducing access latency and improving the experience. At the time this was written, only Chrome and Opera supported WebP, so compatibility was limited.

  • In the case of lossless compression, the file size of WebP images of the same quality is 26% smaller than that of PNG;
  • In the case of lossy compression, the WebP image with the same image accuracy has a file size that is 25%~34% smaller than that of JPEG;
  • The WebP image format supports image transparency, a lossless compressed WebP image, if you want to support transparency, only 22% of the extra file size is required.

Browser local storage and usage scenarios

(1)Cookie

Cookies are the earliest form of local storage. Before cookies, the server had no way to tell whether two requests came from the same user; cookies were introduced to solve this. A cookie holds at most 4KB of plain text and is carried along on every HTTP request.

Characteristics of cookies:

  • Once the Cookie is created successfully, the name cannot be changed
  • Cookies cannot be cross-domain, that is to say, cookies under a domain name and b domain name cannot be shared. This is also determined by the privacy security of cookies, which can prevent illegal acquisition of cookies from other websites.
  • The number of cookies under each domain name cannot exceed 20, and the size of each Cookie cannot exceed 4kb
  • There are security issues. If a cookie is intercepted, all the information in the session can be obtained, and encryption does not help: the attacker does not need to understand the cookie's meaning, simply replaying it achieves the goal.
  • Cookies are sent whenever a new page is requested

If cookies need to be shared across different domain names, there are two approaches:

  1. Use Nginx reverse proxy
  2. After logging in on one site, write the cookie for the other sites; the server-side Session is stored on a single node, and the cookie stores the sessionId

Cookie usage scenarios:

  • The most common usage scenario is the combination of Cookie and session. We store the sessionId in the Cookie, and each request will carry the sessionId, so that the server knows who initiated the request and responds with corresponding information.
  • Can be used to count the number of clicks on the page
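
In the browser, cookies are read by parsing the `document.cookie` string. A minimal helper (the `getCookie` name is ours, not a built-in) that takes the cookie string as a parameter so it also runs outside a browser:

```javascript
// Parse a "k1=v1; k2=v2" style cookie string and return the value for one name
function getCookie(cookieString, name) {
  for (const pair of cookieString.split('; ')) {
    const eq = pair.indexOf('=');
    if (pair.slice(0, eq) === name) {
      return decodeURIComponent(pair.slice(eq + 1));
    }
  }
  return undefined;
}

// In a real page you would pass document.cookie as the first argument
getCookie('sessionId=abc123; theme=dark', 'sessionId'); // → 'abc123'
```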

(2)LocalStorage

LocalStorage is a new feature introduced by HTML5. Because sometimes we store a large amount of information, cookies cannot meet our needs. At this time, LocalStorage comes in handy.

Advantages of LocalStorage:

  • In terms of size, the size of LocalStorage is generally 5MB, which can store more information
  • LocalStorage is persistent storage and will not disappear when the page is closed. Unless actively cleaned up, it will exist forever
  • Only stored locally, unlike cookies that are carried with every HTTP request

Disadvantages of LocalStorage:

  • There are browser compatibility issues, browsers below IE8 are not supported
  • If the browser is set to private mode, then we will not be able to read LocalStorage
  • LocalStorage is restricted by the same-origin policy: if the protocol, host, or port differs, it cannot be accessed

Common LocalStorage APIs:

// save data to localStorage
localStorage.setItem('key', 'value');

// Get data from localStorage
let data = localStorage.getItem('key');

// Delete saved data from localStorage
localStorage.removeItem('key');

// Delete all saved data from localStorage
localStorage.clear();

// Get the Key of an index
localStorage.key(index)

Usage scenarios of LocalStorage:

  • Some websites have the function of skinning. At this time, you can store the skinning information in the local LocalStorage. When you need to change the skin, you can directly operate the LocalStorage.
  • User browsing information on the website will also be stored in LocalStorage, and some personal information that changes frequently on the website can also be stored in local LocalStorage

(3)SessionStorage

SessionStorage and LocalStorage are both storage solutions proposed in HTML5. SessionStorage is mainly used to temporarily save the data of the same window (or tab), which will not be deleted when the page is refreshed, and will be deleted after closing the window or tab.

Comparison of SessionStorage and LocalStorage:

  • Both SessionStorage and LocalStorage store data locally;
  • SessionStorage also has the limitation of the same-origin policy, but SessionStorage has a stricter limitation. SessionStorage can only be shared in the same window of the same browser;
  • Neither LocalStorage nor SessionStorage can be crawled by crawlers;

Common SessionStorage APIs:

// Save data to sessionStorage
sessionStorage.setItem('key', 'value');

// Get data from sessionStorage
let data = sessionStorage.getItem('key');

// Delete saved data from sessionStorage
sessionStorage.removeItem('key');

// Delete all saved data from sessionStorage
sessionStorage.clear();

// Get the Key of an index
sessionStorage.key(index)

Usage scenarios of SessionStorage:

  • Since SessionStorage is time-sensitive, it can be used to store visitor login information of some websites, as well as temporary browsing record information. When the website is closed, this information is also eliminated.

What are the advantages and disadvantages of SPA single page?

Advantages:

1. Good user experience: no full-page refreshes, fewer requests, and page data is fetched asynchronously via Ajax;

2. Front-end and back-end separation;

3. Reduced server pressure;

4. One set of back-end code can serve multiple types of clients.

Disadvantages:

1. The first screen loads slowly;

2. Poor SEO: not friendly to search engine crawlers.

Why is the arguments parameter of a function array-like instead of an array? How do you iterate over an array-like object?

arguments is an object. Its keys are numbers incrementing from 0, and it also has properties such as callee and length, which makes it look like an array; however, it lacks the usual array methods such as forEach and reduce, which is why it is called "array-like".

There are three ways to iterate over an array-like object:

(1) Borrow array methods for the array-like object via call or apply, for example:

function foo(){ 
  Array.prototype.forEach.call(arguments, a => console.log(a))
}

(2) Use the Array.from method to convert the class array into an array:‌

function foo(){ 
  const arrArgs = Array.from(arguments) 
  arrArgs.forEach(a => console.log(a))
}

(3) Use the spread operator to convert the array-like object into an array:

function foo(){ 
    const arrArgs = [...arguments] 
    arrArgs.forEach(a => console.log(a)) 
}

What are the characteristics of setTimeout, setInterval, and requestAnimationFrame?

Asynchronous programming naturally involves timers. The common timer functions are setTimeout, setInterval, and requestAnimationFrame, with setTimeout the most used. Many people assume that setTimeout's delay is exactly how long it waits before executing.

In fact this view is wrong: JS executes on a single thread, so if earlier code runs long, setTimeout will not fire on schedule. setTimeout can, however, be corrected in code to keep the timer relatively accurate:

let period = 60 * 1000 * 60 * 2
let startTime = new Date().getTime()
let count = 0
let end = new Date().getTime() + period
let interval = 1000
let currentInterval = interval
function loop() {
  count++
  // Time spent in code execution
  let offset = new Date().getTime() - (startTime + count * interval);
  let diff = end - new Date().getTime()
  let h = Math.floor(diff / (60 * 1000 * 60))
  let hdiff = diff % (60 * 1000 * 60)
  let m = Math.floor(hdiff / (60 * 1000))
  let mdiff = hdiff % (60 * 1000)
  let s = mdiff / (1000)
  let sCeil = Math.ceil(s)
  let sFloor = Math.floor(s)
  // Get the time spent in the next cycle
  currentInterval = interval - offset 
  console.log('Hours:'+h, 'Minutes:'+m, 'Seconds:'+s, 'Seconds rounded up:'+sCeil, 'Code execution offset:'+offset, 'Next interval:'+currentInterval)
  setTimeout(loop, currentInterval)
}
setTimeout(loop, currentInterval)

Next, let's look at setInterval. In fact, the function of this function is basically the same as setTimeout, except that this function executes a callback function at regular intervals.

In general, setInterval is not recommended. First, like setTimeout, it cannot guarantee the task runs at the expected time. Second, it has the problem of callback accumulation; see the following pseudocode:

function demo() {
  setInterval(function(){
    console.log(2)
  },1000)
  sleep(2000)
}
demo()

In a browser, if a time-consuming operation occurs while the above timer is running, multiple callbacks will fire back-to-back once the operation ends, which may cause performance problems.

If there is a need for a loop timer, it can actually be achieved through requestAnimationFrame:

function setInterval(callback, interval) {
  let timer
  const now = Date.now
  let startTime = now()
  let endTime = startTime
  const loop = () => {
    timer = window.requestAnimationFrame(loop)
    endTime = now()
    if (endTime - startTime >= interval) {
      startTime = endTime = now()
      callback(timer)
    }
  }
  timer = window.requestAnimationFrame(loop)
  return timer
}
let a = 0
setInterval(timer => {
  console.log(1)
  a++
  if (a === 3) cancelAnimationFrame(timer)
}, 1000)

First of all, requestAnimationFrame has built-in throttling: it basically guarantees the callback runs only once per frame, about every 16.6 ms at 60fps (when no frames are dropped). Its delay is also accurate, without the drift problems of timers. It can even be used to implement setTimeout.

Coercion rules for the == operator?

For ==, if the types on the two sides differ, type conversion is performed. Comparing x and y proceeds as follows:

  1. First check whether the two types are the same; if they are, compare the values directly;
  2. If the types differ, type conversion is performed;
  3. Check whether it is null being compared with undefined; if so, return true;
  4. Check whether the two types are string and number; if so, convert the string to a number:

1 == '1'
      ↓
1 ==  1

  5. Check whether either side is a boolean; if so, convert the boolean to a number and judge again:

'1' == true
        ↓
'1' ==  1
        ↓
 1  ==  1

  6. Check whether one side is an object and the other a string, number or symbol; if so, convert the object to a primitive type and judge again:

'1' == { name: 'js' }
        ↓
'1' == '[object Object]'
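
These rules can be verified directly in the console; each comparison below exercises one of the conversion steps:

```javascript
// string vs number: the string is converted to a number
console.log(1 == '1');          // true
// boolean vs anything: the boolean becomes a number first
console.log('1' == true);       // true
// null and undefined are only loosely equal to each other
console.log(null == undefined); // true
console.log(null == 0);         // false
// object vs primitive: the object is converted via toString/valueOf
console.log('1,2' == [1, 2]);   // true, because [1,2].toString() is '1,2'
```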

What is the difference and role of pseudo-elements and pseudo-classes?

  • Pseudo-elements: insert extra elements or styles before or after a content element. These elements are not actually generated in the document: they are visible when rendered but cannot be found in the document source, hence the name "pseudo" elements. For example:
p::before {content:"Chapter One:";}
p::after {content:"Hot!";}
p::first-line {background:red;}
p::first-letter {font-size:30px;}
  • Pseudo-classes: add special effects to specific selectors based on element state. They classify existing elements and do not create new ones. For example:
a:hover {color: #FF00FF}
p:first-child {color: red}

Summary: a pseudo-class styles an existing element according to its state, while a pseudo-element styles or inserts a specific part of an element that is not itself a real node in the document.

code output

var a, b;
(function () {
   console.log(a);
   console.log(b);
   var a = (b = 3);
   console.log(a);
   console.log(b);   
})()
console.log(a);
console.log(b);

Output result:

undefined 
undefined 
3 
3 
undefined 
3

This tests variable hoisting and implicit globals: inside the IIFE, b = 3 assigns without a declaration, so b becomes a global variable, while a, declared with var, is local to the IIFE. That is why the final outer print of a is still undefined while b prints 3.

What is the Same Origin Policy

The cross-domain problem is actually caused by the browser's same-origin policy.

The same-origin policy restricts how a document or script loaded from one origin can interact with resources from another origin. It is an important browser security mechanism for isolating potentially malicious files.

Same-origin policy: the protocol, domain name, and port must all be identical.

The same-origin policy limits three main areas:

  • A js script under the current domain cannot access cookies, localStorage, or IndexedDB under other domains.
  • The js script under the current domain cannot operate the DOM under other domains.
  • ajax cannot send cross-domain requests under the current domain.

The main purpose of the same-origin policy is to protect users' information security. It restricts only js scripts, not the browser as a whole: ordinary img or script tag requests have no cross-domain restriction, because the page cannot read the raw responses of those requests, so the potentially security-sensitive operations are blocked anyway.
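
Whether two URLs are same-origin can be checked by comparing protocol, host and port; the `URL` API makes this a one-liner (the helper name `isSameOrigin` is ours, not a built-in):

```javascript
// Two URLs are same-origin when protocol, domain name and port all match
function isSameOrigin(a, b) {
  return new URL(a).origin === new URL(b).origin;
}

isSameOrigin('https://example.com/a', 'https://example.com:443/b'); // true, 443 is the default https port
isSameOrigin('https://example.com', 'http://example.com');          // false, protocol differs
isSameOrigin('https://example.com', 'https://api.example.com');     // false, host differs
```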

The concept and characteristics of TCP and UDP

Both TCP and UDP are transport layer protocols, and they both belong to the TCP/IP protocol family:

(1)UDP

The full name of UDP is User Datagram Protocol. Like TCP, it processes datagrams in the network; in the OSI model it sits at the transport layer, one layer above the IP protocol. It is a connectionless protocol. UDP's drawback is that it provides no grouping, assembly, or ordering of packets: once a datagram is sent, there is no way to know whether it arrived safely and completely.

Its features are as follows:

1) Connectionless

First of all, UDP does not need to establish a connection with a three-way handshake like TCP before sending data; when it wants to send, it just sends. It is merely a carrier of datagrams and performs no splitting or splicing on them.

Specifically:

  • On the sender side, the application layer passes the data to the UDP protocol of the transport layer. UDP will only add a UDP header to the data to identify the UDP protocol, and then pass it to the network layer.
  • At the receiving end, the network layer passes the data to the transport layer, and UDP only removes the IP header and passes it to the application layer without any splicing operation

2) It has the functions of unicast, multicast and broadcast

UDP not only supports one-to-one transmission mode, but also supports one-to-many, many-to-many, and many-to-one modes, that is to say, UDP provides unicast, multicast, and broadcast functions.

3) message-oriented

The sender's UDP sends the message to the application, and then delivers it down to the IP layer after adding the header. UDP does not combine or split the packets delivered by the application layer, but preserves the boundaries of these packets. Therefore, the application must choose the appropriate size of the message

4) Unreliability

First of all, the unreliability is reflected in the lack of connection. The communication does not need to establish a connection, and it can be sent as soon as it wants. This situation is definitely unreliable.

It transmits whatever data the application hands it, keeps no backup, and does not care whether the other side received the data correctly.

In addition, the network environment is good and bad, but UDP will always send data at a constant speed because there is no congestion control. Even if the network conditions are not good, the sending rate will not be adjusted. The disadvantage of this implementation is that it may lead to packet loss in the case of poor network conditions, but the advantages are also obvious. In some scenarios with high real-time requirements (such as teleconferences), UDP instead of TCP needs to be used.

5) The header overhead is small, and it is very efficient to transmit data packets.

The UDP header contains the following data:

  • Two 16-bit port numbers: the source port (an optional field) and the destination port
  • The length of the entire datagram
  • Checksum of the entire datagram (an optional IPv4 field) used to detect errors in header information and data

Therefore, the header overhead of UDP is small, only 8 bytes, which is much less than at least 20 bytes of TCP, and it is very efficient when transmitting data packets.
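
The 8-byte layout can be sketched with Node's Buffer; the port numbers and payload size below are made up for illustration:

```javascript
// An 8-byte UDP header: source port, destination port, length, checksum
// (each field is 16 bits, written in network byte order / big-endian)
const payloadLength = 12;                   // pretend payload size
const header = Buffer.alloc(8);
header.writeUInt16BE(5000, 0);              // source port (optional, may be 0)
header.writeUInt16BE(53, 2);                // destination port (53 = DNS)
header.writeUInt16BE(8 + payloadLength, 4); // length = header + payload
header.writeUInt16BE(0, 6);                 // checksum (0 = unused, optional in IPv4)
```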

(2) TCP

The full name of TCP is Transmission Control Protocol. It is a connection-oriented, reliable, byte-stream-based transport layer communication protocol (a stream being an uninterrupted sequence of data).

It has the following features:

1) connection oriented

Connection-oriented means that a connection must be established at both ends before sending data. The method of establishing a connection is a "three-way handshake", which can establish a reliable connection. Establishing a connection lays the foundation for reliable data transmission.

2) Only supports unicast transmission

Each TCP transmission connection can only have two endpoints, only point-to-point data transmission, and does not support multicast and broadcast transmission methods.

3) Oriented to byte stream

Unlike UDP, TCP does not transmit packets independently, but transmits them in a stream of bytes without preserving packet boundaries.

4) Reliable transmission

For reliable transmission, the determination of packet loss and bit error depends on the TCP segment number and acknowledgment number. In order to ensure the reliability of message transmission, TCP gives each packet a sequence number, and the sequence number also ensures the orderly reception of packets transmitted to the receiving entity. The receiver entity then sends back a corresponding acknowledgment (ACK) for the successfully received bytes; if the sender entity does not receive an acknowledgment within a reasonable round-trip delay (RTT), then the corresponding data (assuming lost) will be retransmitted.

5) Provide congestion control

When the network is congested, TCP can reduce the rate and quantity of data injected into the network and relieve the congestion.

6) Provide full duplex communication

TCP allows the applications on both sides to send data at any time, because both ends of a TCP connection have buffers to temporarily store data for bidirectional communication. TCP can send a segment immediately, or buffer for a while to send more at once (the maximum amount per segment is bounded by the MSS, the maximum segment size).

How do you convert an array-like object into an array?

//Convert by calling the slice method of the array
Array.prototype.slice.call(arrayLike)

//Convert by calling the splice method of the array
Array.prototype.splice.call(arrayLike,0)

//The conversion is achieved by calling the concat method of the array with apply
Array.prototype.concat.apply([],arrayLike)

//The conversion is achieved by the Array.from method
Array.from(arrayLike)

Tags: Javascript Front-end

Posted by grandeclectus on Wed, 31 Aug 2022 21:34:03 +0300