Translation of JavaScript Everywhere, Chapter 9: Details (^ ^)

Write at the beginning

Hello, I'm Mao Xiaoyou, a front-end development engineer, and I'm translating an English technical book.

To improve the reading experience, the structure and wording of some sentences have been slightly adjusted. If you find any flaws in this article, or have any comments or suggestions, you can leave a message in the comment area or add me on WeChat: code\_maomao. You're welcome to get in touch so we can learn from each other.

( σ ゚∀゚) σ..:\* ☆ ouch, good

Chapter 9: Details

When Febreze, the now ubiquitous air freshener, was first released, it was a flop.

The original ads showed people using the product to remove specific unpleasant smells, such as cigarette smoke, and sales were poor. Faced with the disappointing results, the marketing team shifted its focus to positioning Febreze as a finishing touch. Now the ads depicted someone cleaning a room, fluffing the pillows, and completing the freshly cleaned room with a spray of Febreze. This reframing of the product led to a surge in sales.

This is a good example of the importance of detail.

We now have a working API, but it lacks the finishing touches needed to put it into production.

In this chapter, we will implement some best practices for Web and GraphQL application security and user experience.

These details go well beyond a spray of air freshener and are crucial to the security and usability of our application.

Web Application and Express.js Best Practices

Express.js is the underlying web application framework that powers our API. We can make a few small adjustments to our Express.js code to provide a solid foundation for our application.

Express Helmet

The Express Helmet middleware is a collection of small, security-minded middleware functions that adjust our application's HTTP headers to improve security. While many of these protections target browser-based applications, enabling Helmet is a simple step toward protecting our application from common web vulnerabilities.

To enable Helmet, we require the middleware in our application and instruct Express to use it as early as possible in our middleware stack. In the src/index.js file, add the following:

// first require the package at the top of the file
const helmet = require('helmet');

// add the middleware at the top of the stack, after const app = express()
app.use(helmet());
By adding Helmet middleware, we quickly enabled common Web security best practices for our applications.

Cross-Origin Resource Sharing

Cross-origin resource sharing (CORS) is a mechanism that allows resources to be requested from another domain. Because our API and UI code will live separately, we want to enable requests from other origins. If you're interested in learning the full story of CORS, I highly recommend the Mozilla CORS guide.

To enable CORS, we will use the cors middleware package in our src/index.js file:

// first require the package at the top of the file
const cors = require('cors');

// add the middleware after app.use(helmet());
app.use(cors());

By adding the middleware in this way, we enable cross-origin requests from all domains. For now this works well for us, since we are in development mode and are likely to use domains generated by our hosting providers, but with the middleware we could also limit requests to specific origins.
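As a sketch of what restricting origins could look like, the cors package's origin option accepts an allowlist. The domains and function name below are illustrative, not from the book; the plain function simply mirrors the check such an allowlist performs:

```javascript
// Illustrative origin allowlist (these domains are made up for this example)
const allowedOrigins = [
  'https://notedly.example.com',
  'https://app.notedly.example.com'
];

// A plain function mirroring the kind of check an origin allowlist performs:
// only origins present in the list are permitted
function isOriginAllowed(origin) {
  return allowedOrigins.includes(origin);
}

// With the cors middleware, passing the array has a similar effect:
// app.use(cors({ origin: allowedOrigins }));
```

Requests from any origin not in the list would then be rejected by the browser's CORS checks.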


Pagination

At present, both our note query and our user query return the full list of notes and users in the database.

This works for local development, but as our application grows it becomes unsustainable, because a query that could return hundreds (or thousands) of notes is expensive and slows down the database, server, and network. Instead, we can paginate these queries and return only a set number of results. There are two common types of pagination we could implement.

The first is offset pagination, in which the client passes an offset number and a limited amount of data is returned.

For example, if each page of data is limited to 10 records and we want to request the third page, we pass an offset of 20. While this is conceptually the most straightforward approach, it can run into scaling and performance problems.
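The offset arithmetic above can be sketched as a small helper (1-indexed page numbers assumed; the function name is mine, not the book's):

```javascript
// Compute the database offset for a given page,
// assuming page numbers start at 1
function offsetFor(page, perPage) {
  return (page - 1) * perPage;
}

// Requesting the third page of 10 records skips the first 20
console.log(offsetFor(3, 10)); // 20
```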

The second type is cursor-based pagination, in which a time-based cursor or unique identifier is passed as the starting point. We then request a specific amount of data that follows this record. This approach gives us the greatest control over pagination. Additionally, because Mongo's object IDs are ordered (they begin with a 4-byte time value), we can easily use them as cursors. To learn more about Mongo's object IDs, I recommend reading the corresponding MongoDB documentation.
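To see why object IDs work as cursors, note that the first 4 bytes (8 hex characters) of a Mongo object ID encode a Unix timestamp in seconds, so newer IDs always compare greater than older ones. A small sketch (the function name and sample ID are mine):

```javascript
// Extract the creation time from a Mongo object ID's leading 4 bytes
function objectIdTimestamp(objectId) {
  const seconds = parseInt(objectId.substring(0, 8), 16);
  return new Date(seconds * 1000);
}

// Because the timestamp comes first, an { _id: { $lt: cursor } } query
// naturally pages backward through time
```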

If these concepts are new to you, don't worry. Let's walk through implementing a paginated note feed as a GraphQL query. First we'll define what we want to create, then update the schema, and finally write the resolver code. For our feed, we want to query our API and optionally pass a cursor as a parameter. The API should then return a limited amount of data, a cursor pointing at the last item in the dataset, and a Boolean indicating whether there is another page of data to query.

With this description, we can update our src/schema.js file to define the new query. First, we add a NoteFeed type to the file:

type NoteFeed {
  notes: [Note]!
  cursor: String!
  hasNextPage: Boolean!
}

Next, we will add our noteFeed query:

type Query {
  # add noteFeed to our existing queries
  noteFeed(cursor: String): NoteFeed
}

With the schema updated, we can write the resolver code for our query. In src/resolvers/query.js, add the following to the exported object:

noteFeed: async (parent, { cursor }, { models }) => {
  // hardcode the limit to 10 items
  const limit = 10;
  // set the default hasNextPage value to false
  let hasNextPage = false;
  // if no cursor is passed the default query will be empty
  // this will pull the newest notes from the db
  let cursorQuery = {};

  // if there is a cursor
  // our query will look for notes with an ObjectId less than that of the cursor
  if (cursor) {
    cursorQuery = { _id: { $lt: cursor } };
  }

  // find the limit + 1 of notes in our db, sorted newest to oldest
  let notes = await models.Note.find(cursorQuery)
   .sort({ _id: -1 })
   .limit(limit + 1);

  // if the number of notes we find exceeds our limit
  // set hasNextPage to true and trim the notes to the limit
  if (notes.length > limit) {
    hasNextPage = true;
    notes = notes.slice(0, -1);
  }

  // the new cursor will be the Mongo object ID of the last item in the feed array
  const newCursor = notes[notes.length - 1]._id;

  return {
    notes,
    cursor: newCursor,
    hasNextPage
  };
}
With this resolver in place, we can query our noteFeed and it will return a maximum of 10 results. In GraphQL Playground, we can write the following query to receive a list of notes, their object IDs, their createdAt timestamps, the cursor, and the hasNextPage Boolean:

query {
  noteFeed {
    notes {
      id
      createdAt
    }
    cursor
    hasNextPage
  }
}
Since there are more than 10 notes in our database, this returns a cursor along with a hasNextPage value of true. Using that cursor, we can query the second page of the feed:

query {
  noteFeed(cursor: "<YOUR OBJECT ID>") {
    notes {
      id
      createdAt
    }
    cursor
    hasNextPage
  }
}
We can continue this for each cursor as long as the hasNextPage value is true. With this implementation, we have created a paginated note feed. This not only enables our UI to request a specific feed of data, but also reduces the burden on our server and database.
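The whole flow can be simulated in memory. The sketch below mirrors the resolver's limit-plus-one trick over a plain array sorted newest to oldest (all names here are illustrative, not from the book):

```javascript
// Emulate the noteFeed resolver over an in-memory array of notes,
// sorted newest to oldest, each with a unique, ordered id
function queryFeed(items, cursor, limit = 10) {
  // the in-memory equivalent of { _id: { $lt: cursor } }
  const matching = cursor == null ? items : items.filter(i => i.id < cursor);
  // fetch limit + 1 so we can tell whether another page exists
  let notes = matching.slice(0, limit + 1);
  let hasNextPage = false;
  if (notes.length > limit) {
    hasNextPage = true;
    notes = notes.slice(0, -1);
  }
  return { notes, cursor: notes[notes.length - 1].id, hasNextPage };
}

// Walk every page, following the cursor until hasNextPage is false
function allPages(items, limit = 10) {
  const pages = [];
  let cursor = null;
  while (true) {
    const page = queryFeed(items, cursor, limit);
    pages.push(page);
    if (!page.hasNextPage) return pages;
    cursor = page.cursor;
  }
}
```

With 25 notes and a limit of 10, this yields three pages of 10, 10, and 5 notes, with hasNextPage false only on the last page.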

Data Limitations

In addition to adding pagination, we also want to limit the amount of data that can be requested through our API. This prevents queries that could overload our server or database.

The first step in this process is to limit the amount of data a query can return. Two of our queries, user and notes, return all matching data from the database. We can address this by setting a limit() method on our database queries. For example, in our src/resolvers/query.js file, we can update the notes query as follows:

notes: async (parent, args, { models }) => {
  return await models.Note.find().limit(100);
},

While limiting data is a great start, at present our queries can be written with unlimited depth. This means a single query could retrieve a list of notes, the author information for each note, the favorites list of each author, the author information for each favorite, and so on. That's a lot of data in one query, and we could keep going! To prevent these sorts of oversized queries, we can limit the depth of queries on a per-API basis.
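To illustrate the problem, a deeply nested query against our API might look something like the following (the fields assume the note/author/favorites relationships described above):

```graphql
query {
  notes {
    author {
      favorites {
        author {
          favorites {
            id
          }
        }
      }
    }
  }
}
```

With a depth limit of 5 in place, a query nested this deeply would be rejected before execution.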

Additionally, we may have complex queries that are not overly nested but still require significant computation to return their data. We can protect against such requests by limiting query complexity.

We can implement these limits in our src/index.js file with the graphql-depth-limit and graphql-validation-complexity packages:

// import the modules at the top of the file
const depthLimit = require('graphql-depth-limit');
const { createComplexityLimitRule } = require('graphql-validation-complexity');

// update our ApolloServer code to include validationRules
const server = new ApolloServer({
  validationRules: [depthLimit(5), createComplexityLimitRule(1000)],
  context: async ({ req }) => {
    // get the user token from the headers
    const token = req.headers.authorization;
    // try to retrieve a user with the token
    const user = await getUser(token);
    // add the db models and the user to the context
    return { models, user };

By adding these packages, we've added extra query protections to our API. For more information about protecting GraphQL APIs from malicious queries, check out the excellent article by Max Stoiber, CTO of Spectrum.

Other Considerations

Having built our API, you should have a solid understanding of the fundamentals of GraphQL development. If you're eager to learn more about these topics, testing, GraphQL subscriptions, and Apollo Engine are all great next places to explore.


Testing

OK, I'll admit it: I feel guilty about not writing tests in this book. Testing our code is important because it lets us make changes with confidence and improves collaboration with other developers. A great advantage of our GraphQL setup is that resolvers are simply functions that take some parameters and return data. This makes our GraphQL logic straightforward to test.

Subscriptions

Subscriptions are a powerful feature of GraphQL that provide a straightforward way to integrate the publish-subscribe pattern into our applications. This means the UI can subscribe to be notified or updated when data is published on the server, making a GraphQL server an ideal solution for applications that work with real-time data. For more information about GraphQL subscriptions, check out the Apollo Server documentation.
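As a sketch of the idea (not code from this book), a subscription is declared in the schema much like queries and mutations; a hypothetical noteAdded subscription might look like:

```graphql
type Subscription {
  # fires whenever a new note is published on the server
  noteAdded: Note
}
```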

Apollo GraphQL platform

Throughout the development of our API, we have been using the Apollo GraphQL libraries. In later chapters we will also use the Apollo Client libraries to interface with our API. I chose these libraries because they are an industry standard and provide a great developer experience for working with GraphQL. If you take your application to production, Apollo, the company that maintains these libraries, also offers a platform that provides monitoring and tooling for GraphQL APIs. You can learn more on Apollo's website.


Conclusion

In this chapter, we added some finishing touches to our application. While there are many other options we could implement, at this point we have developed a solid MVP (minimum viable product). In this state, we are ready to launch our API!

In the next chapter, we will deploy our API to a public web server.

If anything here is unclear or incorrect, you are welcome to point it out. If you think it's good, please like, bookmark, or share it; I hope it can help more people.


Posted by grigori on Thu, 05 May 2022 15:27:58 +0300