GitHub recently launched a new feature: if you create a repository under your account with the same name as your username and add a README.md to it, that README will be displayed on your GitHub profile page. Thanks to Markdown's expressive power, you can show all kinds of information on your profile; some people even put their resume directly on it, which greatly enriches what you can do with GitHub.
For example, my GitHub account is xindoo, so I created a repository named xindoo and wrote a README.md introducing myself. The final result is shown in the figure above, and you can also see it directly on my GitHub homepage. Markdown gives you plenty of room for creativity, but compared with the profile page of a front-end developer I know, mine still can't hide my plain "straight guy" aesthetic.
However, how to make a good-looking personal homepage is not today's main topic. What I really want to show you is how to make a homepage that updates itself. For example, there is a column on my homepage that lists my latest blog posts. Do I have to update README.md by hand every time I publish a new post? Of course not: my homepage updates itself automatically on a schedule. How is that done?
The idea is very simple. Dynamically updating the homepage essentially means updating the README.md file, so first ask yourself: does README.md have to be written by hand? Why not generate it with a program? All you need is a scheduled task that grabs the content of my blog homepage, regenerates README.md, and pushes it to GitHub. If you have a server on hand, you may immediately think of writing a crontab to run this scheduled task; if you don't have a server, don't worry, read on.
For anyone with a bit of coding experience, generating README.md with a program is not hard; take my homepage as an example. The slightly trickier part is grabbing my latest blog posts, which is essentially a simple crawler against CSDN. CSDN currently has no anti-scraping mechanism, so it is not difficult to implement. The code is below: I use urllib3 to fetch the HTML source and lxml's etree with XPath to parse out the blog titles and URLs.
```python
# -*- coding: utf-8 -*-
import urllib3
from lxml import etree
import html
import re

blogUrl = 'https://xindoo.blog.csdn.net/'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36'}


def addIntro(f):
    txt = '''
Tech blogger for 9 years, CSDN certified blog expert, and newly minted video uploader.
Worked at Alibaba for 2+ years, now a Java back-end development engineer at a big company, with rich experience in digging pits, stepping into them, filling them, and taking the blame 🐶
Focused on Java, with some dabbling in operating systems, networking and compiler principles; currently writing a simple scripting language.
'''
    f.write(txt)


def addProjectInfo(f):
    txt = '''
### Open source projects
- [eng-practices-cn](https://github.com/xindoo/eng-practices-cn) Chinese translation of Google's engineering practices
- [regex](https://github.com/xindoo/regex) regular expression engine implemented in Java
- [redis](https://github.com/xindoo/redis) Redis source code with Chinese annotations
- [slowjson](https://github.com/xindoo/slowjson) JSON parser implemented with ANTLR
- [leetcode](https://github.com/xindoo/leetcode)

[See more](https://github.com/xindoo/)
'''
    f.write(txt)


def addBlogInfo(f):
    # Fetch the blog homepage and parse out the latest article titles and links
    http = urllib3.PoolManager(num_pools=5, headers=headers)
    resp = http.request('GET', blogUrl)
    resp_tree = etree.HTML(resp.data.decode("utf-8"))
    html_data = resp_tree.xpath(".//div[@class='article-item-box csdn-tracking-statistics']/h4")
    f.write("\n### My blog\n")
    cnt = 0
    for i in html_data:
        if cnt >= 5:
            break
        title = i.xpath('./a/text()')[1].strip()
        url = i.xpath('./a/@href')[0]
        item = '- [%s](%s)\n' % (title, url)
        f.write(item)
        cnt = cnt + 1
    f.write('\n[See more](https://xindoo.blog.csdn.net/)\n')


f = open('README.md', 'w+')
addIntro(f)
f.write('<table><tr>\n')
f.write('<td valign="top" width="50%">\n')
addProjectInfo(f)
f.write('\n</td>\n')
f.write('<td valign="top" width="50%">\n')
addBlogInfo(f)
f.write('\n</td>\n')
f.write('</tr></table>\n')
f.close()
```
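As a quick sanity check (my own snippet, not part of the original script), you can run just the scraping part and print what the XPath finds; if CSDN ever changes its page structure, this is the part that will break:

```python
# Minimal sketch to verify that the CSDN page structure and the XPath used above still match.
import urllib3
from lxml import etree

headers = {'User-Agent': 'Mozilla/5.0'}
http = urllib3.PoolManager(headers=headers)
resp = http.request('GET', 'https://xindoo.blog.csdn.net/')
tree = etree.HTML(resp.data.decode('utf-8'))

for h4 in tree.xpath(".//div[@class='article-item-box csdn-tracking-statistics']/h4")[:5]:
    title = h4.xpath('./a/text()')[1].strip()  # same text-node index as in the script above
    url = h4.xpath('./a/@href')[0]
    print(title, '->', url)
```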
With the code above, all you need is a crontab entry on your server that runs this Python script and then does `git commit -am "update"` and `git push`, and you will have a GitHub homepage just like mine.
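For instance, a crontab entry along these lines would do it. This is only a sketch of the scheduled task described above; the repository path and schedule are placeholders you would adapt to your own setup:

```bash
# Regenerate README.md every 6 hours and push it.
# If nothing changed, `git commit` exits non-zero and the push is skipped.
0 */6 * * * cd /path/to/your-profile-repo && python generateReadme.py && git commit -am "update" && git push
```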
It doesn't matter if you don't have a server of your own: GitHub gives you one for free. This is the Actions feature GitHub launched a while ago. As I understand it, GitHub essentially provides you with a free container in which you can execute workflows and, of course, run custom code. For more on Actions, see Ruan Yifeng's GitHub Actions getting-started tutorial, or go straight to the official Actions documentation.
To get automatic updates without a server, we only need to run the Python script above on GitHub Actions. Just go to your repository -> Actions -> New workflow to create a workflow; GitHub will automatically create a .github/workflows/${FILENAME}.yml file in your repository, and you only need to edit it according to the required format. For the specifics, refer to the documentation above; I won't expand on it here.
Finally, let me show you my workflow file. You can also view it directly in my GitHub repository: xindoo/.github/workflows/build.yml
```yaml
# This is a basic workflow to help you get started with Actions
name: build readme

# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
  # Trigger timing
  push:
    branches: [ master ]     # on every push to master
  schedule:
    - cron: '0 */6 * * *'    # run every 6 hours

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest   # use the latest ubuntu image

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2

      - name: Setup Python                      # install the Python environment
        uses: actions/setup-python@v2.1.1

      - name: Install Python dependencies       # install the crawler's dependency packages
        run: python -m pip install urllib3 lxml

      - name: Run python                        # generate a new README.md file
        run: python generateReadme.py

      - name: Record time
        run: echo `date` > date.log

      - name: Commit and push if changed        # push the updated README.md to the repository
        run: |
          git diff
          git config --global user.email "xindoo@zxs.io"
          git config --global user.name "zxs"
          git add -A
          git commit -m "Updated Readme" || exit
          git push
```
This article comes from https://blog.csdn.net/xindoo