Advanced GoogleTest: parameterized tests, mock tests, time-consuming tests, and typed tests

1. Previously in this series

GoogleTest is a unit testing framework for C++. It provides many ready-made building blocks for writing unit tests, largely sparing you the trouble of reinventing those wheels yourself.

This series opened with a getting-started article that focused on environment setup and a simple walkthrough of the GoogleTest modules, including but not limited to:

1. Setting up the GoogleTest and GoogleMock environment
2. Basic usage of the three building blocks TEST, TEST_F, and MOCK_METHOD
3. An introduction to EXPECT_EQ, EXPECT_LE, EXPECT_GT, and EXPECT_STREQ
4. An introduction to the ASSERT_* family

If any of the above is unfamiliar, you can jump through the portal to the previous article; the link is below:

Introduction to Google test -- from building to main function case introduction

2. The focus of this article

This article focuses on the problems, and their solutions, encountered while unit testing a process management class in a real business project, covering three aspects: parameterized tests, mock interface tests, and interface timing tests. Typed tests are mentioned briefly. It also shows how the CMake Tools plugin for VSCode makes testing the process management class quick and convenient.

3. Get your thinking clear before writing unit tests

Before unit testing, be clear about the test goal of each class: for example, are you testing whether the class's implementation is correct, or whether an external call into the class is correct? Once that is clear, you can get to work. The real first step is not writing test cases, but drawing the following call diagram in your mind.

Almost every program has this architecture. The top-level entry is always the main function, and it is the globally unique entry point. The main function directly calls your own classes, frameworks, or STL interfaces. And somewhere in the program's code there are always interfaces that end up calling system APIs.

When writing test code, the direction is exactly opposite to the direction the program runs. After sorting out the program's call-relationship tree, write test cases from the bottom up; each test covers one arrow segment (essentially one concrete function call).

While testing, interfaces that call system APIs can be mocked directly with a test stub. After a test case runs, check whether the expected parameters were passed into the stub as planned. A mock stub can also predefine return values, so the test is isolated from the real system API and the test environment while still remaining testable.

For other interfaces with upper-layer dependencies, such as the complex dependent interface in the diagram, you can likewise test a single interface implementation: for lower-layer interfaces that already have tested, concrete implementations, adding a lower-layer mock stub achieves a unit test of the single interface above them.

4. Points of attention, starting from CMakeLists.txt

4.1 Installing CMake Tools

If your IDE is VSCode, CMake Tools gives a much smoother unit testing experience.

After installation, modify the corresponding configuration in VSCode: press Ctrl+Shift+P, search for "settings", and open the settings file. Add the following to the configuration. For later convenience, the full settings.json I use is appended at the end of this article for reference:

"cmake.defaultVariants": {
    "buildType": {
        "default": "500",
        "description": "The build type.",
        "choices": {
            "300": {
                "short": "300",
                "long": "Compile 300 signal executable program",
                "settings": {
                    "TSC": "300",
                    //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                },
                "buildType": "Release"
            },
            "500": {
                "short": "500",
                "long": "Compile 500 signal executable program",
                "settings": {
                    "TSC": "500"
                    //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                },
                "buildType": "Release"
            },
            "400": {
                "short": "400",
                "long": "Compile 400 signal executable program",
                "settings": {
                    "TSC": "400"
                    //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                },
                "buildType": "Release"
            },
            "pc": {
                "short": "pc",
                "long": "compile pc Executable program",
                "settings": {
                    "TSC": "pc"
                    //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                },
                "buildType": "Release"
            },
            "debug": {
                "short": "Debug",
                "long": "Disable optimizations - include debug information.",
                "buildType": "Debug"
            },
            "release": {
                "short": "Release",
                "long": "Optimize for speed - exclude debug information.",
                "buildType": "Release"
            },
            "pc_test": {
                "short": "pc_test",
                "long": "ctest compile pc Executable program",
                "settings": {
                    "TSC": "pc_test",
                    "CTEST": "true"    //This environment variable is useful and will be linked in the subsequent CMakeLists.
                    //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                },
                "buildType": "Release"
            },
        }
    }
},

After saving the settings, click the CMake Tools control at the bottom of the window to find the newly configured build variants, as shown in the figure below.

4.2 A common unit testing problem: how to test a class's private member variables without modifying the business code or adding getters

The gcc family of compilers supports a handy option, -fno-access-control. When added at compile time, it turns all private and protected access restrictions in the generated object files into public, with no code changes at all.

With this compile option, private member variables can be accessed directly through class objects. Isn't that cool!
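
As a minimal sketch (Counter is a made-up class, not from the project), a test built with -fno-access-control can read a private member directly:

#include "gtest/gtest.h"

// Hypothetical business class with no getter for its private state.
class Counter
{
public:
    void tick() { ++_count; }
private:
    int _count = 0;
};

// Compiles only because -fno-access-control disables access checking.
TEST(CounterTest, TickIncrementsPrivateCount)
{
    Counter c;
    c.tick();
    EXPECT_EQ(c._count, 1);  // direct access to the private member
}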

In CMakeLists.txt, you can add the following to make this work:

if(DEFINED CTEST)
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-access-control")
    add_definitions(-DUSE_CTEST)
endif()
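
Note that the add_definitions(-DUSE_CTEST) line also hands the C++ sources a macro to key off. A hedged sketch of how business code might use it (report_build_mode is a hypothetical function, not from the project):

#include <cstdio>

// USE_CTEST is defined only for the ctest build variant (see above).
void report_build_mode()
{
#ifdef USE_CTEST
    std::printf("test build: test-only hooks compiled in\n");
#else
    std::printf("production build\n");
#endif
}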

Similarly, to make the test code build, extra header paths can be added the same way:

if(DEFINED CTEST)
    set(HEADERS
    ${HEADERS} 
    ./unittest/include )
endif()

And since a test build may not want to compile the original business executable at all, it can be excluded like this:

if(NOT DEFINED CTEST)
    add_executable(Business component name ${ALL_SRCS})
    target_link_libraries(Business component name ${LIBRARIES})
endif()

5. Mock tests

5.1 Classify the test subjects and clarify which need mocks

In the program interface diagram above, we said that test code should start from the bottom. The main kinds of interfaces you will encounter are the following (a stub sketch follows this list).

a. Interfaces with no dependencies are the simplest to test. The focus is whether inputs and return values meet expectations throughout the test. For class member functions, you can directly check whether internal member variables change as expected, thanks to -fno-access-control.

b. Interfaces that depend only on system APIs are slightly more complex. On top of step a, you need to add stubs for the system interfaces. For example, some interfaces only take a certain branch after reading specific characters from a serial port or the network, so they cannot be tested directly against the real system API; you have to hand-craft a specific stub.

c. Interfaces with complex dependencies are the most involved. Not only the system APIs must be stubbed, but also the lower-layer dependency APIs of the code under test, which ensures each test case exercises a single interface implementation. Of course, for dependencies that are already tested you may call the real interface directly; the subsequent value and call expectations then have to account for a deeper range.
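
A minimal sketch of case b with invented names (MockSerial and read_bytes are not from the project): a stub stands in for the serial port and returns canned bytes, so the test never touches real hardware.

#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include <cstring>

// Hypothetical stub replacing a serial-port read.
class MockSerial
{
public:
    MOCK_METHOD(int, read_bytes, (char*, int));
};

TEST(SerialStubExample, ReturnsCannedBytes)
{
    MockSerial serial;
    // Predefine the stub's behaviour: one call fills the buffer with "OK".
    EXPECT_CALL(serial, read_bytes(testing::_, testing::_))
        .WillOnce([](char* buf, int) { std::memcpy(buf, "OK", 2); return 2; });
    char buf[8] = {};
    EXPECT_EQ(serial.read_bytes(buf, sizeof(buf)), 2);
    EXPECT_EQ(std::memcmp(buf, "OK", 2), 0);
}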

5.2 What are the benefits of mocking other modules

Here we focus on the dependency on the external log module in this test. Since the subject under test is the process management module, we do not care about the log module itself.

The benefits of mocking other modules are:

a. Exceptions, segfaults, aborts, and other problems in other modules will not affect your own module.

b. Even if another module has not been developed yet, as long as its interface and intended effect are known, your development and testing are not blocked.

c. The logical relationship between your module and the outside world is made explicit.

5.3 Mocking the other modules and system APIs used in this test

Below is an example of mocking the log module. GoogleTest ships the GMock toolkit, so when writing a mock API you do not need to implement the function at all: a single line produces the mock stub. Expected returns and expected parameter constraints for the mock API can then be set easily in each specific test case.

#pragma once
#include "gtest/gtest.h"
#include "gmock/gmock.h"

class MockLog
{
public:
    MOCK_METHOD(void, info, (int, std::string, bool, std::string, const char*));
    MOCK_METHOD(void, error, (int, std::string, bool, std::string, const char*));
};

Below is an example of mocking the system APIs; the details are not expanded here. Note that as of GoogleTest 1.10, MOCK_METHOD no longer requires the argument-count suffix (the old MOCK_METHODn macros), so mock declarations read much more naturally, a big improvement in writing fluency over the past.

#pragma once
#include "gtest/gtest.h"
#include "gmock/gmock.h"
#include <chrono>
#include <sched.h>

class MockSystemAPI
{
public:
    MOCK_METHOD(void, sleep_for, (std::chrono::seconds));
    MOCK_METHOD(int, kill, (pid_t, int));
    MOCK_METHOD(pid_t, waitpid, (pid_t,int *, int));
    MOCK_METHOD(int, execvpe, (const char * ,char *const *, char *const *));
    MOCK_METHOD(pid_t, fork, ());
};
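
A hedged usage sketch for the MockSystemAPI above: predefining fork()'s return value keeps a test from ever spawning a real process (the failure value -1 is just an example).

#include "gmock/gmock.h"
#include "gtest/gtest.h"

TEST(MockSystemAPIExample, ForkFailureIsSimulated)
{
    MockSystemAPI sys;
    // Every fork() "fails" with -1, so no real process is created.
    ON_CALL(sys, fork()).WillByDefault(testing::Return(-1));
    EXPECT_CALL(sys, fork()).Times(1);
    EXPECT_EQ(sys.fork(), -1);
}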

6. Parameterized tests

TEST_F was explained in the earlier introductory article. The parameterized test described here actually includes all the capabilities of a test fixture; the relationship between them is is-a. If you are not familiar with test fixtures, start with Introduction to Google test -- from building to main function case introduction.

6.1 Differences between parameterized tests and test fixtures

The new thing in parameterized tests is chiefly the automatic generation of orthogonal parameter sets.

Consider a scenario: an interface takes five input parameters.

int test(int a, int b, int c, int d, int e);

Each input parameter has different constraints. For example, a may take three possible values, 1, 0, and -1, while b may range from 0 to 100, and the remaining parameters c, d, and e each have their own requirements.

Parameterized testing provides automatic orthogonal combination for this. Once the range of each of the parameters a through e is constrained, GoogleTest automatically generates every combination by orthogonal traversal of each parameter's possibilities and tests each combination in turn.
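
A minimal sketch with invented names (SignTest and its parameters are for illustration only): two parameters with 3 and 2 possible values yield 3 x 2 = 6 automatically generated test instances.

#include "gtest/gtest.h"
#include <tuple>

class SignTest : public ::testing::TestWithParam<std::tuple<int, bool>> {};

TEST_P(SignTest, EveryComboRuns)
{
    int a = std::get<0>(GetParam());
    bool flag = std::get<1>(GetParam());
    (void)flag;          // each of the 6 combinations reaches here once
    EXPECT_GE(a, -1);
    EXPECT_LE(a, 1);
}

INSTANTIATE_TEST_SUITE_P(AllCombos, SignTest,
    ::testing::Combine(::testing::Values(-1, 0, 1),
                       ::testing::Values(true, false)));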

6.2 The parameter preparation stage of a parameterized test

A rough example follows.

First, define the usable range of each input parameter:

// Test set for the delayed-start parameter
// This one can be generated automatically with ::testing::Range instead
// const std::initializer_list<int> delayset = {0,1,2,3,4,5,6};
// Test set for the restart-or-not parameter
const std::initializer_list<bool>  boolset = {true, false};  
// Test set for the destroy flag
const std::initializer_list<bool> destoryset = {true, false};
// Test set of pid values returned by fork
const std::initializer_list<int> pidset = {987, 660, 562, 0, -1};              
// Test case set for argvec
const std::initializer_list<std::list<std::string>> argset = {
    std::list<std::string>{std::string("--webdir"), std::string("/dev/shm")},
    std::list<std::string>{std::string("--lastpid"), std::string("0"), std::string("--webdir"), std::string("/dev/shm")},
    std::list<std::string>{std::string("--lastpid"), std::string("980"), std::string("--webdir"), std::string("/dev/shm")},
    std::list<std::string>{std::string("--webdir"), std::string("/dev/shm"), std::string("--lastpid"), std::string("980")},
    std::list<std::string>{std::string("--baseport"),std::string("30000"), std::string("--debug")},
    std::list<std::string>{std::string("--baseport"),std::string("30000"), std::string("--debug"), std::string("--lastpid"), std::string("980")}
};

Then call the framework's instantiation interface to combine the parameters:

INSTANTIATE_TEST_SUITE_P(P1, SubProject_Cons,\
    testing::Combine(testing::Values(std::string("progarm1")),testing::Values(std::string("1.0.1")), testing::Range(0,7,1), \
                    testing::Values(9),testing::Values(2), testing::Values(2), \
                    testing::Values(std::list<std::string>()), testing::ValuesIn(argset), \
                    testing::Values(std::list<std::string>()),\
                    testing::ValuesIn(boolset), testing::ValuesIn(destoryset), testing::ValuesIn(pidset)));

Once instantiated like this, the framework automatically orthogonalizes the parameters to generate the agreed test data set.

Next, we receive the data set in a class with the agreed name. The class name must be the second argument passed to INSTANTIATE_TEST_SUITE_P, and the class must inherit from public ::testing::TestWithParam<...>. The first argument does not really matter, as long as it is unique. The code for this case is as follows:

class SubProject_Cons : public::testing::TestWithParam<::testing::tuple<std::string,\
                                     std::string, int,int,int,int,std::list<std::string>, \
                                     std::list<std::string>,std::list<std::string>, bool, \
                                     bool, int>>
{
    // Maximum allowed duration of a test case, in milliseconds
    enum { MAX_TIME_OUT = 20};
protected:
    void SetUp() override
    {
        
        std::tie(name.program, name.version, name.delay, name.killSignal, 
        name.checkTime, name.maxTimeout, name.comList, name.argvec, 
        name.env, boot, destory, pid_return) = GetParam();
        mk= MockSubProject(std::move(name), mklogger, mockSystemAPI);
        // This variable is used for test cases
        _curTimePoint = std::chrono::steady_clock::now();
    }
    // Here, you must explicitly release in advance, otherwise the generated child process will end abnormally when the program terminates
    void TearDown() override
    {
        mk.Release();
        auto _now = std::chrono::steady_clock::now();
        auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(_now
        - _curTimePoint);
        EXPECT_LT(duration.count(),MAX_TIME_OUT) << "It's too long!"; //Timeout error
    }
private:
    monitor::RpcConf    name;
    MockLog             mklogger;
    MockSubProject      mk;
    MockSystemAPI       mockSystemAPI;
    bool                boot = false;
    bool                destory = false;
    pid_t               pid_return = -1;

    // Time consuming tests require variables
    std::chrono::steady_clock::time_point _curTimePoint;
};

With the above code in place, every subsequent SubProject_Cons test case is automatically run against all 840 (= 7 × 6 × 2 × 2 × 5) parameter combinations. Isn't that also COOOOOL!

6.2.1 A quick review of the TEST_P loading mechanism

A parameterized test case first takes the first argument of the corresponding test macro as the class name, finds that class, runs its SetUp interface, and then calls its TearDown interface when the test case ends.

In the example above, at the start of each SubProject_Cons-based test case, the tuple returned by GetParam() is unpacked and copied into the member variables. At the end of SetUp, the case's start time is recorded.

When the case finishes, SubProject_Cons's TearDown interface runs. It explicitly releases the test resources and records a time point to compare against the one from SetUp, which is how the timing test is achieved. At the end of that timing check, an EXPECT_LT requires the case's duration to be below a set limit; otherwise the test is considered failed.

6.3 The case writing stage of a parameterized test

With the parameters prepared, test cases can be written. Before the first line of a test case executes, SubProject_Cons::SetUp has in fact already run; you can assume the parameters have been filled in according to the preset scheme.

Then we need to explain an important knowledge point:

EXPECT_CALL

This constraint is a little special. The thing to remember is: set the expectation first, then take the action. Actions performed before the expectation is set are not counted toward it.

Mastering this point is a prerequisite for using EXPECT_CALL.

Let's use the following example to explain it in detail.

//Test SubProject::onConfUpdate
TEST_P(SubProject_Cons, onConfUpdate) {
    // Set the expected number of calls
    EXPECT_CALL(mklogger, error)
    .Times(0);
    mk._res->onConfUpdate();
    // Business expectations for onConfUpdate
    EXPECT_EQ(mk._res->total_arg.size(), name.argvec.size() + 1) << "onConfUpdate: abnormal argument count!";
    EXPECT_EQ(mk._res->total_arg.front(), name.program) << "onConfUpdate: abnormal total_arg contents!";
    mk._res->total_arg.pop_front();
    // std::list<> supports operator== directly, and the STL overload compares element by element.
    EXPECT_EQ(mk._res->total_arg, name.argvec) << "onConfUpdate: abnormal argvec contents!";
};

Here,

EXPECT_CALL(mklogger, error)
.Times(0);

means that from this line of code until the end of the test case, the mklogger object's error interface must not be called even once. The subtext: although this test suite implements SetUp, any call to mklogger's error interface made inside SetUp (i.e., before the EXPECT_CALL) is not counted.

Understanding this concept is particularly important.
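
A minimal sketch of the rule, reusing the MockLog class from section 5.3 (the argument values are arbitrary): a call made before EXPECT_CALL is merely an "uninteresting call" and counts toward nothing.

#include "gmock/gmock.h"
#include "gtest/gtest.h"

TEST(ExpectCallOrdering, EarlierCallsDoNotCount)
{
    MockLog logger;
    // Called BEFORE the expectation is set: not counted (gMock only warns).
    logger.info(0, "msg", true, "src", "detail");
    EXPECT_CALL(logger, info).Times(1);  // only calls from here on are counted
    logger.info(0, "msg", true, "src", "detail");
}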

7. Time-consuming tests

Section 6.2, the parameter preparation stage, already contained a simple timing test. But timing tests often care about a statistic rather than a single result, and GoogleTest covers that scenario as well.

7.1 custom test output

Test case results often carry value of their own and can feed a performance report for the system. The default output provides call start, end, success, failure, elapsed time, and so on. Customizing the output helps extract exactly the data we want.

Application example:

Take the rpc call test cases again. Here we run 1000+ rounds and want each interface's minimum, maximum, and average elapsed time. Extracting and analyzing that from the default output is painful; instead, we can use the event handling classes provided by GoogleTest to customize the output.

#include "gtest/gtest.h"
#include <cstdio>
#include <cstring>
#include <map>
#include <string>

//Event listener class inheriting gtest's EmptyTestEventListener
class PrintResult : public testing::EmptyTestEventListener
{
	struct result
	{
		std::string name;
		int successed = 0;
		int failed = 0;
		int maxtime = 0;
		int sumtime = 0;
		result(const std::string &_name) : name(_name) {}
		result(const char *_name) : name(_name) {}
	};
	
private:
	//Statistics are performed at the end of the test suite run
	void OnTestSuiteEnd(const testing::TestSuite& test_suite) override
	{
		//Only Rpc calls this TestSuit
		if (strcmp(test_suite.name(),"RpcTimeTakeTest") != 0) {
			return ;
		}
		for (int j = 0; j < test_suite.total_test_count();++j) {
			const testing::TestInfo &test_info = *test_suite.GetTestInfo(j);
			std::string name(test_info.name());

			if (infos.find(name) == infos.end()) {
				result n(name);
				infos.emplace(name,n);
			}
			auto &r = infos.find(name)->second;
			const testing::TestResult &ret = *test_info.result();
			if (ret.Passed()) {
				r.successed++;
			} else if (ret.Failed()) {
				r.failed++;
			}
			if ((int)ret.elapsed_time() > r.maxtime) {
				r.maxtime = (int)ret.elapsed_time();
			}
			r.sumtime += (int)ret.elapsed_time();
		}
	}
	//Print at the end of the program
	void OnTestProgramEnd(const testing::UnitTest& unit_test) override
	{
		printf("------------------Call statistics----------------\n");
		for (auto it : infos) {
			auto &&r = it.second;
			printf("[%-16s] ",r.name.c_str());
			printf("succed[%4d] ",r.successed);
			printf("failed[%4d] ",r.failed);
			printf("aveTimeTake:%2dms ",r.sumtime/(r.successed+ r.failed));
			printf("maxTimeTake:%2dms\n",r.maxtime);
		}
	}
	std::map<std::string ,result> infos;
};


::testing::TestEventListener* createUserListener() { return new PrintResult;}

Register this listener object in the main function:

::testing::TestEventListener* createUserListener();
int main(int argc, char *argv[])
{
	::testing::InitGoogleTest(&argc, argv);

	//Add our own EventListener
	testing::UnitTest &unit_test = *testing::UnitTest::GetInstance();
	testing::TestEventListeners& listeners = unit_test.listeners();
	listeners.Append(createUserListener());

	return  RUN_ALL_TESTS();
}
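
Optionally, per the GoogleTest docs, the default result printer can be removed in main before RUN_ALL_TESTS so that only the custom report is printed; a sketch of the extra line:

	// Optional: remove gtest's default printer; Release() hands ownership to us.
	delete listeners.Release(listeners.default_result_printer());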

Run our test cases:

./RpcTest --gtest_random_seed=0 --gtest_shuffle --gtest_repeat=1000

After running, the following information is output

------------------Call statistics----------------
[addRoad         ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 2ms
[configUpdate    ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 1ms
[currentRoad     ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 1ms
[delRoad         ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 1ms
[downGrade       ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 2ms
[getLeftRule     ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 4ms
[getRtChannel    ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake:15ms
[getRule         ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 1ms
[isTransition    ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 1ms
[lock            ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 1ms
[lockLeft        ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 1ms
[lockStatus      ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 1ms
[phaseCtrl       ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 3ms
[setCycle        ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake: 1ms
[setRule         ] succed[1000] failed[   0] aveTimeTake: 0ms maxTimeTake:13ms

In this way, we count the successes, failures, maximum time, and average time over 1000 calls. The customized output makes it easy to spot the time-consuming interfaces in the system and optimize them in a targeted way.

8. Typed tests

Once you understand parameterized tests, typed tests follow much the same overall flow, except that automatic parameter generation is replaced by automatic type generation; the set definition and macro declarations change only slightly. An example:

//Phases are really one type in different states; we abstract each state into its own type with a thin wrapper
enum
{
	PHASE_TYPE_NOTRUN = 0,
	PHASE_TYPE_RUNNING,
	PHASE_TYPE_OVER,
	PHASE_TYPE_STAYING,
};

// Here is the PhaseBase class to be tested, which is a template class, so you need to use type testing to cover several target types
template <int N>
class PhaseBase
{
 public:
    PhaseBase()
    {
        /*Constructor generates phases in different states*/
    }
    static const int type = N;
	Phase ph;  // Phase is the business class under test (defined in the project code)
};


// Through simple packaging, we have different types of phases
// Type declaration using gtest interface

// The meaning here is equivalent to setting a type set, which is the same as the parameter set above.
using 	testing::Types;
typedef Types<PhaseBase<PHASE_TYPE_NOTRUN>,
				PhaseBase<PHASE_TYPE_RUNNING>,
				PhaseBase<PHASE_TYPE_OVER>,
				PhaseBase<PHASE_TYPE_STAYING>> implatePhases;


//Declare a type based test class (test fixture)
template <typename T>
class TypeTestSuit : public testing::Test
{
public:
	T phase;
};


//Declare the type implatePhases to be tested by the test fixture TypeTestSuit

// This plays the same role as INSTANTIATE_TEST_SUITE_P does in parameterized tests
TYPED_TEST_SUITE(TypeTestSuit,implatePhases);

//For different phases, you only need to write a reset use case
// [reset] the phase in any state can be reset, and the state after reset is determined
TYPED_TEST(TypeTestSuit,reset)
{
	this->phase.ph.reset();
	EXPECT_TRUE(this->phase.ph.notrun());
	EXPECT_FALSE(this->phase.ph.running());
	EXPECT_FALSE(this->phase.ph.over());
	EXPECT_FALSE(this->phase.ph.staying());
}
/*
	The above use case will execute reset TEST for different state phases in implatePhases
	When we introduce a new type of phase, we only need to add it to implatePhases without rewriting the use case
*/
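
One more hedged sketch (this case is invented for illustration): inside a TYPED_TEST, TypeParam refers to the current type from the list, so the compile-time tag carried by PhaseBase<N> can be checked directly.

TYPED_TEST(TypeTestSuit, typeTagIsInRange)
{
	// TypeParam is the current PhaseBase<N>; its static member carries N.
	const int t = TypeParam::type;
	EXPECT_GE(t, PHASE_TYPE_NOTRUN);
	EXPECT_LE(t, PHASE_TYPE_STAYING);
}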

9. Tips for running GoogleTest

9.1 Random execution order

One difference between test cases and real applications is that our cases usually execute in order, which does not simulate real usage well. Here we can use gtest's command-line flags to execute the cases in random order.

Application example:

The rpc interface test cases execute in the order they were written, the same order every run, which does not match real-world usage.

To better simulate randomly ordered calls, simply add command-line flags when running the cases:

testProcess --gtest_random_seed=0 --gtest_shuffle

9.2 Multi-round testing

For some tests a single run is not representative, especially ones with statistical meaning; likewise, the random ordering from the previous section proves little in a single run. For such scenarios we could wrap the binary in an external script and run it repeatedly, but GoogleTest also provides a ready-made mechanism.

Application example:

The rpc call test cases need not only random ordering but also statistically meaningful timing: a single call not timing out does not mean the interface's performance is stable.

For this requirement, we combine GoogleTest's timing cases with random ordering and multi-round execution in a mixed test:

testProcess --gtest_random_seed=0 --gtest_shuffle --gtest_repeat=1000

This runs the cases we wrote 1000 times in random order.

A common problem in multi-round testing: the assertions we write are usually of the EXPECT_* kind, which do not exit on failure, so an error message is easy to miss in the flood of output from many rounds. In this case, add a command-line flag to break immediately on failure:

 --gtest_break_on_failure

Appendix: VSCode remote settings

The full vscode settings.json:

{
    "breadcrumbs.enabled": true,
    "editor.mouseWheelZoom": true,
    "editor.renderWhitespace": "all",
    "c-cpp-flylint.flexelint.enable": false,
    "c-cpp-flylint.cppcheck.force": true,
    "c-cpp-flylint.cppcheck.language": "c++",
    "c-cpp-flylint.cppcheck.verbose": true,
    "editor.fontSize": 18,
    "C_Cpp.updateChannel": "Insiders",
    "c-cpp-flylint.cppcheck.platform": "unix64",
    "c-cpp-flylint.debug": true,
    "c-cpp-flylint.clang.blocks": false,
    "c-cpp-flylint.cppcheck.inconclusive": true,
    "C_Cpp.default.intelliSenseMode": "linux-gcc-arm",
    // "C_Cpp.default.systemIncludePath": [
    //     "/data1/xiaoyanyi/cross-tool/arm-imx6ul-linux-gnueabihf/arm-imx6ul-linux-gnueabihf/include/c++/5.4.0/**",
    //     "/data1/xiaoyanyi/cross-tool/arm-at91-linux-gnueabi/arm-at91-linux-gnueabi/include/c++/4.9.2/**",
    //     "/data1/xiaoyanyi/cross-tool/arm-imx6ul-linux-gnueabihf/arm-imx6ul-linux-gnueabihf/sysroot/usr/include",
    //     "/data1/xiaoyanyi/cross-tool/arm-at91-linux-gnueabi/arm-at91-linux-gnueabi/sysroot/usr/include",
    // ],
    "[cpp]": {
        "editor.quickSuggestions": true
            },
        "[c]": {
        "editor.quickSuggestions": true
            },
        "C_Cpp.default.includePath": [
            "/data1/xiaoyanyi/cross-tool/arm-imx6ul-linux-gnueabihf/arm-imx6ul-linux-gnueabihf/include/c++/5.4.0/**",
            "/data1/xiaoyanyi/cross-tool/arm-at91-linux-gnueabi/arm-at91-linux-gnueabi/include/c++/4.9.2/**",
            "/data1/xiaoyanyi/cross-tool/arm-imx6ul-linux-gnueabihf/arm-imx6ul-linux-gnueabihf/sysroot/usr/include",
            "/data1/xiaoyanyi/cross-tool/arm-at91-linux-gnueabi/arm-at91-linux-gnueabi/sysroot/usr/include",
            "/data1/xiaoyanyi/work/common/**",
            "${workspaceFolder}/**",
        ],
        "C_Cpp.default.cppStandard": "c++11",
        "C_Cpp.default.cStandard": "c99",
        "C_Cpp.intelliSenseEngineFallback": "Enabled",
        "C_Cpp.loggingLevel": "Debug",
        "explorer.confirmDelete": false,
        "workbench.colorTheme": "Monokai",
        "python.pythonPath": "/usr/bin/python3",
        "window.zoomLevel": 0,
        "C_Cpp.commentContinuationPatterns": [
            "/**"
        ],
        "markdown-preview-enhanced.revealjsTheme": "black.css",
        "markdown-preview-enhanced.automaticallyShowPreviewOfMarkdownBeingEdited": true,
        "markdown-preview-enhanced.previewTheme": "solarized-light.css",
        "hediet.vscode-drawio.local-storage": "eyIuZHJhd2lvLWNvbmZpZyI6IntcImxhbmd1YWdlXCI6XCJcIixcImN1c3RvbUZvbnRzXCI6W10sXCJsaWJyYXJpZXNcIjpcImdlbmVyYWxcIixcImN1c3RvbUxpYnJhcmllc1wiOltcIkwuc2NyYXRjaHBhZFwiXSxcInBsdWdpbnNcIjpbXSxcInJlY2VudENvbG9yc1wiOltdLFwiZm9ybWF0V2lkdGhcIjpcIjI0MFwiLFwiY3JlYXRlVGFyZ2V0XCI6ZmFsc2UsXCJwYWdlRm9ybWF0XCI6e1wieFwiOjAsXCJ5XCI6MCxcIndpZHRoXCI6ODI3LFwiaGVpZ2h0XCI6MTE2OX0sXCJzZWFyY2hcIjp0cnVlLFwic2hvd1N0YXJ0U2NyZWVuXCI6dHJ1ZSxcImdyaWRDb2xvclwiOlwiI2QwZDBkMFwiLFwiZGFya0dyaWRDb2xvclwiOlwiIzZlNmU2ZVwiLFwiYXV0b3NhdmVcIjp0cnVlLFwicmVzaXplSW1hZ2VzXCI6bnVsbCxcIm9wZW5Db3VudGVyXCI6MCxcInZlcnNpb25cIjoxOCxcInVuaXRcIjoxLFwiaXNSdWxlck9uXCI6ZmFsc2UsXCJ1aVwiOlwiXCJ9In0=",
        "remote.SSH.showLoginTerminal": true,
        "remote.SSH.remotePlatform": {
            "10.1.74.245": "linux"
        },
        "c-cpp-flylint.clang.enable": false,
        "c-cpp-flylint.flawfinder.enable": false,
        "c-cpp-flylint.lizard.enable": false,
        "eslint.format.enable": true,
        "remote.autoForwardPortsSource": "output",
        "markdown-preview-enhanced.enableScriptExecution": true,
        "markdown-preview-enhanced.codeBlockTheme": "github.css",
        "markdown-preview-enhanced.enableHTML5Embed": true,
        "markdown-preview-enhanced.HTML5EmbedIsAllowedHttp": true,
        "markdown-preview-enhanced.printBackground": true,
        "fileheader.customMade": {
            "Date": "Do not edit", // File creation time (unchanged)
            "Author": "Adam Xiao",
            "LastEditors": "Adam Xiao", // Last editor of the document
            "LastEditTime": "Do not edit", // Last edit time of file
            "FilePath": "Do not edit" // The relative path of the file in the project is automatically updated
          },
        "fileheader.cursorMode": {
        // Default field
        "description":"",
        "param":"",
        "return":""
        },
        "markdown.preview.typographer": true,
        "cmake.statusbar.advanced": {
        },
        "clangd.path": "/data1/xiaoyanyi/cross-tool/clangd_13.0.0/bin/clangd",
        "cmake.cmakePath": "cmake",
        "cmake.defaultVariants": {
            "buildType": {
                "default": "500",
                "description": "The build type.",
                "choices": {
                    "300": {
                        "short": "300",
                        "long": "Compile 300 signal executable program",
                        "settings": {
                            "TSC": "300",
                            //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                        },
                        "buildType": "Release"
                    },
                    "500": {
                        "short": "500",
                        "long": "Compile 500 signal executable program",
                        "settings": {
                            "TSC": "500"
                            //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                        },
                        "buildType": "Release"
                    },
                    "400": {
                        "short": "400",
                        "long": "Compile 400 signal executable program",
                        "settings": {
                            "TSC": "400"
                            //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                        },
                        "buildType": "Release"
                    },
                    "pc": {
                        "short": "pc",
                        "long": "compile pc Executable program",
                        "settings": {
                            "TSC": "pc"
                            //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                        },
                        "buildType": "Release"
                    },
                    "debug": {
                        "short": "Debug",
                        "long": "Disable optimizations - include debug information.",
                        "buildType": "Debug"
                    },
                    "release": {
                        "short": "Release",
                        "long": "Optimize for speed - exclude debug information.",
                        "buildType": "Release"
                    },
                    "pc_test": {
                        "short": "pc_test",
                        "long": "ctest compile pc Executable program",
                        "settings": {
                            "TSC": "pc_test",
                            "CTEST": "true"
                            //"DEBUG":"DEBUG" opens and starts the DEBUG mode
                        },
                        "buildType": "Release"
                    },
                }
            }
        },
        "cmake.statusbar.visibility": "compact",
        "cmake.enableTraceLogging": true,
        "clangd.checkUpdates": true,
        "cmake.parallelJobs": 8,
        "cmake.ctest.parallelJobs": 16,
        "jupyter.debugJustMyCode": false,
        "C_Cpp.workspaceSymbols": "All",
        "C_Cpp.autocomplete": "Disabled",
        "highlight-icemode.borderWidth": "3px",
        "highlight-icemode.borderRadius": "5px",
        "cmake.ctestPath": "",
}

Appendix: test code snippet

#include "subproject.h"
#include "gmock/gmock-actions.h"
#include "gmock/gmock-matchers.h"
#include "gmock/gmock-spec-builders.h"
#include "gtest/gtest-param-test.h"
#include "gtest/gtest.h"
#include <chrono>
#include <cstddef>
#include <thread>
#include "mockLog.h"
#include "mockSystemAPI.h"

using ::testing::StartsWith;
using ::testing::InSequence;
using ::testing::Return;

using ::testing::AllOf;
using ::testing::Gt;
using ::testing::Eq;
using ::testing::Lt;
using ::testing::Ne;
using ::testing::FloatLE;
using ::testing::MatchesRegex;
using ::testing::HasSubstr;
using ::testing::StrEq;

// Test set for the delayed-start parameter
// This one can be generated automatically with ::testing::Range instead
// const std::initializer_list<int> delayset = {0,1,2,3,4,5,6};
// Test set for the restart-or-not parameter
const std::initializer_list<bool>  boolset = {true, false};  
// Test set for the destroy flag
const std::initializer_list<bool> destoryset = {true, false};
// Test set of pid values returned by fork
const std::initializer_list<int> pidset = {987, 660, 562, 0, -1};              
// Test case set for argvec
const std::initializer_list<std::list<std::string>> argset = {
    std::list<std::string>{std::string("--webdir"), std::string("/dev/shm")},
    std::list<std::string>{std::string("--lastpid"), std::string("0"), std::string("--webdir"), std::string("/dev/shm")},
    std::list<std::string>{std::string("--lastpid"), std::string("980"), std::string("--webdir"), std::string("/dev/shm")},
    std::list<std::string>{std::string("--webdir"), std::string("/dev/shm"), std::string("--lastpid"), std::string("980")},
    std::list<std::string>{std::string("--baseport"),std::string("30000"), std::string("--debug")},
    std::list<std::string>{std::string("--baseport"),std::string("30000"), std::string("--debug"), std::string("--lastpid"), std::string("980")}
};



class MockSubProject {
public:
    MockSubProject() = default;
    MockSubProject(monitor::RpcConf&& _name, MockLog& _mklogger, MockSystemAPI& _mkSystemAPI) \
                    : _res(monitor::SubProject::create(_mklogger, _name, _mkSystemAPI, true, false)) 
    {}
    MockSubProject(MockSubProject&&) = default;
    MockSubProject(const MockSubProject&) = delete;
    auto operator=(MockSubProject&&) -> MockSubProject& = default;
    auto operator=(const MockSubProject&) -> MockSubProject& = delete;
    ~MockSubProject() {Release();}

    void Release() {_res= nullptr;}
    
    std::shared_ptr<monitor::SubProject> _res;
};

class SubProject_Cons : public::testing::TestWithParam<::testing::tuple<std::string,\
                                     std::string, int,int,int,int,std::list<std::string>, \
                                     std::list<std::string>,std::list<std::string>, bool, \
                                     bool, int>>
{
    // Maximum allowed duration of a test case, in milliseconds
    enum { MAX_TIME_OUT = 20};
protected:
    void SetUp() override
    {
        
        std::tie(name.program, name.version, name.delay, name.killSignal, 
        name.checkTime, name.maxTimeout, name.comList, name.argvec, 
        name.env, boot, destory, pid_return) = GetParam();
        mk= MockSubProject(std::move(name), mklogger, mockSystemAPI);
        // This variable is used for test cases
        _curTimePoint = std::chrono::steady_clock::now();
    }
    // Here, you must explicitly release in advance, otherwise the generated child process will end abnormally when the program terminates
    void TearDown() override
    {
        mk.Release();
        auto _now = std::chrono::steady_clock::now();
        auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(_now
        - _curTimePoint);
        EXPECT_LT(duration.count(),MAX_TIME_OUT) << "It's too long!"; //Timeout error
    }
private:
    monitor::RpcConf    name;
    MockLog             mklogger;
    MockSubProject      mk;
    MockSystemAPI       mockSystemAPI;
    bool                boot = false;
    bool                destory = false;
    pid_t               pid_return = -1;

    // Time consuming tests require variables
    std::chrono::steady_clock::time_point _curTimePoint;
};

// program  version   delay  
// killSignal   checkTime maxTimeout  
// comList argvec   env
INSTANTIATE_TEST_SUITE_P(P1, SubProject_Cons,\
    testing::Combine(testing::Values(std::string("progarm1")),testing::Values(std::string("1.0.1")), testing::Range(0,7,1), \
                    testing::Values(9),testing::Values(2), testing::Values(2), \
                    testing::Values(std::list<std::string>()), testing::ValuesIn(argset), \
                    testing::Values(std::list<std::string>()),\
                    testing::ValuesIn(boolset), testing::ValuesIn(destoryset), testing::ValuesIn(pidset)));


// std::string  				 program; 			// Program name (full path)
// std::string 					 version; 			// Program version number
// int  						 delay = 0; 			// Seconds to delay program start
// int  						 killSignal = 9; 		// Signal used to terminate the program; defaults to 9, a custom signal may be specified
// int  						 checkTime = 0; 		// Maximum single timeout response time of the program
// int  						 maxTimeout = 0; 		// Maximum number of timeouts allowed for the program
// std::list<std::string> 		 comList; 			// List of service ports the program must monitor
// std::list<std::string>  		 argvec; 				// Extra arguments passed at program start
// std::list<std::string> 		 env; 				// Environment variables required at program startup

//Test SubProject::onConfUpdate
TEST_P(SubProject_Cons, onConfUpdate) {
    // Set the expected number of calls
    EXPECT_CALL(mklogger, error)
    .Times(0);
    mk._res->onConfUpdate();
    // Business expectations for onConfUpdate
    EXPECT_EQ(mk._res->total_arg.size(), name.argvec.size() + 1) << "onConfUpdate: abnormal argument count!";
    EXPECT_EQ(mk._res->total_arg.front(), name.program) << "onConfUpdate: abnormal total_arg contents!";
    mk._res->total_arg.pop_front();
    // std::list<> supports operator== directly, and the STL overload compares element by element.
    EXPECT_EQ(mk._res->total_arg, name.argvec) << "onConfUpdate: abnormal argvec contents!";
};

// Test SubProject::create
TEST_P(SubProject_Cons, create) {
    // Set the expected number of calls. A feature of EXPECT_CALL is that the
    // expectation must be registered before the call happens.
    // So create is explicitly invoked once at the bottom, and every EXPECT_CALL is registered before that actual call.
    EXPECT_CALL(mockSystemAPI, fork()).WillOnce(Return(pid_return));
    if (pid_return == -1 ) {
        EXPECT_CALL(mklogger, error(ManagerEC::process_create,"Child process creation",false,testing::_, StrEq(name.program)))
        .Times(1);    
    } else if (pid_return == 0) {
        InSequence s;
        if (name.delay > 0 && boot) {
            EXPECT_CALL(mockSystemAPI, sleep_for(Eq(std::chrono::seconds(name.delay))))
            .Times(1);   
        }
        EXPECT_CALL(mklogger, info(ManagerEC::process_create,"Child process creation",true,testing::_, StrEq(name.program)))
        .Times(1);    
        EXPECT_CALL(mockSystemAPI, execvpe(StrEq(name.program),testing::_, testing::_))
        .Times(1);   
    } 
    
    mk._res->create(boot);
    // Unlike EXPECT_CALL, EXPECT_THAT takes effect right where it is written, checking the value at that moment.
    if (pid_return > 0) {
        EXPECT_THAT(mk._res->get_pid(), Eq(pid_return)) << "The subprocess was not created successfully" ;
    }
};

// Test SubProject::destroy
TEST_P(SubProject_Cons, destroy) {
    // At startup, there should be no pid value
    EXPECT_THAT(mk._res->get_pid(), Eq(-1)) << "The subprocess was not created successfully";
    // Set the return value of fork before create
    ON_CALL(mockSystemAPI, fork()).WillByDefault(Return(pid_return));
    // Start it first so it does not interfere with later checks; mock calls triggered in this step are not counted by the EXPECT_CALLs below
    mk._res->create(false);
    if (pid_return > 0) {
        EXPECT_THAT(mk._res->get_pid(), Ne(-1)) << "The subprocess was not created successfully";
        if (destory) {
            InSequence s;
            EXPECT_CALL(mockSystemAPI, kill(mk._res->get_pid(),name.killSignal))
            .Times(1);
            EXPECT_CALL(mockSystemAPI, waitpid(mk._res->get_pid(), nullptr, 0))
            .Times(1);
            EXPECT_CALL(mklogger, error(ManagerEC::service_exited,"Sub service exit",true,testing::_, StrEq(name.program)))
            .Times(1);  
        } else  {
            InSequence s;
            EXPECT_CALL(mockSystemAPI, kill(mk._res->get_pid(),name.killSignal))
            .Times(1);
            EXPECT_CALL(mockSystemAPI, waitpid(mk._res->get_pid(), nullptr, 0))
            .Times(1);
            EXPECT_CALL(mklogger, info(ManagerEC::service_exited,"Sub service exit",true,testing::_, StrEq(name.program)))
            .Times(1);
        }
    } else if (pid_return <= 0) {
        EXPECT_CALL(mockSystemAPI, kill(testing::_,testing::_))
        .Times(0);
        EXPECT_CALL(mockSystemAPI, waitpid(testing::_, nullptr, 0))
        .Times(0);
        EXPECT_CALL(mklogger, error(ManagerEC::service_exited,"Sub service exit",true,testing::_, StrEq(name.program)))
        .Times(0);  
    }
    mk._res->destroy(destory);
    EXPECT_THAT(mk._res->get_pid(), Eq(-1)) << "The subprocess failed to log off successfully";
};

// Test SubProject::reboot
TEST_P(SubProject_Cons, reboot) {
    // At startup, there should be no pid value
    EXPECT_THAT(mk._res->get_pid(), Eq(-1)) << "The subprocess was not created successfully";
    // Set the return value of fork before create
    ON_CALL(mockSystemAPI, fork()).WillByDefault(Return(pid_return));
    // Start it first so it does not interfere with later checks; mock calls triggered in this step are not counted by the EXPECT_CALLs below
    mk._res->create(false);
    // destory expectations
    if (pid_return > 0) {
        EXPECT_CALL(mklogger, info(ManagerEC::service_exited,"Sub service exit",true,testing::_, StrEq(name.program)))
        .Times(1);    
    }
    // create expectations
    if (pid_return == -1 ) {
        EXPECT_CALL(mklogger, error(ManagerEC::process_create,"Child process creation",false,testing::_, StrEq(name.program)))
        .Times(1);    
    } else if (pid_return == 0) {
        InSequence s;
        // Because create false is passed in here, it will not be executed. Note that the last Times is 0
        // if (name.delay > 0 && boot) {
        EXPECT_CALL(mockSystemAPI, sleep_for(Eq(std::chrono::seconds(name.delay))))
        .Times(0);   
        // }
        EXPECT_CALL(mklogger, info(ManagerEC::process_create,"Child process creation",true,testing::_, StrEq(name.program)))
        .Times(1);    
        EXPECT_CALL(mockSystemAPI, execvpe(StrEq(name.program),testing::_, testing::_))
        .Times(1);   
    } 
    // Whether error or info is logged depends on the boot parameter
    if (boot) {
        EXPECT_CALL(mklogger, error(ManagerEC::reboot,"Sub service restart",true,testing::_, StrEq(name.program))).Times(1);    
    } else  {
        EXPECT_CALL(mklogger, info(ManagerEC::reboot,"Sub service restart",true,testing::_, StrEq(name.program))).Times(1);    
    }

    mk._res->reboot(boot);
    // There will be an exit when the mk object is destructed
    // The following EXPECT_CALL corresponds to what happens after the reboot function exits and mk destructs.
    if (pid_return > 0) {
        EXPECT_CALL(mklogger, info(ManagerEC::service_exited,"Sub service exit",true,testing::_, StrEq(name.program))).Times(1);    
    }
};

