How to encrypt ncnn model

0x0 introduction

Deep learning model files usually carry the core of an algorithm, so model encryption is one way to protect intellectual property.

Admittedly, client-side encryption only raises the bar for cracking, but a bar is better than no bar, right? (x)

0x1 convert model to param bin + bin

ncnn models come in two forms: plaintext param, and binary param.bin

  • param is plain text: it can be opened, read and modified with any editor, or visualized with the netron tool
  • param.bin is the binary form of param, generated by ncnn's ncnn2mem tool. It cannot be opened as text directly, but it can still be viewed with the netron tool or a hexadecimal editor
$ ncnn2mem resnet.param resnet.bin resnet.id.h resnet.mem.h

All this hides is the human-readable text. Perhaps it is just a leaf covering the eyes?

0x2 convert the model into C code embedded program

The ncnn2mem tool also generates a resnet.mem.h file, which holds the contents of param.bin and bin as C arrays:

static const unsigned char resnet_param_bin[] = { .... };
static const unsigned char resnet_bin[] = { .... };

Include this header and use the load-from-memory interfaces, and the model is embedded and compiled directly into the program as code:

#include "resnet.mem.h"

ncnn::Net net;
net.load_param(resnet_param_bin);
net.load_model(resnet_bin);

When distributing, shipping the exe alone is then enough. Users cannot obtain the model files directly, but they can still extract the model from the executable's static data section with objdump or a hexadecimal editor.

No model file to be seen anywhere. Surely that is enough to satisfy the boss (

0x3 use a special encryption library to encrypt the model

Use openssl to AES-encrypt the param.bin and bin files into param.bin.enc and bin.enc.
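For example, with the openssl command line (filenames and password are placeholders; a real deployment would manage the raw key more carefully):

```shell
# AES-256-CBC encrypt the two model files; -pbkdf2 derives the key
# from the password (requires openssl 1.1.1+)
openssl enc -aes-256-cbc -pbkdf2 -k mysecret -in resnet.param.bin -out resnet.param.bin.enc
openssl enc -aes-256-cbc -pbkdf2 -k mysecret -in resnet.bin -out resnet.bin.enc

# decryption is the same command with -d
openssl enc -d -aes-256-cbc -pbkdf2 -k mysecret -in resnet.param.bin.enc -out resnet.param.bin.dec
```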

Well, without the key they cannot be decrypted anyway, so the model files are quite secure. This is also the method many client applications use.

The program loads the encrypted model in the following three steps:

  1. Read enc file
  2. Decrypt to memory
  3. Load model from memory

As you can see, the commonly used encryption and decryption algorithms from openssl and similar libraries can be identified in the binary quickly.

For step 3, while the program loads the model, one can search the heap for a contiguous block of memory the size of the enc file, or for known keywords, and pull the model straight out of memory.

The model cannot be seen without running the program, but a complete copy sits in memory while it runs. Still, this is actually quite good.

0x4 custom encryption algorithm and data reading

This is the method I recommend. Its advantage is that a complete model never exists in memory at any moment: the model is decrypted piece by piece while it loads.

https://github.com/Tencent/ncnn

Use ncnn's model loading interfaces that take an ncnn::DataReader parameter.

To keep the decryption concise while demonstrating DataReader usage, I implemented one with plain XOR obfuscation; in practice any encryption and decryption scheme can be substituted.

#include "datareader.h"

class MyEncryptedDataReader : public ncnn::DataReader
{
public:
    MyEncryptedDataReader(const char* filepath, unsigned char _key);
    ~MyEncryptedDataReader();
    virtual size_t read(void* buf, size_t size) const;
private:
    FILE* fp;
    unsigned char key;
};

MyEncryptedDataReader::MyEncryptedDataReader(const char* filepath, unsigned char _key)
{
    fp = fopen(filepath, "rb");
    key = _key;
}

MyEncryptedDataReader::~MyEncryptedDataReader()
{
    if (fp)
        fclose(fp);
    key = 0;
}

size_t MyEncryptedDataReader::read(void* buf, size_t size) const
{
    if (!fp)
        return 0;

    size_t nread = fread(buf, 1, size, fp);

    // xor decrypt each byte as it streams through
    unsigned char* p = (unsigned char*)buf;
    for (size_t i = 0; i < nread; i++)
    {
        p[i] ^= key;
    }

    return nread;
}
unsigned char key1 = 123;
unsigned char key2 = 33;

ncnn::Net net;
net.load_param_bin(MyEncryptedDataReader("resnet.param.bin.enc", key1));
net.load_model(MyEncryptedDataReader("resnet.bin.enc", key2));

load_param_bin and load_model can reuse the same DataReader, so the two files can be concatenated into one for easier distribution:

$ cat resnet.param.bin.enc resnet.bin.enc > resnet.enc

unsigned char key = 123;

MyEncryptedDataReader medr("resnet.enc", key);

ncnn::Net net;
net.load_param_bin(medr);
net.load_model(medr);

0x5 add some custom op to the model

ncnn supports custom ops, registered at runtime, and param can be edited directly. There are plenty of tricks available, so that even the decrypted model cannot be understood.

For example, the original param is like this

ConvolutionDepthWise     conv1              1 1 data conv1 0=64 1=3 3=2 4=1 5=1 6=576 7=64 9=1
Convolution              conv2              1 1 conv1 conv2 0=128 1=1 5=1 6=8192 9=1

I changed it to this

ConvolutionDepthWise     conv1              1 1 data conv1 0=64 1=3 3=2 4=1 5=1 6=576 7=64 9=1
AwesomeNorm              norm1              1 1 conv1 norm1
Convolution              conv2              1 1 norm1 conv2 0=128 1=1 5=1 6=8192 9=1

My custom op is called AwesomeNorm, but it actually does nothing at all. It is just a Noop, affecting neither the results nor the performance, yet anyone reading the param will keep wondering what kind of norm it is.
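A decoy op like this can be built with ncnn's custom layer mechanism, roughly as follows (a sketch assuming ncnn's Layer base class, the DEFINE_LAYER_CREATOR macro and Net::register_custom_layer; the function and file names are illustrative, and registration must happen before load_param sees the op name):

```cpp
#include "layer.h"
#include "net.h"

// looks like some mysterious normalization, does nothing at all
class AwesomeNorm : public ncnn::Layer
{
public:
    AwesomeNorm()
    {
        one_blob_only = true;
        support_inplace = true;
    }

    virtual int forward_inplace(ncnn::Mat& /*bottom_top_blob*/, const ncnn::Option& /*opt*/) const
    {
        return 0; // leave the blob untouched
    }
};

DEFINE_LAYER_CREATOR(AwesomeNorm)

void load_with_decoy(ncnn::Net& net)
{
    // register before loading the param that references the op
    net.register_custom_layer("AwesomeNorm", AwesomeNorm_layer_creator);
    net.load_param("resnet.param");
    net.load_model("resnet.bin");
}
```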

For another example, I changed it to this

ConvolutionDepthWise     conv1              1 1 data conv1 0=64 1=3 3=2 4=1 5=1 6=576 7=64 9=1
Convolution              conv2              1 1 conv1 conv2 0=64 1=1 5=1 6=8192 9=1
ConvolutionDepthWise     conv3              1 1 conv2 conv3 0=64 1=3 3=2 4=1 5=1 6=576 7=64 9=1
Convolution              conv4              1 1 conv3 conv4 0=128 1=1 5=1 6=8192 9=1
ConvolutionDepthWise     conv5              1 1 conv4 conv5 0=128 1=3 4=1 5=1 6=1152 7=128 9=1
Convolution              conv6              1 1 conv5 conv6 0=128 1=1 5=1 6=16384 9=1

I added two layers in front of the model and two behind it. The parameters of the four new conv layers are randomly initialized; in actual use, only the original, effective middle part is inferred:

ex.input("conv2", data);
ex.extract("conv4", output);

Without knowing the real input and output blobs, others cannot use the plaintext model, but I can.

For an even nastier example, I changed it to this:

MyConvolution            conv1              1 1 data conv1
MyBatchNorm              norm1              1 1 conv1 norm1

My custom ops are called MyConvolution and MyBatchNorm, but internally they call ncnn's low-level op API to perform ConvolutionDepthWise and Convolution respectively.

https://github.com/Tencent/ncnn

MyConvolution hardcodes  0=64 1=3 3=2 4=1 5=1 6=576 7=64 9=1  in its implementation, and MyBatchNorm hardcodes  0=128 1=1 5=1 6=8192 9=1  in its implementation.

So even someone who sees the plaintext param is easily deceived by the names into thinking only a single convolution is performed. How wicked!

0x6 welcome to join the QQ group to learn more little-known strange skills 233

The QQ group number and entry password can be found in the readme on the ncnn GitHub homepage.

https://github.com/Tencent/ncnn


Posted by capslock118 on Thu, 05 May 2022 09:03:14 +0300