A detailed explanation of decision monotonicity

What decision monotonicity says

For two decision points j < j': if at state i the larger decision point is already at least as good as the smaller one, then at every state in [i+1, n] the larger decision point remains at least as good.

When proving this, a formal argument is usually unnecessary: it is enough to print a table and check that the cost array is "convex", i.e. that for j < j+1 <= i < i+1 it satisfies the quadrangle inequality

    w(j, i) + w(j+1, i+1) <= w(j, i+1) + w(j+1, i)

("crossing" costs no more than "including").


1D/1D dynamic programming optimization

A first look at optimizing 1D/1D dynamic programming
So-called 1D/1D dynamic programming refers to equations with O(n) states, each of which has O(n) candidate decisions.
Solving such an equation directly takes O(n^2) time, but with some reasonable reorganization most of these
recurrences can be optimized to O(n log n) or even O(n). Here I want to discuss some of the more elementary
classic optimization techniques.
This article does not aim at heavy proofs and derivations; it mainly explains how the classic models are set up, transformed, and solved.
My knowledge is limited, so if there are mistakes or omissions, please point them out. I also hope
more experienced readers will introduce us to deeper material on dynamic programming optimization.
Two notations are used for a function: f(x) and f[x]. A value written with square brackets can be computed
entirely before the DP runs (a constant), while one written with parentheses must be computed during the DP (a variable). Either way,
once a function value has been determined, it never changes in later computation.

We usually reason along the line "what is the optimal decision for f(x)?". Let us switch perspective:
for an already computed state f(j), ask "which states can f(j) update?". With this approach, at intermediate
steps some states may not yet hold their optimal decision, but when the algorithm ends, the decision recorded for every state is guaranteed optimal.
At first only the value of f(1) has been computed, so the current optimal decision of every state is 1.
Now the value of f(2) is clearly determined: its optimal decision can only be 1. We then use decision 2 to update the decision
table. By decision monotonicity, the new decision table can only have this form:

This means we can binary search for the "turning point": if at some state x decision 2 is
better, then decision 2 is better at every state larger than x; and if decision 1 is better at x, then at every state smaller than x
decision 1 is better.
Once decisions 1 and 2 have been processed, f(3) is determined. Now use decision 3 to update all states.
By decision monotonicity, the current decision table can take only the following two forms;
a decision table like this can never occur:
Our update algorithm is then:
1. Check whether decision 3 is better at the start b of decision 2's interval [b, e]. If it is, discard decision 2 entirely and assign its
interval to decision 3; if not, binary search for the turning point inside decision 2's interval [b, e].
2. If the answer in step 1 was "yes", examine decision 1 in the same way.
At this point the implementation of the decision-monotonicity algorithm should be clear: maintain the decisions with a stack,
where each element stores the start and end positions of one decision's interval. These intervals are contiguous and increasing. When
a new decision is inserted, scan the stack from back to front; for each old decision, do two things:
1. If the new decision is better at the old decision's starting point, pop the stack, discard the old decision entirely, merge its interval
into the new decision's, and continue scanning with the next decision.
2. If the old decision is still better at its starting point, binary search for the turning point inside the old decision's interval;
then push the new decision and stop.
Since a decision never re-enters the stack after being popped, the stack operations cost amortized O(1); but because of the binary search,
the total time complexity of the algorithm is O(n log n).

NOI2009 "Poet Xiao G": analysis of the corresponding 1D/1D dynamic programming optimization
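For reference, the recurrence the code below optimizes, reconstructed from its calc function (sum[i] is the total length of the first i sentences, L the target line length, P the given exponent), is:

```latex
f(i) = \min_{0 \le j < i} \left( f(j) + \bigl|\, \mathrm{sum}[i] - \mathrm{sum}[j] + (i - j - 1) - L \,\bigr|^{P} \right)
```

The term i - j - 1 counts the spaces between the i - j sentences placed on one line.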

//Don't submit this to Luogu -- its data for this problem is broken; submit on CodeVS instead
#include<cstdio>
#include<cstring>
#define ll long double
//q is the decision stack: for each element, l is the first state the decision covers,
//r is the last state it covers, and p is the decision value itself
struct node{int l,r,p;}q[100100];
#define MAX 1000000000000000000LL
#define N 100100
ll sum[N],f[N];
int n,l,p,T;
char ch[35];
//computes |y|^p
ll pow(ll y){
    if(y<0)y=-y;
    ll ans=1;
    for (int i=1;i<=p;i++) ans*=y;
    return ans;
}
//value of state y taken with decision x: f[x] plus the disharmony of the line holding sentences x+1..y
ll calc(int x,int y){
    return f[x]+pow(sum[y]-sum[x]+(y-x-1)-l);
}
//binary search for the turning point: t is the old decision on the stack, x is the new decision value;
//returns the first position inside t's interval where the new decision is at least as good
int find(node t,int x){
    int l=t.l,r=t.r;
    while(l<=r){//while there are still positions to examine
        int mid=(l+r)>>1;
        //the new decision already wins at mid, so the turning point is at mid or earlier
        if (calc(x,mid)<=calc(t.p,mid)) r=mid-1;
        //the old decision still wins at mid, so search to the right
        else l=mid+1;
    }
    //l is now the first position where the new decision wins
    return l;
}
void dp(){
    //simulate the stack with an array: head=1, tail=0 means empty
    int head=1,tail=0;
    //push the initial decision: decision 0 covers [0,n]
    q[++tail]=(node){0,n,0};
    //compute f[i] for every i
    for (int i=1;i<=n;i++){
        //if the interval of the decision at the stack head ends before i, that decision can no
        //longer serve i, so advance the head (head<=tail guarantees the stack is not empty)
        if(q[head].r<i&&head<=tail) head++;
        //by decision monotonicity, the decision at the head of the stack is optimal for i
        f[i]=calc(q[head].p,i);
        //i itself now becomes a candidate decision for the states after it, since f[i] is known.
        //calc(i,n) is f[n] computed through decision i; calc(q[tail].p,n) is f[n] computed
        //through the last stacked decision. If the new decision is at least as good at n,
        //it must own some suffix of the states and has to be inserted; otherwise it is
        //nowhere better than the stacked decisions and can be discarded outright.
        //For example, when i==3 we compute f[3] through q[head].p; if, with n=4,
        //calc(3,4)<=calc(2,4), the new decision 3 is better at the far end, and we must
        //find where its interval begins inside decision 2's interval.
        if (calc(i,n)<=calc(q[tail].p,n)){
            //pop every old decision that the new one already beats at its own starting point:
            //calc(q[tail].p,q[tail].l) is the old decision's value at its starting point and
            //calc(i,q[tail].l) the new decision's value there; if the new one is no worse,
            //the old decision is dominated on its whole interval
            while(head<=tail&&calc(q[tail].p,q[tail].l)>=calc(i,q[tail].l)) tail--;
            //head>tail: the stack is empty, so the new decision is best everywhere; push it over [i,n]
            if(head>tail)q[++tail]=(node){i,n,i};
            else{
                //otherwise binary search the start of the new decision's interval
                int x=find(q[tail],i);
                //truncate the surviving old decision to end at x-1
                q[tail].r=x-1;
                //push the new decision over [x,n]
                q[++tail]=(node){x,n,i};
            }
        }
    }
}
int main(){
    scanf("%d",&T);
    while(T--){
        scanf("%d%d%d",&n,&l,&p);
        for (int i=1;i<=n;i++) scanf("%s",ch),sum[i]=sum[i-1]+strlen(ch);
        dp();
        if(f[n]>MAX)
            puts("Too hard to arrange");
        else
            printf("%lld\n",(long long)f[n]);
        puts("--------------------");
    }
    return 0;
}

Tags: Algorithm Dynamic Programming

Posted by Geof on Sun, 22 May 2022 09:04:14 +0300