An Intuitive Formulation of Model Predictive Control
Feb 3, 2026
The MPC problem can be boiled down to the following formulation:
$$
\begin{aligned}
\min_{u} \quad & \sum_{t \in [1, n]} j(u_t, x_t) = J(U, X) \\
\text{s.t.} \quad & x_{t+1} = f(x_t, u_t) \\
& x \in [x_{\min}, x_{\max}] \\
& u \in [u_{\min}, u_{\max}] \\
& x_0 \text{ given (current sensor values)}
\end{aligned}
$$
where
- $x$ is the state
- $u$ is the action
- $J$ is the cost function
- $f$ is the system dynamics (the model in question): given the current state and some action, it produces the state at the next timestep
Note that I am currently abusing notation quite significantly. In particular, we imagine U and X to be the joined vectors of actions and states over the entire prediction horizon, i.e.
$$U = \{u_1, u_2, \ldots, u_n\}, \quad X = \{x_1, x_2, \ldots, x_n\}$$
As a (not so arbitrary) example, we can imagine that $x$ is the state of a car, $u$ is the torque command provided by the driver, $f$ is the car dynamics, and $J$ is some cost function that penalizes deviation from a desired trajectory.
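To make the car example concrete, here is a minimal sketch of what $f$ and $j$ might look like. The double-integrator dynamics, the timestep, and the quadratic tracking cost are all illustrative assumptions, not part of the formulation:

```python
import numpy as np

# Hypothetical toy "car": state x = (position, velocity), action u = torque.
# f steps the state forward by dt using double-integrator dynamics.
def f(x, u, dt=0.1, mass=1.0):
    pos, vel = x
    return np.array([pos + vel * dt, vel + (u / mass) * dt])

# Stage cost j penalizes squared deviation from a reference position,
# plus a small penalty on control effort.
def j(u, x, pos_ref=1.0, effort_weight=0.01):
    pos, _vel = x
    return (pos - pos_ref) ** 2 + effort_weight * u ** 2
```

Any dynamics function and any stage cost with these signatures would fit the formulation equally well.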
Simplifying the formulation
We can observe pretty easily that
$$
\begin{aligned}
x_1 &= f(x_0, u_0) \\
x_2 &= f(x_1, u_1) = f(f(x_0, u_0), u_1) \\
&\ \ \vdots
\end{aligned}
$$
In other words, $X$ is entirely determined by $x_0$ and $U$. Thus, we can write $X$ as a function of $x_0$ and $U$, i.e.
$$X_f(x_0, U) = \{x_0,\ f(x_0, u_0),\ f(f(x_0, u_0), u_1),\ \ldots\}$$
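This substitution is easy to sketch in code: a rollout that, given only $x_0$ and an action sequence $U$, reproduces the entire trajectory. The scalar dynamics here are an arbitrary stand-in:

```python
# Toy scalar dynamics, an assumption for illustration: x_{t+1} = 0.9 x_t + u_t
def f(x, u):
    return 0.9 * x + u

def rollout(x0, U):
    """X_f(x0, U): the full state trajectory, determined by x0 and U alone."""
    X = [x0]
    for u in U:
        X.append(f(X[-1], u))  # x_{t+1} = f(x_t, u_t)
    return X

# Example: three actions yield the trajectory {x0, x1, x2, x3}
# rollout(1.0, [0.0, 0.0, 0.0])  ->  approximately [1.0, 0.9, 0.81, 0.729]
```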
Then, we can write the cost as
$$\hat{J}(U) = J(U, X_f(x_0, U))$$
and thus reduce the problem to an optimization over just U.
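Putting the pieces together, here is a minimal sketch of the reduced problem: roll the dynamics forward from $x_0$, sum the stage costs, and hand $\hat{J}$ to an off-the-shelf optimizer. The scalar dynamics, quadratic stage cost, and choice of `scipy.optimize.minimize` are all assumptions for illustration, not prescribed by the formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy scalar dynamics and stage cost (illustrative assumptions).
def f(x, u):
    return 0.9 * x + u

def j(u, x, x_ref=1.0):
    return (x - x_ref) ** 2 + 0.01 * u ** 2

def J_hat(U, x0=0.0):
    """J_hat(U) = J(U, X_f(x0, U)): roll forward, accumulate stage costs."""
    x, total = x0, 0.0
    for u in U:
        x = f(x, u)        # x_{t+1} = f(x_t, u_t)
        total += j(u, x)   # accumulate the stage cost
    return total

n = 10                                        # prediction horizon
result = minimize(J_hat, np.zeros(n),         # initial guess for U
                  bounds=[(-1.0, 1.0)] * n)   # u in [u_min, u_max]
U_opt = result.x                              # optimized action sequence
```

The state constraints $x \in [x_{\min}, x_{\max}]$ are omitted here for brevity; in practice they would be passed to the solver as nonlinear constraints on the rolled-out trajectory.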