An Intuitive Formulation of Model Predictive Control

Feb 3, 2026

The problem of MPC can be boiled down to the following formulation:

$$\min_{\vec u} \left(\sum_{t\in[1,n]}j(\vec u_t, \vec x_t) = J(\mathcal U, \mathcal X)\right)\\ \vec x_{t+1} = f(\vec x_t, \vec u_t)\\ \vec x\in [\vec x_{min}, \vec x_{max}]\\ \vec u\in [\vec u_{min}, \vec u_{max}]\\ x_0\text{ given (current sensor values)}$$

where $\vec x_t$ is the system state at time $t$, $\vec u_t$ is the control input, $f$ is the (discrete-time) system dynamics, $j$ is the per-step cost, $J$ is the total cost over the prediction horizon, and $n$ is the horizon length.

Note that I am currently abusing notation quite significantly. In particular, we imagine $\mathcal U$ and $\mathcal X$ to be the joined vectors of actions and states over the entire prediction horizon, i.e.

$$\mathcal U = \{\vec u_1, \vec u_2, \ldots, \vec u_n\}\\ \mathcal X = \{\vec x_1, \vec x_2, \ldots, \vec x_n\}$$

As a (not so arbitrary) example, $\vec x$ could be the state of a car, $\vec u$ the torque commands provided by the driver, $f$ the car dynamics, and $J$ some cost function that penalizes deviation from a desired trajectory.
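To make the pieces concrete, here is a minimal Python sketch of $\vec x$, $\vec u$, $f$, $j$, and $J$. A 1D double integrator (position and velocity, with acceleration as the control) stands in for real car dynamics; the step size `DT`, the quadratic costs, and all names are illustrative assumptions, not part of the formulation above.

```python
import numpy as np

DT = 0.1  # discretization step (assumed)

def f(x, u):
    """Discrete-time dynamics x_{t+1} = f(x_t, u_t) for a 1D 'car'.

    State x = [position, velocity]; control u = acceleration.
    """
    pos, vel = x
    return np.array([pos + DT * vel, vel + DT * u])

def j(u, x, x_ref):
    """Per-step cost: deviation from a reference state plus control effort."""
    return np.sum((x - x_ref) ** 2) + 0.01 * u ** 2

def J(U, X, x_ref):
    """Total cost over the horizon: the sum of per-step costs."""
    return sum(j(u, x, x_ref) for u, x in zip(U, X))
```

For instance, `J([0.0], [np.array([0.0, 0.0])], np.array([1.0, 0.0]))` scores a single stationary state at the origin against a reference one unit away.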

Simplifying the formulation

We can observe pretty easily that

$$x_1=f(x_0,u_0)\\ x_2=f(x_1,u_1)=f(f(x_0,u_0),u_1)\\ \vdots$$

Specifically, $\mathcal X$ is entirely determined by $x_0$ and $\mathcal U$. Thus, we can write $\mathcal X$ as a function of $x_0$ and $\mathcal U$, i.e.

$$\mathcal X_f(x_0, \mathcal U) = \{x_0,\ f(x_0, u_0),\ f(f(x_0, u_0), u_1),\ \dots\}$$

Then, we can write the cost as

$$\hat J(\mathcal U) = J(\mathcal U, \mathcal X_f(x_0, \mathcal U))$$

and thus reduce the problem to an optimization over just $\mathcal U$.
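This simplification can be sketched in Python, again using a toy 1D double integrator as an assumed stand-in for $f$: a rollout computes $\mathcal X_f(x_0, \mathcal U)$ by repeatedly applying the dynamics, the reduced cost $\hat J$ depends only on $\mathcal U$, and a generic optimizer (here `scipy.optimize.minimize` with box bounds on $u$) minimizes over $\mathcal U$ alone. The horizon length, bounds, and reference state are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

DT = 0.1  # discretization step (assumed)

def f(x, u):
    # 1D double integrator: state [position, velocity], control = acceleration
    return np.array([x[0] + DT * x[1], x[1] + DT * u])

def rollout(x0, U):
    """X_f(x0, U): the trajectory is fully determined by x0 and the controls."""
    X = [np.asarray(x0, dtype=float)]
    for u in U:
        X.append(f(X[-1], u))
    return X

def J_hat(U, x0, x_ref):
    """Reduced cost: J evaluated on the rollout, a function of U alone."""
    X = rollout(x0, U)
    return sum(np.sum((x - x_ref) ** 2) + 0.01 * u ** 2
               for x, u in zip(X[1:], U))

x0 = np.array([0.0, 0.0])
x_ref = np.array([1.0, 0.0])            # drive the car to position 1, at rest
n = 20                                  # horizon length (assumed)
res = minimize(J_hat, np.zeros(n), args=(x0, x_ref),
               bounds=[(-2.0, 2.0)] * n)  # box constraints on u
X_opt = rollout(x0, res.x)              # optimized state trajectory
```

Note that the state constraints from the original formulation would have to be handled separately (e.g. as penalties), since folding $\mathcal X$ into $\hat J$ leaves only the bounds on $\vec u$ as explicit constraints.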