Global Trajectory Optimization

The most up-to-date version of this code is global_opt_compiled.py in the feb-system-integration repo.

Notes

  • I have found WORHP to be more reliable in some funny cases where the points are distributed unevenly, but I know that's a huge pain to install, so don't worry about it too much. More testing is needed.
  • This code is closer to a motion planning algorithm than a path planning one. Really, the path planning bit is handled by the cone ordering algorithm, which takes the sets of blue and yellow cones and finds what the actual track looks like. This then finds the optimal trajectory through that track.

API

Here’s how to use it. See below for how it works.

constructor

After importing, create a new global opt object:

g = GlobalOptCompiled(N)

where N is the number of points to expect in the discretized path.

You can also pass some other options. Here’s the full constructor:

GlobalOptCompiled(N, 
                  nlp_solver='ipopt', # solver library. use 'worhp' if you can
                  solver_opts=None,   # options for solver (defaults are good)
                  car_params={
                      'l_r': 1.4987,  # length from CG to rear axle (m)
                      'l_f': 1.5213,  # length from CG to front axle (m)
                      'm'  : 1.0,     # mass (1.0 => force=acc; don't change)
                  },
                  bbox = {
                      'l': 2.7+0.5,   # length of car bounding box plus tolerance
                      'w': 1.6+0.5,   # width of car bounding box plus tolerance
                  },
                  DF_MIN  =      - 0.5, # min steering angle
                  DF_MAX  =        0.5, # max steering angle
                  ACC_MIN =      - 3.0, # max braking acceleration (negative) 
                  ACC_MAX =        2.0, # max forward acceleration
                  ACC_MAX_FN = ACC_MAX, # also limit fwd acc < ACC_MAX_FN(velocity)
                  DF_DOT_MIN =   - 0.5, # max steering derivative (left) (rad/s)
                  DF_DOT_MAX =     0.5, # max steering derivative (right) (rad/s)
                  V_MIN =          0.0, # minimum forward velocity
                  V_MAX =         25.0, # maximum forward velocity
                  FRIC_MAX =      12.0) # maximum acceleration (based on tire grip)

FRIC_MAX and ACC_MAX_FN should be of type casadi.Function with the signature (i0)->(o0) (i.e., one input and one output). They are functions of velocity, in m/s. FRIC_MAX can also be passed as a float, which is treated as a constant function.
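
For example, a velocity-dependent friction limit could be built like this (a sketch; the 0.05 aero coefficient is made up purely for illustration):

import casadi as ca

# Hypothetical friction limit: a 12 m/s^2 base grip plus a made-up aero term.
v = ca.SX.sym('v')
fric_max_fn = ca.Function('fric_max', [v], [12.0 + 0.05 * v**2])

g = GlobalOptCompiled(N, FRIC_MAX=fric_max_fn)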

Note

The best way to get the settings is through the GlobalOptSettings class. This way it’s all standardized.

To use the GlobalOptSettings class, do this:

from global_opt_settings import GlobalOptSettings
g = GlobalOptCompiled(**GlobalOptSettings)

code generation

To compile a solver, use this method:

g.construct_solver(
		generate_c=False, 
		compile_c=False, 
		use_c=False, 
		gcc_opt_flag='-Ofast'
)

If generate_c is true, it will generate new C code for the solver, overwriting the previous code.

If compile_c is true, it will compile the (new or old) C code for the solver.

If use_c is true, it will import the (new or old) binary and use that as the solver. Otherwise it will just use the original expression graph evaluated in CasADi’s virtual machines.

gcc_opt_flag sets the optimization level used by GCC. The default, -Ofast, is maximum optimization, but this can take a long time to compile (on the order of minutes) and also sometimes fails with cryptic C errors about symbols not being found. If that happens, take it one tick down to -O3, which doesn’t sacrifice much performance and fixes the issue. If you’re debugging and just need faster compile times, set it to -O2, -O1, or even an empty string for the fastest possible compile.

If you already have a compiled solver and don’t want to compile or construct one from scratch, you can just call the g.load_solver() method. This doesn’t go through the effort of setting up the whole expression graph, so it’s faster.

Make sure the N of the compiled solver and the N you passed to the constructor are the same!
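
Putting it together, a typical workflow might look like this (a sketch using the methods above):

# First run: build the expression graph, generate C code, compile it, and use it.
g = GlobalOptCompiled(N)
g.construct_solver(generate_c=True, compile_c=True, use_c=True)

# Later runs with the same N: skip construction and just load the compiled solver.
g = GlobalOptCompiled(N)
g.load_solver()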

solving the problem

To run the solver, use g.solve:

g.solve(left, right)

where left and right are the left and right sides of the track. They should be of shape (N, 2), where N is the same as the one you passed in to the constructor. These points determine the discretization of the track - the optimizer simply linearly interpolates between corresponding pairs of points. Choosing these points given a path of cones is a good deal of effort. Yash's cone ordering algorithm will be the final answer to this problem, but for now I've implemented an okay solution with the PairPath class in MPC/custom_opt/pairpath.py.

PairPath uses the vertices of a Delaunay triangulation to find the center, then uses that to order and evenly distribute points.

The solve will return a dictionary with three items:

  • z is the full state of the car. It has shape (N, 4), and its format is $(x, y, \psi, v)$ (see Variable Conventions below).
  • u is the control inputs. It has shape (N, 2), and its format is $(a, \delta_f)$: forward acceleration and steering angle.
  • t is the timestamps corresponding to each element of z and u. It is always an increasing series, but it is not evenly spaced.
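
For example, a quick sketch of reading the result:

res = g.solve(left, right)
z, u, t = res['z'], res['u'], res['t']
speeds = z[:, 3]      # velocity at each path point
steering = u[:, 1]    # steering angle at each path point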

fixing the data

Data with inconsistent timesteps is quite difficult to deal with, so we need to fix that. Use the g.to_constant_tgrid method:

res = g.solve(left, right)
states, controls = g.to_constant_tgrid(dt=0.1, **res)

(enter whatever timestep you like for dt)

This returns two numpy arrays, here states and controls. states has shape (M, 4), and controls has shape (M, 2). These states and controls represent a real, feasible (to within the error of the integrator) path with even time steps of dt seconds.

How it works

Variables

We’re given the track in pairs of points defining the boundaries. We use these to create a parameterized path through the track in $(x, y)$ coordinates: the position at index $i$ is constrained to lie on the segment between the corresponding boundary points, $p_i = L_i + \alpha_i\,(R_i - L_i)$ with $\alpha_i \in [0, 1]$. We then add the rest of the state and controls at each point. The time between consecutive points, $\Delta t_i$, is also a variable.

Here’s an illustration:

[illustration]

We’re given the boundary points $L_i$ and $R_i$, $i = 0, \dots, N-1$. We set up the optimization problem:

$$\min_{w} \; \sum_{i=0}^{N-1} \Delta t_i \quad \text{subject to the dynamics, track, and car-limit constraints defined below}$$

Now let’s define all those variables and functions!

Let $w$ be all of our variables together (please excuse this abuse of notation):

$$w = (z_0, \dots, z_{N-1},\; u_0, \dots, u_{N-1},\; \Delta t_0, \dots, \Delta t_{N-1})$$

Variable Conventions

These are the same variable names as MPC. The state is $z = (x, y, \psi, v)$, where $x$ and $y$ are the location of the car, $\psi$ is the heading (measured from the $x$-axis), and $v$ is the velocity. The control inputs are $u = (a, \delta_f)$, where $a$ is the car's forward acceleration and $\delta_f$ is the angle of the front wheels.

Our cost function is simply the lap time:

$$J = \sum_{i=0}^{N-1} \Delta t_i$$

Constraints

Now we have to make our constraint functions. First, we’ll introduce the vehicle dynamics. Let $F(z_i, u_i, \Delta t_i)$ represent a function which integrates the dynamics over one timestep. Then the constraints are:

$$z_{i+1} = F(z_i, u_i, \Delta t_i), \qquad i = 0, \dots, N-1$$

where $z_i$ and $u_i$ are the state and control inputs at point $i$.

This function is implemented using the midpoint method in discrete_custom_integrator from MPC/sim/sim.py.
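
As a rough sketch (not the exact code in MPC/sim/sim.py), a midpoint step of the standard kinematic bicycle model, consistent with the yaw-rate expression used below, looks like this:

import casadi as ca

def bicycle_dynamics(z, u, l_f, l_r):
    # z = (x, y, psi, v), u = (a, delta_f); standard kinematic bicycle model.
    psi, v = z[2], z[3]
    a, df = u[0], u[1]
    beta = ca.atan(l_r / (l_f + l_r) * ca.tan(df))  # slip angle at the CG
    return ca.vertcat(v * ca.cos(psi + beta),
                      v * ca.sin(psi + beta),
                      v / l_r * ca.sin(beta),
                      a)

def midpoint_step(z, u, dt, l_f, l_r):
    # Midpoint method: evaluate the dynamics at a half-step prediction.
    k1 = bicycle_dynamics(z, u, l_f, l_r)
    return z + dt * bicycle_dynamics(z + (dt / 2) * k1, u, l_f, l_r)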

Now we also want to add constraints based on the car’s limits:

  • The car can only accelerate so fast, and that acceleration depends on the current motor speed.
    • $a \le \mathrm{ACC\_MAX\_FN}(v)$ for some function determined empirically using data from dynamics (plot: Pasted image 20240425095440.png).
  • The car can only brake so hard.
  • The steering angle is mechanically limited
  • the steering wheel can only be turned so fast
  • the tire friction is limited, so centripetal and driving acceleration must not exceed friction limits. These friction limits depend on velocity, since aero increases downforce.
    [example plot from the OpenLapSim documentation] We need to gather true data from dynamics.

These are all implemented fairly simply, using direct constraints on the input variables. The last one is a bit more involved:

Centripetal acceleration is given by $a_c = v\,\dot\psi$. $v$ is an optimization variable that we use directly, but $\dot\psi$ is nontrivial to compute. It is a function of steering angle and velocity. We can take it directly from the dynamics:

$$\dot\psi = \frac{v}{l_r} \sin\!\left(\arctan\!\left(\frac{l_r}{l_f + l_r} \tan\delta_f\right)\right)$$

which is where the expression for centripetal acceleration comes from:

ac = (self.v**2 / self.car_params['l_r']) * sin(arctan(self.car_params['l_r']/(self.car_params['l_f'] + self.car_params['l_r']) * tan(self.u[:, 1])))

Then, we can combine centripetal and forward acceleration with the Pythagorean theorem, since we know they’re orthogonal. This net acceleration can then be constrained to be less than some function of velocity.
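
Concretely, the resulting constraint at each point looks like this (a sketch, written in terms of the FRIC_MAX function passed to the constructor):

$$\sqrt{a_i^2 + a_{c,i}^2} \;\le\; \mathrm{FRIC\_MAX}(v_i)$$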

Cone Constraints

It turns out our car isn't just a point - it's a whole car! This means we have to make sure we don't run into the cones.
The naive solution is just to bound the interpolation parameter $\alpha_i$ to a smaller interval than $[0, 1]$, but it turns out that this doesn't work because the car has length as well as width. If we're going around a corner, we might hit cones with the front or back of the car, even if the sides are OK:

[illustration: the front corner of the car clipping a cone while its sides stay inside the track]

The fix for this issue is simple: we add a constraint saying that none of the points defining the track may fall within a certain bounding box around the car.

First, we transform the points into the reference frame of the car by taking the vector from the car to each track point and rotating it by the negative of the car's heading:

$$(\tilde x, \tilde y) = R(-\psi)\,\big(p_{\text{track}} - (x, y)\big) + (d,\, 0)$$

where $R$ is the 2D rotation matrix. The additional offset $d$ at the end is to correct for the difference between the car's center of mass and its centroid.

Once we have that, we just need to make sure the point lies outside the box: $|\tilde x| \ge l/2$ or $|\tilde y| \ge w/2$ (with $l$ and $w$ from the bbox constructor argument).

Unfortunately, I found that a strict rectangle like this confuses the car. If it runs head-on into a point, it doesn't know that it has to turn to get around it. Mathematically, this corresponds to the constraint encoding very little useful information in its derivative.

To fix this, the actual constraint is a smooth approximation of a rectangle:

$$\left(\frac{\tilde x}{l/2}\right)^6 + \left(\frac{\tilde y}{w/2}\right)^6 \ge 1$$

When plotted, we get this:

[plot of the approximate-rectangle constraint boundary]

The exponent of 6 was chosen empirically to give a very rectangular shape while still encoding some roundness in the derivative and keeping numbers low enough to prevent numerical instability.
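
A minimal sketch of this constraint for a single track point, using CasADi symbols (illustrative names, not the exact implementation, and omitting the CG/centroid offset for brevity):

import casadi as ca

x, y, psi = ca.SX.sym('x'), ca.SX.sym('y'), ca.SX.sym('psi')  # car pose
cx, cy = ca.SX.sym('cx'), ca.SX.sym('cy')                     # one track point
bbox = {'l': 2.7 + 0.5, 'w': 1.6 + 0.5}

# Rotate the car-to-point vector into the car's frame (rotation by -psi).
dx, dy = cx - x, cy - y
xt =  ca.cos(psi) * dx + ca.sin(psi) * dy
yt = -ca.sin(psi) * dx + ca.cos(psi) * dy

# Smoothed keep-out term; the NLP constrains this to be >= 1.
keep_out = (xt / (bbox['l'] / 2))**6 + (yt / (bbox['w'] / 2))**6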

This worked much better. To get runtime down even more, we also limit the cones we consider to those within 5 indices before and after the current index. This is possible because our progress along the track is entirely determined by our index in the track boundary pairs, unlike MPC, where it's very hard to quantify ahead of time which points are nearest.

This reduces the number of points we must consider from $O(N^2)$ to $O(N)$, which can be a massive improvement for large $N$.
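
For reference, a sketch of the index windowing (illustrative; the real code may handle the ends of the array differently, e.g. by wrapping around the lap):

# Only boundary pairs within 5 indices of path point i get cone constraints.
window = range(max(0, i - 5), min(N, i + 6))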

implementation

It’s all implemented using the low-level casadi.nlpsol interface. This is used rather than the Opti stack because it’s more fully featured, gives us more control, and, most importantly, has far better support for code generation, which significantly reduces init and solve times.
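
For context, a minimal sketch of the nlpsol + code-generation workflow on a toy problem (not the actual problem setup):

import casadi as ca

# Toy NLP standing in for the real one, just to show the workflow.
w = ca.MX.sym('w', 2)
nlp = {'x': w, 'f': (w[0] - 1)**2 + (w[1] - 2)**2, 'g': w[0] + w[1]}
solver = ca.nlpsol('solver', 'ipopt', nlp)

# Generate C code for the solver; compile it (e.g. gcc -fPIC -shared -Ofast
# nlp.c -o nlp.so) and reload it as the solver for faster evaluation.
solver.generate_dependencies('nlp.c')
# solver = ca.nlpsol('solver', 'ipopt', './nlp.so')

sol = solver(x0=[0, 0], lbg=-ca.inf, ubg=10)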