ADCS.controller.plan_and_track_lqr_disturbed module¶
- class ADCS.controller.plan_and_track_lqr_disturbed.Plan_and_Track_LQR_Disturbed(est_sat, planner_settings, dist_gain=0.1)[source]¶
Bases: PlanAndTrackBase

Plan-and-Track TVLQR controller with integrated disturbance estimation.
This controller extends the standard Plan-and-Track framework by augmenting the tracking controller with an explicit disturbance estimate. Trajectories are planned using the ALTRO planner with the findKwDist formulation, which produces time-varying LQR gains compatible with an augmented-state system that includes a constant disturbance model. The controller then executes the trajectory in closed loop using TVLQR feedback while continuously updating an internal estimate of the unknown disturbance torque.
Relationship to the base framework¶
This class derives from PlanAndTrackBase and therefore reuses:
- Planner construction via _init_planner().
- Environment propagation via _propagate_environment().
- Common trajectory optimization via _calculate_trajectory_common().
Disturbance-augmented dynamics¶
Let the nominal discrete-time attitude dynamics (after linearization about the planned trajectory) be
\[\mathbf{x}_{k+1} = A_k \mathbf{x}_k + B_k \mathbf{u}_k,\]

where \(\mathbf{x}\) is the attitude state and \(\mathbf{u}\) is the control input. A constant disturbance torque \(\mathbf{d} \in \mathbb{R}^3\) enters additively through a disturbance matrix \(C_k\):

\[\mathbf{x}_{k+1} = A_k \mathbf{x}_k + B_k \mathbf{u}_k + C_k \mathbf{d}.\]

The disturbance is modeled as constant:

\[\dot{\mathbf{d}} = \mathbf{0}.\]

Defining the augmented state

\[\begin{split}\tilde{\mathbf{x}}_k = \begin{bmatrix} \mathbf{x}_k \\ \mathbf{d}_k \end{bmatrix},\end{split}\]

the augmented dynamics become

\[\begin{split}\tilde{\mathbf{x}}_{k+1} = \begin{bmatrix} A_k & C_k \\ 0 & I \end{bmatrix} \tilde{\mathbf{x}}_k + \begin{bmatrix} B_k \\ 0 \end{bmatrix} \mathbf{u}_k.\end{split}\]

The findKwDist formulation used by the planner computes TVLQR gains \(K_k\) for this augmented system, enabling feedback laws of the form

\[\mathbf{u}_k = \mathbf{u}_k^\ast - K_k \left( \tilde{\mathbf{x}}_k - \tilde{\mathbf{x}}_k^\ast \right),\]

where \((\cdot)^\ast\) denotes the planned nominal trajectory.
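The block structure of the augmented system can be sketched directly with NumPy. This is a minimal illustration under stated assumptions: the helper name `augment_dynamics` is our own, not part of the module, and `n`/`m` denote the nominal state and control dimensions.

```python
import numpy as np

def augment_dynamics(A_k, B_k, C_k):
    """Build the disturbance-augmented system matrices.

    A_k: (n, n) linearized state-transition matrix
    B_k: (n, m) control-input matrix
    C_k: (n, 3) disturbance-input matrix
    Returns (A_aug, B_aug) with shapes (n+3, n+3) and (n+3, m).
    """
    n, m = B_k.shape
    A_aug = np.block([
        [A_k,              C_k],
        [np.zeros((3, n)), np.eye(3)],  # d_{k+1} = d_k (constant disturbance)
    ])
    B_aug = np.vstack([B_k, np.zeros((3, m))])  # control does not act on d
    return A_aug, B_aug
```

TVLQR gains synthesized for `(A_aug, B_aug)` then act on the stacked state \([\mathbf{x}_k;\, \mathbf{d}_k]\), which is what gives the feedback law its disturbance-aware term.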
Online disturbance adaptation¶
In addition to the planner-provided augmented gains, this controller maintains a simple integrator-based disturbance estimate \(\hat{\mathbf{d}}\). At each control step, the estimate is updated using the angular velocity tracking error:
\[\hat{\mathbf{d}}_{k+1} = \hat{\mathbf{d}}_k + \gamma \left( \boldsymbol{\omega}_k - \boldsymbol{\omega}_k^\ast \right) \Delta t,\]

where:
- \(\boldsymbol{\omega}_k\) is the measured angular velocity,
- \(\boldsymbol{\omega}_k^\ast\) is the nominal angular velocity from the trajectory,
- \(\gamma\) is a user-selected disturbance adaptation gain,
- \(\Delta t\) is the TVLQR discretization step.
This estimate is injected into the active trajectory prior to computing the tracking control.
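A minimal sketch of this integrator update (the function name is illustrative, not the module's API):

```python
import numpy as np

def update_disturbance_estimate(d_hat, w_meas, w_nom, gamma, dt):
    """One integrator step: d_hat <- d_hat + gamma * (w - w*) * dt."""
    return d_hat + gamma * (w_meas - w_nom) * dt
```

A persistent angular-velocity tracking error thus accumulates into \(\hat{\mathbf{d}}\), which the tracking law then cancels.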
Intended use¶
This controller is suited for scenarios with approximately constant or slowly varying external torques, such as:
- Residual magnetic dipole moments.
- Solar radiation pressure biases.
- Unmodeled reaction wheel friction.
- param est_sat:
Estimated satellite model with actuators and sensors.
- type est_sat:
EstimatedSatellite
- param planner_settings:
ALTRO trajectory planner configuration bundle.
- type planner_settings:
PlannerSettings
- param dist_gain:
Scalar gain controlling the disturbance adaptation rate.
- type dist_gain:
float
- return:
None.
- rtype:
None
- calculate_trajectory(t_start, duration, x_0, os_0, goals, verbose=False)[source]¶
Plan an optimal trajectory using ALTRO with disturbance-aware TVLQR gains.
This method invokes the shared planning pipeline provided by _calculate_trajectory_common() and returns a trajectory configured for disturbance estimation.

Trajectory contents¶
The resulting Trajectory object contains:
- Nominal state and control sequences.
- Time-varying LQR gains compatible with the augmented disturbance model.
- Auxiliary planner data needed for tracking.

The trajectory is marked with use_disturbance_estimation = True, which enables disturbance-related logic inside the trajectory during tracking.
- param t_start:
Planning start time in J2000 centuries.
- type t_start:
float
- param duration:
Planning horizon length in seconds.
- type duration:
float
- param x_0:
Initial state vector used to seed the optimizer.
- type x_0:
numpy.ndarray
- param os_0:
Initial orbital state used to seed environment propagation.
- type os_0:
Orbital_State
- param goals:
Goal list defining the pointing objectives.
- type goals:
GoalList
- param verbose:
If True, enable planner verbosity and diagnostic output.
- type verbose:
bool
- return:
Planned trajectory with disturbance-aware TVLQR gains.
- rtype:
Trajectory
- Parameters:
t_start (float)
duration (float)
x_0 (ndarray)
os_0 (Orbital_State)
goals (GoalList)
verbose (bool)
- Return type:
Trajectory
- find_u(x_hat, sens, est_sat, os_hat, goal=None)[source]¶
Compute TVLQR tracking control with disturbance compensation.
This method applies time-varying LQR feedback around the active trajectory while compensating for an estimated constant disturbance torque.
Control law¶
Let \(t\) be the current time extracted from os_hat.J2000. The controller computes the nominal tracking control using the active trajectory:

\[\mathbf{u}(t) = \mathbf{u}^\ast(t) - K(t) \left( \mathbf{x}(t) - \mathbf{x}^\ast(t) \right) - \hat{\mathbf{d}}(t),\]
where:
- \(\mathbf{u}^\ast(t)\) is the nominal control,
- \(K(t)\) is the TVLQR gain,
- \(\hat{\mathbf{d}}(t)\) is the current disturbance estimate.
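The feedback law, including the elementwise clipping to actuator limits applied afterwards, might look like this in NumPy. All names here are illustrative (this is not the module's internal code), and `umax` stands in for planner_settings.umax:

```python
import numpy as np

def tvlqr_control(u_nom, K, x, x_nom, d_hat, umax):
    """u = u* - K (x - x*) - d_hat, clipped elementwise to +/- umax."""
    u = u_nom - K @ (x - x_nom) - d_hat
    return np.clip(u, -umax, umax)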
Disturbance update¶
After computing the control, the disturbance estimate is updated using the angular velocity error according to
\[\]hat{mathbf{d}} leftarrow hat{mathbf{d}} + gamma left(boldsymbol{omega} - boldsymbol{omega}^ast right) Delta t.
Actuator limits¶
The resulting control vector is clipped elementwise to the actuator limits specified by
planner_settings.umax.Validity checks¶
A runtime error is raised if no active trajectory is set or if the current time lies outside the trajectory’s valid time window, as determined by
is_valid_time().- param x_hat:
Estimated state vector.
- type x_hat:
numpy.typing.NDArray[numpy.float64]
- param sens:
Sensor measurement vector. Not directly used.
- type sens:
numpy.typing.NDArray[numpy.float64]
- param est_sat:
Estimated satellite model.
- type est_sat:
- param os_hat:
Estimated orbital state providing the current time.
- type os_hat:
- param goal_vector_eci:
Goal direction in ECI frame. Not used.
- type goal_vector_eci:
typing.Optional[numpy.typing.NDArray[numpy.float64]]
- param w_ref:
Reference angular velocity. Not used.
- type w_ref:
typing.Optional[numpy.typing.NDArray[numpy.float64]]
- return:
Control vector after TVLQR feedback and disturbance compensation.
- rtype:
numpy.typing.NDArray[numpy.float64]
- Parameters:
x_hat (ndarray[tuple[int, ...], dtype[float64]])
sens (ndarray[tuple[int, ...], dtype[float64]])
est_sat (EstimatedSatellite)
os_hat (Orbital_State)
goal (Goal | None)
- Return type:
ndarray[tuple[int, …], dtype[float64]]
- reset_disturbance_estimate()[source]¶
Reset the internal disturbance estimate to zero.
This method sets the disturbance estimate back to
\[\]hat{mathbf{d}} = mathbf{0},
effectively disabling disturbance compensation until the estimate is built up again through subsequent tracking updates.
- Returns:
None.
- Return type:
None
- dist_estimate: NDArray[np.float64]¶
- dist_gain: float¶
- Parameters:
est_sat (EstimatedSatellite)
planner_settings (PlannerSettings)
dist_gain (float)