torchfilter.data
Dataset utilities for learning & evaluating state estimators in PyTorch.
Package Contents
Classes
- ParticleFilterMeasurementDataset – A dataset interface for pre-training particle filter measurement models.
- SingleStepDataset – A dataset interface that returns single-step training examples.
- SubsequenceDataset – A data preprocessor for producing training subsequences from a list of trajectories.
Functions
- split_trajectories – Helper for splitting a list of trajectories into a list of overlapping subsequences.
- class torchfilter.data.ParticleFilterMeasurementDataset(trajectories: List[types.TrajectoryNumpy], *, covariance: np.ndarray, samples_per_pair: int, **kwargs)[source]
Bases: torch.utils.data.Dataset
A dataset interface for pre-training particle filter measurement models.
Centers Gaussian distributions around our ground-truth states, and provides examples for learning the log-likelihood.
- Parameters:
trajectories (List[torchfilter.types.TrajectoryNumpy]) – List of trajectories.
- Keyword Arguments:
covariance (np.ndarray) – Covariance of Gaussian PDFs.
samples_per_pair (int) – Number of training examples to provide for each state/observation pair. Half of these will typically be generated close to the example, and the other half far away.
- __getitem__(self, index) → Tuple[types.StatesNumpy, types.ObservationsNumpy, np.ndarray] [source]
Get a state/observation/log-likelihood sample from our dataset. Nominally, we want our measurement model to predict the returned log-likelihood as the log-PDF of the p(observation | state) distribution.
- Parameters:
index (int) – Subsequence number in our dataset.
- Returns:
tuple – (state, observation, log-likelihood) tuple.
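As a rough illustration of the sampling scheme described above, the following is a hypothetical 1-D sketch (not the library implementation, which handles full state vectors and an np.ndarray covariance; the function name and near/far scaling factor here are assumptions):

```python
import math
import random

def measurement_examples(true_state, observation, variance, samples_per_pair, rng):
    """Sample perturbed states around a ground-truth state and label each
    with the Gaussian log-PDF a measurement model should learn to predict."""
    examples = []
    for i in range(samples_per_pair):
        # First half of the samples: perturb with the nominal covariance
        # (close to the example); second half: 5x wider (far away).
        scale = 1.0 if i < samples_per_pair // 2 else 5.0
        sampled_state = rng.gauss(true_state, scale * math.sqrt(variance))
        # Gaussian log-PDF of the sampled state under N(true_state, variance).
        log_likelihood = (
            -0.5 * (sampled_state - true_state) ** 2 / variance
            - 0.5 * math.log(2.0 * math.pi * variance)
        )
        examples.append((sampled_state, observation, log_likelihood))
    return examples

rng = random.Random(0)
examples = measurement_examples(0.0, 0.3, variance=0.5, samples_per_pair=10, rng=rng)
print(len(examples))  # 10 (sampled_state, observation, log_likelihood) tuples
```

Each returned tuple mirrors the (state, observation, log-likelihood) shape of __getitem__ above.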
- class torchfilter.data.SingleStepDataset(trajectories: List[types.TrajectoryNumpy])[source]
Bases: torch.utils.data.Dataset
A dataset interface that returns single-step training examples:
(previous_state, state, observation, control)
By default, extracts these examples from a list of trajectories.
- Parameters:
trajectories (List[torchfilter.types.TrajectoryNumpy]) – List of trajectories.
- __getitem__(self, index: int) → Tuple[types.StatesNumpy, types.StatesNumpy, types.ObservationsNumpy, types.ControlsNumpy] [source]
Get a single-step prediction sample from our dataset.
- Parameters:
index (int) – Subsequence number in our dataset.
- Returns:
tuple – (previous_state, state, observation, control) tuple that contains data for a single subsequence. Each tuple member should be either a numpy array or dict of numpy arrays with shape (subsequence_length, ...).
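To illustrate how single-step examples relate to a trajectory, here is a hypothetical stdlib-only sketch (not the library source; the function name is an assumption): each pair of adjacent timesteps yields one (previous_state, state, observation, control) training example.

```python
def single_step_examples(states, observations, controls):
    """Extract one training example per adjacent pair of timesteps."""
    examples = []
    for t in range(1, len(states)):
        # Pair the state at t-1 with the state, observation, and control at t.
        examples.append((states[t - 1], states[t], observations[t], controls[t]))
    return examples

states = [0.0, 1.0, 2.0, 3.0]
observations = [0.1, 1.1, 2.1, 3.1]
controls = [10, 11, 12, 13]
examples = single_step_examples(states, observations, controls)
print(examples[0])  # (0.0, 1.0, 1.1, 11)
```

A trajectory with T timesteps therefore produces T - 1 single-step examples.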
- torchfilter.data.split_trajectories(trajectories: List[types.TrajectoryNumpy], subsequence_length: int) → List[types.TrajectoryNumpy] [source]
Helper for splitting a list of trajectories into a list of overlapping subsequences.
For each trajectory, assuming a subsequence length of 10, this function includes in its output overlapping subsequences corresponding to timesteps…
[0:10], [10:20], [20:30], ...
as well as…
[5:15], [15:25], [25:35], ...
- Parameters:
trajectories (List[torchfilter.types.TrajectoryNumpy]) – List of trajectories.
subsequence_length (int) – # of timesteps per subsequence.
- Returns:
List[torchfilter.types.TrajectoryNumpy] – List of subsequences.
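The windowing pattern documented above can be sketched with index ranges alone; this is a hypothetical illustration (not the library implementation, which also slices the underlying arrays): full windows start at multiples of subsequence_length, plus a second pass offset by half a window.

```python
def split_windows(trajectory_length, subsequence_length):
    """Return sorted (start, end) index pairs for overlapping subsequences."""
    offsets = (0, subsequence_length // 2)  # aligned pass, then half-offset pass
    windows = []
    for offset in offsets:
        for start in range(
            offset, trajectory_length - subsequence_length + 1, subsequence_length
        ):
            windows.append((start, start + subsequence_length))
    return sorted(windows)

print(split_windows(40, 10))
# [(0, 10), (5, 15), (10, 20), (15, 25), (20, 30), (25, 35), (30, 40)]
```

This sketch only keeps full-length windows; trailing timesteps shorter than subsequence_length are dropped.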
- class torchfilter.data.SubsequenceDataset(trajectories: List[types.TrajectoryNumpy], subsequence_length: int)[source]
Bases: torch.utils.data.Dataset
A data preprocessor for producing training subsequences from a list of trajectories.
Thin wrapper around torchfilter.data.split_trajectories().
- Parameters:
trajectories (list) – List of trajectories, where each is a tuple of (states, observations, controls). Each tuple member should be either a numpy array or dict of numpy arrays with shape (T, ...).
subsequence_length (int) – # of timesteps per subsequence.
- __getitem__(self, index: int) → types.TrajectoryNumpy [source]
Get a subsequence from our dataset.
- Parameters:
index (int) – Subsequence number in our dataset.
- Returns:
tuple – (states, observations, controls) tuple that contains data for a single subsequence. Each tuple member should be either a numpy array or dict of numpy arrays with shape (subsequence_length, ...).
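The dataset interface above can be mimicked with a minimal stdlib-only sketch; the class name is hypothetical and, for brevity, it only produces the non-overlapping windows rather than delegating to torchfilter.data.split_trajectories() as the real class does:

```python
class SubsequenceSketch:
    """Split trajectories into fixed-length chunks up front, then let
    __getitem__ index into the resulting flat list of subsequences."""

    def __init__(self, trajectories, subsequence_length):
        self.subsequences = []
        for states, observations, controls in trajectories:
            for start in range(
                0, len(states) - subsequence_length + 1, subsequence_length
            ):
                end = start + subsequence_length
                self.subsequences.append(
                    (states[start:end], observations[start:end], controls[start:end])
                )

    def __len__(self):
        return len(self.subsequences)

    def __getitem__(self, index):
        # Return one (states, observations, controls) subsequence.
        return self.subsequences[index]

trajectory = (list(range(20)), list(range(100, 120)), list(range(200, 220)))
dataset = SubsequenceSketch([trajectory], subsequence_length=10)
print(len(dataset))  # 2 subsequences of 10 timesteps each
```

Because __len__ and __getitem__ follow the torch.utils.data.Dataset protocol, a dataset built this way can be fed directly to a PyTorch DataLoader for minibatched training.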