:orphan:

:mod:`torchfilter.data._particle_filter_measurement_dataset`
============================================================

.. py:module:: torchfilter.data._particle_filter_measurement_dataset

.. autoapi-nested-parse::

   Private module; avoid importing from directly.


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   torchfilter.data._particle_filter_measurement_dataset.ParticleFilterMeasurementDataset


.. py:class:: ParticleFilterMeasurementDataset(trajectories: List[types.TrajectoryNumpy], *, covariance: np.ndarray, samples_per_pair: int, **kwargs)

   Bases: :class:`torch.utils.data.Dataset`

   .. autoapi-inheritance-diagram:: torchfilter.data._particle_filter_measurement_dataset.ParticleFilterMeasurementDataset
      :parts: 1

   A dataset interface for pre-training particle filter measurement models.
   Centers Gaussian distributions around our ground-truth states, and provides
   examples for learning the log-likelihood.

   :param trajectories: List of trajectories.
   :type trajectories: List[torchfilter.types.TrajectoryNumpy]

   :keyword covariance: Covariance of the Gaussian PDFs.
   :kwtype covariance: np.ndarray
   :keyword samples_per_pair: Number of training examples to provide for each
       state/observation pair. Half of these will typically be generated close
       to the example, and the other half far away.
   :kwtype samples_per_pair: int

   .. method:: __getitem__(self, index) -> Tuple[types.StatesNumpy, types.ObservationsNumpy, np.ndarray]

      Get a state/observation/log-likelihood sample from our dataset.
      Nominally, we want our measurement model to predict the returned
      log-likelihood as the log-PDF of the ``p(observation | state)``
      distribution.

      :param index: Subsequence number in our dataset.
      :type index: int

      :returns: *tuple* -- ``(state, observation, log-likelihood)`` tuple.

   .. method:: __len__(self) -> int

      Total number of samples in the dataset.

      :returns: *int* -- Length of dataset.
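To illustrate the idea behind this dataset, here is a minimal NumPy sketch of
how one state/observation pair can be turned into training examples for a
measurement model: perturbed states are drawn near and far from the
ground-truth state, and each is paired with the Gaussian log-density of the
perturbation. This is an illustrative approximation, not the class's actual
implementation; ``gaussian_log_pdf`` and the example states below are
hypothetical names introduced here.

```python
import numpy as np


def gaussian_log_pdf(x: np.ndarray, mean: np.ndarray, covariance: np.ndarray) -> float:
    """Log-density of a multivariate Gaussian N(mean, covariance) at x."""
    dim = mean.shape[0]
    error = x - mean
    _, logdet = np.linalg.slogdet(covariance)
    return float(
        -0.5 * (dim * np.log(2.0 * np.pi) + logdet + error @ np.linalg.inv(covariance) @ error)
    )


# Hypothetical ground-truth state and the Gaussian covariance around it
true_state = np.zeros(2)
covariance = 0.1 * np.eye(2)

rng = np.random.default_rng(0)
# "Close" sample: perturb the ground-truth state with the given covariance
near_state = rng.multivariate_normal(true_state, covariance)
# "Far" sample: inflate the covariance so the state lands far from the truth
far_state = rng.multivariate_normal(true_state, 20.0 * covariance)

# Each (sampled state, observation) pair is labeled with the log-density of
# the sampled state under the Gaussian centered at the ground-truth state;
# the measurement model is then trained to regress this value.
near_ll = gaussian_log_pdf(near_state, true_state, covariance)
far_ll = gaussian_log_pdf(far_state, true_state, covariance)
```

Sampling half the states close to the ground truth and half far away (as
``samples_per_pair`` suggests) gives the model both high- and low-likelihood
targets to learn from.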