Class PhotonPoseEstimator

java.lang.Object
org.photonvision.PhotonPoseEstimator

public class PhotonPoseEstimator extends Object
The PhotonPoseEstimator class filters or combines readings from all the AprilTags visible at a given timestamp on the field to produce a single robot-in-field pose, using the strategy set below. Example usage can be found in our apriltagExample example project.
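A minimal usage sketch. This assumes a `PhotonPoseEstimator(fieldLayout, robotToCamera)` constructor matching this version's per-method estimation style; the camera name `"frontCam"`, the mount transform, and `drivetrain` are placeholders for your own robot code, and constructor arguments may differ by PhotonLib version:

```java
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.targeting.PhotonPipelineResult;
import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;

// Construct once: field layout and robot-to-camera transform (placeholder values).
PhotonCamera camera = new PhotonCamera("frontCam");
PhotonPoseEstimator estimator = new PhotonPoseEstimator(
        AprilTagFieldLayout.loadField(AprilTagFields.kDefaultField),
        new Transform3d(new Translation3d(0.3, 0.0, 0.2), new Rotation3d()));

// In robot periodic code: process every unread frame and forward any estimate
// to the drivetrain's pose estimator (drivetrain is a placeholder).
for (PhotonPipelineResult result : camera.getAllUnreadResults()) {
    estimator.estimateCoprocMultiTagPose(result).ifPresent(est ->
            drivetrain.addVisionMeasurement(
                    est.estimatedPose.toPose2d(), est.timestampSeconds));
}
```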
  • Method Details

    • getFieldTags

      public AprilTagFieldLayout getFieldTags()
      Get the AprilTagFieldLayout being used by the PhotonPoseEstimator.

      Note: Setting the origin of this layout will affect the results from this class.

      Returns:
      the AprilTagFieldLayout
    • setFieldTags

      public void setFieldTags(AprilTagFieldLayout fieldTags)
      Set the AprilTagFieldLayout being used by the PhotonPoseEstimator.

      Note: Setting the origin of this layout will affect the results from this class.

      Parameters:
      fieldTags - the AprilTagFieldLayout
    • getTagModel

      public TargetModel getTagModel()
      Get the TargetModel representing the tags being detected. This is used for on-RIO MultiTag.

      By default, this is TargetModel.kAprilTag36h11.

    • setTagModel

      public void setTagModel(TargetModel tagModel)
      Set the TargetModel representing the tags being detected. This is used for on-RIO MultiTag.
      Parameters:
      tagModel - E.g. TargetModel.kAprilTag16h5.
    • getPrimaryStrategy

      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Get the Position Estimation Strategy being used by the Position Estimator.
      Returns:
      the strategy
    • setPrimaryStrategy

      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Set the Position Estimation Strategy used by the Position Estimator.
      Parameters:
      strategy - the strategy to set
    • setMultiTagFallbackStrategy

      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Set the Position Estimation Strategy used in multi-tag mode when only one tag can be seen. Must NOT be MULTI_TAG_PNP
      Parameters:
      strategy - the strategy to set
    • getReferencePose

      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Return the reference position that is being used by the estimator.
      Returns:
      the referencePose
    • setReferencePose

      @Deprecated(forRemoval=true, since="2026") public void setReferencePose(Pose3d referencePose)
      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Update the stored reference pose for use when using the CLOSEST_TO_REFERENCE_POSE strategy.
      Parameters:
      referencePose - the referencePose to set
    • setReferencePose

      @Deprecated(forRemoval=true, since="2026") public void setReferencePose(Pose2d referencePose)
      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Update the stored reference pose for use when using the CLOSEST_TO_REFERENCE_POSE strategy.
      Parameters:
      referencePose - the referencePose to set
    • setLastPose

      @Deprecated(forRemoval=true, since="2026") public void setLastPose(Pose3d lastPose)
      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Update the stored last pose. Useful for setting the initial estimate when using the CLOSEST_TO_LAST_POSE strategy.
      Parameters:
      lastPose - the lastPose to set
    • setLastPose

      @Deprecated(forRemoval=true, since="2026") public void setLastPose(Pose2d lastPose)
      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Update the stored last pose. Useful for setting the initial estimate when using the CLOSEST_TO_LAST_POSE strategy.
      Parameters:
      lastPose - the lastPose to set
    • addHeadingData

      public void addHeadingData(double timestampSeconds, Rotation3d heading)
      Add robot heading data to buffer. Must be called periodically for the PNP_DISTANCE_TRIG_SOLVE strategy.
      Parameters:
      timestampSeconds - Timestamp of the robot heading data.
      heading - Field-relative robot heading at given timestamp. Standard WPILIB field coordinates.
    • addHeadingData

      public void addHeadingData(double timestampSeconds, Rotation2d heading)
      Add robot heading data to buffer. Must be called periodically for the PNP_DISTANCE_TRIG_SOLVE strategy.
      Parameters:
      timestampSeconds - Timestamp of the robot heading data.
      heading - Field-relative robot heading at given timestamp. Standard WPILIB field coordinates.
    • resetHeadingData

      public void resetHeadingData(double timestampSeconds, Rotation3d heading)
      Clears all heading data in the buffer, and adds a new seed. Useful for preventing estimates from utilizing heading data provided prior to a pose or rotation reset.
      Parameters:
      timestampSeconds - Timestamp of the robot heading data.
      heading - Field-relative robot heading at given timestamp. Standard WPILIB field coordinates.
    • resetHeadingData

      public void resetHeadingData(double timestampSeconds, Rotation2d heading)
      Clears all heading data in the buffer, and adds a new seed. Useful for preventing estimates from utilizing heading data provided prior to a pose or rotation reset.
      Parameters:
      timestampSeconds - Timestamp of the robot heading data.
      heading - Field-relative robot heading at given timestamp. Standard WPILIB field coordinates.
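The add/reset semantics above can be illustrated with a plain-Java sketch, using a TreeMap keyed by timestamp to stand in for the estimator's internal buffer (names here are illustrative, not PhotonLib's):

```java
import java.util.Map;
import java.util.TreeMap;

public class HeadingBuffer {
    private final TreeMap<Double, Double> buffer = new TreeMap<>();

    /** Append one field-relative heading sample (radians) at the given timestamp. */
    public void addHeadingData(double timestampSeconds, double headingRad) {
        buffer.put(timestampSeconds, headingRad);
    }

    /** Drop all samples taken before a pose/rotation reset, then seed with one fresh sample. */
    public void resetHeadingData(double timestampSeconds, double headingRad) {
        buffer.clear();
        buffer.put(timestampSeconds, headingRad);
    }

    /** Return the sample nearest in time to the query, or NaN if the buffer is empty. */
    public double sample(double timestampSeconds) {
        if (buffer.isEmpty()) return Double.NaN;
        Map.Entry<Double, Double> lo = buffer.floorEntry(timestampSeconds);
        Map.Entry<Double, Double> hi = buffer.ceilingEntry(timestampSeconds);
        if (lo == null) return hi.getValue();
        if (hi == null) return lo.getValue();
        return (timestampSeconds - lo.getKey() <= hi.getKey() - timestampSeconds)
                ? lo.getValue() : hi.getValue();
    }
}
```

The reset matters because a latency-matched lookup would otherwise pair a vision frame with a heading sample recorded under the old gyro zero.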
    • getRobotToCameraTransform

      public Transform3d getRobotToCameraTransform()
      Returns:
      The current transform from the center of the robot to the camera mount position
    • setRobotToCameraTransform

      public void setRobotToCameraTransform(Transform3d robotToCamera)
      Set the transform from the center of the robot to the camera mount position. Useful for pan-and-tilt mechanisms and other movable camera mounts.
      Parameters:
      robotToCamera - The current transform from the center of the robot to the camera mount position
    • update

      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Updates the estimated position of the robot, assuming no camera calibration is required for the selected strategy. Returns empty if:
      • The timestamp of the provided pipeline result is the same as in the previous call to update().
      • No targets were found in the pipeline results.
      Will report a warning if strategy is multi-tag-on-rio because camera calibration data is not provided in this overload.
      Parameters:
      cameraResult - The latest pipeline result from the camera
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
    • update

      @Deprecated(forRemoval=true, since="2026") public Optional<EstimatedRobotPose> update(PhotonPipelineResult cameraResult, Optional<Matrix<N3,N3>> cameraMatrix, Optional<Matrix<N8,N1>> distCoeffs)
      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Updates the estimated position of the robot. Returns empty if:
      • The timestamp of the provided pipeline result is the same as in the previous call to update().
      • No targets were found in the pipeline results.
      • The strategy is CONSTRAINED_SOLVEPNP, but no constrainedPnpParams were provided (use the other function overload).
      Parameters:
      cameraResult - The latest pipeline result from the camera
      cameraMatrix - Camera calibration data for multi-tag-on-rio strategy - can be empty otherwise
      distCoeffs - Camera calibration data for multi-tag-on-rio strategy - can be empty otherwise
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
    • update

      Deprecated, for removal: This API element is subject to removal in a future version.
      Use individual estimation methods instead.
      Updates the estimated position of the robot. Returns empty if:
      • The timestamp of the provided pipeline result is the same as in the previous call to update().
      • No targets were found in the pipeline results.
      • The strategy is CONSTRAINED_SOLVEPNP, but the provided constrainedPnpParams are empty.
      Parameters:
      cameraResult - The latest pipeline result from the camera
      cameraMatrix - Camera calibration data for multi-tag-on-rio strategy - can be empty otherwise
      distCoeffs - Camera calibration data for multi-tag-on-rio strategy - can be empty otherwise
      constrainedPnpParams - Constrained SolvePNP params, if needed.
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
    • estimatePnpDistanceTrigSolvePose

      Return the estimated position of the robot by using distance data from the best visible tag to compute a pose. This runs on the RoboRIO in order to access the robot's yaw heading, and MUST have addHeadingData called every frame so heading data is up-to-date.

      Yields a Pose2d in estimatedRobotPose (z, roll, and pitch are 0).

      See https://www.chiefdelphi.com/t/frc-6328-mechanical-advantage-2025-build-thread/477314/98 for background on this approach.

      Parameters:
      cameraResult - A pipeline result from the camera.
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
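The core geometry: given the tag's field position, the horizontal camera-to-tag distance, and the field-relative bearing to the tag (robot heading plus camera yaw to the target), the camera's field position falls out of one sine/cosine pair. A simplified 2D sketch; the real method additionally applies the robot-to-camera transform and latency-matches the heading sample:

```java
public class DistanceTrigSketch {
    /**
     * Camera field position given a tag's field position, the horizontal
     * camera-to-tag distance, and the field-relative bearing to the tag.
     */
    public static double[] cameraFromTag(double tagX, double tagY,
                                         double distanceMeters, double bearingRad) {
        // Step back from the tag along the bearing line.
        return new double[] {
            tagX - distanceMeters * Math.cos(bearingRad),
            tagY - distanceMeters * Math.sin(bearingRad)
        };
    }
}
```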
    • estimateConstrainedSolvepnpPose

      public Optional<EstimatedRobotPose> estimateConstrainedSolvepnpPose(PhotonPipelineResult cameraResult, Matrix<N3,N3> cameraMatrix, Matrix<N8,N1> distCoeffs, Pose3d seedPose, boolean headingFree, double headingScaleFactor)
      Return the estimated position of the robot by solving a constrained version of the Perspective-n-Point problem, with the constraint that the robot's drivebase sits flat on the floor. This computation takes place on the RoboRIO and typically takes no more than 2 ms. See ConstrainedSolvepnpJni for the tuning handles this strategy exposes. Internally, the cost function is the sum-squared pixel reprojection error plus (optionally) the heading error scaled by the heading scale factor. This strategy needs addHeadingData called every frame so heading data is up-to-date.
      Parameters:
      cameraResult - A pipeline result from the camera.
      cameraMatrix - Camera intrinsics from camera calibration data.
      distCoeffs - Distortion coefficients from camera calibration data.
      seedPose - An initial guess at robot pose, refined via optimization. Better guesses will converge faster. Can come from any pose source, but some battle-tested sources are estimateCoprocMultiTagPose(PhotonPipelineResult), or estimateLowestAmbiguityPose(PhotonPipelineResult) if MultiTag results aren't available.
      headingFree - If true, heading is completely free to vary. If false, heading excursions from the provided heading measurement will be penalized.
      headingScaleFactor - If headingFree is false, this weights the cost of changing the robot heading estimate against the tag corner reprojection error cost.
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
    • estimateCoprocMultiTagPose

      Return the estimated position of the robot by using all visible tags to compute a single pose estimate on coprocessor. This option needs to be enabled on the PhotonVision web UI as well.
      Parameters:
      cameraResult - A pipeline result from the camera.
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
    • estimateRioMultiTagPose

      public Optional<EstimatedRobotPose> estimateRioMultiTagPose(PhotonPipelineResult cameraResult, Matrix<N3,N3> cameraMatrix, Matrix<N8,N1> distCoeffs)
      Return the estimated position of the robot by using all visible tags to compute a single pose estimate on the RoboRIO. This can be slow because of the RoboRIO's limited computing power.
      Parameters:
      cameraResult - A pipeline result from the camera.
      cameraMatrix - Camera intrinsics from camera calibration data
      distCoeffs - Distortion coefficients from camera calibration data.
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
    • estimateLowestAmbiguityPose

      Return the estimated position of the robot with the lowest position ambiguity from a pipeline result.
      Parameters:
      cameraResult - A pipeline result from the camera.
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
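The selection rule reduces to a minimum over per-target ambiguity scores. An illustrative plain-Java sketch (names are not PhotonLib's):

```java
public class LowestAmbiguityPick {
    /** Index of the target with the smallest ambiguity score, or -1 if there are none. */
    public static int pick(double[] ambiguities) {
        int best = -1;
        for (int i = 0; i < ambiguities.length; i++) {
            if (best < 0 || ambiguities[i] < ambiguities[best]) best = i;
        }
        return best;
    }
}
```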
    • estimateClosestToCameraHeightPose

      Return the estimated position of the robot using the target whose estimated camera height is closest to the actual camera height.
      Parameters:
      cameraResult - A pipeline result from the camera.
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
    • estimateClosestToReferencePose

      Return the estimated position of the robot using the target whose resulting pose has the smallest translation distance to the reference pose.
      Parameters:
      cameraResult - A pipeline result from the camera.
      referencePose - reference pose to check vector magnitude difference against.
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
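The comparison is a straight distance check: each candidate robot pose is scored by the magnitude of its translation offset from the reference pose, and the smallest wins. A 2D sketch with illustrative names, not PhotonLib's:

```java
public class ClosestToReferencePick {
    /** Index of the candidate (x, y) pose nearest the reference point, or -1 if none. */
    public static int pick(double[][] candidates, double refX, double refY) {
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < candidates.length; i++) {
            double d = Math.hypot(candidates[i][0] - refX, candidates[i][1] - refY);
            if (d < bestDist) {
                bestDist = d;
                best = i;
            }
        }
        return best;
    }
}
```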
    • estimateAverageBestTargetsPose

      Return the average of the best target poses using ambiguity as weight.
      Parameters:
      cameraResult - A pipeline result from the camera.
      Returns:
      An EstimatedRobotPose with an estimated pose, timestamp, and targets used to create the estimate.
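One plausible reading of "ambiguity as weight" is an inverse-ambiguity weighted mean, so less ambiguous targets pull the average harder. A 2D sketch under that assumption (the exact weighting PhotonLib uses internally may differ):

```java
public class AmbiguityWeightedAverage {
    /** Average (x, y) positions weighted by 1/ambiguity; lower ambiguity counts for more. */
    public static double[] average(double[][] positions, double[] ambiguities) {
        double wSum = 0.0, x = 0.0, y = 0.0;
        for (int i = 0; i < positions.length; i++) {
            double w = 1.0 / Math.max(ambiguities[i], 1e-6); // guard near-zero ambiguity
            wSum += w;
            x += w * positions[i][0];
            y += w * positions[i][1];
        }
        return new double[] { x / wSum, y / wSum };
    }
}
```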