Class PhotonCameraSim

java.lang.Object
org.photonvision.simulation.PhotonCameraSim
All Implemented Interfaces:
AutoCloseable

public class PhotonCameraSim extends Object implements AutoCloseable
A handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.
  • Constructor Details

    • PhotonCameraSim

      public PhotonCameraSim(PhotonCamera camera)
      Constructs a handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.

      This constructor's camera has a 90 deg FOV with no simulated lag!

      By default, the minimum target area is 100 pixels and there is no maximum sight range.

      Parameters:
      camera - The camera to be simulated
    • PhotonCameraSim

      public PhotonCameraSim(PhotonCamera camera, SimCameraProperties prop)
      Constructs a handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.

      By default, the minimum target area is 100 pixels and there is no maximum sight range.

      Parameters:
      camera - The camera to be simulated
      prop - Properties of this camera such as FOV and FPS
    • PhotonCameraSim

      public PhotonCameraSim(PhotonCamera camera, SimCameraProperties prop, double minTargetAreaPercent, double maxSightRangeMeters)
      Constructs a handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.
      Parameters:
      camera - The camera to be simulated
      prop - Properties of this camera such as FOV and FPS
      minTargetAreaPercent - The minimum percentage (0-100) of the camera's image a detected target must occupy to be processed. Match this with your contour filtering settings in the PhotonVision GUI.
      maxSightRangeMeters - The maximum distance, in meters, at which a target can be seen by the camera. Note that this is separate from the minimum target area.
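      The constructors above can be combined with SimCameraProperties as in the sketch below. The camera name, resolution, FOV, FPS, latency, and filtering thresholds are illustrative assumptions, not required values.

      ```java
      import org.photonvision.PhotonCamera;
      import org.photonvision.simulation.PhotonCameraSim;
      import org.photonvision.simulation.SimCameraProperties;
      import edu.wpi.first.math.geometry.Rotation2d;

      // A real camera handle; "frontCam" is a placeholder name.
      PhotonCamera camera = new PhotonCamera("frontCam");

      // Describe the simulated camera: 640x480 at a 100 deg diagonal FOV,
      // 20 FPS, ~35 ms average latency with 5 ms jitter (all illustrative).
      SimCameraProperties props = new SimCameraProperties();
      props.setCalibration(640, 480, Rotation2d.fromDegrees(100));
      props.setFPS(20);
      props.setAvgLatencyMs(35);
      props.setLatencyStdDevMs(5);

      // Targets must occupy at least 0.1% of the image and be within 6 meters.
      PhotonCameraSim cameraSim = new PhotonCameraSim(camera, props, 0.1, 6.0);
      ```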
  • Method Details

    • close

      public void close()
      Specified by:
      close in interface AutoCloseable
    • getCamera

      public PhotonCamera getCamera()
    • getMinTargetAreaPercent

      public double getMinTargetAreaPercent()
    • getMinTargetAreaPixels

      public double getMinTargetAreaPixels()
    • getMaxSightRangeMeters

      public double getMaxSightRangeMeters()
    • getTargetSortMode

      public PhotonTargetSortMode getTargetSortMode()
    • getVideoSimRaw

      public CvSource getVideoSimRaw()
    • getVideoSimFrameRaw

      public org.opencv.core.Mat getVideoSimFrameRaw()
    • canSeeTargetPose

      public boolean canSeeTargetPose(Pose3d camPose, VisionTargetSim target)
      Determines if this target's pose should be visible to the camera without considering its projected image points. Does not account for image area.
      Parameters:
      camPose - Camera's 3d pose
      target - Vision target containing pose and shape
      Returns:
      If this vision target can be seen before image projection.
    • canSeeCorners

      public boolean canSeeCorners(org.opencv.core.Point[] points)
      Determines if all target points are inside the camera's image.
      Parameters:
      points - The target's 2d image points
      Returns:
      True if all points are inside the camera's image.
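      Together, canSeeTargetPose and canSeeCorners form a two-stage visibility check: a cheap pose-based filter first, then an in-frame check on the projected points. A sketch, assuming a camera pose and a target list maintained elsewhere (VisionSystemSim normally handles this for you):

      ```java
      import java.util.ArrayList;
      import java.util.List;
      import org.photonvision.simulation.VisionTargetSim;
      import edu.wpi.first.math.geometry.Pose3d;

      // camPose and allTargets are assumed to exist in the surrounding code.
      List<VisionTargetSim> visible = new ArrayList<>();
      for (VisionTargetSim target : allTargets) {
          // Cheap first pass: pose and shape only, no image projection.
          if (cameraSim.canSeeTargetPose(camPose, target)) {
              visible.add(target);
          }
      }
      // canSeeCorners(points) can then reject any surviving target whose
      // projected 2d image points fall outside the frame.
      ```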
    • consumeNextEntryTime

      public Optional<Long> consumeNextEntryTime()
      Determines if this camera should process a new frame based on performance metrics and the time since the last update. Returns an Optional which is either empty, if no update should occur yet, or a Long holding the timestamp, in microseconds, at which the frame should be received by NT. If a timestamp is returned, the last frame update time is set to that timestamp.
      Returns:
      Optional long which is empty while blocked or the NT entry timestamp in microseconds if ready
    • setMinTargetAreaPercent

      public void setMinTargetAreaPercent(double areaPercent)
      The minimum percentage (0-100) of the camera's image a detected target must occupy to be processed.
    • setMinTargetAreaPixels

      public void setMinTargetAreaPixels(double areaPx)
      The minimum number of pixels a detected target must take up in the camera's image to be processed.
    • setMaxSightRange

      public void setMaxSightRange(double rangeMeters)
      Sets the maximum distance, in meters, at which a target can be seen by the camera. Note that this is separate from the minimum target area.
    • setTargetSortMode

      public void setTargetSortMode(PhotonTargetSortMode sortMode)
      Defines the order the targets are sorted in the pipeline result.
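      For example, targets can be sorted so the largest appears first in the result. PhotonTargetSortMode.Largest is one of the enum's constants; pick whichever matches your pipeline settings.

      ```java
      import org.photonvision.PhotonTargetSortMode;

      // Sort simulated results largest-first, mirroring a common GUI setting.
      cameraSim.setTargetSortMode(PhotonTargetSortMode.Largest);
      ```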
    • enableRawStream

      public void enableRawStream(boolean enabled)
      Sets whether the raw video stream simulation is enabled.

      Note: This may increase loop times.

    • enableDrawWireframe

      public void enableDrawWireframe(boolean enabled)
      Sets whether a wireframe of the field is drawn to the raw video stream.

      Note: This will dramatically increase loop times.

    • setWireframeResolution

      public void setWireframeResolution(double resolution)
      Sets the resolution of the drawn wireframe if enabled. Drawn line segments will be subdivided into smaller segments based on a threshold set by the resolution.
      Parameters:
      resolution - Resolution as a fraction (0-1) of the video frame's diagonal length in pixels
    • enableProcessedStream

      public void enableProcessedStream(boolean enabled)
      Sets whether the processed video stream simulation is enabled.
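      The stream toggles above are typically enabled together while debugging. A sketch; note the performance caveats from the method descriptions:

      ```java
      // Enable the simulated video streams (viewable in a dashboard tool).
      // The raw stream may increase loop times.
      cameraSim.enableRawStream(true);
      cameraSim.enableProcessedStream(true);

      // Wireframe drawing dramatically increases loop times; enable only
      // when debugging, and coarsen the subdivision to reduce the cost.
      cameraSim.enableDrawWireframe(true);
      cameraSim.setWireframeResolution(0.25); // fraction of frame diagonal
      ```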
    • process

      public PhotonPipelineResult process(double latencyMillis, Pose3d cameraPose, List<VisionTargetSim> targets)
    • submitProcessedFrame

      public void submitProcessedFrame(PhotonPipelineResult result)
      Simulates one processed frame of vision data, publishing one result to NT at the current timestamp. The image capture time is assumed to be (current time - latency).
      Parameters:
      result - The pipeline result to submit
    • submitProcessedFrame

      public void submitProcessedFrame(PhotonPipelineResult result, long receiveTimestamp)
      Simulates one processed frame of vision data, publishing one result to NT. The explicit receive timestamp overrides getTimestampSeconds() for more precise latency simulation.
      Parameters:
      result - The pipeline result to submit
      receiveTimestamp - The (sim) timestamp when this result was read by NT in microseconds
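      The frame-timing and submission methods fit together as in the sketch below, which drives one simulation tick by hand. VisionSystemSim normally performs this loop for you; cameraPose, targets, and the latency value are assumptions supplied by the surrounding simulation code.

      ```java
      import java.util.List;
      import java.util.Optional;
      import org.photonvision.targeting.PhotonPipelineResult;
      import org.photonvision.simulation.VisionTargetSim;
      import edu.wpi.first.math.geometry.Pose3d;

      // cameraPose (Pose3d) and targets (List<VisionTargetSim>) are assumed
      // to be maintained elsewhere in the simulation.
      Optional<Long> timestamp = cameraSim.consumeNextEntryTime();
      if (timestamp.isPresent()) {
          // Illustrative fixed latency; a real loop would derive this from
          // the camera's SimCameraProperties.
          double latencyMillis = 35.0;
          PhotonPipelineResult result =
              cameraSim.process(latencyMillis, cameraPose, targets);
          cameraSim.submitProcessedFrame(result, timestamp.get());
      }
      ```

      Using consumeNextEntryTime as the gate means frames are only published at the camera's simulated frame rate, and the returned NT timestamp keeps the published result's latency consistent.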