Class SimCameraProperties

java.lang.Object
org.photonvision.simulation.SimCameraProperties

public class SimCameraProperties extends Object
Calibration and performance values for this camera.

The resolution will affect the accuracy of projected (3d to 2d) target corners and similarly the severity of image noise on estimation (2d to 3d).

The camera intrinsics and distortion coefficients describe the results of calibration, and how to map between 3d field points and 2d image points.

The performance values (framerate/exposure time, latency) determine how often results should be updated and with how much latency in simulation. A high exposure time causes motion blur, which can inhibit target detection while moving. Note that latency estimation does not account for network latency, and the latency reported will always be perfect.
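
For example, a minimal configuration sketch using only the setters documented below (every numeric value is illustrative, not a measured calibration):

  import edu.wpi.first.math.geometry.Rotation2d;
  import org.photonvision.simulation.SimCameraProperties;

  SimCameraProperties cameraProp = new SimCameraProperties();
  // 640x480 detector with an assumed 100-degree diagonal FOV.
  cameraProp.setCalibration(640, 480, Rotation2d.fromDegrees(100));
  // Approximate calibration error: 0.25 px average, 0.08 px std. dev.
  cameraProp.setCalibError(0.25, 0.08);
  // Timing characteristics: 20 FPS, 35 ms average latency, 5 ms jitter.
  cameraProp.setFPS(20);
  cameraProp.setAvgLatencyMs(35);
  cameraProp.setLatencyStdDevMs(5);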

  • Constructor Details

    • SimCameraProperties

      public SimCameraProperties()
      Default constructor; equivalent to PERFECT_90DEG().
    • SimCameraProperties

      public SimCameraProperties(String path, int width, int height) throws IOException
      Reads camera properties from a photonvision config.json file. Only the resolution, camera intrinsics, distortion coefficients, and average/std. dev. pixel error are read; other camera properties must be set separately.
      Parameters:
      path - Path to the config.json file
      width - The width of the desired resolution in the JSON
      height - The height of the desired resolution in the JSON
      Throws:
      IOException - If reading the JSON fails, either from an invalid path or a missing/invalid calibrated resolution.
    • SimCameraProperties

      public SimCameraProperties(Path path, int width, int height) throws IOException
      Reads camera properties from a photonvision config.json file. Only the resolution, camera intrinsics, distortion coefficients, and average/std. dev. pixel error are read; other camera properties must be set separately.
      Parameters:
      path - Path to the config.json file
      width - The width of the desired resolution in the JSON
      height - The height of the desired resolution in the JSON
      Throws:
      IOException - If reading the JSON fails, either from an invalid path or a missing/invalid calibrated resolution.
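      As a sketch, loading a previously calibrated 640x480 entry from an exported config (the file name below is hypothetical):

        import java.io.IOException;
        import java.nio.file.Path;

        SimCameraProperties loadedProp;
        try {
            loadedProp = new SimCameraProperties(Path.of("photon_config.json"), 640, 480);
        } catch (IOException e) {
            // Fall back to an ideal camera if the file or resolution is missing.
            loadedProp = SimCameraProperties.PERFECT_90DEG();
        }
        // Properties not stored in the config (FPS, latency, etc.) still need to be set.
        loadedProp.setFPS(30);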
  • Method Details

    • setRandomSeed

      public void setRandomSeed(long seed)
    • setCalibration

      public void setCalibration(int resWidth, int resHeight, Rotation2d fovDiag)
    • setCalibration

      public void setCalibration(int resWidth, int resHeight, Matrix<N3,N3> camIntrinsics, Matrix<N8,N1> distCoeffs)
    • setCalibError

      public void setCalibError(double avgErrorPx, double errorStdDevPx)
    • setFPS

      public void setFPS(double fps)
      Parameters:
      fps - The average frames per second the camera should process at. Exposure time limits FPS if set!
    • setExposureTimeMs

      public void setExposureTimeMs(double exposureTimeMs)
      Parameters:
      exposureTimeMs - The amount of time the "shutter" is open for one frame. Affects motion blur. Frame speed (from FPS) is limited to this!
    • setAvgLatencyMs

      public void setAvgLatencyMs(double avgLatencyMs)
      Parameters:
      avgLatencyMs - The average latency (from image capture to data published) in milliseconds a frame should have
    • setLatencyStdDevMs

      public void setLatencyStdDevMs(double latencyStdDevMs)
      Parameters:
      latencyStdDevMs - The standard deviation in milliseconds of the latency
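      A minimal timing sketch for the setters above, showing the exposure/FPS interaction (values are illustrative):

        SimCameraProperties prop = new SimCameraProperties();
        prop.setFPS(50);              // requests a 20 ms frame period...
        prop.setExposureTimeMs(40);   // ...but the shutter is open 40 ms per frame,
                                      // so the frame period is limited to at least
                                      // 40 ms (i.e. at most 25 FPS effectively).
        prop.setAvgLatencyMs(30);     // average capture-to-publish latency
        prop.setLatencyStdDevMs(4);   // latency jitter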
    • getResWidth

      public int getResWidth()
    • getResHeight

      public int getResHeight()
    • getResArea

      public int getResArea()
    • getAspectRatio

      public double getAspectRatio()
      The width-to-height aspect ratio.
    • getIntrinsics

      public Matrix<N3,N3> getIntrinsics()
    • getDistCoeffs

      public Vector<N8> getDistCoeffs()
    • getFPS

      public double getFPS()
    • getFrameSpeedMs

      public double getFrameSpeedMs()
    • getExposureTimeMs

      public double getExposureTimeMs()
    • getAvgLatencyMs

      public double getAvgLatencyMs()
    • getLatencyStdDevMs

      public double getLatencyStdDevMs()
    • copy

      public SimCameraProperties copy()
    • getContourAreaPercent

      public double getContourAreaPercent(org.opencv.core.Point[] points)
      The percentage (0 - 100) of this camera's resolution that the contour takes up, in pixels of the image.
      Parameters:
      points - Points of the contour
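      For illustration, a hypothetical 100x100 px square contour on the 960x720 default camera (691,200 px of resolution):

        SimCameraProperties prop = SimCameraProperties.PERFECT_90DEG();
        org.opencv.core.Point[] corners = {
            new org.opencv.core.Point(100, 100), new org.opencv.core.Point(200, 100),
            new org.opencv.core.Point(200, 200), new org.opencv.core.Point(100, 200)
        };
        // 10,000 px out of 691,200 px should come out to roughly 1.4 percent.
        double percent = prop.getContourAreaPercent(corners);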
    • getPixelYaw

      public Rotation2d getPixelYaw(double pixelX)
      The yaw from the principal point of this camera to the pixel x value. Yaw is positive to the left.
    • getPixelPitch

      public Rotation2d getPixelPitch(double pixelY)
      The pitch from the principal point of this camera to the pixel y value. Pitch is positive down.

      Note that this angle is naively computed and may be incorrect. See getCorrectedPixelRot(Point).

    • getPixelRot

      public Rotation3d getPixelRot(org.opencv.core.Point point)
      Finds the yaw and pitch to the given image point. Yaw is positive left, and pitch is positive down.

      Note that pitch is naively computed and may be incorrect. See getCorrectedPixelRot(Point).

    • getCorrectedPixelRot

      public Rotation3d getCorrectedPixelRot(org.opencv.core.Point point)
      Gives the yaw and pitch of the line intersecting the camera lens and the given pixel coordinates on the sensor. Yaw is positive left, and pitch positive down.

      The pitch traditionally calculated from pixel offsets does not correctly account for non-zero values of yaw because of perspective distortion (not to be confused with lens distortion). For example, the pitch angle is naively calculated as:

      pitch = arctan(pixel y offset / focal length y)
      However, using focal length as a side of the associated right triangle is not correct when the pixel x value is not 0, because the distance from this pixel (projected on the x-axis) to the camera lens increases. A line projected back out of the camera with these naive angles will not intersect the 3d point that was originally projected into this 2d pixel. Instead, this length should be:
      focal length y ⟶ (focal length y / cos(arctan(pixel x offset / focal length x)))
      Returns:
      Rotation3d with yaw and pitch of the line projected out of the camera from the given pixel (roll is zero).
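      As a numeric sketch of this correction (the focal lengths and pixel offsets below are made up):

        // Assumed values: fx = fy = 600 px focal length, a pixel 300 px right of
        // and 150 px below the principal point.
        double fx = 600, fy = 600;
        double xOffset = 300, yOffset = 150;

        // Naive pitch ignores the horizontal offset:
        double naivePitch = Math.atan(yOffset / fy);                       // ~0.245 rad (~14.0 deg)

        // Corrected pitch lengthens the adjacent side by 1/cos(yaw):
        double yaw = Math.atan(xOffset / fx);
        double correctedPitch = Math.atan(yOffset / (fy / Math.cos(yaw))); // ~0.220 rad (~12.6 deg)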
    • getHorizFOV

      public Rotation2d getHorizFOV()
    • getVertFOV

      public Rotation2d getVertFOV()
    • getDiagFOV

      public Rotation2d getDiagFOV()
    • getVisibleLine

      public Pair<Double,Double> getVisibleLine(RotTrlTransform3d camRt, Translation3d a, Translation3d b)
      Determines where the line segment defined by the two given translations intersects the camera's frustum/field-of-vision, if at all.

      The line is parametrized so that any of its points is p = t * (b - a) + a. This method returns these values of t, minimum first, defining the region of the line segment which is visible in the frustum. If both ends of the line segment are visible, this simply returns {0, 1}. If, for example, point b is visible while a is not, and half of the line segment is inside the camera frustum, {0.5, 1} would be returned.

      Parameters:
      camRt - The change in basis from world coordinates to camera coordinates. See RotTrlTransform3d.makeRelativeTo(Pose3d).
      a - The initial translation of the line
      b - The final translation of the line
      Returns:
      A Pair of Doubles. The values may be null:
      • {Double, Double} : Two parametrized values (t), minimum first, representing the segment of the line that is visible in the camera frustum.
      • {Double, null} : One value (t) representing a single intersection point, for example where the line only crosses the edge shared by two adjacent viewplanes.
      • {null, null} : No values. The line segment is not visible in the camera frustum.
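      A usage sketch for interpreting the returned pair (the camera pose and segment endpoints are placeholders, and the RotTrlTransform3d import path is assumed):

        import edu.wpi.first.math.Pair;
        import edu.wpi.first.math.geometry.Pose3d;
        import edu.wpi.first.math.geometry.Translation3d;
        import org.photonvision.estimation.RotTrlTransform3d;

        SimCameraProperties prop = SimCameraProperties.PERFECT_90DEG();
        // Change of basis for a camera sitting at the field origin (placeholder pose).
        RotTrlTransform3d camRt = RotTrlTransform3d.makeRelativeTo(new Pose3d());
        Translation3d a = new Translation3d(2, -1, 0.5);  // placeholder endpoints
        Translation3d b = new Translation3d(2, 1, 0.5);

        Pair<Double, Double> visible = prop.getVisibleLine(camRt, a, b);
        if (visible.getFirst() == null) {
            // {null, null}: the segment is entirely outside the frustum.
        } else if (visible.getSecond() == null) {
            // {t, null}: the segment only touches the frustum at one point.
        } else {
            // {tMin, tMax}: the visible portion runs from t = tMin to t = tMax.
        }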
    • estPixelNoise

      public org.opencv.core.Point[] estPixelNoise(org.opencv.core.Point[] points)
      Returns these points after applying this camera's estimated noise.
    • estLatencyMs

      public double estLatencyMs()
      Returns:
      Noisy estimation of a frame's processing latency in milliseconds
    • estMsUntilNextFrame

      public double estMsUntilNextFrame()
      Returns:
      Estimate how long until the next frame should be processed in milliseconds
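      A sketch of how a simulation step might use these estimation methods together (the projected corners are placeholders):

        SimCameraProperties prop = SimCameraProperties.PERFECT_90DEG();
        // Hypothetical projected target corners; in a real pipeline these would
        // come from projecting a 3d target into the image.
        org.opencv.core.Point[] projectedCorners = {
            new org.opencv.core.Point(310, 220), new org.opencv.core.Point(330, 220),
            new org.opencv.core.Point(330, 240), new org.opencv.core.Point(310, 240)
        };
        org.opencv.core.Point[] noisyCorners = prop.estPixelNoise(projectedCorners);
        double latencyMs = prop.estLatencyMs();     // noisy latency estimate, ms
        double waitMs = prop.estMsUntilNextFrame(); // delay before the next frame, ms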
    • PERFECT_90DEG

      public static SimCameraProperties PERFECT_90DEG()
      960x720 resolution, 90 degree FOV, "perfect" lagless camera
    • PI4_LIFECAM_320_240

      public static SimCameraProperties PI4_LIFECAM_320_240()
    • PI4_LIFECAM_640_480

      public static SimCameraProperties PI4_LIFECAM_640_480()
    • LL2_640_480

      public static SimCameraProperties LL2_640_480()
    • LL2_960_720

      public static SimCameraProperties LL2_960_720()
    • LL2_1280_720

      public static SimCameraProperties LL2_1280_720()
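      As an example, a preset can be used directly or tweaked before wrapping a camera for simulation; the PhotonCameraSim wiring below reflects typical PhotonLib usage and is a sketch, not part of this class:

        import org.photonvision.PhotonCamera;
        import org.photonvision.simulation.PhotonCameraSim;
        import org.photonvision.simulation.SimCameraProperties;

        // Start from the Limelight 2 preset and tweak the timing.
        SimCameraProperties llProp = SimCameraProperties.LL2_960_720();
        llProp.setFPS(25);

        // Wrap a PhotonCamera with these properties; the camera name is arbitrary.
        PhotonCamera camera = new PhotonCamera("simCam");
        PhotonCameraSim cameraSim = new PhotonCameraSim(camera, llProp);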