Package org.photonvision.simulation
Class PhotonCameraSim
java.lang.Object
org.photonvision.simulation.PhotonCameraSim
All Implemented Interfaces:
AutoCloseable

A handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.
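For example, a minimal construction sketch. The SimCameraProperties setters and the VisionSystemSim pairing shown here are assumptions drawn from the wider simulation API rather than this page, and the camera name and numbers are illustrative:

    // Simulate the same camera instance that robot code reads results from.
    PhotonCamera camera = new PhotonCamera("frontCam");

    // Assumed SimCameraProperties setters for resolution/FOV, FPS, and latency.
    SimCameraProperties prop = new SimCameraProperties();
    prop.setCalibration(640, 480, Rotation2d.fromDegrees(70));
    prop.setFPS(30);
    prop.setAvgLatencyMs(35);

    PhotonCameraSim cameraSim = new PhotonCameraSim(camera, prop);
    cameraSim.setMinTargetAreaPixels(150); // ignore tiny detections
    cameraSim.setMaxSightRange(6.0);       // meters

    // Assumed: a VisionSystemSim drives this camera sim each simulation loop, e.g.
    // visionSim.addCamera(cameraSim, robotToCamera);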
Field Summary
final SimCameraProperties prop
  This simulated camera's SimCameraProperties.
protected NTTopicSet ts
Constructor Summary
PhotonCameraSim(PhotonCamera camera)
  Constructs a handle for simulating PhotonCamera values.
PhotonCameraSim(PhotonCamera camera, SimCameraProperties prop)
  Constructs a handle for simulating PhotonCamera values.
PhotonCameraSim(PhotonCamera camera, SimCameraProperties prop, double minTargetAreaPercent, double maxSightRangeMeters)
  Constructs a handle for simulating PhotonCamera values.
Method Summary
boolean canSeeCorners(org.opencv.core.Point[] points)
  Determines if all target points are inside the camera's image.
boolean canSeeTargetPose(Pose3d camPose, VisionTargetSim target)
  Determines if this target's pose should be visible to the camera without considering its projected image points.
void close()
Optional<Long> consumeNextEntryTime()
  Determine if this camera should process a new frame based on performance metrics and the time since the last update.
void enableDrawWireframe(boolean enabled)
  Sets whether a wireframe of the field is drawn to the raw video stream.
void enableProcessedStream(boolean enabled)
  Sets whether the processed video stream simulation is enabled.
void enableRawStream(boolean enabled)
  Sets whether the raw video stream simulation is enabled.
PhotonCamera getCamera()
double getMaxSightRangeMeters()
double getMinTargetAreaPercent()
double getMinTargetAreaPixels()
PhotonTargetSortMode getTargetSortMode()
org.opencv.core.Mat getVideoSimFrameRaw()
getVideoSimRaw()
PhotonPipelineResult process(double latencyMillis, Pose3d cameraPose, List<VisionTargetSim> targets)
void setMaxSightRange(double rangeMeters)
  Maximum distance at which the target is illuminated to your camera.
void setMinTargetAreaPercent(double areaPercent)
  The minimum percentage (0 - 100) of the camera's image a detected target must take up to be processed.
void setMinTargetAreaPixels(double areaPx)
  The minimum number of pixels a detected target must take up in the camera's image to be processed.
void setTargetSortMode(PhotonTargetSortMode sortMode)
  Defines the order the targets are sorted in the pipeline result.
void setWireframeResolution(double resolution)
  Sets the resolution of the drawn wireframe if enabled.
void submitProcessedFrame(PhotonPipelineResult result)
  Simulate one processed frame of vision data, putting one result to NT at the current timestamp.
void submitProcessedFrame(PhotonPipelineResult result, long receiveTimestamp)
  Simulate one processed frame of vision data, putting one result to NT.
Field Details

ts
protected NTTopicSet ts

prop
final SimCameraProperties prop
This simulated camera's SimCameraProperties.
Constructor Details

PhotonCameraSim
public PhotonCameraSim(PhotonCamera camera)
Constructs a handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.
This constructor's camera has a 90 deg FOV with no simulated lag!
By default, the minimum target area is 100 pixels and there is no maximum sight range.
Parameters:
  camera - The camera to be simulated
PhotonCameraSim
public PhotonCameraSim(PhotonCamera camera, SimCameraProperties prop)
Constructs a handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.
By default, the minimum target area is 100 pixels and there is no maximum sight range.
Parameters:
  camera - The camera to be simulated
  prop - Properties of this camera such as FOV and FPS
PhotonCameraSim
public PhotonCameraSim(PhotonCamera camera, SimCameraProperties prop, double minTargetAreaPercent, double maxSightRangeMeters)
Constructs a handle for simulating PhotonCamera values. Processing simulated targets through this class will change the associated PhotonCamera's results.
Parameters:
  camera - The camera to be simulated
  prop - Properties of this camera such as FOV and FPS
  minTargetAreaPercent - The minimum percentage (0 - 100) of the camera's image a detected target must take up to be processed. Match this with your contour filtering settings in the PhotonVision GUI.
  maxSightRangeMeters - Maximum distance at which the target is illuminated to your camera. Note that the minimum target area of the image is separate from this.
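A sketch of this constructor with explicit filtering values; the no-argument SimCameraProperties constructor is assumed, and the numbers are illustrative:

    // Reject targets covering less than 0.1% of the image or farther than 4 meters away.
    PhotonCameraSim cameraSim =
        new PhotonCameraSim(camera, new SimCameraProperties(), 0.1, 4.0);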
Method Details
close
public void close()
Specified by:
  close in interface AutoCloseable
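Since the class implements AutoCloseable, it can also be used with try-with-resources, for example in tests (sketch; the camera name is hypothetical):

    try (PhotonCameraSim sim = new PhotonCameraSim(new PhotonCamera("testCam"))) {
        // exercise the simulated camera here
    }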
getCamera
public PhotonCamera getCamera()
getMinTargetAreaPercent
public double getMinTargetAreaPercent()

getMinTargetAreaPixels
public double getMinTargetAreaPixels()

getMaxSightRangeMeters
public double getMaxSightRangeMeters()

getTargetSortMode
public PhotonTargetSortMode getTargetSortMode()

getVideoSimRaw

getVideoSimFrameRaw
public org.opencv.core.Mat getVideoSimFrameRaw()
canSeeTargetPose
public boolean canSeeTargetPose(Pose3d camPose, VisionTargetSim target)
Determines if this target's pose should be visible to the camera without considering its projected image points. Does not account for image area.
Parameters:
  camPose - Camera's 3d pose
  target - Vision target containing pose and shape
Returns:
  If this vision target can be seen before image projection.
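A sketch of a pre-projection visibility check, assuming a cameraSim constructed as in the class description. The VisionTargetSim constructor and TargetModel constant come from the wider simulation API and are assumptions here:

    Pose3d camPose = new Pose3d(1.0, 2.0, 0.5, new Rotation3d());
    // Assumed constructor: VisionTargetSim(Pose3d pose, TargetModel model, int fiducialId)
    VisionTargetSim tagTarget =
        new VisionTargetSim(new Pose3d(4.0, 2.0, 1.0, new Rotation3d()),
                            TargetModel.kAprilTag36h11, 7);
    boolean maybeVisible = cameraSim.canSeeTargetPose(camPose, tagTarget);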
canSeeCorners
public boolean canSeeCorners(org.opencv.core.Point[] points)
Determines if all target points are inside the camera's image.
Parameters:
  points - The target's 2d image points
consumeNextEntryTime
public Optional<Long> consumeNextEntryTime()
Determine if this camera should process a new frame based on performance metrics and the time since the last update. This returns an Optional which is either empty if no update should occur, or a Long of the timestamp in microseconds at which the frame should be received by NT. If a timestamp is returned, the last frame update time becomes that timestamp.
Returns:
  An Optional which is empty while blocked, or the NT entry timestamp in microseconds if ready
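A sketch of how a manual update loop might combine this method with process and submitProcessedFrame; normally a higher-level simulation class drives this, and latencyMillis, cameraPose, and targets are hypothetical locals:

    Optional<Long> timestamp = cameraSim.consumeNextEntryTime();
    if (timestamp.isPresent()) {
        // Only produce and publish a result when the camera is ready for a new frame.
        PhotonPipelineResult result = cameraSim.process(latencyMillis, cameraPose, targets);
        cameraSim.submitProcessedFrame(result, timestamp.get());
    }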
setMinTargetAreaPercent
public void setMinTargetAreaPercent(double areaPercent)
The minimum percentage (0 - 100) of the camera's image a detected target must take up to be processed.
setMinTargetAreaPixels
public void setMinTargetAreaPixels(double areaPx)
The minimum number of pixels a detected target must take up in the camera's image to be processed.
setMaxSightRange
public void setMaxSightRange(double rangeMeters)
Maximum distance at which the target is illuminated to your camera. Note that the minimum target area of the image is separate from this.
setTargetSortMode
public void setTargetSortMode(PhotonTargetSortMode sortMode)
Defines the order the targets are sorted in the pipeline result.
enableRawStream
public void enableRawStream(boolean enabled)
Sets whether the raw video stream simulation is enabled.
Note: This may increase loop times.
enableDrawWireframe
public void enableDrawWireframe(boolean enabled)
Sets whether a wireframe of the field is drawn to the raw video stream.
Note: This will dramatically increase loop times.
setWireframeResolution
public void setWireframeResolution(double resolution)
Sets the resolution of the drawn wireframe if enabled. Drawn line segments will be subdivided into smaller segments based on a threshold set by the resolution.
Parameters:
  resolution - Resolution as a fraction (0 - 1) of the video frame's diagonal length in pixels
enableProcessedStream
public void enableProcessedStream(boolean enabled)
Sets whether the processed video stream simulation is enabled.
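A sketch of enabling the simulated video streams and field wireframe, assuming a cameraSim constructed as in the class description (values illustrative):

    cameraSim.enableRawStream(true);
    cameraSim.enableProcessedStream(true);
    cameraSim.enableDrawWireframe(true);    // dramatically increases loop times
    cameraSim.setWireframeResolution(0.25); // fraction of the frame's diagonal length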
process
public PhotonPipelineResult process(double latencyMillis, Pose3d cameraPose, List<VisionTargetSim> targets)
submitProcessedFrame
public void submitProcessedFrame(PhotonPipelineResult result)
Simulate one processed frame of vision data, putting one result to NT at the current timestamp. Image capture time is assumed to be (current time - latency).
Parameters:
  result - The pipeline result to submit
submitProcessedFrame
public void submitProcessedFrame(PhotonPipelineResult result, long receiveTimestamp)
Simulate one processed frame of vision data, putting one result to NT. The image capture timestamp overrides getTimestampSeconds() for more precise latency simulation.
Parameters:
  result - The pipeline result to submit
  receiveTimestamp - The (sim) timestamp when this result was read by NT in microseconds