Class OpenCVHelp
-
Constructor Summary
-
Method Summary
- static org.opencv.core.Point avgPoint(org.opencv.core.Point[] points)
- static org.opencv.core.Point[] cornersToPoints(List<TargetCorner> corners)
- static org.opencv.core.Point[] cornersToPoints(TargetCorner... corners)
- static List<org.opencv.core.Point> distortPoints(List<org.opencv.core.Point> pointsList, org.opencv.core.Mat cameraMatrix, org.opencv.core.Mat distCoeffs)
  Distort a list of points in pixels using the OPENCV5/8 models.
- static void forceLoadOpenCV()
- static org.opencv.core.Rect getBoundingRect(org.opencv.core.Point[] points)
  Gets the (upright) rectangle which bounds this contour.
- static org.opencv.core.Point[] getConvexHull(org.opencv.core.Point[] points)
  Gets the convex hull contour (the outline) of a list of points.
- static org.opencv.core.RotatedRect getMinAreaRect(org.opencv.core.Point[] points)
  Gets the rotated rectangle with minimum area which bounds this contour.
- static org.opencv.core.Mat matrixToMat(org.ejml.simple.SimpleMatrix matrix)
- matToMatrix(org.opencv.core.Mat mat)
- static List<TargetCorner> pointsToCorners(org.opencv.core.MatOfPoint2f matInput)
- static List<TargetCorner> pointsToCorners(org.opencv.core.Point... points)
- static org.opencv.core.Point[] projectPoints(Matrix<N3, N3> cameraMatrix, Matrix<N8, N1> distCoeffs, RotTrlTransform3d camRt, List<Translation3d> objectTranslations)
  Project object points from the 3d world into the 2d camera image.
- static <T> List<T> reorderCircular(List<T> elements, boolean backwards, int shiftStart)
  Reorders the list, optionally indexing backwards and wrapping around to the last element after the first, and shifting all indices in the direction of indexing.
- static org.opencv.core.MatOfPoint3f rotationToRvec(Rotation3d rotation)
  Creates a new MatOfPoint3f with this 3d rotation.
- static Rotation3d rvecToRotation(org.opencv.core.Mat rvecInput)
  Returns a 3d rotation from this Mat.
- static Optional<PnpResult> solvePNP_SQPNP(Matrix<N3, N3> cameraMatrix, Matrix<N8, N1> distCoeffs, List<Translation3d> objectTrls, org.opencv.core.Point[] imagePoints)
  Finds the transformation that maps the camera's pose to the origin of the supplied object.
- static Optional<PnpResult> solvePNP_SQUARE(Matrix<N3, N3> cameraMatrix, Matrix<N8, N1> distCoeffs, List<Translation3d> modelTrls, org.opencv.core.Point[] imagePoints)
  Finds the transformation(s) that map the camera's pose to the target's pose.
- static org.opencv.core.MatOfPoint3f translationToTvec(Translation3d... translations)
  Creates a new MatOfPoint3f with these 3d translations.
- static Translation3d tvecToTranslation(org.opencv.core.Mat tvecInput)
  Returns a new 3d translation from this Mat.
- static org.opencv.core.Point[] undistortPoints(Matrix<N3, N3> cameraMatrix, Matrix<N8, N1> distCoeffs, org.opencv.core.Point[] points)
  Undistort 2d image points using a given camera's intrinsics and distortion.
-
Constructor Details
-
OpenCVHelp
public OpenCVHelp()
-
-
Method Details
-
forceLoadOpenCV
public static void forceLoadOpenCV()
-
matrixToMat
public static org.opencv.core.Mat matrixToMat(org.ejml.simple.SimpleMatrix matrix)
-
matToMatrix
-
translationToTvec
public static org.opencv.core.MatOfPoint3f translationToTvec(Translation3d... translations)
Creates a new MatOfPoint3f with these 3d translations. The OpenCV tvec is a vector with three elements representing {x, y, z} in the EDN coordinate system.
- Parameters:
translations - The translations to convert into a MatOfPoint3f
-
tvecToTranslation
public static Translation3d tvecToTranslation(org.opencv.core.Mat tvecInput)
Returns a new 3d translation from this Mat. The OpenCV tvec is a vector with three elements representing {x, y, z} in the EDN coordinate system.
- Parameters:
tvecInput - The tvec to create a Translation3d from
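The EDN ordering mentioned here can be illustrated with plain doubles. This is a hedged sketch, assuming the usual EDN axes (x east/right, y down, z north/forward) versus NWU (x north, y west, z up); the helper name is hypothetical, and the library itself performs this conversion through WPILib's coordinate-system utilities rather than code like this.

```java
// Hypothetical sketch: reordering an NWU translation (x north, y west, z up)
// into the EDN {x, y, z} layout an OpenCV tvec uses.
public class EdnSketch {
    /** east = -west, down = -up, north = north. Helper name is illustrative only. */
    public static double[] nwuToEdn(double x, double y, double z) {
        return new double[] {-y, -z, x};
    }

    public static void main(String[] args) {
        // A point 1m forward (north) in NWU lands 1m "into the image" along EDN z.
        double[] edn = nwuToEdn(1.0, 0.0, 0.0);
        System.out.printf("tvec = {%.1f, %.1f, %.1f}%n", edn[0], edn[1], edn[2]);
    }
}
```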
-
rotationToRvec
public static org.opencv.core.MatOfPoint3f rotationToRvec(Rotation3d rotation)
Creates a new MatOfPoint3f with this 3d rotation. The OpenCV rvec Mat is a vector with three elements representing the axis scaled by the angle in the EDN coordinate system. (angle = norm, and axis = rvec / norm)
- Parameters:
rotation - The rotation to convert into a MatOfPoint3f
-
rvecToRotation
public static Rotation3d rvecToRotation(org.opencv.core.Mat rvecInput)
Returns a 3d rotation from this Mat. The OpenCV rvec Mat is a vector with three elements representing the axis scaled by the angle in the EDN coordinate system. (angle = norm, and axis = rvec / norm)
- Parameters:
rvecInput - The rvec to create a Rotation3d from
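The rvec encoding described here (angle = norm, axis = rvec / norm) can be checked with plain arithmetic. A minimal sketch of the math, not the library's Rotation3d-based conversion:

```java
public class RvecSketch {
    /** Scale a unit axis by the rotation angle, as an OpenCV rvec encodes it. */
    public static double[] toRvec(double ax, double ay, double az, double angleRad) {
        double norm = Math.sqrt(ax * ax + ay * ay + az * az);
        return new double[] {ax / norm * angleRad, ay / norm * angleRad, az / norm * angleRad};
    }

    /** Recover the angle: it is simply the norm of the rvec. */
    public static double angleOf(double[] rvec) {
        return Math.sqrt(rvec[0] * rvec[0] + rvec[1] * rvec[1] + rvec[2] * rvec[2]);
    }
}
```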
-
avgPoint
public static org.opencv.core.Point avgPoint(org.opencv.core.Point[] points)
-
cornersToPoints
-
cornersToPoints
-
pointsToCorners
-
pointsToCorners
-
reorderCircular
public static <T> List<T> reorderCircular(List<T> elements, boolean backwards, int shiftStart)
Reorders the list, optionally indexing backwards and wrapping around to the last element after the first, and shifting all indices in the direction of indexing.
e.g.
({1,2,3}, false, 1) == {2,3,1}
({1,2,3}, true, 0) == {1,3,2}
({1,2,3}, true, 1) == {3,2,1}
- Type Parameters:
T - Element type
- Parameters:
elements - list elements
backwards - If indexing should happen in reverse (0, size-1, size-2, ...)
shiftStart - How much the initial index should be shifted (instead of starting at index 0, start at shiftStart, negated if backwards)
- Returns:
Reordered list
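The examples above can be reproduced with a short standalone sketch. This is an assumed re-implementation for illustration, not the library source:

```java
import java.util.ArrayList;
import java.util.List;

public class ReorderSketch {
    /** Walk the list circularly, backwards if requested, starting from a shifted index. */
    public static <T> List<T> reorderCircular(List<T> elements, boolean backwards, int shiftStart) {
        int size = elements.size();
        int dir = backwards ? -1 : 1;
        List<T> reordered = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            // The shift is negated when indexing backwards; wrap with a positive modulus.
            int index = (((i + shiftStart) * dir) % size + size) % size;
            reordered.add(elements.get(index));
        }
        return reordered;
    }
}
```

Running it against the documented cases: ({1,2,3}, false, 1) yields {2,3,1} and ({1,2,3}, true, 1) yields {3,2,1}, matching the examples above.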
-
distortPoints
public static List<org.opencv.core.Point> distortPoints(List<org.opencv.core.Point> pointsList, org.opencv.core.Mat cameraMatrix, org.opencv.core.Mat distCoeffs)
Distort a list of points in pixels using the OPENCV5/8 models. See image-rotation.md or https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html for the math here.
- Parameters:
pointsList - the undistorted points
cameraMatrix - standard OpenCV camera mat
distCoeffs - standard OpenCV distortion coefficients. Must be OPENCV5 or OPENCV8
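The OPENCV5 model's math can be sketched in plain Java: normalize the pixel, apply radial (k1, k2, k3) and tangential (p1, p2) terms, and map back to pixels. The coefficient order {k1, k2, p1, p2, k3} follows OpenCV's convention; the helper itself is a hypothetical illustration rather than the library code.

```java
public class DistortSketch {
    /**
     * Distort one pixel using OPENCV5 coefficients {k1, k2, p1, p2, k3}.
     * fx, fy, cx, cy are the usual pinhole intrinsics.
     */
    public static double[] distortPixel(
            double u, double v, double fx, double fy, double cx, double cy, double[] d) {
        // Normalize to camera coordinates.
        double x = (u - cx) / fx;
        double y = (v - cy) / fy;
        double r2 = x * x + y * y;
        // Radial term: 1 + k1*r^2 + k2*r^4 + k3*r^6.
        double radial = 1 + d[0] * r2 + d[1] * r2 * r2 + d[4] * r2 * r2 * r2;
        // Tangential terms use p1 = d[2], p2 = d[3].
        double xd = x * radial + 2 * d[2] * x * y + d[3] * (r2 + 2 * x * x);
        double yd = y * radial + d[2] * (r2 + 2 * y * y) + 2 * d[3] * x * y;
        // Back to pixels.
        return new double[] {xd * fx + cx, yd * fy + cy};
    }
}
```

With all five coefficients zero, every pixel maps to itself, which is a quick sanity check on the model.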
-
projectPoints
public static org.opencv.core.Point[] projectPoints(Matrix<N3, N3> cameraMatrix, Matrix<N8, N1> distCoeffs, RotTrlTransform3d camRt, List<Translation3d> objectTranslations)
Project object points from the 3d world into the 2d camera image. The camera properties (intrinsics, distortion) determine the results of this projection.
- Parameters:
cameraMatrix - the camera intrinsics matrix in standard opencv form
distCoeffs - the camera distortion matrix in standard opencv form
camRt - The change in basis from world coordinates to camera coordinates. See RotTrlTransform3d.makeRelativeTo(Pose3d).
objectTranslations - The 3d points to be projected
- Returns:
The 2d points in pixels which correspond to the camera's image of the 3d points
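The core of this projection (ignoring distortion and the camRt change of basis) is the pinhole model: divide camera-frame coordinates by depth and scale by the intrinsics. A minimal sketch, assuming EDN-style camera coordinates (x right, y down, z forward):

```java
public class ProjectSketch {
    /** Project a camera-frame point (z forward) to a pixel with a pinhole model. */
    public static double[] project(
            double x, double y, double z, double fx, double fy, double cx, double cy) {
        return new double[] {fx * x / z + cx, fy * y / z + cy};
    }
}
```

A point straight ahead of the camera lands on the principal point (cx, cy); the real method additionally applies camRt and the distortion model through OpenCV.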
-
undistortPoints
public static org.opencv.core.Point[] undistortPoints(Matrix<N3, N3> cameraMatrix, Matrix<N8, N1> distCoeffs, org.opencv.core.Point[] points)
Undistort 2d image points using a given camera's intrinsics and distortion. 2d image points from projectPoints() will naturally be distorted, so this operation is important if the image points need to be directly used (e.g. 2d yaw/pitch).
- Parameters:
cameraMatrix - The camera intrinsics matrix in standard opencv form
distCoeffs - The camera distortion matrix in standard opencv form
points - The distorted image points
- Returns:
The undistorted image points
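Undistortion has no closed form, so OpenCV inverts the distortion model iteratively. A radial-only (k1) sketch of that fixed-point iteration on normalized coordinates, purely illustrative and much simpler than the real solver:

```java
public class UndistortSketch {
    /** Invert xd = x * (1 + k1 * r^2) by fixed-point iteration on normalized coords. */
    public static double[] undistortNormalized(double xd, double yd, double k1) {
        double x = xd, y = yd;
        for (int i = 0; i < 20; i++) {
            double r2 = x * x + y * y;
            double radial = 1 + k1 * r2;
            // Re-estimate the undistorted point using the current radius guess.
            x = xd / radial;
            y = yd / radial;
        }
        return new double[] {x, y};
    }
}
```

Distorting a point and then running it through this loop recovers the original to high precision when the distortion is small, which is the regime calibrated cameras are in.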
-
getBoundingRect
public static org.opencv.core.Rect getBoundingRect(org.opencv.core.Point[] points)
Gets the (upright) rectangle which bounds this contour. Note that rectangle size and position are stored with ints and do not have sub-pixel accuracy.
- Parameters:
points - The points to be bounded
- Returns:
Rectangle bounding the given points
-
getMinAreaRect
public static org.opencv.core.RotatedRect getMinAreaRect(org.opencv.core.Point[] points)
Gets the rotated rectangle with minimum area which bounds this contour. Note that rectangle size and position are stored with floats and have sub-pixel accuracy.
- Parameters:
points - The points to be bounded
- Returns:
Rotated rectangle bounding the given points
-
getConvexHull
public static org.opencv.core.Point[] getConvexHull(org.opencv.core.Point[] points)
Gets the convex hull contour (the outline) of a list of points.
- Parameters:
points - The input contour
- Returns:
The subset of points defining the convex hull. Note that these use ints and not floats.
-
solvePNP_SQUARE
public static Optional<PnpResult> solvePNP_SQUARE(Matrix<N3, N3> cameraMatrix, Matrix<N8, N1> distCoeffs, List<Translation3d> modelTrls, org.opencv.core.Point[] imagePoints)
Finds the transformation(s) that map the camera's pose to the target's pose. The camera's pose relative to the target is determined by the supplied 3d points of the target's model and their associated 2d points imaged by the camera. The supplied model translations must be relative to the target's pose.
For planar targets, there may be an alternate solution which is plausible given the 2d image points. This has an associated "ambiguity" which describes the ratio of reprojection error between the "best" and "alternate" solutions.
This method is intended for use with individual AprilTags, and will not work unless 4 points are provided.
- Parameters:
cameraMatrix - The camera intrinsics matrix in standard opencv form
distCoeffs - The camera distortion matrix in standard opencv form
modelTrls - The translations of the object corners. These should have the object pose as their origin. These must come in a specific, pose-relative order (in NWU):
- Point 0: [0, -squareLength / 2, squareLength / 2]
- Point 1: [0, squareLength / 2, squareLength / 2]
- Point 2: [0, squareLength / 2, -squareLength / 2]
- Point 3: [0, -squareLength / 2, -squareLength / 2]
imagePoints - The projection of these 3d object points into the 2d camera image. The order should match the given object point translations.
- Returns:
The resulting transformation that maps the camera pose to the target pose, and the ambiguity if an alternate solution is available.
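The pose-relative corner order above is simple arithmetic on the tag's side length. A sketch (class and method names hypothetical) producing the four NWU model translations in the documented order, as raw {x, y, z} triples rather than Translation3d objects:

```java
public class SquareModelSketch {
    /** Corner offsets for a square tag of the given side length, in the documented Point 0..3 order (NWU). */
    public static double[][] squareModel(double squareLength) {
        double s = squareLength / 2.0;
        return new double[][] {
            {0, -s,  s},  // Point 0
            {0,  s,  s},  // Point 1
            {0,  s, -s},  // Point 2
            {0, -s, -s},  // Point 3
        };
    }
}
```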
-
solvePNP_SQPNP
public static Optional<PnpResult> solvePNP_SQPNP(Matrix<N3, N3> cameraMatrix, Matrix<N8, N1> distCoeffs, List<Translation3d> objectTrls, org.opencv.core.Point[] imagePoints)
Finds the transformation that maps the camera's pose to the origin of the supplied object. An "object" is simply a set of known 3d translations that correspond to the given 2d points. If, for example, the object translations are given relative to the close-right corner of the blue alliance (the default origin), a camera-to-origin transformation is returned. If the translations are relative to a target's pose, a camera-to-target transformation is returned.
There must be at least 3 points to use this method. This does not return an alternate solution; if you are intending to use solvePNP on a single AprilTag, see solvePNP_SQUARE instead.
- Parameters:
cameraMatrix - The camera intrinsics matrix in standard opencv form
distCoeffs - The camera distortion matrix in standard opencv form
objectTrls - The translations of the object corners, relative to the field.
imagePoints - The projection of these 3d object points into the 2d camera image. The order should match the given object point translations.
- Returns:
The resulting transformation that maps the camera pose to the target pose. If the 3d model points are supplied relative to the origin, this transformation brings the camera to the origin.
-