opentl::modelprojection::Warp Class Reference

Class for calculating and executing the projection (extrinsic and intrinsic transformation) from object body to 2D camera (sensor) space. More...

Inherited by opentl::modelprojection::WarpBack.


Public Member Functions

void copyT2Vector (math::Transform &T, std::vector< double > &vec) const
 Copy T matrix to vector of doubles.
void copyVector2T (std::vector< double > &vec, math::Transform &T) const
 Copy vector of doubles to T matrix.
input::ImageSensor & getImageSensor (unsigned int sensorId=0) const
 Return a reference to an ImageSensor instance.
std::size_t getImageSensorCount () const
 Return the number of sensors/cameras accessible by the Warp class.
void getPoseCameraT (opentl::core::cvdata::Pose &objPose, math::Transform &camT, int cameraIdx=0, int linkIdx=0) const
 Gets the extrinsic transformation matrix T (object to sensor space); requires warpUpdate() to be run first.
void getPoseWorldT (opentl::core::cvdata::Pose &objPose, math::Transform &worldT, std::vector< math::Transform > *jacT=NULL, int linkIdx=0) const
 Gets the world transformation matrix T (object to world space); requires warpUpdate() to be run first.
void getScreenBoundingCircle (const boost::shared_ptr< opentl::core::State > &state, const opentl::models::ObjModel &objModel, double &x, double &y, double &radius, int camIdx, int partIdx=0)
 Calculate bounding circle (center and radius) of the object screen projection, at a given pose.
void getScreenBoundingLimits (const boost::shared_ptr< opentl::core::State > &state, const opentl::models::ObjModel &objModel, double &x, double &y, double &width, double &height, int camIdx, int partIdx=0)
 Calculate rectangular bounding limits in screen coordinates, at a given pose.
void getScreenBoundingLimitsOfFace (const boost::shared_ptr< opentl::core::State > &state, const opentl::models::ObjModel &objModel, unsigned int faceId, double &x, double &y, double &width, double &height, int camIdx, int partIdx=0)
 Calculate rectangular bounding limits of a given mesh polygon in screen coordinates, at a given pose.
void getScreenRotatedBoundingLimits (const boost::shared_ptr< opentl::core::State > &state, const opentl::models::ObjModel &objModel, double &x, double &y, double &width, double &height, double &angle, int camIdx, int partIdx=0)
 Calculate rotated bounding box of a given mesh in screen coordinates, at a given pose.
 Warp (std::vector< input::ImageSensor * > &sensors)
 Constructor, link the Warp to the respective object/sensor pair.
void warpPoint (opentl::core::cvdata::Pose &objPose, const double *objPoint, double *screenPoint, std::vector< double > *screenGradientX=NULL, std::vector< double > *screenGradientY=NULL, int cameraIdx=0, int linkIdx=0) const
 THIS IS FAST! Project (extrinsic+intrinsic transformation) an object body point to screen for a given object pose.
void warpPoint (opentl::core::cvdata::Pose &objPose, const math::Vector4 &objPoint, math::Vector2 &screenPoint, std::vector< math::Vector2 > *jacobian=NULL, int cameraIdx=0, int linkIdx=0) const
 Project (extrinsic+intrinsic transformation) an object body point to screen for a given object pose. This is a wrapper around the double-based warpPoint() version.
void warpPoints (opentl::core::cvdata::Pose &objPose, int nPoints, const std::vector< double > &objPointsX, const std::vector< double > &objPointsY, const std::vector< double > &objPointsZ, std::vector< double > &screenPointsX, std::vector< double > &screenPointsY, std::vector< std::vector< double > > *screenGradientsX=NULL, std::vector< std::vector< double > > *screenGradientsY=NULL, int cameraIdx=0, int linkIdx=0) const
 Project (extrinsic+intrinsic transformation) multiple points on screen.
void warpUpdate (opentl::core::State &state, int cameraIdx=-1) const
 Recalculate the world and/or camera transformation matrices T from the given target state and sensor models, possibly applying both (finite + compositional) pose updates.
void warpUpdate (std::vector< boost::shared_ptr< opentl::core::State > > &states, int cameraIdx=-1) const
 Update a vector of target states (internally, it calls warpUpdate() for each single state).
virtual ~Warp ()
 Destructor.

Protected Attributes

std::vector< input::ImageSensor * > & mSensors
 Pointers to the Sensor Models.


Detailed Description

Class for calculating and executing the projection (extrinsic and intrinsic transformation) from object body to 2D camera (sensor) space.

The derived class WarpBack performs the corresponding back-projection (extrinsic and intrinsic transformation) from 2D camera (sensor) space to 2D/3D object (body) space.

Author:
erwin.roth@weihenstephan.org

Constructor & Destructor Documentation

opentl::modelprojection::Warp::Warp ( std::vector< input::ImageSensor * > &  sensors  ) 

Constructor, link the Warp to the respective object/sensor pair.

Parameters:
sensors Vector of ImageSensor pointers.
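
Example (a minimal sketch; camLeft and camRight are hypothetical, pre-configured input::ImageSensor instances, and header includes are omitted):

    // camLeft and camRight are assumed to be existing input::ImageSensor objects
    // (their construction and calibration are not shown here).
    std::vector<opentl::input::ImageSensor*> sensors;
    sensors.push_back(&camLeft);
    sensors.push_back(&camRight);

    // The Warp stores a reference to this vector (see mSensors), so the vector
    // must outlive the Warp instance.
    opentl::modelprojection::Warp warp(sensors);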

virtual opentl::modelprojection::Warp::~Warp (  )  [virtual]

Destructor.


Member Function Documentation

void opentl::modelprojection::Warp::copyT2Vector ( math::Transform T,
std::vector< double > &  vec 
) const [inline]

Copy T matrix to vector of doubles.

void opentl::modelprojection::Warp::copyVector2T ( std::vector< double > &  vec,
math::Transform T 
) const [inline]

Copy vector of doubles to T matrix.
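
A round-trip sketch, assuming math::Transform is a 4x4 homogeneous matrix (16 doubles) and that warp and T already exist; the storage layout and whether copyT2Vector() resizes the vector are assumptions:

    // T is assumed to be an existing opentl::math::Transform, e.g. obtained
    // from getPoseCameraT() or getPoseWorldT().
    std::vector<double> flatT(16);   // assumed 4x4 homogeneous transform (16 values)
    warp.copyT2Vector(T, flatT);     // T matrix -> vector of doubles
    // ... store, transmit or modify flatT ...
    warp.copyVector2T(flatT, T);     // vector of doubles -> T matrix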

input::ImageSensor& opentl::modelprojection::Warp::getImageSensor ( unsigned int  sensorId = 0  )  const [inline]

Return a reference to an ImageSensor instance.

Parameters:
sensorId Index within warp sensor vector.

std::size_t opentl::modelprojection::Warp::getImageSensorCount (  )  const [inline]

Return the number of sensors/cameras accessible by the Warp class.

Returns:
Number of sensors/cameras.
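
Example sketch iterating over all sensors of an existing Warp instance named warp:

    for (unsigned int i = 0; i < warp.getImageSensorCount(); ++i)
    {
        // getImageSensor() returns a reference to the sensor at index i
        opentl::input::ImageSensor& sensor = warp.getImageSensor(i);
        // ... query the sensor, e.g. its calibration data ...
    }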

void opentl::modelprojection::Warp::getPoseCameraT ( opentl::core::cvdata::Pose objPose,
math::Transform camT,
int  cameraIdx = 0,
int  linkIdx = 0 
) const

Gets the extrinsic transformation matrix T (object to sensor space); requires warpUpdate() to be run first.

Parameters:
objPose Object pose for which you want to get the extrinsic T matrix.
camT Return value: extrinsic transformation matrix T (object to sensor space) of the specified link.
cameraIdx Get the transformation matrix for the camera with index cameraIdx.
linkIdx Optional, specifies the link index for articulated objectModels, default=0 (=base frame).
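
Example sketch (warp is an existing Warp instance and objPose an opentl::core::cvdata::Pose that has already been processed by warpUpdate(); default-constructing math::Transform is an assumption):

    opentl::math::Transform camT;
    warp.getPoseCameraT(objPose, camT, 0 /* cameraIdx */, 0 /* linkIdx */);
    // camT now holds the object-to-camera transformation of camera 0 (base frame)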

void opentl::modelprojection::Warp::getPoseWorldT ( opentl::core::cvdata::Pose objPose,
math::Transform worldT,
std::vector< math::Transform > *  jacT = NULL,
int  linkIdx = 0 
) const

Gets the world transformation matrix T (object to world space); requires warpUpdate() to be run first.

Parameters:
objPose Object pose for which you want to get the world T matrix.
worldT Return value: transformation matrix T (object to world space) of the specified link.
jacT Optional; if not NULL, it is filled with the Jacobian of the world T matrix.
linkIdx Optional, specifies the link index for articulated objectModels, default=0 (=base frame).
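
Example sketch (same assumptions as for getPoseCameraT() above):

    opentl::math::Transform worldT;
    std::vector<opentl::math::Transform> jacT;   // filled with the Jacobian of worldT
    warp.getPoseWorldT(objPose, worldT, &jacT, 0 /* linkIdx */);
    // worldT now holds the object-to-world transformation of the base frame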

void opentl::modelprojection::Warp::getScreenBoundingCircle ( const boost::shared_ptr< opentl::core::State > &  state,
const opentl::models::ObjModel objModel,
double &  x,
double &  y,
double &  radius,
int  camIdx,
int  partIdx = 0 
)

Calculate bounding circle (center and radius) of the object screen projection, at a given pose.

Parameters:
state State of the object
objModel Object model
x Circle center (x)
y Circle center (y)
radius Circle radius
camIdx Camera index
partIdx Link index
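
Example sketch (state and objModel are assumed to be an existing boost::shared_ptr<opentl::core::State> and opentl::models::ObjModel):

    double cx = 0.0, cy = 0.0, radius = 0.0;
    warp.getScreenBoundingCircle(state, objModel, cx, cy, radius, 0 /* camIdx */);
    // (cx, cy) and radius describe the bounding circle of the projection in camera 0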

void opentl::modelprojection::Warp::getScreenBoundingLimits ( const boost::shared_ptr< opentl::core::State > &  state,
const opentl::models::ObjModel objModel,
double &  x,
double &  y,
double &  width,
double &  height,
int  camIdx,
int  partIdx = 0 
)

Calculate rectangular bounding limits in screen coordinates, at a given pose.

Parameters:
state State of the object
objModel Object model
x upper-left corner (x)
y upper-left corner (y)
width Size in x-direction
height Size in y-direction
camIdx Camera index
partIdx Link index
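
Example sketch (same assumptions as for getScreenBoundingCircle() above):

    double x = 0.0, y = 0.0, width = 0.0, height = 0.0;
    warp.getScreenBoundingLimits(state, objModel, x, y, width, height, 0 /* camIdx */);
    // (x, y) is the upper-left corner, (width, height) the size of the screen bounding box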

void opentl::modelprojection::Warp::getScreenBoundingLimitsOfFace ( const boost::shared_ptr< opentl::core::State > &  state,
const opentl::models::ObjModel objModel,
unsigned int  faceId,
double &  x,
double &  y,
double &  width,
double &  height,
int  camIdx,
int  partIdx = 0 
)

Calculate rectangular bounding limits of a given mesh polygon in screen coordinates, at a given pose.

Parameters:
state State of the object
objModel Object model
faceId Single-face id
x upper-left corner (x)
y upper-left corner (y)
width Size in x-direction
height Size in y-direction
camIdx Camera index
partIdx Link index

void opentl::modelprojection::Warp::getScreenRotatedBoundingLimits ( const boost::shared_ptr< opentl::core::State > &  state,
const opentl::models::ObjModel objModel,
double &  x,
double &  y,
double &  width,
double &  height,
double &  angle,
int  camIdx,
int  partIdx = 0 
)

Calculate rotated bounding box of a given mesh in screen coordinates, at a given pose.

Parameters:
state State of the object
objModel Object model
x Output center of the bounding box (x)
y Output center of the bounding box (y)
width Output width of the bounding box
height Output height of the bounding box
angle Output rotation angle (w.r.t. the x axis)
camIdx Camera index
partIdx Link index
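
Example sketch (same assumptions as for getScreenBoundingCircle() above):

    double cx = 0.0, cy = 0.0, width = 0.0, height = 0.0, angle = 0.0;
    warp.getScreenRotatedBoundingLimits(state, objModel, cx, cy, width, height, angle, 0 /* camIdx */);
    // (cx, cy) is the box center, (width, height) its size, angle its rotation w.r.t. the x axis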

void opentl::modelprojection::Warp::warpPoint ( opentl::core::cvdata::Pose objPose,
const double *  objPoint,
double *  screenPoint,
std::vector< double > *  screenGradientX = NULL,
std::vector< double > *  screenGradientY = NULL,
int  cameraIdx = 0,
int  linkIdx = 0 
) const

THIS IS FAST! Project (extrinsic+intrinsic transformation) an object body point to screen for a given object pose.

Parameters:
objPose Reference to the related object pose.
objPoint 3D ObjModel body point in homogeneous coordinates (array of 4 doubles).
screenPoint 2D screen point.
screenGradientX Optional pointer to the screen gradient in x-direction.
screenGradientY Optional pointer to the screen gradient in y-direction.
cameraIdx camera index
linkIdx Optional, specifies the link index (for articulated models).
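
Example sketch (warp and objPose as above; the point coordinates are arbitrary illustration values):

    double objPoint[4]    = {0.1, 0.2, 0.3, 1.0};   // homogeneous body point
    double screenPoint[2] = {0.0, 0.0};
    warp.warpPoint(objPose, objPoint, screenPoint, NULL, NULL, 0 /* cameraIdx */);
    // screenPoint[0], screenPoint[1] now hold the pixel coordinates in camera 0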

void opentl::modelprojection::Warp::warpPoint ( opentl::core::cvdata::Pose objPose,
const math::Vector4 objPoint,
math::Vector2 screenPoint,
std::vector< math::Vector2 > *  jacobian = NULL,
int  cameraIdx = 0,
int  linkIdx = 0 
) const

Project (extrinsic+intrinsic transformation) an object body point to screen for a given object pose. This is a wrapper around the double-based warpPoint() version.

Parameters:
objPose Reference to the related object pose.
objPoint 3D ObjModel body point in homogeneous coordinates (Vector4).
screenPoint 2D screen point (Vector2).
jacobian Optional, returns the gradient in the projection plane with respect to the degrees of freedom (one STL vector entry per degree of freedom).
cameraIdx Optional, specifies the camera index.
linkIdx Optional, specifies the link index (for articulated models).
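
Example sketch (warp and objPose as above; the Vector4/Vector2 constructors used here are assumptions):

    opentl::math::Vector4 objPoint(0.1, 0.2, 0.3, 1.0);   // homogeneous body point
    opentl::math::Vector2 screenPoint;
    std::vector<opentl::math::Vector2> jacobian;           // one entry per degree of freedom
    warp.warpPoint(objPose, objPoint, screenPoint, &jacobian, 0 /* cameraIdx */);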

void opentl::modelprojection::Warp::warpPoints ( opentl::core::cvdata::Pose objPose,
int  nPoints,
const std::vector< double > &  objPointsX,
const std::vector< double > &  objPointsY,
const std::vector< double > &  objPointsZ,
std::vector< double > &  screenPointsX,
std::vector< double > &  screenPointsY,
std::vector< std::vector< double > > *  screenGradientsX = NULL,
std::vector< std::vector< double > > *  screenGradientsY = NULL,
int  cameraIdx = 0,
int  linkIdx = 0 
) const

Project (extrinsic+intrinsic transformation) multiple points on screen.

Parameters:
objPose Object pose to be used for warping the points.
nPoints Actual number of points to be processed (may be less than the dimension of input vectors).
objPointsX 3D ObjModel body points (x).
objPointsY 3D ObjModel body points (y).
objPointsZ 3D ObjModel body points (z).
screenPointsX 2D Screen points (x).
screenPointsY 2D Screen points (y).
screenGradientsX Optional, gradients of screen points (X coordinates), w.r.t. incremental degrees of freedom
screenGradientsY Optional, gradients of screen points (Y coordinates), w.r.t. incremental degrees of freedom
cameraIdx Optional, specifies the camera index.
linkIdx Optional, specifies the link index (for articulated models).
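
Example sketch projecting three body points at once (warp and objPose as above; whether the output vectors must be pre-sized to nPoints is an assumption):

    std::vector<double> objX(3), objY(3), objZ(3);
    objX[0] = 0.0; objY[0] = 0.0; objZ[0] = 0.0;
    objX[1] = 0.1; objY[1] = 0.0; objZ[1] = 0.0;
    objX[2] = 0.0; objY[2] = 0.1; objZ[2] = 0.0;

    std::vector<double> scrX(3), scrY(3);   // pre-sized output vectors (assumed)
    warp.warpPoints(objPose, 3, objX, objY, objZ, scrX, scrY,
                    NULL, NULL, 0 /* cameraIdx */);
    // (scrX[i], scrY[i]) is the projected screen position of point i in camera 0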

void opentl::modelprojection::Warp::warpUpdate ( opentl::core::State state,
int  cameraIdx = -1 
) const

Recalculate the world and/or camera transformation matrices T from the given target state and sensor models, possibly applying both (finite + compositional) pose updates.

Parameters:
state Single state to be updated.
cameraIdx Camera index (default = -1, update all camera T matrices and Jacobians).
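
Typical call order (a sketch; state is an assumed opentl::core::State describing the current target pose):

    warp.warpUpdate(state);   // cameraIdx = -1: update all camera T matrices and Jacobians
    // Subsequent calls such as getPoseCameraT(), getPoseWorldT() or warpPoint()
    // now use the updated transformation matrices.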

void opentl::modelprojection::Warp::warpUpdate ( std::vector< boost::shared_ptr< opentl::core::State > > &  states,
int  cameraIdx = -1 
) const

Update a vector of target states (internally, it calls warpUpdate() for each single state).

Parameters:
states Vector of states to be updated.
cameraIdx Camera index (default = -1, update all camera T matrices and Jacobians).


Member Data Documentation

std::vector< input::ImageSensor * >& opentl::modelprojection::Warp::mSensors [protected]

Pointers to the Sensor Models.

