opentl::modalities::Modality Class Reference

Inherits opentl::core::util::ParameterContainer.

Inherited by opentl::modalities::BackgroundSub, opentl::modalities::Blobs, opentl::modalities::ColourGMMGPU, opentl::modalities::ColourHist2D, opentl::modalities::ContourCCD, opentl::modalities::DataFusion, opentl::modalities::DummyModality, opentl::modalities::HarrisKeypoints, opentl::modalities::HistoOrientedGrad, opentl::modalities::HoughLines, opentl::modalities::IntensityEdges, opentl::modalities::Motion, opentl::modalities::OpticalFlow, opentl::modalities::SurfFeatures, and opentl::modalities::TemplateMap.


Public Types


Public Member Functions

virtual void addChild (modalities::Modality *childFeature, core::cvdata::T_LEVEL childOutputLevel)
 Add one child at a time to this class. Adding more than one child is currently only possible for data-fusion class instances. Takes the child feature pointer and the child output level for the measurement.
virtual modalities::Modality * clone () const =0
 Clone this class and all potential children recursively (deep copy).
virtual void getCamIdx (std::set< int > *outCamIdx)
 returns all used camera indexes of descendant children, including the camIdx of this class instance
int getCamIdx ()
 returns the camera index of this class instance
modalities::Modality * getChildModalityFeature (std::size_t m)
modalities::Modality * getChildModalityObject (std::size_t m)
modalities::Modality * getChildModalityPixel (std::size_t m)
 access to the internal modalities
unsigned int getFeatureProcessingScale ()
 returns the current resolution number for feature-level matching (0 = full resolution)
int getIterFeature ()
 Get the feature-level iteration counter.
int getIterObject ()
 Get the object-level iteration counter.
int getIterPixel ()
 Get the pixel-level iteration counter.
int getNofChildren () const
 Return overall number of children modalities.
unsigned int getPixelProcessingScale ()
 returns the current scale number for pixel-level matching
opentl::modelprojection::Warp * getWarp ()
 returns the pointer to the warp class
opentl::modelprojection::WarpBack * getWarpBack ()
 returns the pointer to the warp-back class
virtual void init ()
 Needed init() method for parameter handling.
virtual int matchFeatLevel (const TargetPtrVector &targets, T_MEAS_FEATPtrVector &outputMeas, std::size_t partitionIdx)
 Matching on feature level
  • match projected model features with detected image features.

virtual int matchObjLevel (const TargetPtrVector &targets, T_MEAS_OBJPtrVector &outputMeas, std::size_t partitionIdx)
 Matching on object level
  • compute a local, maximum-likelihood estimate of the pose (or possibly of the full state), using either pixel- or feature-level likelihood functions.

virtual int matchPixLevel (const TargetPtrVector &targets, T_MEAS_PIXPtrVector &outputMeas, std::size_t partitionIdx)
 Matching on pixel level
  • compare pixel-level image data with expected map, by projecting model data under a predicted state.

 Modality (const Modality &c)
 Abstract class - copy constructor.
 Modality (opentl::modelprojection::WarpBack *warpBack, int camIdx)
 Modality (opentl::modelprojection::Warp *warp, int camIdx)
 Constructor.
virtual int preProcess (const opentl::core::cvdata::Image &image, const std::vector< std::vector< int > > &preProcessROIs)
 Pre-processing. Model-independent processing operations on given sensor data.
void resetChildrenIter ()
 Reset the iteration counters of all children modalities (recursively).
void resetFeatureIter ()
 Reset the feature-level iteration counter.
void resetObjectIter ()
 Reset the object-level iteration counter.
void resetPixelIter ()
 Reset the pixel-level iteration counter.
virtual int sampleModelFeatures (const TargetPtrVector &targets)
 Sample visible model features M_off from the off-line ShapeAppearance model.
void setFeatureProcessingScale (unsigned int res)
 sets the current scale number for feature-level matching
void setPixelProcessingScale (unsigned int res)
 sets the current scale number for pixel-level matching
virtual int updateModelFeatures (const TargetPtrVector &targets)
 Update model features M_on, from the on-line image stream.
virtual ~Modality ()

Protected Member Functions

virtual int callChildrenMatches (const TargetPtrVector &targets, std::size_t partitionIdx)
 calls match() function of all children (added with addChild()) in a recursive way
virtual void cloneChildrenFrom (const modalities::Modality &feature)
 clones the children in a recursive way
virtual void setParent (Modality *parent)
 sets the parent of this class instance

Protected Attributes

int mCamIdx
 The sensor index related to this feature.
std::vector< std::pair< modalities::Modality *, T_MEAS_FEATPtrVector > > mChildModalityFeature
std::vector< std::pair< modalities::Modality *, T_MEAS_OBJPtrVector > > mChildModalityObject
std::vector< std::pair< modalities::Modality *, T_MEAS_PIXPtrVector > > mChildModalityPixel
 Holds pointers to the child features and the related measurement structures per target, allocated by the calling class instance. ATTENTION: only used by the calling Feature class -> the calling class instance allocates the structures, and the called class (matchXXXLevel(...)) writes into these structures.
bool mDeleteChildren
 Flag if children are owned by this instance => needed for deletion.
double mDtimestamp
 The timestamp related to D (D = Unassociated, pose-independent output of pre-processing).
unsigned int mFeatureProcessingScale
int mIterFeature
int mIterObject
int mIterPixel
 This is the current iteration number, which is reset for each new image and increased with each call to matchXLevel(). NOTE: The reset can be performed recursively for the whole processing tree by calling Likelihood::resetModalityIter(). This is currently done at the beginning of Tracker::correct(), where all calls to Likelihood::explicit/implicitModel() are made.
unsigned int mNofChildren
 Total number of children this feature is parent of.
Modality * mParent
 pointer to _ONE_ parent node (no cycle within tree/pipeline allowed!)
unsigned int mPixelProcessingScale
 Current resolution/scale for pixel- and feature-level processing (match); default = 0 (lowest resolution).
std::set< int > mRecusiveCamIndexes
 Holding all camera indexes for all descendant children (WITHOUT our own camIdx!).
opentl::modelprojection::Warp * mWarp
 Pointer to the global warp instance.
opentl::modelprojection::WarpBack * mWarpBack
 Pointer to the global back-warp instance.


Member Enumeration Documentation

Enumerator:
COMPUTE_Z 
COMPUTE_H 
COMPUTE_JAC 
COMPUTE_E 
COMPUTE_R 


Constructor & Destructor Documentation

opentl::modalities::Modality::Modality ( opentl::modelprojection::Warp *  warp,
int  camIdx 
)

Constructor.

opentl::modalities::Modality::Modality ( opentl::modelprojection::WarpBack *  warpBack,
int  camIdx 
)

virtual opentl::modalities::Modality::~Modality (  )  [virtual]

opentl::modalities::Modality::Modality ( const Modality &  c  ) 

Abstract class - copy constructor.


Member Function Documentation

virtual void opentl::modalities::Modality::addChild ( modalities::Modality *  childFeature,
core::cvdata::T_LEVEL  childOutputLevel 
) [virtual]

Add one child at a time to this class. Adding more than one child is currently only possible for data-fusion class instances. Takes the child feature pointer and the child output level for the measurement.
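
A minimal usage sketch. The header path and the choice of a DataFusion parent with ColourHist2D / IntensityEdges children are assumptions; the actual core::cvdata::T_LEVEL enumerator values are not listed on this page, so they are passed in as parameters rather than spelled out:

    #include <opentl/modalities/Modality.h>   // header path assumed

    // Attach two children to a fusion modality; per the note above, only
    // data-fusion instances may hold more than one child.
    void buildFusionTree(opentl::modalities::Modality* fusion,       // e.g. a DataFusion instance
                         opentl::modalities::Modality* colourChild,  // e.g. a ColourHist2D instance
                         opentl::core::cvdata::T_LEVEL colourLevel,  // level at which this child outputs measurements
                         opentl::modalities::Modality* edgeChild,    // e.g. an IntensityEdges instance
                         opentl::core::cvdata::T_LEVEL edgeLevel)
    {
        fusion->addChild(colourChild, colourLevel);
        fusion->addChild(edgeChild, edgeLevel);
    }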

virtual int opentl::modalities::Modality::callChildrenMatches ( const TargetPtrVector &  targets,
std::size_t  partitionIdx 
) [protected, virtual]

calls match() function of all children (added with addChild()) in a recursive way

virtual modalities::Modality* opentl::modalities::Modality::clone (  )  const [pure virtual]

virtual void opentl::modalities::Modality::cloneChildrenFrom ( const modalities::Modality &  feature  )  [protected, virtual]

clones the children in a recursive way

virtual void opentl::modalities::Modality::getCamIdx ( std::set< int > *  outCamIdx  )  [virtual]

returns all used camera indexes of descendant children, including the camIdx of this class instance

int opentl::modalities::Modality::getCamIdx (  )  [inline]

returns the camera index of this class instance

Reimplemented in opentl::modalities::ContourCCD.

modalities::Modality* opentl::modalities::Modality::getChildModalityFeature ( std::size_t  m  ) 

modalities::Modality* opentl::modalities::Modality::getChildModalityObject ( std::size_t  m  ) 

modalities::Modality* opentl::modalities::Modality::getChildModalityPixel ( std::size_t  m  ) 

access to the internal modalities

unsigned int opentl::modalities::Modality::getFeatureProcessingScale (  )  [inline]

returns the current resolution number for feature-level matching (0 = full resolution)

int opentl::modalities::Modality::getIterFeature (  )  [inline]

Get the feature-level iteration counter.

int opentl::modalities::Modality::getIterObject (  )  [inline]

Get the object-level iteration counter.

int opentl::modalities::Modality::getIterPixel (  )  [inline]

Get the pixel-level iteration counter.

int opentl::modalities::Modality::getNofChildren (  )  const [inline]

Return overall number of children modalities.

unsigned int opentl::modalities::Modality::getPixelProcessingScale (  )  [inline]

returns the current scale number for pixel-level matching

opentl::modelprojection::Warp* opentl::modalities::Modality::getWarp (  )  [inline]

returns the pointer to the warp class

opentl::modelprojection::WarpBack* opentl::modalities::Modality::getWarpBack (  )  [inline]

returns the pointer to the warp-back class

virtual void opentl::modalities::Modality::init (  )  [inline, virtual]

virtual int opentl::modalities::Modality::matchFeatLevel ( const TargetPtrVector &  targets,
T_MEAS_FEATPtrVector &  outputMeas,
std::size_t  partitionIdx 
) [inline, virtual]

Matching on feature level

  • match projected model features with detected image features.

virtual int opentl::modalities::Modality::matchObjLevel ( const TargetPtrVector &  targets,
T_MEAS_OBJPtrVector &  outputMeas,
std::size_t  partitionIdx 
) [inline, virtual]

Matching on object level

  • compute a local, maximum-likelihood estimate of the pose (or possibly of the full state), using either pixel- or feature-level likelihood functions.

Parameters:
targets Targets carrying the predicted states
outputMeas State-space measurement and residuals (expected state = prediction, observed state = ML estimate)

Reimplemented in opentl::modalities::ContourCCD, opentl::modalities::DataFusion, opentl::modalities::HarrisKeypoints, opentl::modalities::HistoOrientedGrad, opentl::modalities::OpticalFlow, opentl::modalities::SurfFeatures, and opentl::modalities::TemplateMap.

virtual int opentl::modalities::Modality::matchPixLevel ( const TargetPtrVector &  targets,
T_MEAS_PIXPtrVector &  outputMeas,
std::size_t  partitionIdx 
) [inline, virtual]

Matching on pixel level

  • compare pixel-level image data with expected map, by projecting model data under a predicted state.

Parameters:
targets Targets carrying the predicted states
outputMeas Pixel-space measurement (can be a pixel map of residuals, or a single residual value)

Reimplemented in opentl::modalities::BackgroundSub, opentl::modalities::ColourGMMGPU, opentl::modalities::ColourHist2D, opentl::modalities::DataFusion, opentl::modalities::DummyModality, opentl::modalities::IntensityEdges, opentl::modalities::Motion, and opentl::modalities::TemplateMap.

virtual int opentl::modalities::Modality::preProcess ( const opentl::core::cvdata::Image &  image,
const std::vector< std::vector< int > > &  preProcessROIs 
) [inline, virtual]

Pre-processing. Model-independent processing operations on given sensor data.

  • process the raw image (or, in multi-step pipelines, an already pre-processed image, e.g. a binary image) to obtain information suitable for matching, within a given ROI
  • examples: detect SURF key-points, edge detection, color space conversion, optical flow computation, ...
  • IMPORTANT: the output of the preProcess function must be stored as a static member (within a std::vector) in the derived class, in order to share the pre-processed data among multiple threads. The std::vector is indexed by the camera index.
  • IMPORTANT: the preProcess function can communicate with other functions via static variables ONLY -> especially important if an initialize() is called within preProcess()
    Parameters:
    image Input sensor data (e.g. camera image in RGB)
    preProcessROIs Regions of interest (x0,y0,width,height), per target

Reimplemented in opentl::modalities::BackgroundSub, opentl::modalities::ColourGMMGPU, opentl::modalities::ColourHist2D, opentl::modalities::ContourCCD, opentl::modalities::DummyModality, opentl::modalities::HarrisKeypoints, opentl::modalities::HistoOrientedGrad, opentl::modalities::HoughLines, opentl::modalities::IntensityEdges, opentl::modalities::Motion, opentl::modalities::OpticalFlow, opentl::modalities::SurfFeatures, and opentl::modalities::TemplateMap.
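
A sketch of a per-frame call. The header paths are assumed, how the camera image is acquired is outside this class, and the single ROI is purely illustrative:

    #include <vector>
    #include <opentl/modalities/Modality.h>       // header paths assumed
    #include <opentl/core/cvdata/Image.h>

    int preProcessFrame(opentl::modalities::Modality& modality,
                        const opentl::core::cvdata::Image& camImage)
    {
        // One ROI per target, each given as (x0, y0, width, height); here a
        // single hypothetical 100x80 region at (10, 20).
        std::vector<std::vector<int> > preProcessROIs;
        std::vector<int> roi;
        roi.push_back(10);   // x0
        roi.push_back(20);   // y0
        roi.push_back(100);  // width
        roi.push_back(80);   // height
        preProcessROIs.push_back(roi);

        // Model-independent processing (e.g. edge detection, colour conversion);
        // the derived class caches the result statically per camera index.
        return modality.preProcess(camImage, preProcessROIs);
    }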

void opentl::modalities::Modality::resetChildrenIter (  ) 

Reset the iteration counters of all children modalities (recursively).

void opentl::modalities::Modality::resetFeatureIter (  )  [inline]

Reset the feature-level iteration counter.

void opentl::modalities::Modality::resetObjectIter (  )  [inline]

Reset the object-level iteration counter.

void opentl::modalities::Modality::resetPixelIter (  )  [inline]

Reset the pixel-level iteration counter.
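
A sketch of resetting the counters at the start of a new image, mirroring (as an assumption) what Likelihood::resetModalityIter() does for the whole processing tree:

    #include <opentl/modalities/Modality.h>   // header path assumed

    void resetIterationCounters(opentl::modalities::Modality& root)
    {
        root.resetPixelIter();      // pixel-level counter
        root.resetFeatureIter();    // feature-level counter
        root.resetObjectIter();     // object-level counter
        root.resetChildrenIter();   // recurses into all children added via addChild()
    }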

virtual int opentl::modalities::Modality::sampleModelFeatures ( const TargetPtrVector &  targets  )  [virtual]

Sample visible model features M_off from the off-line ShapeAppearance model.

  • Compute visible parts (edges, surfaces, ...) of object model X, from camera Y, at PREDICTED pose P by rendering the complete scene (GLScene) from the given camera view

  • Sample good features for tracking (contour points, keypoints, color histograms, ...)

Q: How do we decide whether a single scene should consist of multiple rendered objectModels (i.e. the elements of the "states" vector represent different objectModels), in order to deal with (partial) object occlusion, or whether just a single object should be rendered?

A: All elements of the std::vector< boost::shared_ptr< core::State > >* states should be rendered into a single scene. E.g., if Kalman filters are used, the vector of states is produced by a "bank of Kalman filters" (one filter per target).

IMPORTANT: This operation is done at a predicted pose hypothesis, BEFORE matching

Reimplemented in opentl::modalities::ColourHist2D, opentl::modalities::ContourCCD, opentl::modalities::DataFusion, opentl::modalities::HarrisKeypoints, opentl::modalities::HistoOrientedGrad, opentl::modalities::HoughLines, opentl::modalities::IntensityEdges, opentl::modalities::OpticalFlow, opentl::modalities::SurfFeatures, and opentl::modalities::TemplateMap.
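
A sketch of the prediction-time ordering described above. The TargetPtrVector and T_MEAS_PIXPtrVector typedefs are assumed to come with the Modality header, the return-value convention is an assumption, and a real caller (e.g. the Likelihood class) pre-allocates the per-target measurement structures as noted in the member documentation:

    #include <opentl/modalities/Modality.h>   // header path assumed

    int sampleAndMatch(opentl::modalities::Modality& modality,
                       const TargetPtrVector& targets)   // targets carry the PREDICTED states
    {
        // 1) Render the scene at the predicted pose and sample visible model features M_off.
        int status = modality.sampleModelFeatures(targets);

        // 2) Match the projected model data against the pre-processed image data.
        T_MEAS_PIXPtrVector outputMeas;   // per-target measurement structures; normally
                                          // pre-allocated by the calling class (see member docs)
        modality.matchPixLevel(targets, outputMeas, /* partitionIdx = */ 0);
        return status;
    }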

void opentl::modalities::Modality::setFeatureProcessingScale ( unsigned int  res  )  [inline]

sets the current scale number for feature-level matching

virtual void opentl::modalities::Modality::setParent ( Modality parent  )  [protected, virtual]

sets the parent of this class instance

void opentl::modalities::Modality::setPixelProcessingScale ( unsigned int  res  )  [inline]

sets the current scale number for pixel-level matching
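
A coarse-to-fine sketch. Note that this page states both "0 = full resolution" and "default = 0 (lowest resolution)", so the loop direction below is an assumption to be checked against the implementation; the typedefs are assumed to come with the Modality header:

    #include <opentl/modalities/Modality.h>   // header path assumed

    void matchOverScales(opentl::modalities::Modality& modality,
                         const TargetPtrVector& targets,
                         T_MEAS_PIXPtrVector& outputMeas,
                         unsigned int nofScales)
    {
        // Iterate from the highest scale index down to 0.
        for (unsigned int s = nofScales; s-- > 0; )
        {
            modality.setPixelProcessingScale(s);   // select the pyramid level for pixel matching
            modality.matchPixLevel(targets, outputMeas, /* partitionIdx = */ 0);
        }
        modality.setPixelProcessingScale(0);       // restore the default scale
    }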

virtual int opentl::modalities::Modality::updateModelFeatures ( const TargetPtrVector &  targets  )  [virtual]

Update model features M_on, from the on-line image stream.

  • Take the estimated pose P_est (AFTER tracker correction)
  • Match at feature level (h,z) = color histograms, keypoints, etc.
  • Use z for updating the reference model database: z(P_est) -> M_on

IMPORTANT: This operation is done at the estimated pose, AFTER tracking (correction)

Reimplemented in opentl::modalities::ColourHist2D, opentl::modalities::DataFusion, opentl::modalities::HarrisKeypoints, opentl::modalities::HistoOrientedGrad, opentl::modalities::OpticalFlow, and opentl::modalities::TemplateMap.
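
A sketch of where this call sits within one tracking cycle (header paths assumed; the matching and tracker-correction steps in between are only hinted at):

    #include <vector>
    #include <opentl/modalities/Modality.h>       // header paths assumed
    #include <opentl/core/cvdata/Image.h>

    void trackingCycle(opentl::modalities::Modality& modality,
                       const opentl::core::cvdata::Image& image,
                       const std::vector<std::vector<int> >& rois,
                       const TargetPtrVector& targets)
    {
        modality.preProcess(image, rois);        // model-independent, once per frame
        modality.sampleModelFeatures(targets);   // at the PREDICTED pose, before matching
        // ... matchPixLevel/matchFeatLevel/matchObjLevel and the tracker correction run here ...
        modality.updateModelFeatures(targets);   // at the ESTIMATED pose, after correction
    }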


Member Data Documentation

int opentl::modalities::Modality::mCamIdx [protected]

The sensor index related to this feature.

std::vector< std::pair< modalities::Modality *, T_MEAS_FEATPtrVector > > opentl::modalities::Modality::mChildModalityFeature [protected]

std::vector< std::pair< modalities::Modality *, T_MEAS_OBJPtrVector > > opentl::modalities::Modality::mChildModalityObject [protected]

std::vector< std::pair< modalities::Modality *, T_MEAS_PIXPtrVector > > opentl::modalities::Modality::mChildModalityPixel [protected]

Holds pointers to the child features and the related measurement structures per target, allocated by the calling class instance. ATTENTION: only used by the calling Feature class -> the calling class instance allocates the structures, and the called class (matchXXXLevel(...)) writes into these structures.

Main index: modality-measurement data pair. Data pair:

1. Child modality (= lower level) class -> also works for multithreaded children
2. Vector of measurement data (first index = target)

bool opentl::modalities::Modality::mDeleteChildren [protected]

Flag if children are owned by this instance => needed for deletion.

double opentl::modalities::Modality::mDtimestamp [protected]

The timestamp related to D (D = unassociated, pose-independent output of pre-processing).

int opentl::modalities::Modality::mIterPixel [protected]

This is the current iteration number, which is reset for each new image and increased with each call to matchXLevel(). NOTE: The reset can be performed recursively for the whole processing tree by calling Likelihood::resetModalityIter(). This is currently done at the beginning of Tracker::correct(), where all calls to Likelihood::explicit/implicitModel() are made.

unsigned int opentl::modalities::Modality::mNofChildren [protected]

Total number of children this feature is parent of.

Modality* opentl::modalities::Modality::mParent [protected]

pointer to _ONE_ parent node (no cycle within tree/pipeline allowed!)

unsigned int opentl::modalities::Modality::mPixelProcessingScale [protected]

Current resolution/scale for pixel- and feature-level processing (match); default = 0 (lowest resolution).

std::set< int > opentl::modalities::Modality::mRecusiveCamIndexes [protected]

Holding all camera indexes for all descendant children (WITHOUT our own camIdx!).

opentl::modelprojection::Warp* opentl::modalities::Modality::mWarp [protected]

Pointer to the global warp instance.

opentl::modelprojection::WarpBack* opentl::modalities::Modality::mWarpBack [protected]

Pointer to the global back-warp instance.

