Given the two point clouds:

```cpp
pcl::PointCloud<pcl::PointXYZ>::Ptr XPm1 (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ>::Ptr XPm2 (new pcl::PointCloud<pcl::PointXYZ>);
```

the following three ways to register the two point clouds by means of a **rigid transformation** are equivalent:

**1) Singular Value Decomposition-based estimation of the rigid transform**

```cpp
pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ> te;
Eigen::Matrix4f T;
te.estimateRigidTransformation (*XPm2, *XPm1, T);
```

**2) Explicit Levenberg-Marquardt-based estimation used in the ICP algorithm**

```cpp
pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputSource (XPm2);
icp.setInputTarget (XPm1);

// Replace the default SVD-based estimation with Levenberg-Marquardt
typedef pcl::registration::TransformationEstimationLM<pcl::PointXYZ, pcl::PointXYZ> te;
boost::shared_ptr<te> teLM (new te);
icp.setTransformationEstimation (teLM);

pcl::PointCloud<pcl::PointXYZ> Final;
icp.align (Final);

std::cout << "has converged: " << icp.hasConverged () << " score: " << icp.getFitnessScore () << std::endl;
std::cout << icp.getFinalTransformation () << std::endl;
T = icp.getFinalTransformation ();
```

**3) Implicit Levenberg-Marquardt-based estimation used in the ICP algorithm**

```cpp
// IterativeClosestPointNonLinear uses Levenberg-Marquardt optimization internally
pcl::IterativeClosestPointNonLinear<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputSource (XPm2);
icp.setInputTarget (XPm1);

pcl::PointCloud<pcl::PointXYZ> Final;
icp.align (Final);

std::cout << "has converged: " << icp.hasConverged () << " score: " << icp.getFitnessScore () << std::endl;
std::cout << icp.getFinalTransformation () << std::endl;
T = icp.getFinalTransformation ();
```


## Comments

I faced a problem with ICP; I hope you can help me as soon as possible.

I have a stereo vision sensor that produces a sequence of point clouds (XYZI). The sensor is mounted on a moving car.

According to the ICP algorithm:

[t0, pointcloud0] –(m0)–> [t1, pointcloud1] –(m1)–> [t2, pointcloud2] –(m2)–> [t3, pointcloud3] –(m3)–> [t4, pointcloud4] … and so on

where m0, m1, m2, … are the transformation matrices I get from icp.getFinalTransformation().

To compute the path from t0 to t5: if p0 is the sensor pose at t0, then to get the pose at t5 (m being a transformation matrix from icp.getFinalTransformation()):

p1 = m0 * p0

p2 = m1 * p1

p3 = m2 * p2

p4 = m3 * p3

p5 = m4 * p4

Is this solution correct? I always get a wrong pose when I compute the sensor pose at t5.

Thanks a lot

Hey Shadi,

it’s not easy to help without data. Does it work with t4? And t6?