Image registration, the task of mapping one image to another, is fundamental in many imaging applications. Existing image registration methods are either feature-based or intensity-based. Feature-based methods first extract relevant image features and then find a geometrical transformation that best matches the two corresponding sets of features extracted from the two images. Intensity-based methods, by contrast, estimate the geometrical transformation directly from the observed image intensities of the two images. Most existing methods of both types assume that the mapping transformation has a parametric form or satisfies certain regularity conditions (e.g., it is a smooth function with continuous first- or higher-order derivatives), and they often estimate the mapping transformation globally by solving a single minimization/maximization problem. Such global smoothing methods usually cannot uncover the ill-posed nature of the image registration problem, namely, that the mapping transformation is not well defined at certain places, including places where the true image intensity surface is straight.
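To make the intensity-based, global-minimization idea concrete, the following sketch (a hypothetical illustration, not the thesis's method) estimates a pure-translation transformation between two images by exhaustively minimizing the sum of squared intensity differences over the overlapping region. Real intensity-based methods use richer transformation families and continuous optimization, but the structure of the objective is the same.

```python
import numpy as np

def register_translation_ssd(fixed, moving, max_shift=5):
    """Estimate an integer translation (dy, dx) aligning `moving` to `fixed`
    by minimizing the mean squared intensity difference over the overlap."""
    h, w = fixed.shape
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping windows of the two images under the shift (dy, dx)
            f = fixed[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            m = moving[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            ssd = np.mean((f - m) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

# Synthetic example: translate a smooth intensity surface and recover the shift.
y, x = np.mgrid[0:64, 0:64]
fixed = np.sin(x / 5.0) * np.cos(y / 7.0)
moving = np.roll(np.roll(fixed, -2, axis=0), 1, axis=1)  # moving[i,j] ~ fixed[i+2, j-1]
print(register_translation_ssd(fixed, moving))  # → (2, -1)
```

Note that if the intensity surface were constant (a "straight" region), every candidate shift would give the same objective value, which illustrates the ill-posedness discussed above.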
In this thesis, we suggest solving the image registration problem locally, by first studying the local properties of a mapping transformation. We introduce several concepts for describing such local properties and propose an intensity-based local smoothing method for estimating the geometrical transformation. We also develop a feature-based method built on the same concepts. Both theoretical and numerical studies show that our methods are effective in various applications.