A Cross-View Image Matching Method with Feature Enhancement
Blog Article
Most cross-view image matching algorithms focus on designing high-performance network structures while ignoring the content information of the images. At the same time, ground-view and aerial-view images contain non-fixed targets such as cars, ships, and pedestrians, and differences in perspective, orientation, and scale seriously interfere with the cross-view matching process. This paper proposes a cross-view image matching method with feature enhancement. First, the aerial image is transformed to generate an image aligned with the ground-view image domain, establishing a preliminary geometric correspondence between the ground and aerial images.
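The article does not detail how the aerial image is aligned with the ground-view domain; a common choice in the cross-view matching literature is a polar transform that resamples the aerial image around its center so that azimuth maps to panorama columns and radius maps to rows. A minimal nearest-neighbor sketch with NumPy, assuming a square aerial image (the function name and output size are illustrative, not from the article):

```python
import numpy as np

def polar_transform(aerial, out_h=128, out_w=512):
    """Resample a square aerial image into a panorama-like polar layout.

    Output columns correspond to azimuth angles (a full 360-degree sweep,
    as in a ground panorama); output rows correspond to radii, with the
    image center at the bottom row.
    """
    s = aerial.shape[0]            # side length of the square aerial image
    radius = s / 2.0
    out = np.zeros((out_h, out_w) + aerial.shape[2:], dtype=aerial.dtype)
    for i in range(out_h):
        for j in range(out_w):
            theta = 2.0 * np.pi * j / out_w              # azimuth angle
            r = radius * (out_h - 1 - i) / (out_h - 1)   # radius (center at bottom)
            # nearest-neighbor sample from the aerial image
            x = int(round(radius + r * np.sin(theta)))
            y = int(round(radius - r * np.cos(theta)))
            if 0 <= x < s and 0 <= y < s:
                out[i, j] = aerial[y, x]
    return out
```

In practice a vectorized remap (e.g. OpenCV's polar warping) would be used instead of the explicit loops; the loops here just make the coordinate mapping explicit.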
Then, the rich feature information of the deep network and the edge information of the cross-convolution layer are used to establish feature correspondence between the ground and aerial images. A feature fusion module enhances the network model's tolerance to scale differences and mitigates the interference of transient non-fixed targets with matching performance. Finally, max pooling and feature aggregation strategies aggregate highly distinguishable local features into global features to complete accurate matching between ground and aerial images. Experimental results on the widely used public CVUSA dataset show that the proposed method achieves high accuracy, reaching 92.23%, 98.47%, and 99.74% on the top-1, top-5, and top-10 metrics, respectively. It also outperforms the original method on the dataset variants with a limited field of view and center-cropped images, better completing the cross-view image matching task.
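For reference, the top-k metrics quoted above are typically recall@k: each ground descriptor is matched against all aerial descriptors, and a query counts as correct if its true aerial counterpart ranks among the k most similar candidates. A minimal sketch with NumPy (the descriptor arrays and their layout, matching pairs sharing the same row index, are assumptions for illustration):

```python
import numpy as np

def recall_at_k(ground_desc, aerial_desc, ks=(1, 5, 10)):
    """Fraction of ground queries whose true aerial match (same row
    index) appears among the k most similar aerial descriptors."""
    # L2-normalize so the dot product equals cosine similarity
    g = ground_desc / np.linalg.norm(ground_desc, axis=1, keepdims=True)
    a = aerial_desc / np.linalg.norm(aerial_desc, axis=1, keepdims=True)
    sim = g @ a.T                    # (N, N) ground-to-aerial similarity
    # rank of the true match = number of candidates scored strictly higher
    true_sim = np.diag(sim)
    rank = (sim > true_sim[:, None]).sum(axis=1)
    return {k: float((rank < k).mean()) for k in ks}
```

With this convention, recall@1 corresponds to the 92.23% figure reported for the proposed method on CVUSA, and recall@5 and recall@10 to the 98.47% and 99.74% figures.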