OpenCV tutorial: Combine two overlapping video frames into one, matchTemplate example

Combine two overlapping frames by simple correlation

This tutorial uses the OpenCV matchTemplate function to combine two images into one.

Merge two Mat video frames

This morning I read a post about combining two overlapping video frames on StackOverflow.
The problem is to merge two images, or two video streams, together. I was thinking about the simplest solution that is not too naive, and I found a really simple approach that can be extended to a more complex one.

First of all, I took two color images with my phone, like the ones in the pictures. The program resizes both source images, selects rectangular regions from each, and extracts these ROI rectangles. The idea is to find the "best" overlapping Rect regions by normalized correlation and to combine the images at the place with maximal correspondence.
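Below is a minimal sketch of that core comparison for a single candidate overlap width (the fixed width of 100 pixels is just an assumption for the sketch; the full loop over candidate widths is in the complete example further down):

#include <iostream>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"

int main()
{
    // Two overlapping photos, left and right (same files as in the full example below)
    cv::Mat left = cv::imread("1.JPG", cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("2.JPG", cv::IMREAD_GRAYSCALE);
    cv::resize(left, left, cv::Size(800, 600));
    cv::resize(right, right, cv::Size(800, 600));

    int width = 100; // one candidate overlap width (assumed for this sketch)
    // Right strip of the left image and left strip of the right image
    cv::Mat stripLeft = left(cv::Rect(left.cols - width, 0, width, left.rows));
    cv::Mat stripRight = right(cv::Rect(0, 0, width, right.rows));

    // Normalized correlation of two equally sized strips gives a single score
    cv::Mat res;
    cv::matchTemplate(stripLeft, stripRight, res, cv::TM_CCOEFF_NORMED);
    std::cout << "correlation for width " << width << ": " << res.at<float>(0, 0) << std::endl;
    return 0;
}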



I know the solutions based on SIFT and SURF well, but they are a bit more laborious than this. This is not the best solution, but it is one of the simplest. If your cameras are fixed in a stable position relative to each other, it is a good solution. I was just holding my phone in my hand :)

You can also use this simple approach on video. The speed depends only on the number of rectangle candidates you want to compare, and you can improve it by a smarter selection of the regions to compare. A rough per-frame sketch follows below.
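As a rough sketch of such a per-frame loop (the camera indices, the 800x600 working size, and the choice to re-estimate the overlap width only every 30 frames are assumptions of this sketch, not part of the original code):

#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/videoio.hpp"

int main()
{
    cv::VideoCapture cam1(0), cam2(1); // assumed camera indices
    if (!cam1.isOpened() || !cam2.isOpened()) return 1;

    int bestWidth = 20; // last estimated overlap width
    int frameIdx = 0;

    for (;;) {
        cv::Mat f1, f2, g1, g2;
        if (!cam1.read(f1) || !cam2.read(f2)) break;
        cv::resize(f1, f1, cv::Size(800, 600));
        cv::resize(f2, f2, cv::Size(800, 600));
        cv::cvtColor(f1, g1, cv::COLOR_BGR2GRAY);
        cv::cvtColor(f2, g2, cv::COLOR_BGR2GRAY);

        // Re-estimate the overlap only every 30 frames (assumed interval);
        // reusing the previous width in between keeps the candidate search cheap.
        if (frameIdx % 30 == 0) {
            float best = -1.f;
            for (int i = 20; i <= g1.cols / 4; i++) {
                cv::Mat m1 = g1(cv::Rect(g1.cols - i, 0, i, g1.rows));
                cv::Mat m2 = g2(cv::Rect(0, 0, i, g2.rows));
                cv::Mat res;
                cv::matchTemplate(m1, m2, res, cv::TM_CCOEFF_NORMED);
                float score = res.at<float>(0, 0);
                if (score > best) { best = score; bestWidth = i; }
            }
        }

        // Merge the color frames using the estimated overlap width
        cv::Mat leftPart = f1(cv::Rect(0, 0, f1.cols - bestWidth, f1.rows));
        cv::Mat merged;
        cv::hconcat(leftPart, f2, merged);
        cv::imshow("Merged", merged);
        frameIdx++;
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}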

Overlapping regions by optical flow

I am also thinking about the idea of using optical flow: put the images taken by the cameras at the same moment into a sequence one behind the other, extract good features to track from the possible overlapping region of one image, and find them in the corresponding region of the second image. A sketch of this idea follows below.
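A minimal sketch of that idea, assuming the same two still images as above and using goodFeaturesToTrack with pyramidal Lucas-Kanade (calcOpticalFlowPyrLK); whether Lucas-Kanade can follow a displacement this large depends on the actual overlap, so take it as an illustration only:

#include <iostream>
#include <vector>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/video/tracking.hpp"

int main()
{
    // Assumed example inputs: the same two overlapping images as above
    cv::Mat g1 = cv::imread("1.JPG", cv::IMREAD_GRAYSCALE);
    cv::Mat g2 = cv::imread("2.JPG", cv::IMREAD_GRAYSCALE);
    cv::resize(g1, g1, cv::Size(800, 600));
    cv::resize(g2, g2, cv::Size(800, 600));

    // Good features to track inside the possible overlap (right quarter of the first image)
    cv::Rect overlap(g1.cols * 3 / 4, 0, g1.cols / 4, g1.rows);
    std::vector<cv::Point2f> pts1;
    cv::goodFeaturesToTrack(g1(overlap), pts1, 100, 0.01, 10);
    // Shift the points back into full-image coordinates
    for (auto &p : pts1) p.x += overlap.x;

    // Find the same features in the second image by pyramidal Lucas-Kanade
    std::vector<cv::Point2f> pts2;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(g1, g2, pts1, pts2, status, err);

    // The average horizontal displacement of the tracked points gives a rough
    // estimate of how far the second image is shifted, i.e. the overlap.
    double shift = 0; int n = 0;
    for (size_t i = 0; i < pts1.size(); i++) {
        if (status[i]) { shift += pts1[i].x - pts2[i].x; n++; }
    }
    if (n > 0)
        std::cout << "estimated overlap shift: " << shift / n << " px" << std::endl;
    return 0;
}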

OpenCV matchTemplate code example

#include <fstream>
#include <iostream>
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/videoio.hpp"

using namespace cv;
using namespace std;

int main(int argc, const char** argv)
{
    Mat OneCamInput;
    Mat SecondCamInput;
    // Load and resize the source images (or video frames)
    OneCamInput = imread("1.JPG");
    SecondCamInput = imread("2.JPG");

    resize(OneCamInput, OneCamInput, Size(800, 600));
    resize(SecondCamInput, SecondCamInput, Size(800, 600));

    // Show both input images
    imshow("input1", OneCamInput);
    waitKey(1000);
    imshow("input2", SecondCamInput);
    waitKey(1000);

    // Convert both to gray images
    cvtColor(OneCamInput, OneCamInput, COLOR_BGR2GRAY);
    cvtColor(SecondCamInput, SecondCamInput, COLOR_BGR2GRAY);

    // Prepare Mat for the single matchTemplate result value
    Mat res(1, 1, CV_32F);

    // Prepare values for the maximal correspondence
    float resMax = 0;
    Rect RectOver1;
    Rect RectOver2;
    int iRes = 0;

    // Loop over compared rectangles of different widths
    for (int i = 20; i <= OneCamInput.cols / 4; i = i + 1) {
        // Extract rectangles from both source images:
        // the right strip of the first image and the left strip of the second
        Mat M1 = OneCamInput(Rect(OneCamInput.cols - i, 0, i, OneCamInput.rows));
        Mat M2 = SecondCamInput(Rect(0, 0, i, SecondCamInput.rows));
        imshow("Overlap Rect1", M1);
        waitKey(10);
        imshow("Overlap Rect2", M2);
        waitKey(10);
        // Measure how similar the selected rectangles are
        matchTemplate(M1, M2, res, TM_CCOEFF_NORMED);
        // Read the single float value from the 1x1 CV_32F result
        float resFloat = res.at<float>(0, 0);
        // Save the maximal correspondence
        if (resFloat >= resMax) {
            resMax = resFloat;
            cout << res << endl;
            iRes = i;
            RectOver1 = Rect(OneCamInput.cols - i, 0, i, OneCamInput.rows);
            RectOver2 = Rect(0, 0, i, SecondCamInput.rows);
        }
    }
    Mat HM;
    // Crop the original images at the border of maximal correspondence
    Mat On1Res = OneCamInput(Rect(0, 0, OneCamInput.cols - iRes - RectOver1.width / 6, OneCamInput.rows));
    Mat On2Res = SecondCamInput(Rect(0, 0, SecondCamInput.cols - iRes, OneCamInput.rows));
    // Horizontal merge of these two images
    hconcat(On1Res, On2Res, HM);
    imshow("Result", HM);
    waitKey(1000);
    imwrite("Result.jpg", HM);
    return 0;
}









