I'm not sure how to do the comparison: the Mahalanobis distance comes out in the range 0-1 while the Euclidean distance is around 10^3, so how can I measure the improvement? I tried reconstruction and then computing the similarity.
I am working on a face recognition project and I am using libfacerec. When predicting labels the library uses norm(), which computes the absolute difference. How can I use the Mahalanobis distance to improve my accuracy? OpenCV 2 has a function, Mahalanobis():
which requires me to calculate icovar, using calcCovarMatrix():
but this function expects samples to be stored either as separate matrices or as rows/columns of a single matrix. I don't know how to provide data to this function, i.e. how to arrange the samples as separate matrices or as rows of a single matrix. Please help. This is the code I wish to change:
2 Answers
First of all, from my personal experience I can tell you that for PCA the distance metric doesn't really have any significant impact on the recognition rate. I know some papers report one, but I can't confirm it on my image databases. As for your question on how to calculate the Mahalanobis distance: there is a close relationship between PCA and the Mahalanobis distance; see 'Improving Eigenfaces' at http://www.cognotics.com/opencv/servo_2007_series/part_5/page_5.html, which is also given in [1]. Just for the sake of completeness, the project this post refers to is at: https://github.com/bytefish/libfacerec.
Code
Without any further testing, I would rewrite the cognotics example as:
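(The original code block did not survive here. As a hedged sketch of the relationship the links above describe, written in plain C++ rather than the actual libfacerec code: once two images are projected into the PCA subspace, the covariance is diagonal with the eigenvalues on the diagonal, so the Mahalanobis distance reduces to a Euclidean distance with each axis weighted by the inverse eigenvalue.)

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Mahalanobis distance between two PCA-projected vectors x and y,
// assuming the covariance in eigenspace is diag(eigenvalues):
//   d(x, y) = sqrt( sum_i (x_i - y_i)^2 / lambda_i )
double mahalanobisInEigenspace(const std::vector<double>& x,
                               const std::vector<double>& y,
                               const std::vector<double>& eigenvalues) {
    assert(x.size() == y.size() && x.size() == eigenvalues.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double d = x[i] - y[i];
        sum += d * d / eigenvalues[i];  // weight each axis by 1/lambda_i
    }
    return std::sqrt(sum);
}
```

With all eigenvalues equal to 1 this reduces to the ordinary Euclidean distance on the projections.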
References
[1] Moghaddam, B. and Pentland, A., 'Probabilistic Visual Learning for Object Representation', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7 (1997), pp. 696-710
The way I've been doing it for my assignments is to flatten each image. So if the image is 32x32, I reshape it to 1x1024.

Then I stack the two images I wish to compare into one array, so you now have a 2x1024 matrix.

I put that 2x1024 matrix into calcCovarMatrix() (using the COVAR_ROWS flag) to get the covariance matrix, then invert it with invert(). Note that with only two samples the covariance matrix is rank-deficient, so invert() needs the DECOMP_SVD flag to produce a pseudo-inverse.

Then you pass your two flattened images and the inverted covariance into Mahalanobis().
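The final Mahalanobis() call evaluates the quadratic form d = sqrt((x-y)^T * icovar * (x-y)). As a minimal plain-C++ sketch of that last step (the vector and matrix values in the test are made up for illustration, not taken from the answer):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Mahalanobis distance d = sqrt( (x - y)^T * icovar * (x - y) ),
// with icovar given as a row-major n x n inverse covariance matrix.
// This is the same quadratic form cv::Mahalanobis evaluates.
double mahalanobis(const std::vector<double>& x,
                   const std::vector<double>& y,
                   const std::vector<double>& icovar) {
    const std::size_t n = x.size();
    assert(y.size() == n && icovar.size() == n * n);
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            sum += (x[i] - y[i]) * icovar[i * n + j] * (x[j] - y[j]);
    return std::sqrt(sum);
}
```

With icovar set to the identity matrix this degenerates to the plain Euclidean distance.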
I'm using OpenCV to test the similarity between two images taken from the same environment.
I have a series of photos of the same moving environment. So, with A and B being two binary edge images of two sequential frames of this environment, I do the following:
The matrices all have the same type and the following values:
The mahalanobis distance function throws an error as following:
OpenCV Error: Assertion failed (type == v2.type() && type == icovar.type() && sz == v2.size() && len == icovar.rows && len == icovar.cols) in Mahalanobis, file /Users/felipefujioka/Documents/Developer/tg/opencv-3.0.0-beta/modules/core/src/matmul.cpp, line 2486
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/felipefujioka/Documents/Developer/tg/opencv-3.0.0-beta/modules/core/src/matmul.cpp:2486: error: (-215) type == v2.type() && type == icovar.type() && sz == v2.size() && len == icovar.rows && len == icovar.cols in function Mahalanobis
I'd appreciate knowing where I went wrong. Thanks in advance.
2 Answers
You mix up a with ma and b with mb in your code. Have you tried Mahalanobis(ma, mb, icovar)?
According to the docs: http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#double%20Mahalanobis%28InputArray%20v1,%20InputArray%20v2,%20InputArray%20icovar%29
A and B must be 1D arrays (vectors), not 2D matrices.
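A hedged sketch of that fix in plain C++ terms: flatten each H x W edge image into a single 1 x (H*W) row before computing the covariance and the distance (in OpenCV this would be mat.reshape(1, 1)).

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Flatten a row-major H x W image into a single 1 x (H*W) vector,
// the 1D shape that Mahalanobis() expects for its first two arguments.
std::vector<double> flatten(const std::vector<std::vector<double>>& image) {
    std::vector<double> row;
    for (const auto& r : image)
        row.insert(row.end(), r.begin(), r.end());
    return row;
}
```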