Find pattern in image

From: Chris (CHRISSS) 16 Mar 2019 15:39
To: Peter (BOUGHTONP) 3 of 22
I have images like this (squiggly lines added to test template matching):


And I was using this as the template:


OpenCV has feature detection which can detect features in one image and map them to features found in another image. This works great using that template with the top image, since the template is extracted from the source image. It detects the shape of each point in the template and finds them in the source image. I'd been using the same images from the start and assumed it was working properly until this week.
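Roughly the shape of what I'm doing, as a sketch (not my actual code - ORB features with brute-force matching here, fitting the affine from the three strongest matches; class and variable names are made up):

    import org.opencv.core.*;
    import org.opencv.features2d.*;
    import org.opencv.imgproc.Imgproc;
    import java.util.List;

    class AlignSketch {
        // Detect features in both images, match them, fit an affine transform
        // from the three strongest matches, and warp the source onto the template.
        static Mat alignToTemplate(Mat source, Mat template) {
            ORB orb = ORB.create();
            MatOfKeyPoint kpSrc = new MatOfKeyPoint(), kpTpl = new MatOfKeyPoint();
            Mat descSrc = new Mat(), descTpl = new Mat();
            orb.detectAndCompute(source, new Mat(), kpSrc, descSrc);
            orb.detectAndCompute(template, new Mat(), kpTpl, descTpl);

            DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
            MatOfDMatch matches = new MatOfDMatch();
            matcher.match(descSrc, descTpl, matches);

            // keep the three strongest matches and fit an affine transform through them
            List<DMatch> best = matches.toList();
            best.sort((a, b) -> Float.compare(a.distance, b.distance));
            Point[] src = new Point[3], dst = new Point[3];
            for (int i = 0; i < 3; i++) {
                src[i] = kpSrc.toList().get(best.get(i).queryIdx).pt;
                dst[i] = kpTpl.toList().get(best.get(i).trainIdx).pt;
            }
            Mat affine = Imgproc.getAffineTransform(new MatOfPoint2f(src), new MatOfPoint2f(dst));

            Mat aligned = new Mat();
            Imgproc.warpAffine(source, aligned, affine, template.size());
            return aligned;
        }
    }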



This can then be used with warpAffine to align the first image so the positions of each point line up in the same place as the template. If I use a different image the points don't match up properly and I get this:


So now I am trying something much more complicated and manual. Currently got as far as this - threshold image, detect contours, filter depending on size:


Nearly all the points are picked out as contours, but I need to find known points that I can match up to the template for the alignment.
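The gist of that step so far, as a rough sketch (the filename and the area thresholds are just placeholders to tune, and the inverted threshold assumes dark dots on a light background):

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;
    import java.util.ArrayList;
    import java.util.List;

    public class FindDots {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            // threshold the chart, find the contours, keep only blobs of a plausible
            // size, and use each blob's bounding-box centre as a candidate point
            Mat gray = Imgcodecs.imread("chart.png", Imgcodecs.IMREAD_GRAYSCALE);
            Mat bin = new Mat();
            Imgproc.threshold(gray, bin, 0, 255, Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);

            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(bin, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            List<Point> centres = new ArrayList<>();
            for (MatOfPoint c : contours) {
                double area = Imgproc.contourArea(c);
                if (area < 10 || area > 200) continue;   // size filter - tune for the dot size
                Rect r = Imgproc.boundingRect(c);
                centres.add(new Point(r.x + r.width / 2.0, r.y + r.height / 2.0));
            }
            System.out.println(centres.size() + " candidate points");
        }
    }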
EDITED: 16 Mar 2019 15:40 by CHRISSS
From: Peter (BOUGHTONP) 16 Mar 2019 16:52
To: Chris (CHRISSS) 4 of 22
Ok, I think I understand the issue now.

My (possibly naive) approach would be to extract coordinates of all dot positions from the template and work out the ratios from the centerpoint, e.g. the top two dots are 1 unit high and 0.836 to either side, the bottom outer dots are 1.62 and 1.11 units away from centre, and so on.

I'm assuming the axes are consistently present, so you can do edge detection and/or a separate template to detect where the centre point is - otherwise I'd probably start by looking in the cleanest corner (bottom left) and trying each dot found until enough of them match to identify where the centre would be.

Once you've got an anchor point (the centre) and have identified enough dots that match the template ratios, you can calculate how many pixels are in a unit and resize appropriately.

Dunno if that's the best approach, and it's also assuming OpenCV can actually extract the coords of dots, but seems reasonable enough?
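Very roughly what I mean, assuming the dot centres have already been extracted and the chart centre is known (untested, uses org.opencv.core.Point, and all the names are made up):

    // Try each dot as the "1 unit" reference; the reference whose ratios line up
    // with the most template ratios gives the pixels-per-unit scale.
    static double pixelsPerUnit(List<Point> dots, Point centre, double[] templateRatios, double tol) {
        double bestScale = 0;
        int bestHits = -1;
        for (Point ref : dots) {
            double unit = Math.hypot(ref.x - centre.x, ref.y - centre.y);
            if (unit == 0) continue;
            int hits = 0;
            for (Point p : dots) {
                double r = Math.hypot(p.x - centre.x, p.y - centre.y) / unit;
                for (double t : templateRatios) {
                    if (Math.abs(r - t) < tol) { hits++; break; }
                }
            }
            if (hits > bestHits) { bestHits = hits; bestScale = unit; }
        }
        return bestScale; // pixels per unit for the best-matching reference dot
    }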

From: Chris (CHRISSS) 16 Mar 2019 18:49
To: Peter (BOUGHTONP) 5 of 22
Well this is a project for work but I'm also using it for my dissertation. For work, apparently no, they don't always have the axis. For my dissertation I can make them all have it.

If I can work out the center of the chart bit it might be easier. Although you have given me an idea which might work. Start from each end of the list of points (yeah center coordinates are easy) and try and find a point at known distance/angle from them. Only problem is if the image is scaled more in one direction. I don't really know how similar they will be.

Thanks for the ideas. 4 months to sort it out.
From: Chris (CHRISSS) 16 Mar 2019 19:35
To: Chris (CHRISSS) 6 of 22
I tried to do something with 3 nested for loops. The contour detection picked up 1,000 points. So (in the voice of Dr. Evil) 1 billion combinations. And got an out of memory error.
From: Peter (BOUGHTONP) 16 Mar 2019 21:45
To: Chris (CHRISSS) 7 of 22
There aren't a million points in your contoured image, unless perhaps you're counting individual pixels?

Even so, a million pairs of numbers shouldn't use more than ~32MB, so you may be creating unnecessary variables and/or need to scope things so the GC knows when it can throw them away. (I just checked and the image you posted only has 0.61 million pixels in total, including the blacks, so something isn't right.)

Actually, the dots in your image are ~7px in size, and the gaps are bigger - you can resize down to ~100px wide without merging them, which means you can fit coords into a byte or two (so 1-2MB of memory for a million), and at the same time simplify the threshold/contour/filter process.
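E.g. something like this before the threshold step (a sketch; `gray` is the grayscale input from earlier, and INTER_AREA stops the dots vanishing when shrinking):

    // shrink to ~100px wide, keeping the aspect ratio, before threshold/contours
    double scale = 100.0 / gray.cols();
    Mat small = new Mat();
    Imgproc.resize(gray, small, new Size(), scale, scale, Imgproc.INTER_AREA);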

For the angle/scaling issue, you can use the axes - the ends should be at the same x/y - if they're not it needs rotating. Without axes... well assuming the squares are supposed to be square, you can probably still get OpenCV to detect rotation/skewing and do the appropriate calculations/transformation.

EDITED: 16 Mar 2019 21:46 by BOUGHTONP
From: Chris (CHRISSS) 17 Mar 2019 00:52
To: Peter (BOUGHTONP) 8 of 22
No, not 1,000,000 points in the image, only 1,000. Some are overlapping. A bit more filtering to get rid of them brought it down to 250ish.

But when it was 1000, iterating over all 1000 inside a loop that iterates over all 1000 inside a loop that iterates over all 1000... that is a billion right? Extremely inefficient way to try and find 4 corners on the chart.

Yeah I might focus on the axis for now. And if the width and height are always the same that could give me exactly what I want.

I don't think the detection of the shapes is good enough at that size. Besides, there aren't always any squares on an image.
From: Chris (CHRISSS) 17 Mar 2019 01:14
To: Chris (CHRISSS) 9 of 22
Or possibly Fourier transform.

From: Chris (CHRISSS) 17 Mar 2019 01:20
To: Chris (CHRISSS) 10 of 22
Or maybe not. I thought that found the center but doesn't look like it.

From: Peter (BOUGHTONP) 17 Mar 2019 20:11
To: Chris (CHRISSS) 11 of 22
Yeah, a thousand cubed is a billion, must have misread that, and I guess if you're cloning arrays in nested loops you could create enough variables to use that much memory.

Of course, any time you find yourself nesting loops in any scenario you should take a step back and check what you're doing, especially if it's more than two levels and using the same data - there'll almost certainly be a different approach worth considering. Pretty sure detecting corner/outermost points doesn't require any nesting.
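For example, the outermost dots fall out of a single pass over the list (a sketch, assuming image coordinates where y grows downwards):

    // min/max of x+y give the top-left-most and bottom-right-most points,
    // min/max of x-y give the other diagonal - no nested loops needed
    static Point[] extremes(List<Point> pts) {
        Point tl = pts.get(0), tr = pts.get(0), br = pts.get(0), bl = pts.get(0);
        for (Point p : pts) {
            if (p.x + p.y < tl.x + tl.y) tl = p;
            if (p.x + p.y > br.x + br.y) br = p;
            if (p.x - p.y > tr.x - tr.y) tr = p;
            if (p.x - p.y < bl.x - bl.y) bl = p;
        }
        return new Point[]{tl, tr, br, bl};
    }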

From: Chris (CHRISSS) 18 Mar 2019 22:08
To: Peter (BOUGHTONP) 12 of 22
Oh I know, three nested loops is not a good idea with that many elements. I have no idea what I am doing though  8-O
From: Chris (CHRISSS) 19 Mar 2019 20:40
To: ALL 13 of 22
I thought of one possible way around it. Just get them to upload an image of the chart without all the extra crap around it. I can easily (I think) find 4 points to match up the image then.
From: Peter (BOUGHTONP) 19 Mar 2019 22:56
To: Chris (CHRISSS) 14 of 22
Is "them" the staff in a single lab/whatever who might actually listen, or does it include various clients who will blatantly ignore your hand-held instructions the moment you turn your back and go raise an issue claiming there's a bug in what you did despite it being a completely unrelated error that wouldn't occur if they did things how they were told? :@
From: Chris (CHRISSS) 19 Mar 2019 23:04
To: Peter (BOUGHTONP) 15 of 22
Haha. Internal staff. Who are probably going to ignore my instructions and raise an issue despite not following instructions :D

I do have another plan though. Which I think would work up to a 9° rotation.

So. Iterate over points, pick 3, starting with the first two from the top left and the one from the bottom right. Find the affine transform between those and my set of known points. Check how many points match.

Repeat, selecting second and third from top left with bottom right. Keep going for a bit then move in from the bottom left one with 1 and 2 from top left...

A quick check on my sample image and it's only 3000ish combinations. Not sure how fast affine warping is. I think that would pick out 3 easy to find points.
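Roughly what I have in mind for scoring one candidate triple (a sketch - the list names are made up and the 3px tolerance is a guess):

    // Fit an affine from one triple of detected points onto three known template
    // points, map every detected point through it, and count how many land near
    // a template point. The triple with the highest count wins.
    static int scoreTriple(List<Point> detected, List<Point> template,
                           Point d1, Point d2, Point d3, Point t1, Point t2, Point t3) {
        Mat affine = Imgproc.getAffineTransform(
                new MatOfPoint2f(d1, d2, d3), new MatOfPoint2f(t1, t2, t3));

        MatOfPoint2f mapped = new MatOfPoint2f();
        Core.transform(new MatOfPoint2f(detected.toArray(new Point[0])), mapped, affine);

        int inliers = 0;
        for (Point p : mapped.toList()) {
            for (Point t : template) {
                if (Math.hypot(p.x - t.x, p.y - t.y) < 3.0) { inliers++; break; }
            }
        }
        return inliers;
    }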
From: Peter (BOUGHTONP) 19 Mar 2019 23:15
To: Chris (CHRISSS) 16 of 22
What's special about 9°?
From: Chris (CHRISSS) 19 Mar 2019 23:20
To: Peter (BOUGHTONP) 17 of 22
It's more like 12°. Due to the order the contour detection finds the points, it would pick up the top two first (well, last - it seems to work in reverse). Anything higher and the second point in the list would be the wrong point to match up to.

From: Chris (CHRISSS) 21 Mar 2019 14:47
To: ALL 18 of 22
I think I have found a solution that works  :-O~~~ Assuming that the contour detection manages to pick out the 3 specific points that are needed.

It's a lot more computationally... er, computational. Somewhere between 10,000 and 30,000 iterations for the tests I've tried. I could probably optimise it further - it's doing some tests where the points are not far enough apart, or where the 3 points are in a straight line.

I still need to do some more testing, but initial tests are looking good.
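The filter I'm thinking of for those wasted tests would be something like this (a sketch, thresholds to be tuned):

    // Skip triples whose points are too close together or (nearly) collinear -
    // both give an unstable or undefined affine transform.
    static boolean usableTriple(Point a, Point b, Point c, double minDist, double minArea) {
        if (Math.hypot(a.x - b.x, a.y - b.y) < minDist) return false;
        if (Math.hypot(a.x - c.x, a.y - c.y) < minDist) return false;
        if (Math.hypot(b.x - c.x, b.y - c.y) < minDist) return false;
        // twice the triangle area via the cross product; near zero means collinear
        double cross = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
        return Math.abs(cross) / 2.0 >= minArea;
    }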
From: Chris (CHRISSS) 21 Mar 2019 21:30
To: ALL 19 of 22
I think I found why I was running out of memory in the big loop. OpenCV. Maybe cos it's wrapping C++ in Java. You can call release() on some objects to free their memory.
From: Peter (BOUGHTONP) 21 Mar 2019 23:26
To: Chris (CHRISSS) 20 of 22
Try Java Mission Control and/or Eclipse Memory Analyzer - they help find objects causing memory leaks, or spot stuff like having too many instances of something you only expect to have one of.
From: Chris (CHRISSS) 23 Mar 2019 08:54
To: Peter (BOUGHTONP) 21 of 22
Eclipse!? No thanks :S What can I use with IntelliJ?

According to Stack Overflow it's because Java doesn't see the memory being used by the C++ classes so it doesn't know it's using as much memory as it is.

I tried another test, writing an image in a big loop and it crashed using too much memory. If I call release on the image before each loop finishes, the memory use doesn't go up and it works.
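Roughly the shape of that test (a sketch - the sizes and filenames here are made up):

    // OpenCV Mats hold native (C++) memory the JVM heap doesn't see, so release
    // them explicitly inside the loop instead of waiting for the garbage collector.
    for (int i = 0; i < 10000; i++) {
        Mat img = new Mat(1000, 1000, CvType.CV_8UC3, new Scalar(0, 0, 0));
        Imgcodecs.imwrite("out_" + i + ".png", img);
        img.release(); // frees the native buffer straight away
    }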
From: Peter (BOUGHTONP) 23 Mar 2019 15:41
To: Chris (CHRISSS) 22 of 22
You can point Eclipse MAT at any JVM, local or remote. You don't need to be developing with Eclipse JDT to use it - the default download is a standalone non-IDE version.

It can't be used to analyse C/C++, which isn't surprising since it'll be a separate process and different memory structure.

Might be able to use tools from NirSoft or SysInternals to do that, if necessary, but probably not to the same degree of detail/interactivity.

EDITED: 23 Mar 2019 15:43 by BOUGHTONP