-- A circle of radius 3 pixels, translating into a total of 16 pixels, is checked for sequential segments of pixels much brighter or much darker than the central one.
--
-- For a pixel p to be considered a feature, there must exist a sequential segment of arc_length pixels in the circle around it such that all are greater than (p + thr) or smaller than (p - thr).
--
-- After all features in the image are detected, if nonmax is true, non-maximal suppression is applied: each detected feature is checked against the features detected in its 8-neighborhood and discarded if its score is not maximal among them.
fast (Array fptr) thr (fromIntegral -> arc) (fromIntegral . fromEnum -> non) rat
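The segment test described above can be sketched in plain Haskell. This is an illustrative, hypothetical helper operating on plain lists of pixel values (not on 'Array', and not part of the binding): it looks for a run of at least arc_length consecutive circle pixels, wrapping around, that are all brighter than p + thr or all darker than p - thr.

```haskell
import Data.List (tails)

-- Decide whether a run of at least `arcLen` consecutive circle pixels is
-- entirely brighter than (p + thr) or entirely darker than (p - thr).
-- The 16-pixel circle wraps around, so we scan the list doubled.
segmentTest :: Double -> [Double] -> Double -> Int -> Bool
segmentTest p circle thr arcLen =
    hasRun (> p + thr) || hasRun (< p - thr)
  where
    doubled = circle ++ circle  -- handles runs that wrap past the last pixel
    hasRun cond =
      any (\w -> length w == arcLen && all cond w)
          (map (take arcLen) (tails doubled))
```
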
harris
  :: Array a
  -- ^ array containing a grayscale image (color images are not supported)
  -> Int
  -- ^ maximum number of corners to keep; only retains those with the highest Harris responses
  -> Float
  -- ^ minimum response in order for a corner to be retained; only used if max_corners = 0
  -> Float
  -- ^ the standard deviation of a circular window (its dimensions will be calculated according to the standard deviation); the covariation matrix will be calculated for a circular neighborhood of this standard deviation (only used when block_size == 0, must be >= 0.5f and <= 5.0f)
  -> Int
  -- ^ square window size; the covariation matrix will be calculated for a square neighborhood of this size (must be >= 3 and <= 31)
  -> Float
  -> Features
  -- ^ struct containing arrays for x and y coordinates and score (Harris response); the orientation and size arrays are set to 0 and 1, respectively, because Harris does not compute that information
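For reference, the Harris response mentioned in these docs is the standard Harris corner measure computed from the 2x2 covariation (structure) matrix summed over the window. A minimal sketch, with my own names and not part of this binding:

```haskell
-- Standard Harris corner measure for one pixel, from the entries of the
-- windowed 2x2 structure matrix
--   M = | sxx sxy |
--       | sxy syy |
-- with response R = det(M) - k * trace(M)^2; larger R means more corner-like.
harrisResponse :: Double -> Double -> Double -> Double -> Double
harrisResponse k sxx syy sxy =
  (sxx * syy - sxy * sxy) - k * (sxx + syy) ^ (2 :: Int)
```
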
-- | Extract ORB descriptors from FAST features that hold higher Harris responses. FAST does not compute orientation, thus the orientation of features is calculated using the intensity centroid. As FAST is also not multi-scale enabled, a multi-scale pyramid is calculated by downsampling the input image multiple times, followed by FAST feature detection on each scale.
orb
  :: Array a
  -- ^ 'Array' containing a grayscale image (color images are not supported)
  -> Float
  -- ^ FAST threshold for which a pixel of the circle around the central pixel is considered to be brighter or darker
  -> Int
  -- ^ maximum number of features to hold (will only keep the max_feat features with higher Harris responses)
  -> Float
  -- ^ factor by which to downsample the input image, meaning that each level will hold the prior level's dimensions divided by scl_fctr
  -> Int
  -- ^ number of levels to be computed for the image pyramid
  -> Bool
  -- ^ if true, blur the image with a Gaussian filter with sigma = 2 before computing descriptors, to increase robustness against noise
  -> (Features, Array a)
  -- ^ 'Features' struct composed of arrays for x and y coordinates, score, orientation and size of selected features
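The intensity-centroid orientation mentioned above can be sketched as follows. This is a hypothetical helper over a list of (relative coordinate, intensity) pairs, not the binding's internals: the orientation is the angle of the patch centroid, theta = atan2 m01 m10, with image moments m_pq = sum of x^p * y^q * I(x, y) over the patch.

```haskell
-- Orientation by intensity centroid. Coordinates are taken relative to
-- the patch centre; each element pairs a position with its intensity.
centroidOrientation :: [((Double, Double), Double)] -> Double
centroidOrientation patch = atan2 m01 m10
  where
    m10 = sum [x * i | ((x, _), i) <- patch]  -- first moment in x
    m01 = sum [y * i | ((_, y), i) <- patch]  -- first moment in y
```
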
-- | C Interface for SIFT feature detector and descriptor.
sift
  :: Array a
  -- ^ 'Array' containing a grayscale image (color images are not supported)
  -> Int
  -- ^ number of layers per octave; the number of octaves is computed automatically according to the input image dimensions, the original SIFT paper suggests 3
  -> Float
  -- ^ threshold used to filter out features that have low contrast, the original SIFT paper suggests 0.04
  -> Float
  -- ^ threshold used to filter out features that are too edge-like, the original SIFT paper suggests 10.0
  -> Float
  -- ^ the sigma value used to filter the input image at the first octave, the original SIFT paper suggests 1.6
  -> Bool
  -- ^ if true, the input image dimensions will be doubled and the doubled image will be used for the first octave
  -> Float
  -- ^ the inverse of the difference between the minimum and maximum grayscale intensity values, e.g. if the range is 0-256 the proper intensity_scale value is 1/256; if the range is 0-1 it is 1/1
  -> Float
  -- ^ maximum ratio of features to detect; the maximum number of features is calculated as feature_ratio * in.elements(). The maximum number of features is not based on the score; instead, features detected after the limit is reached are discarded
  -> (Features, Array a)
  -- ^ 'Features' object composed of arrays for x and y coordinates, score, orientation and size of selected features, and an Nx128 array containing the extracted descriptors, where N is the number of features found by SIFT
sift (Array fptr) (fromIntegral -> a) b c d (fromIntegral . fromEnum -> e) f g
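The edge-likeness filter controlled by the second Float above can be illustrated with the test from the original SIFT paper (a sketch with my own names; the binding delegates the actual filtering to the C library): a feature is rejected when the ratio of principal curvatures of the local 2x2 Hessian is too large, which is checked without computing eigenvalues.

```haskell
-- SIFT-style edge test: with r = edge_thr, a feature is too edge-like when
--   trace(H)^2 / det(H) >= (r + 1)^2 / r
-- for the 2x2 Hessian H = | dxx dxy |. A non-positive determinant is
--                         | dxy dyy |  rejected outright.
tooEdgeLike :: Double -> Double -> Double -> Double -> Bool
tooEdgeLike r dxx dyy dxy =
    det <= 0 || tr * tr / det >= (r + 1) ^ (2 :: Int) / r
  where
    tr  = dxx + dyy
    det = dxx * dyy - dxy * dxy
```
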
-- | C Interface for GLOH feature detector and descriptor.
gloh
  :: Array a
  -- ^ 'Array' containing a grayscale image (color images are not supported)
  -> Int
  -- ^ number of layers per octave; the number of octaves is computed automatically according to the input image dimensions, the original SIFT paper suggests 3
  -> Float
  -- ^ threshold used to filter out features that have low contrast, the original SIFT paper suggests 0.04
  -> Float
  -- ^ threshold used to filter out features that are too edge-like, the original SIFT paper suggests 10.0
  -> Float
  -- ^ the sigma value used to filter the input image at the first octave, the original SIFT paper suggests 1.6
  -> Bool
  -- ^ if true, the input image dimensions will be doubled and the doubled image will be used for the first octave
  -> Float
  -- ^ the inverse of the difference between the minimum and maximum grayscale intensity values, e.g. if the range is 0-256 the proper intensity_scale value is 1/256; if the range is 0-1 it is 1/1
  -> Float
  -- ^ maximum ratio of features to detect; the maximum number of features is calculated as feature_ratio * in.elements(). The maximum number of features is not based on the score; instead, features detected after the limit is reached are discarded
  -> (Features, Array a)
  -- ^ 'Features' object composed of arrays for x and y coordinates, score, orientation and size of selected features, and an Nx272 array containing the extracted GLOH descriptors, where N is the number of features found by SIFT
gloh (Array fptr) (fromIntegral -> a) b c d (fromIntegral . fromEnum -> e) f g
-- | Calculates Hamming distances between two 2-dimensional arrays containing features, one of the arrays containing the training data and the other the query data. One of the dimensions of both arrays must be equal between them, identifying the length of each feature. The other dimension indicates the total number of features in each of the training and query arrays. Two arrays are created as results, one containing the smallest N distances of the query array and another containing the indices of these distances in the training array. The resulting arrays have length equal to the number of features contained in the query array.
hammingMatcher
  :: Array a
  -- ^ is the 'Array' containing the data to be queried
  -> Array a
  -- ^ is the 'Array' containing the data used as training data
  -> Int
  -- ^ indicates the dimension to analyze for distance (the dimension indicated here must be of equal length for both query and train arrays)
  -> Int
  -- ^ is the number of smallest distances to return (currently, only 1 is supported)
  -> (Array a, Array a)
  -- ^ a pair of MxN arrays, where M is equal to the number of query features and N is equal to n_dist: the first holds, at position IxJ, the index in the train data array of the Jth smallest distance to the Ith query value; the second holds, at position IxJ, the Hamming distance of the Jth smallest distance to the Ith query value
hammingMatcher a b (fromIntegral -> x) (fromIntegral -> y)
  = op2p2 a b (\p c d e -> af_hamming_matcher p c d e x y)
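The matching logic can be sketched in plain Haskell for the n_dist = 1 case. This is an illustrative helper over lists of byte descriptors, with my own names, not the FFI path above: for each query feature it returns the index of the nearest training feature and the smallest Hamming distance.

```haskell
import Data.Bits (popCount, xor)
import Data.List (minimumBy)
import Data.Ord (comparing)
import Data.Word (Word8)

-- Hamming distance between two equal-length binary descriptors:
-- the number of differing bits, summed element-wise.
hamming :: [Word8] -> [Word8] -> Int
hamming q t = sum (zipWith (\a b -> popCount (a `xor` b)) q t)

-- For each query descriptor: (index of nearest train descriptor, distance).
matchAll :: [[Word8]] -> [[Word8]] -> [(Int, Int)]
matchAll query train =
  [ minimumBy (comparing snd) [ (j, hamming q t) | (j, t) <- zip [0 ..] train ]
  | q <- query ]
```
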
-- | Calculates nearest distances between two 2-dimensional arrays containing features, based on the type of distance computation chosen. Currently, AF_SAD (sum of absolute differences), AF_SSD (sum of squared differences) and AF_SHD (Hamming distance) are supported. One of the arrays contains the training data and the other the query data. One of the dimensions of both arrays must be equal between them, identifying the length of each feature. The other dimension indicates the total number of features in each of the training and query arrays. Two arrays are created as results, one containing the smallest N distances of the query array and another containing the indices of these distances in the training array. The resulting arrays have length equal to the number of features contained in the query array.
nearestNeighbor
  :: Array a
  -- ^ is the array containing the data to be queried
  -> Array a
  -- ^ is the array containing the data used as training data
  -> Int
  -- ^ indicates the dimension to analyze for distance (the dimension indicated here must be of equal length for both query and train arrays)
  -> Int
  -- ^ is the number of smallest distances to return (currently, only values <= 256 are supported)
  -> MatchType
  -- ^ is the distance computation type. Currently AF_SAD (sum of absolute differences), AF_SSD (sum of squared differences), and AF_SHD (Hamming distances) are supported
  -> (Array a, Array a)
  -- ^ a pair of MxN arrays, where M is equal to the number of query features and N is equal to n_dist: the first holds, at position IxJ, the index in the train data array of the Jth smallest distance to the Ith query value; the second holds, at position IxJ, the distance of the Jth smallest distance to the Ith query value, based on the dist_type chosen
nearestNeighbor a b (fromIntegral -> x) (fromIntegral -> y) (fromMatchType -> match)
  = op2p2 a b (\p c d e -> af_nearest_neighbour p c d e x y match)
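The three supported distance types can be sketched over plain integer lists (a hypothetical helper with my own `MatchKind` type, mirroring but not reusing the binding's MatchType):

```haskell
import Data.Bits (popCount, xor)

-- SAD: sum of absolute differences; SSD: sum of squared differences;
-- SHD: Hamming distance over the bits of each element.
data MatchKind = SAD | SSD | SHD

distance :: MatchKind -> [Int] -> [Int] -> Int
distance SAD q t = sum (zipWith (\a b -> abs (a - b)) q t)
distance SSD q t = sum (zipWith (\a b -> (a - b) ^ (2 :: Int)) q t)
distance SHD q t = sum (zipWith (\a b -> popCount (a `xor` b)) q t)
```
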
  -- ^ is the template we are looking for in the image
  -> MatchType
  -- ^ is the metric that should be used to calculate the disparity between the window in the image and the template image. It can be one of the values defined by the enum af_match_type
  -> Array a
  -- ^ will have disparity values for the window starting at the corresponding pixel position
matchTemplate a b (fromMatchType -> match)
  = op2 a b (\p c d -> af_match_template p c d match)
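The disparity computation can be illustrated on a 1-D "image" with the SAD metric (an illustrative helper with my own names, not the binding's implementation): each output position holds the disparity of the window starting there, and smaller values mean a better match.

```haskell
-- Disparity values for each window position of a 1-D signal, using
-- sum-of-absolute-differences against the template.
sadDisparity :: [Double] -> [Double] -> [Double]
sadDisparity image template =
    [ sum (zipWith (\a b -> abs (a - b)) window template)
    | window <- windows ]
  where
    n = length template
    windows = [ take n (drop i image) | i <- [0 .. length image - n] ]
```
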
-- | SUSAN corner detector.
--
-- SUSAN is an acronym standing for Smallest Univalue Segment Assimilating Nucleus. This method places a circular disc over the pixel to be tested (a.k.a. the nucleus) to compute the corner measure of that pixel. The region covered by the circular disc is M, and a pixel in this region is represented by m ∈ M, where m₀ is the nucleus. Every pixel in the region is compared to the nucleus using the following comparison function:
--
-- c(m) = exp(-((I(m) - I(m₀)) / t)^6)
--
-- where t is the brightness difference threshold and I is the brightness of the pixel.
--
-- The response of the SUSAN operator is given by the following equation:
--
-- R(M) = g - n(M) if n(M) < g, otherwise 0
--
-- where n(M) = Σ_{m ∈ M} c(m), g is named the geometric threshold and n is the number of pixels in the mask which are within t of the nucleus.
--
-- The importance of the parameters t and g is explained below:
--
-- t determines how similar points have to be to the nucleus before they are considered to be a part of the univalue segment
--
-- g determines the minimum size of the univalue segment. For a large enough g, the SUSAN operator becomes an edge detector.
susan
  :: Array a
  -- ^ is the input grayscale/intensity image
  -- …

susan (Array fptr) (fromIntegral -> a) b c d (fromIntegral -> e)
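The comparison function and response above can be written out directly (a sketch over plain brightness values, with my own names, not the binding's internals):

```haskell
-- SUSAN comparison of one mask pixel against the nucleus:
--   c(m) = exp (-((I(m) - I(m0)) / t)^6)
susanC :: Double -> Double -> Double -> Double
susanC t im im0 = exp (negate (((im - im0) / t) ^ (6 :: Int)))

-- USAN area n(M): sum of the comparison function over the mask.
usanArea :: Double -> Double -> [Double] -> Double
usanArea t im0 mask = sum [susanC t im im0 | im <- mask]

-- Corner response: R(M) = g - n(M) when n(M) < g, otherwise 0.
susanResponse :: Double -> Double -> Double
susanResponse g nM = if nM < g then g - nM else 0
```
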
-- Given an image, this function computes two differently smoothed versions of the input image using two different smoothing parameters, subtracts one from the other and returns the result.
--
dog
  -- …

dog a (fromIntegral -> x) (fromIntegral -> y) =
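The difference-of-Gaussians idea can be sketched on a 1-D signal (an illustrative helper with my own names; the two kernels below are small stand-ins for Gaussians of different widths, not what the library uses):

```haskell
-- Valid-mode 1-D convolution: one output per full window position.
convolve :: [Double] -> [Double] -> [Double]
convolve kernel xs =
    [ sum (zipWith (*) kernel window) | window <- windows ]
  where
    n = length kernel
    windows = [ take n (drop i xs) | i <- [0 .. length xs - n] ]

-- Smooth the signal with two different kernels and subtract.
dog1d :: [Double] -> [Double] -> [Double] -> [Double]
dog1d k1 k2 signal = zipWith (-) (convolve k1 signal) (convolve k2 signal)
```
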
-- | Homography Estimation.
--
-- Homography estimation finds a perspective transform between two sets of 2D points. Currently, two methods are supported for the estimation: RANSAC (RANdom SAmple Consensus) and LMedS (Least Median of Squares). Both methods work by randomly selecting a subset of 4 points from the set of source points, computing the eigenvectors of that set and finding the perspective transform. The process is repeated several times, at most the number of times given by the iterations argument for RANSAC (for the CPU backend usually fewer, depending on the quality of the dataset, but for the CUDA and OpenCL backends the transformation will be computed exactly the number of times passed via the iterations parameter). The returned value is the one that matches the largest number of inliers, which are all of the points that fall within a maximum L2 distance given by the inlier_thr argument.
--
-- For the LMedS case, the number of iterations is currently hardcoded to meet the following equation:
--
-- m = log(1 - P) / log(1 - (1 - ϵ)^p)
--
-- where P = 0.99, ϵ = 40% and p = 4.
homography
  :: forall a . AFType a
  => Array a
  -- …
  -> Int
  -- ^ maximum number of iterations when htype is AF_HOMOGRAPHY_RANSAC and the backend is CPU; if the backend is CUDA or OpenCL, iterations is the total number of iterations. An iteration is a selection of 4 random points for which the homography is estimated and evaluated for its number of inliers.
  -> (Int, Array a)
  -- ^ the number of inliers that the homography was estimated to comprise, and a 3x3 array containing the estimated homography. In the case that htype is AF_HOMOGRAPHY_RANSAC, a higher inlier_thr value will increase the estimated inliers. Note that if the number of inliers is too low, it is likely that a bad homography will be returned.
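The LMedS iteration count from the equation above can be computed directly (a sketch with my own name for the helper; the library hardcodes the result rather than exposing this):

```haskell
-- m = log (1 - P) / log (1 - (1 - eps)^p), rounded up, where P is the
-- desired confidence, eps the assumed outlier ratio and p the sample size.
lmedsIterations :: Double -> Double -> Int -> Int
lmedsIterations bigP eps p =
  ceiling (log (1 - bigP) / log (1 - (1 - eps) ^ p))
```

With the documented values P = 0.99, ϵ = 0.4 and p = 4 this comes out to 34 iterations.
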