estimate_sharpness.py 11 KB

"""
Copyright (c) 2009-2010 Arizona Board of Regents. All Rights Reserved.
Contact: Lina Karam (karam@asu.edu) and Niranjan Narvekar (nnarveka@asu.edu)
Image, Video, and Usability (IVU) Lab, http://ivulab.asu.edu , Arizona State University

This copyright statement may not be removed from any file containing it or from modifications to these files.
This copyright notice must also be included in any file or product that is derived from the source files.

Redistribution and use of this code in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- The Image, Video, and Usability Laboratory (IVU Lab, http://ivulab.asu.edu) is acknowledged in any publication that reports research results using this code, copies of this code, or modifications of this code.

The code and our papers are to be cited in the bibliography as:

N. D. Narvekar and L. J. Karam, "CPBD Sharpness Metric Software", http://ivulab.asu.edu/Quality/CPBD

N. D. Narvekar and L. J. Karam, "A No-Reference Image Blur Metric Based on the Cumulative Probability of Blur Detection (CPBD)," accepted and to appear in the IEEE Transactions on Image Processing, 2011.

N. D. Narvekar and L. J. Karam, "An Improved No-Reference Sharpness Metric Based on the Probability of Blur Detection," International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM), January 2010, http://www.vpqm.org (pdf)

N. D. Narvekar and L. J. Karam, "A No Reference Perceptual Quality Metric based on Cumulative Probability of Blur Detection," First International Workshop on the Quality of Multimedia Experience (QoMEX), pp. 87-91, July 2009.

DISCLAIMER:
This software is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the Arizona Board of Regents, Arizona State University, IVU Lab members, authors or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.
"""
import numpy as np
import cv2
from math import atan2, pi


def sobel(image):
    # type: (numpy.ndarray) -> numpy.ndarray
    """
    Find edges using the Sobel approximation to the derivatives.

    Inspired by the [Octave implementation](https://sourceforge.net/p/octave/image/ci/default/tree/inst/edge.m#l196).
    """
    from skimage.filters.edges import HSOBEL_WEIGHTS
    h1 = np.array(HSOBEL_WEIGHTS)
    h1 /= np.sum(abs(h1))  # normalize h1

    from scipy.ndimage import convolve
    strength2 = np.square(convolve(image, h1.T))

    # Note: https://sourceforge.net/p/octave/image/ci/default/tree/inst/edge.m#l59
    thresh2 = 2 * np.sqrt(np.mean(strength2))

    strength2[strength2 <= thresh2] = 0
    return _simple_thinning(strength2)
def _simple_thinning(strength):
    # type: (numpy.ndarray) -> numpy.ndarray
    """
    Perform a very simple thinning.

    Inspired by the [Octave implementation](https://sourceforge.net/p/octave/image/ci/default/tree/inst/edge.m#l512).
    """
    num_rows, num_cols = strength.shape

    zero_column = np.zeros((num_rows, 1))
    zero_row = np.zeros((1, num_cols))

    x = (
        (strength > np.c_[zero_column, strength[:, :-1]]) &
        (strength > np.c_[strength[:, 1:], zero_column])
    )

    y = (
        (strength > np.r_[zero_row, strength[:-1, :]]) &
        (strength > np.r_[strength[1:, :], zero_row])
    )

    return x | y
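To make the behaviour of `_simple_thinning` concrete, here is a small self-contained check (the function is reproduced verbatim so the snippet runs on its own): a pixel survives thinning only if it is a strict local maximum against both of its horizontal neighbours or both of its vertical neighbours, so a one-pixel-wide vertical ridge is kept while its flat surroundings are suppressed.

```python
import numpy as np


def _simple_thinning(strength):
    """Keep pixels that are strict horizontal or vertical local maxima."""
    num_rows, num_cols = strength.shape
    zero_column = np.zeros((num_rows, 1))
    zero_row = np.zeros((1, num_cols))
    x = (
        (strength > np.c_[zero_column, strength[:, :-1]]) &
        (strength > np.c_[strength[:, 1:], zero_column])
    )
    y = (
        (strength > np.r_[zero_row, strength[:-1, :]]) &
        (strength > np.r_[strength[1:, :], zero_row])
    )
    return x | y


# a one-pixel-wide vertical ridge of squared gradient strength
ridge = np.array([[0., 2., 0.],
                  [0., 2., 0.],
                  [0., 2., 0.]])
thinned = _simple_thinning(ridge)
print(thinned[:, 1].all(), thinned[:, 0].any())  # -> True False
```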
# threshold to characterize blocks as edge/non-edge blocks
THRESHOLD = 0.002
# fitting parameter
BETA = 3.6
# block size
BLOCK_HEIGHT, BLOCK_WIDTH = (64, 64)
# just noticeable widths based on the perceptual experiments
WIDTH_JNB = np.concatenate([5 * np.ones(51), 3 * np.ones(205)])
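The `WIDTH_JNB` table maps a block's contrast (an 8-bit value, 0-255) to its just-noticeable blur width: 5 pixels for low-contrast blocks (contrast up to 50) and 3 pixels for everything above. A quick sanity check of the table's shape and lookup:

```python
import numpy as np

# just noticeable widths indexed by block contrast, as defined above
WIDTH_JNB = np.concatenate([5 * np.ones(51), 3 * np.ones(205)])

print(len(WIDTH_JNB))   # -> 256, one entry per possible 8-bit contrast value
print(WIDTH_JNB[30])    # -> 5.0, low-contrast blocks tolerate wider edges
print(WIDTH_JNB[200])   # -> 3.0, high-contrast blocks reveal blur sooner
```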
def compute(image):
    # type: (numpy.ndarray) -> float
    """Compute the sharpness metric for the given data."""
    # convert the image to double for further processing
    image = image.astype(np.float64)

    # Edge detection using Canny and Sobel: Canny edge detection is done to
    # classify the blocks as edge or non-edge blocks, and Sobel edge
    # detection is done for the purpose of edge width measurement.
    from skimage.feature import canny
    canny_edges = canny(image)
    sobel_edges = sobel(image)

    # edge width calculation
    marziliano_widths = marziliano_method(sobel_edges, image)

    # sharpness metric calculation
    return _calculate_sharpness_metric(image, canny_edges, marziliano_widths)
def marziliano_method(edges, image):
    # type: (numpy.ndarray, numpy.ndarray) -> numpy.ndarray
    """
    Calculate the widths of the given edges.

    :return: A matrix with the same dimensions as the given image with 0's at
        non-edge locations and edge-widths at the edge locations.
    """

    # `edge_widths` consists of zero and non-zero values. A zero value
    # indicates that there is no edge at that position and a non-zero value
    # indicates that there is an edge at that position and the value itself
    # gives the edge width.
    edge_widths = np.zeros(image.shape)

    # find the gradient for the image
    gradient_y, gradient_x = np.gradient(image)

    # dimensions of the image
    img_height, img_width = image.shape

    # holds the angle information of the edges
    edge_angles = np.zeros(image.shape)

    # calculate the angle of the edges
    for row in range(img_height):
        for col in range(img_width):
            if gradient_x[row, col] != 0:
                edge_angles[row, col] = atan2(gradient_y[row, col], gradient_x[row, col]) * (180 / pi)
            elif gradient_x[row, col] == 0 and gradient_y[row, col] == 0:
                edge_angles[row, col] = 0
            elif gradient_x[row, col] == 0 and gradient_y[row, col] == pi / 2:
                edge_angles[row, col] = 90

    if np.any(edge_angles):
        # quantize the angle
        quantized_angles = 45 * np.round(edge_angles / 45)

        for row in range(1, img_height - 1):
            for col in range(1, img_width - 1):
                if edges[row, col] == 1:

                    # gradient angle = 180 or -180
                    if quantized_angles[row, col] == 180 or quantized_angles[row, col] == -180:
                        for margin in range(100 + 1):
                            inner_border = (col - 1) - margin
                            outer_border = (col - 2) - margin

                            # outside image or intensity increasing from left to right
                            if outer_border < 0 or (image[row, outer_border] - image[row, inner_border]) <= 0:
                                break

                        width_left = margin + 1

                        for margin in range(100 + 1):
                            inner_border = (col + 1) + margin
                            outer_border = (col + 2) + margin

                            # outside image or intensity increasing from left to right
                            if outer_border >= img_width or (image[row, outer_border] - image[row, inner_border]) >= 0:
                                break

                        width_right = margin + 1

                        edge_widths[row, col] = width_left + width_right

                    # gradient angle = 0
                    if quantized_angles[row, col] == 0:
                        for margin in range(100 + 1):
                            inner_border = (col - 1) - margin
                            outer_border = (col - 2) - margin

                            # outside image or intensity decreasing from left to right
                            if outer_border < 0 or (image[row, outer_border] - image[row, inner_border]) >= 0:
                                break

                        width_left = margin + 1

                        for margin in range(100 + 1):
                            inner_border = (col + 1) + margin
                            outer_border = (col + 2) + margin

                            # outside image or intensity decreasing from left to right
                            if outer_border >= img_width or (image[row, outer_border] - image[row, inner_border]) <= 0:
                                break

                        width_right = margin + 1

                        edge_widths[row, col] = width_right + width_left

    return edge_widths
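The width search above walks outward from an edge pixel until the intensity profile stops rising (or falling), i.e. until it hits the nearest local extremum on each side; the edge width is the sum of the two walk lengths. A minimal, self-contained sketch of the angle-0 case (the `width_angle0` helper is a hypothetical extraction of the loops above, operating on a single row):

```python
def width_angle0(row_vals, col):
    """Edge width at `col` for a left-to-right increasing (angle-0) edge."""
    n = len(row_vals)

    # walk left until the intensity stops decreasing (nearest local minimum)
    for margin in range(100 + 1):
        inner = (col - 1) - margin
        outer = (col - 2) - margin
        if outer < 0 or (row_vals[outer] - row_vals[inner]) >= 0:
            break
    width_left = margin + 1

    # walk right until the intensity stops increasing (nearest local maximum)
    for margin in range(100 + 1):
        inner = (col + 1) + margin
        outer = (col + 2) + margin
        if outer >= n or (row_vals[outer] - row_vals[inner]) <= 0:
            break
    width_right = margin + 1

    return width_left + width_right


# a blurred step edge: the ramp climbs from 0 to 8 over four pixels
row = [0, 0, 1, 4, 7, 8, 8, 8]
print(width_angle0(row, 3))  # -> 4
```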
def _calculate_sharpness_metric(image, edges, edge_widths):
    # type: (numpy.ndarray, numpy.ndarray, numpy.ndarray) -> numpy.float64

    # get the size of image
    img_height, img_width = image.shape

    total_num_edges = 0
    hist_pblur = np.zeros(101)

    # maximum block indices
    num_blocks_vertically = int(img_height / BLOCK_HEIGHT)
    num_blocks_horizontally = int(img_width / BLOCK_WIDTH)

    # loop over the blocks
    for i in range(num_blocks_vertically):
        for j in range(num_blocks_horizontally):

            # get the row and col indices for the block pixel positions
            rows = slice(BLOCK_HEIGHT * i, BLOCK_HEIGHT * (i + 1))
            cols = slice(BLOCK_WIDTH * j, BLOCK_WIDTH * (j + 1))

            if is_edge_block(edges[rows, cols], THRESHOLD):
                block_widths = edge_widths[rows, cols]
                # rotate block to simulate column-major boolean indexing
                block_widths = np.rot90(np.flipud(block_widths), 3)
                block_widths = block_widths[block_widths != 0]

                block_contrast = get_block_contrast(image[rows, cols])
                block_jnb = WIDTH_JNB[block_contrast]

                # calculate the probability of blur detection at the edges
                # detected in the block
                prob_blur_detection = 1 - np.exp(-abs(block_widths / block_jnb) ** BETA)

                # update the statistics using the block information
                for probability in prob_blur_detection:
                    bucket = int(round(probability * 100))
                    hist_pblur[bucket] += 1
                    total_num_edges += 1

    # normalize the pdf
    if total_num_edges > 0:
        hist_pblur = hist_pblur / total_num_edges

    # calculate the sharpness metric
    return np.sum(hist_pblur[:64])
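The per-edge probability of blur detection follows the psychometric function P = 1 − exp(−|w / w_JNB|^β). At exactly the just-noticeable width (w = w_JNB) this evaluates to 1 − e⁻¹ ≈ 0.63, which is why the metric sums only the histogram buckets below 64: it counts the fraction of edges whose blur stays below the just-noticeable detection probability. A small numeric check of the formula, with the probability helper extracted for illustration:

```python
from math import exp

BETA = 3.6  # fitting parameter from the perceptual experiments


def prob_blur_detection(width, width_jnb):
    # psychometric function for detecting blur at a single edge
    return 1 - exp(-abs(width / width_jnb) ** BETA)


# at exactly the just-noticeable width the detection probability is 1 - 1/e
print(round(prob_blur_detection(3.0, 3.0), 4))  # -> 0.6321

# a much narrower edge is effectively never perceived as blurred
print(prob_blur_detection(1.0, 3.0) < 0.05)  # -> True
```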
def is_edge_block(block, threshold):
    # type: (numpy.ndarray, float) -> bool
    """Decide whether the given block is an edge block."""
    return np.count_nonzero(block) > (block.size * threshold)


def get_block_contrast(block):
    # type: (numpy.ndarray) -> int
    return int(np.max(block) - np.min(block))
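With the default THRESHOLD of 0.002, a 64×64 block (4096 pixels) is classified as an edge block once it contains more than 4096 × 0.002 ≈ 8.2 edge pixels, i.e. at least 9. A quick check, with the two helpers reproduced so the snippet stands alone:

```python
import numpy as np

THRESHOLD = 0.002


def is_edge_block(block, threshold):
    return np.count_nonzero(block) > (block.size * threshold)


def get_block_contrast(block):
    return int(np.max(block) - np.min(block))


block = np.zeros((64, 64))
block.flat[:8] = 1
print(is_edge_block(block, THRESHOLD))  # -> False, 8 edge pixels is not enough
block.flat[8] = 1
print(is_edge_block(block, THRESHOLD))  # -> True, 9 > 4096 * 0.002

print(get_block_contrast(np.array([[10, 200], [30, 45]])))  # -> 190
```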
def estimate_sharpness(image):
    if image.ndim == 3:
        if image.shape[2] > 1:
            image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        else:
            image = image[..., 0]

    return compute(image)
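`estimate_sharpness` accepts a 2-D grayscale array, a single-channel 3-D array, or a BGR colour image (converted via OpenCV) before handing off to `compute`. The shape-dispatch logic can be exercised without the OpenCV dependency by mirroring it with a stand-in conversion; the `to_grayscale` helper below is hypothetical, using a plain channel mean in place of `cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)`:

```python
import numpy as np


def to_grayscale(image):
    # hypothetical stand-in mirroring the shape handling in estimate_sharpness;
    # a channel mean replaces the cv2 BGR-to-gray conversion
    if image.ndim == 3:
        if image.shape[2] > 1:
            image = image.mean(axis=2)
        else:
            image = image[..., 0]
    return image


print(to_grayscale(np.zeros((4, 4))).shape)     # -> (4, 4), already grayscale
print(to_grayscale(np.zeros((4, 4, 3))).shape)  # -> (4, 4), colour collapsed
print(to_grayscale(np.zeros((4, 4, 1))).shape)  # -> (4, 4), channel squeezed
```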