
TensorFlow image_resize Messes Up Image on Unknown Image Size

I have a list of variable-size images and wish to standardise them to 256x256. I used the following code:

import tensorflow as tf
import matplotlib.pyplot as plt
file_conten
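The snippet above is cut off in the source. A minimal sketch of the likely setup (the filename and the TF 2.x tf.image.resize API here are assumptions; the original may have used the older tf.image.resize_images):

import tensorflow as tf
import matplotlib.pyplot as plt

# hypothetical reconstruction: load one image of unknown size and resize it
file_contents = tf.io.read_file('image.jpg')        # assumed filename
img = tf.io.decode_jpeg(file_contents, channels=3)  # uint8, unknown height/width
img = tf.image.resize(img, [256, 256])              # returns float32, not uint8
plt.imshow(img.numpy())                             # colours come out wrong
plt.show()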

Solution 1:

This happens because image_resize() performs an interpolation between adjacent pixels and returns floats instead of integers in the range 0-255. That's why NEAREST_NEIGHBOR does work: it takes the value of one of the nearby pixels without doing any further math.

Suppose you have two adjacent pixels with values 240 and 241. NEAREST_NEIGHBOR will return either 240 or 241. With any other method, the value could be something like 240.5, and it is returned without rounding, I assume intentionally, so you can decide what is better for you (floor, round up, etc.).

plt.imshow(), on the other hand, when given float values, interprets only the decimal part, as if they were pixel values on a full scale between 0.0 and 1.0. To make the above code work, one possible solution is:

import numpy as np

# cast the float result back to uint8 so imshow treats it as 0-255 RGB data
plt.imshow(img.astype(np.uint8))
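An alternative (a sketch, not from the original answer) is to do the conversion inside TensorFlow instead, either by rounding and casting back to uint8 or by rescaling into the 0.0-1.0 range that imshow expects for float images:

import tensorflow as tf

resized = tf.image.resize(img, [256, 256])       # float32 values in 0-255
as_uint8 = tf.cast(tf.round(resized), tf.uint8)  # explicit rounding, then integer cast
as_float01 = resized / 255.0                     # or rescale to imshow's 0.0-1.0 float convention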
