
TensorFlow/Keras: "logits and labels must have the same first dimension" - how to squeeze logits or expand labels?

I'm trying to build a simple CNN classifier model. For my training images (BATCH_SIZE x 227 x 227 x 1) and labels (BATCH_SIZE x 7) datasets, I'm using NumPy ndarrays that are fed to the model.
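For reference, here is a minimal sketch of the kind of setup that triggers this error, with random dummy data standing in for the real datasets; the layer sizes and variable names are illustrative assumptions, not the asker's original code:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    BATCH_SIZE = 32
    NUM_CLASSES = 7

    # Dummy stand-ins for the datasets described above; the shapes are
    # what matters here, not the contents.
    images = np.random.rand(BATCH_SIZE, 227, 227, 1).astype("float32")
    labels_onehot = np.eye(NUM_CLASSES, dtype="float32")[
        np.random.randint(0, NUM_CLASSES, size=BATCH_SIZE)
    ]  # shape (BATCH_SIZE, 7), one-hot encoded

    model = keras.Sequential([
        keras.Input(shape=(227, 227, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    # Mismatch: sparse_categorical_crossentropy expects integer labels of
    # shape (BATCH_SIZE,), not one-hot labels of shape (BATCH_SIZE, 7).
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(images, labels_onehot)  # -> "logits and labels must have the same first dimension"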

Solution 1:

No, you've got the cause wrong. You are passing one-hot encoded labels, but sparse_categorical_crossentropy expects integer class labels, since it does the one-hot encoding itself (hence, sparse).

An easy fix is to change the loss to categorical_crossentropy, i.e. not the sparse version. Also note that a y_true of shape (7,) is incorrect; it should be (1, 7), since the first dimension is the batch dimension.
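Continuing the hypothetical setup sketched under the question above, both variants of the fix look like this:

    import numpy as np

    # Option A: keep the one-hot labels (BATCH_SIZE, 7) and switch the loss.
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(images, labels_onehot)

    # Option B: keep sparse_categorical_crossentropy and convert the
    # one-hot labels to integer class indices of shape (BATCH_SIZE,).
    labels_int = np.argmax(labels_onehot, axis=-1)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(images, labels_int)

Either way, the logits stay (BATCH_SIZE, 7); the point is that the label encoding and the loss function have to agree.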

Solution 2:

Please consider adding a Flatten layer before the Dense layers. I had the exact same issue as you and also had to change from categorical_crossentropy to sparse_categorical_crossentropy. Since sparse_categorical_crossentropy compares the logits against plain integer labels, the network's output needs to be a 2D (batch_size, classes) tensor; a Flatten layer collapses the 4D output of the CNN layers down to 2D so the shapes line up.

This fixed the issue for me!
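A sketch of what that looks like, assuming the 227x227x1 inputs and 7 classes from the question (the layer sizes here are illustrative, not the answerer's actual model):

    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_CLASSES = 7

    model = keras.Sequential([
        keras.Input(shape=(227, 227, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        # Without Flatten, the Dense layers act on the 4D conv output and
        # the logits keep spatial dimensions; Flatten collapses everything
        # to (batch_size, features) so the final layer emits (batch_size, 7).
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()  # final output shape: (None, 7)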
