Tensorflow - Avoid Tensor Size Limit
I'm working on an implementation of the FCN-32 net described in the Long, Shelhamer paper, but have run into a roadblock when upsampling. In order to upsample to the original size, oth…
Solution 1:
You can split the output classes across multiple operations and concatenate the results at the end. Backprop works just fine through the concat operation. It should be as simple as creating two conv2d_transpose operations, each handling half the classes, concatenating the results appropriately, and continuing to the loss function from there. Creating more than two conv2d_transpose operations works just as well if you need to split further.
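A minimal sketch of the idea, using made-up sizes (the real FCN-32 head would have its own class count and a 32x stride): each output channel of a transposed convolution depends only on its own slice of the filter, so splitting the filter along the output-class axis and concatenating the results is equivalent to one big conv2d_transpose, while keeping each individual tensor under the size limit.

```python
import tensorflow as tf

# Hypothetical sizes for illustration only; the real net would use its own
# num_classes and a 32x upsampling stride.
batch, h, w = 1, 8, 8
in_channels, num_classes = 16, 6
stride = 4
kernel = 2 * stride  # common kernel choice for stride-s upsampling

features = tf.random.normal([batch, h, w, in_channels])

# conv2d_transpose filters have shape [kh, kw, out_channels, in_channels],
# so we split along axis 2 (the output-class axis).
half = num_classes // 2
filt_a = tf.random.normal([kernel, kernel, half, in_channels])
filt_b = tf.random.normal([kernel, kernel, num_classes - half, in_channels])

up_a = tf.nn.conv2d_transpose(
    features, filt_a,
    output_shape=[batch, h * stride, w * stride, half],
    strides=[1, stride, stride, 1], padding='SAME')
up_b = tf.nn.conv2d_transpose(
    features, filt_b,
    output_shape=[batch, h * stride, w * stride, num_classes - half],
    strides=[1, stride, stride, 1], padding='SAME')

# Concatenate along the class (channel) axis; gradients flow through concat.
logits = tf.concat([up_a, up_b], axis=-1)
print(logits.shape)  # (1, 32, 32, 6)
```

From here `logits` feeds into the loss exactly as a single full-size conv2d_transpose output would.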
After thinking about this, I'm confident it will work. If there's an issue, let me know and I'll update the answer.