@hatemel-azab1783

Thanks for your effort... that's great! Although the paper mentioned is fantastic and worth reading, it mentions that the emulated direct convolution results in a less efficient implementation due to the many columns and rows of zeros, but it's also less intuitive for the first-time user than the transposed convolution.

@kirilltkachuk8649

Either I am missing something, or there is a mistake here. At 7:15 (slide 21) you say that both methods give the same result, but if you look at how the top-left pixel of the output is computed, you will see that this is not correct. In the upper example the value depends on the top-left value of the kernel, but in the bottom example it depends on the bottom-right value of the kernel. So the results will be different.

EDIT: The kernel must be rotated 180° in the bottom case.
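A quick PyTorch check of that 180° point (my own sketch, assuming stride 1 and no padding; not code from the video): the transposed convolution matches a direct convolution of the zero-padded input with the kernel rotated by 180°.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
y = torch.randn(1, 1, 2, 2)          # the 2x2 "input" to the transposed conv
k = torch.randn(1, 1, 3, 3)          # 3x3 kernel

out_t = F.conv_transpose2d(y, k)                  # transposed conv -> 4x4

k_rot = torch.flip(k, dims=[2, 3])                # kernel rotated by 180 degrees
out_d = F.conv2d(F.pad(y, (2, 2, 2, 2)), k_rot)   # emulated "full" direct conv

print(torch.allclose(out_t, out_d, atol=1e-5))    # True
```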

@DjKryx

My deep learning prof told us that the "transposed" part of the name comes from the fact that, if the standard conv uses a weight matrix of dimension m x n, the transposed conv uses a weight matrix of dimension n x m, like you would get from transposing the matrix in the original conv.
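A minimal sketch of that matrix view (my own example, not from the video; single channel, 4x4 input, 3x3 kernel, stride 1, no padding): the convolution matrix C is m x n = 4 x 16, and the transposed convolution multiplies by C.T, which is n x m = 16 x 4.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 4, 4)          # 4x4 input
k = torch.randn(1, 1, 3, 3)          # 3x3 kernel -> 2x2 conv output

# Build the m x n convolution matrix C (here m = 4 outputs, n = 16 inputs):
# each row holds the kernel weights at the input positions that output reads.
C = torch.zeros(4, 16)
for row, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    window = torch.zeros(4, 4)
    window[i:i+3, j:j+3] = k[0, 0]
    C[row] = window.flatten()

y = F.conv2d(x, k)                                               # standard conv
print(torch.allclose(y.flatten(), C @ x.flatten(), atol=1e-5))   # True

z = F.conv_transpose2d(y, k)                                     # transposed conv
print(torch.allclose(z.flatten(), C.T @ y.flatten(), atol=1e-5)) # True
```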

@JingZuo-ts5jz

Thanks for this amazing video!

@yanhairen7293

Thank you for your video. I am wondering if there is a physical understanding (meaning) of transposed convolution? For example, in the convolution case, if the kernel is a high-pass filter, we expect the output to be an image with high-frequency content after convolution. Is there a similar intuitive understanding for transposed convolution?

@ittest4451

Thanks for the nice explanation. I have a question.

For an input of 2x2, a filter of 3x3, stride = 2, and padding = 1, how come the output is 4x4?

The formula is output = (input - 1) * stride - 2 * padding + kernel:
(2 - 1) * 2 - 2 * 1 + 3 = 3, not 4.
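For what it's worth, the arithmetic above does give 3. A quick check with PyTorch's ConvTranspose2d (just a sketch; output_padding is one way to reach 4x4 with these settings, not necessarily what the slide used):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 2, 2)

# (2 - 1) * 2 - 2 * 1 + 3 = 3  ->  a 3x3 output, matching the calculation above
t = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1)
print(t(x).shape)   # torch.Size([1, 1, 3, 3])

# output_padding=1 adds one extra row/column and gives 4x4
t = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1, output_padding=1)
print(t(x).shape)   # torch.Size([1, 1, 4, 4])
```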

@bihanbanerjee

Thanks, prof.

@chinin97

Is it fair to say that one of the differences between the two transposed convolutions (non-emulated and emulated) is that there are learnable parameters in the latter and not in the former?
Also, my intuition is that the output of the emulated transposed conv is the same as that of the normal one iff the kernel is made up of fixed ones?
Thank you for the video!

@kamonchatapivanichkul9545

I would like to make sure: transposed convolution can be carried out in two ways, right?

@MrArcianox

Only 59 thumbs up?? Sebastian Raschka, this might well be the best explanation of transposed convolutions out there. There is a reason why they call it transposed, and this is well explained in this video by Stanford (https://youtu.be/nDPWywWRIRo?t=1696). It boils down to the way the process of convolution can be rewritten as a matrix multiplication, and the upsampling or transpose operation is achieved by transposing that matrix.
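As a small complement to that matrix-multiplication view (my own check, not from the Stanford lecture): the backward pass of a regular convolution multiplies by the transposed matrix, which is exactly what conv_transpose2d computes.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 4, 4, requires_grad=True)
k = torch.randn(1, 1, 3, 3)
g = torch.randn(1, 1, 2, 2)                      # an arbitrary upstream gradient

y = F.conv2d(x, k)                               # forward: y_flat = C @ x_flat
(grad_x,) = torch.autograd.grad(y, x, grad_outputs=g)

# the backward pass multiplies by C^T, i.e. it is a transposed convolution
print(torch.allclose(grad_x, F.conv_transpose2d(g, k), atol=1e-5))   # True
```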

@rishidixit7939

For reducing checkerboard artifacts, which strategy seems to be the best? Also, in some Stack Overflow question it was mentioned to use skip connections to reduce them. Is that suitable, or will it affect the model's results?

@GuanlinLi-l8j

Should we always replace transposed conv with upsampling? Are there any examples where transposed conv outperforms upsampling?

@DeepakNSubramani

The concept is fine. I am still having trouble: when I take a 2x2 matrix and a 3x3 kernel and do the two processes, they give different answers. Perhaps a different kernel is needed to achieve the same 5x5 from the 2x2?

@matthiash4360

Do you have a code example for the upsampling-and-convolution approach? I am struggling to implement it correctly in PyTorch and am looking for the right way to do it. It would be nice to see how you did it.
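Not the author's code, but here is a minimal sketch of the usual upsample-then-convolve pattern (the module name UpsampleConv, the nearest-neighbor mode, and the 3x3 kernel with padding=1 are my own assumptions):

```python
import torch
import torch.nn as nn

class UpsampleConv(nn.Module):
    """Upsample by a fixed factor, then apply a regular convolution."""
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode='nearest')
        # 3x3 kernel with padding=1 keeps the upsampled spatial size unchanged
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))

x = torch.randn(1, 8, 16, 16)
print(UpsampleConv(8, 4)(x).shape)   # torch.Size([1, 4, 32, 32])
```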

@flankechen

Thanks for the video. Is there a tutorial about backpropagation of deconv?
I am having a hard time figuring it out.

@michael_d2

Thank you Sebastian. Very well explained!

@杨林-r3m

When I do the transposed conv, where does the 3x3 kernel come from?

@rishidixit7939

It's called transposed convolution because in reality convolution is not calculated using the sliding-window technique, as that is computationally expensive; it's rewritten as a matrix multiplication instead.
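For illustration, a small sketch of that matrix-multiplication (im2col) formulation in PyTorch (my own example, single channel, stride 1, no padding):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 4, 4)
k = torch.randn(1, 1, 3, 3)

# unfold gathers every 3x3 window into a column: shape (1, 9, 4)
cols = F.unfold(x, kernel_size=3)

# the whole convolution is then a single matrix multiplication
y = (k.reshape(1, 1, 9) @ cols).reshape(1, 1, 2, 2)

print(torch.allclose(y, F.conv2d(x, k), atol=1e-5))   # True
```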

@lukocius

I just asked my coworker - So where are the weights? How does the kernel produce only one output? Nonsense!