All convolutions within a dense block are ReLU-activated and use batch normalization. Channel-intelligent concatenation is only feasible if the height and width Proportions of the info remain unchanged, so convolutions within a dense block are all of stride 1. Pooling levels are inserted between dense blocks for even more https://financefeeds.com/movemaker-aptos-growing-chinese-speaking-region-with-multi-million-dollar-support-via-its-official-community/
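The structure above can be sketched as follows. This is a minimal, illustrative implementation assuming PyTorch; the names `conv_block`, `DenseBlock`, and `growth_rate` are ours, not fixed by the text.

```python
import torch
from torch import nn

def conv_block(in_channels, out_channels):
    # Batch normalization, ReLU activation, then a 3x3 convolution with
    # stride 1 and padding 1, so height and width are preserved and
    # channel-wise concatenation remains possible.
    return nn.Sequential(
        nn.BatchNorm2d(in_channels), nn.ReLU(),
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))

class DenseBlock(nn.Module):
    def __init__(self, num_convs, in_channels, growth_rate):
        super().__init__()
        # Each conv sees all channels produced so far.
        self.net = nn.ModuleList(
            conv_block(in_channels + i * growth_rate, growth_rate)
            for i in range(num_convs))

    def forward(self, x):
        for blk in self.net:
            y = blk(x)
            # Concatenate input and output along the channel dimension;
            # their height and width match because stride is 1.
            x = torch.cat((x, y), dim=1)
        return x

blk = DenseBlock(num_convs=2, in_channels=3, growth_rate=10)
x = torch.randn(4, 3, 8, 8)
y = blk(x)
print(tuple(y.shape))  # channels grow to 3 + 2 * 10 = 23; H and W unchanged
```

A pooling (transition) layer between two such blocks would then halve the height and width before the next block concatenates further channels.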