Interpretable Convolutional Neural Networks (CNNs) via Feedforward Design
Lecture title | Interpretable Convolutional Neural Networks (CNNs) via Feedforward Design |
Time | 2019-06-14 10:00:00 |
Venue | Main Building II-221 |
Speaker | C.-C. Jay Kuo |
Abstract |
Given a convolutional neural network (CNN) architecture, its network parameters are nowadays determined by backpropagation (BP). The underlying mechanism remains a black box despite extensive theoretical investigation. In this talk, I describe a new interpretable and feedforward (FF) design, using LeNet-5 as an example. The FF-trained CNN is a data-centric approach that derives network parameters from training-data statistics layer by layer in a single pass. To build the convolutional layers, we develop a new signal transform, called the Saab (Subspace approximation with adjusted bias) transform, in which the bias in the filter weights is chosen to annihilate the nonlinearity of the activation function. To build the fully-connected (FC) layers, we adopt a label-guided linear least-squares regression (LSR) method. The classification performance of BP- and FF-trained CNNs is compared on the MNIST and CIFAR-10 datasets. The computational complexity of the FF design is significantly lower than that of the BP design, which makes the FF-trained CNN well suited to mobile/edge computing. We also comment on the relationship between the BP and FF designs by examining the cross-entropy values at nodes of intermediate layers.
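The Saab idea described above can be sketched in a few lines: remove each patch's DC (mean) component, take principal components of the residual as the AC filter kernels, and add a bias large enough that all filter responses stay nonnegative, so a subsequent ReLU acts as the identity. The following is a minimal one-stage sketch under those assumptions (the function name `saab_transform` and the specific bias rule are illustrative, not the speaker's reference implementation):

```python
import numpy as np

def saab_transform(patches, num_ac_kernels):
    """One-stage Saab sketch. `patches` is (N, D): N flattened patches."""
    # DC component: the per-patch mean (projection onto the constant vector)
    dc = patches.mean(axis=1, keepdims=True)
    ac = patches - dc
    # AC kernels: top principal components of the DC-removed patches
    cov = ac.T @ ac / len(ac)
    eigvals, eigvecs = np.linalg.eigh(cov)
    kernels = eigvecs[:, np.argsort(eigvals)[::-1][:num_ac_kernels]]
    feats = ac @ kernels
    # Adjusted bias: each coordinate satisfies |feats_ij| <= ||feats_i||_2,
    # so shifting by the largest response norm makes every output
    # nonnegative and a following ReLU becomes a no-op (annihilated)
    bias = np.max(np.linalg.norm(feats, axis=1))
    return np.concatenate([dc, feats + bias], axis=1), bias

rng = np.random.default_rng(0)
patches = rng.standard_normal((100, 25))  # e.g. flattened 5x5 patches
out, bias = saab_transform(patches, num_ac_kernels=6)
print(out.shape)  # (100, 7): one DC channel plus six AC channels
```

Because the parameters come from one eigendecomposition per layer rather than iterative gradient descent, this single pass is where the FF design's complexity advantage over BP originates.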
|
Please credit the source when reprinting: Xidian University Academic Information Network
If you have academic information or news, submissions are welcome; they will be confirmed and posted promptly. Submission email: meeting@xidian.edu.cn