The most classic CNN example is probably LeNet-5, the handwritten-digit recognition task. See Yann LeCun's page http://yann.lecun.com/exdb/lenet/index.html and the tutorial http://deeplearning.net/tutorial/lenet.html. Three points are worth noting before reading the code:

1. The number of feature maps in each C (convolution) layer. This is a design choice, not something computed: each convolution layer may produce any number of output maps, and the code below simply reads it from `net.layers{l}.outputmaps`.

2. The connections between an S layer and the following C layer. For example, in LeNet-5 the mapping from S2 to C3 is not one-to-one: each C3 map is connected to only a subset of the S2 maps (in the toolbox code below, by contrast, every output map is connected to every input map).

3. Where each layer's convolution kernels come from. They are not hand-designed: they are initialized randomly (see `cnnsetup`) and then learned by backpropagation during training.
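Before reading `cnnsetup`, it helps to trace the feature-map sizes by hand. The following sketch (not from the post; it assumes a 28x28 MNIST input and the common 6c-2s-12c-2s demo layout) mirrors the `mapsize` arithmetic the code performs:

```python
# Trace of feature-map sizes for a LeNet-style net on a 28x28 input,
# assuming the 6c-2s-12c-2s configuration used in the toolbox demo.
mapsize = 28
layers = [('c', 5), ('s', 2), ('c', 5), ('s', 2)]  # (type, kernelsize or scale)
sizes = []
for kind, p in layers:
    if kind == 'c':
        mapsize = mapsize - p + 1   # 'valid' convolution shrinks by kernelsize-1
    else:
        assert mapsize % p == 0     # the pooling scale must divide the map size
        mapsize //= p               # downsample by `scale`
    sizes.append(mapsize)
print(sizes)  # [24, 12, 8, 4]
```

The assertion is the same integrality check `cnnsetup` enforces with `assert(all(floor(mapsize)==mapsize), ...)`.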

```matlab
function net = cnnsetup(net, x, y)
    inputmaps = 1;
    mapsize = size(squeeze(x(:, :, 1)));

    for l = 1 : numel(net.layers)   %  layer
        if strcmp(net.layers{l}.type, 's')
            mapsize = mapsize / net.layers{l}.scale;
            assert(all(floor(mapsize)==mapsize), ['Layer ' num2str(l) ' size must be integer. Actual: ' num2str(mapsize)]);
            for j = 1 : inputmaps
                net.layers{l}.b{j} = 0;
            end
        end
        if strcmp(net.layers{l}.type, 'c')
            mapsize = mapsize - net.layers{l}.kernelsize + 1;
            fan_out = net.layers{l}.outputmaps * net.layers{l}.kernelsize ^ 2;
            for j = 1 : net.layers{l}.outputmaps  %  output map
                fan_in = inputmaps * net.layers{l}.kernelsize ^ 2;
                for i = 1 : inputmaps  %  input map
                    net.layers{l}.k{i}{j} = (rand(net.layers{l}.kernelsize) - 0.5) * 2 * sqrt(6 / (fan_in + fan_out));
                end
                net.layers{l}.b{j} = 0;
            end
            inputmaps = net.layers{l}.outputmaps;
        end
    end
    % 'onum' is the number of labels, that's why it is calculated using size(y, 1). If you have 20 labels, the output of the network will be 20 neurons.
    % 'fvnum' is the number of output neurons at the last layer, the layer just before the output layer.
    % 'ffb' is the biases of the output neurons.
    % 'ffW' is the weights between the last layer and the output neurons. Note that the last layer is fully connected to the output layer, that's why the size of the weights is (onum * fvnum)
    fvnum = prod(mapsize) * inputmaps;
    onum = size(y, 1);

    net.ffb = zeros(onum, 1);
    net.ffW = (rand(onum, fvnum) - 0.5) * 2 * sqrt(6 / (onum + fvnum));
end
```
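To make the last four comments concrete, here are the sizes under an assumed demo configuration (final 4x4 maps, 12 feature maps, 10 digit labels; these numbers are illustrative, not from the post):

```python
onum = 10         # assumed number of labels (digits 0-9)
inputmaps = 12    # assumed number of feature maps in the last layer
mapsize = (4, 4)  # assumed final feature-map size

fvnum = mapsize[0] * mapsize[1] * inputmaps  # length of the flattened feature vector
ffW_shape = (onum, fvnum)                    # shape of net.ffW
print(fvnum, ffW_shape)  # 192 (10, 192)
```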

`net.layers{l}.k{i}{j} = (rand(net.layers{l}.kernelsize) - 0.5) * 2 * sqrt(6 / (fan_in + fan_out));`
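This line is the normalized ("Xavier"/Glorot) initialization: each kernel is drawn uniformly from [-b, b] with b = sqrt(6 / (fan_in + fan_out)), since `(rand - 0.5) * 2 * b` maps uniform [0, 1) noise onto [-b, b). A NumPy sketch (the layer sizes are assumptions for illustration):

```python
import numpy as np

kernelsize = 5                  # assumed 5x5 kernels
inputmaps, outputmaps = 6, 12   # assumed layer widths

fan_in = inputmaps * kernelsize ** 2    # weights feeding one output unit
fan_out = outputmaps * kernelsize ** 2  # weights fed by one input unit
bound = np.sqrt(6.0 / (fan_in + fan_out))

# (rand - 0.5) * 2 * bound  ==  uniform on [-bound, +bound)
k = (np.random.rand(kernelsize, kernelsize) - 0.5) * 2 * bound
assert np.all(np.abs(k) <= bound)
```

The same formula, with `onum` and `fvnum` in place of the fan terms, initializes `net.ffW` above.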

```matlab
function net = cnnff(net, x)
    n = numel(net.layers);
    net.layers{1}.a{1} = x;
    inputmaps = 1;

    for l = 2 : n   %  for each layer
        if strcmp(net.layers{l}.type, 'c')
            %  !!below can probably be handled by insane matrix operations
            for j = 1 : net.layers{l}.outputmaps   %  for each output map
                %  create temp output map
                z = zeros(size(net.layers{l - 1}.a{1}) - [net.layers{l}.kernelsize - 1 net.layers{l}.kernelsize - 1 0]);
                for i = 1 : inputmaps   %  for each input map
                    %  convolve with corresponding kernel and add to temp output map
                    z = z + convn(net.layers{l - 1}.a{i}, net.layers{l}.k{i}{j}, 'valid');
                end
                %  add bias, pass through nonlinearity
                net.layers{l}.a{j} = sigm(z + net.layers{l}.b{j});
            end
            %  set number of input maps to this layer's number of output maps
            inputmaps = net.layers{l}.outputmaps;
        elseif strcmp(net.layers{l}.type, 's')
            %  downsample
            for j = 1 : inputmaps
                z = convn(net.layers{l - 1}.a{j}, ones(net.layers{l}.scale) / (net.layers{l}.scale ^ 2), 'valid');   %  !! replace with variable
                net.layers{l}.a{j} = z(1 : net.layers{l}.scale : end, 1 : net.layers{l}.scale : end, :);
            end
        end
    end

    %  concatenate all end layer feature maps into vector
    net.fv = [];
    for j = 1 : numel(net.layers{n}.a)
        sa = size(net.layers{n}.a{j});
        net.fv = [net.fv; reshape(net.layers{n}.a{j}, sa(1) * sa(2), sa(3))];
    end
    %  feedforward into output perceptrons
    net.o = sigm(net.ffW * net.fv + repmat(net.ffb, 1, size(net.fv, 2)));
end
```
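The `'s'` branch is mean pooling: convolving with `ones(scale) / scale^2` and then keeping every `scale`-th element averages each non-overlapping scale-by-scale block. A NumPy sketch of that block averaging (the `reshape` form is an equivalent way to compute the same block means):

```python
import numpy as np

def mean_pool(a, scale):
    # Average each non-overlapping scale-by-scale block, as the 's' layer does
    # via convn(..., ones(scale)/scale^2, 'valid') plus strided indexing.
    h, w = a.shape
    assert h % scale == 0 and w % scale == 0
    return a.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
p = mean_pool(x, 2)
print(p)  # [[ 2.5  4.5]
          #  [10.5 12.5]]
```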

```matlab
for j = 1 : net.layers{l}.outputmaps   %  for each output map
    %  create temp output map
    z = zeros(size(net.layers{l - 1}.a{1}) - [net.layers{l}.kernelsize - 1 net.layers{l}.kernelsize - 1 0]);
    for i = 1 : inputmaps   %  for each input map
        %  convolve with corresponding kernel and add to temp output map
        z = z + convn(net.layers{l - 1}.a{i}, net.layers{l}.k{i}{j}, 'valid');
    end
    %  add bias, pass through nonlinearity
    net.layers{l}.a{j} = sigm(z + net.layers{l}.b{j});
end
```
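This excerpt is the heart of the convolution layer: each output map `j` accumulates `'valid'` convolutions of every input map with its kernel `k{i}{j}`, then adds a bias and passes through the sigmoid. A NumPy sketch of the same computation (shapes, the `sigm` helper, and the random maps are assumptions for illustration):

```python
import numpy as np

def conv2_valid(a, k):
    # 'valid' true convolution: flip the kernel, as MATLAB's convn does.
    kf = k[::-1, ::-1]
    H, W = a.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + kh, j:j + kw] * kf)
    return out

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

# One output map: sum 'valid' convolutions over all input maps, add bias, squash.
inputs = [np.random.rand(8, 8) for _ in range(3)]         # 3 assumed input maps
kernels = [np.random.rand(5, 5) - 0.5 for _ in range(3)]  # one kernel per input map
b = 0.0
z = np.zeros((4, 4))  # 8 - 5 + 1 = 4: the 'valid' output size
for a, k in zip(inputs, kernels):
    z = z + conv2_valid(a, k)
out = sigm(z + b)
```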
