LeNet5 Per-Layer Pseudocode
C1, the first convolution layer of LeNet5: the input image has size inH * inW, there are c1CNum convolution kernels, and each output feature map has size c1H * c1W; the numbers of biases and feature maps both equal the number of kernels. Input image inmap: inH * inW; convolution kernels c1conv: c1CNum * 5 * 5; output maps c1map: c1CNum * c1H * c1W; biases c1bias: c1CNum.
function ForwardC1:
    for the ith convolution kernel:
        for the hth row of feature map i:
            for the coth column in row h of feature map i:
                let curc1 = c1map + i * c1H * c1W + h * c1W + co
                set the value pointed to by curc1 to 0
                for cr in range 5:
                    for cc in range 5:
                        add inmap[(h + cr) * inW + co + cc] * c1conv[i * 5 * 5 + cr * 5 + cc] to the value pointed to by curc1
                    endfor cc
                endfor cr
                add c1bias[i] to the value pointed to by curc1
                feed the value pointed to by curc1 into the activation function and assign the result back to it
            endfor co
        endfor h
    endfor i
endfunction ForwardC1
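To make the pseudocode concrete, here is a minimal C sketch of the C1 forward pass, kept close to the names above. The post does not name the activation function, so tanh is assumed here; the kernel is fixed at 5x5 and c1H = inH - 4, c1W = inW - 4 (a valid convolution).

    #include <math.h>

    /* tanh is assumed as the activation; the post does not name one. */
    static double activate(double x) { return tanh(x); }

    /* c1H = inH - 4, c1W = inW - 4 for a valid 5x5 convolution. */
    void ForwardC1(const double *inmap, int inH, int inW,
                   const double *c1conv, const double *c1bias,
                   double *c1map, int c1CNum, int c1H, int c1W)
    {
        for (int i = 0; i < c1CNum; ++i)               /* one 5x5 kernel per output map */
            for (int h = 0; h < c1H; ++h)
                for (int co = 0; co < c1W; ++co) {
                    double *curc1 = c1map + i * c1H * c1W + h * c1W + co;
                    *curc1 = 0.0;
                    for (int cr = 0; cr < 5; ++cr)     /* accumulate the 5x5 window */
                        for (int cc = 0; cc < 5; ++cc)
                            *curc1 += inmap[(h + cr) * inW + co + cc]
                                    * c1conv[i * 5 * 5 + cr * 5 + cc];
                    *curc1 = activate(*curc1 + c1bias[i]);
                }
    }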
S2 pooling layer: input c1map: c1CNum * c1H * c1W; output s2map: s2Num * s2H * s2W, where s2Num = c1CNum; pooling weights s2pooling: s2Num; biases s2bias: s2Num.
function ForwardS2:
    for the ith S2 feature map:
        for the hth row of feature map i:
            for the coth column in row h of feature map i:
                let curs2 = s2map + i * s2H * s2W + h * s2W + co
                set the value pointed to by curs2 to the mean of the following four values, multiplied by s2pooling[i], plus s2bias[i]:
                    c1map[i, h*2, co*2], c1map[i, h*2, co*2+1], c1map[i, h*2+1, co*2], c1map[i, h*2+1, co*2+1]
                feed the value pointed to by curs2 into the activation function and assign the result back to it
            endfor co
        endfor h
    endfor i
endfunction ForwardS2
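A matching C sketch of S2 (2x2 mean pooling with a per-map trainable weight and bias). It reuses activate() from the C1 sketch and assumes s2H = c1H / 2, s2W = c1W / 2.

    /* s2Num = c1CNum, s2H = c1H / 2, s2W = c1W / 2; reuses activate() from the C1 sketch. */
    void ForwardS2(const double *c1map, int c1H, int c1W,
                   const double *s2pooling, const double *s2bias,
                   double *s2map, int s2Num, int s2H, int s2W)
    {
        for (int i = 0; i < s2Num; ++i)
            for (int h = 0; h < s2H; ++h)
                for (int co = 0; co < s2W; ++co) {
                    const double *m = c1map + i * c1H * c1W;
                    /* mean of the 2x2 window in C1 map i */
                    double avg = (m[(h * 2) * c1W + co * 2] +
                                  m[(h * 2) * c1W + co * 2 + 1] +
                                  m[(h * 2 + 1) * c1W + co * 2] +
                                  m[(h * 2 + 1) * c1W + co * 2 + 1]) / 4.0;
                    s2map[i * s2H * s2W + h * s2W + co] =
                        activate(avg * s2pooling[i] + s2bias[i]);
                }
    }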
C3 convolution layer: input s2map: s2Num * s2H * s2W; output c3map: c3CNum * c3H * c3W; kernels c3conv: c3CNum * 5 * 5; biases c3bias: c3CNum.
function ForwardC3:
    initialize an array s2MapSum: s2H * s2W, all elements 0
    for h in range s2H:
        for co in range s2W:
            for i in range s2Num:
                add s2map[i, h, co] to s2MapSum[h, co]
            endfor i
        endfor co
    endfor h
    for i in range c3CNum:
        for h in range c3H:
            for co in range c3W:
                let curc3 = c3map + i * c3H * c3W + h * c3W + co
                set the value pointed to by curc3 to 0
                for ch in range 5:
                    for cc in range 5:
                        add s2MapSum[h + ch, co + cc] * c3conv[i, ch, cc] to the value pointed to by curc3
                    endfor cc
                endfor ch
                add c3bias[i] to the value pointed to by curc3, feed it into the activation function, and assign the result back to it
            endfor co
        endfor h
    endfor i
endfunction ForwardC3
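A C sketch of this C3 forward pass. As in the pseudocode, all S2 maps are first summed into s2MapSum and every C3 kernel then convolves that single summed map; activate() is reused from the C1 sketch.

    #include <stdlib.h>

    /* c3H = s2H - 4, c3W = s2W - 4; reuses activate() from the C1 sketch. */
    void ForwardC3(const double *s2map, int s2Num, int s2H, int s2W,
                   const double *c3conv, const double *c3bias,
                   double *c3map, int c3CNum, int c3H, int c3W)
    {
        /* Sum all S2 maps into one s2H x s2W map, as the pseudocode does. */
        double *s2MapSum = calloc((size_t)s2H * s2W, sizeof(double));
        for (int i = 0; i < s2Num; ++i)
            for (int h = 0; h < s2H; ++h)
                for (int co = 0; co < s2W; ++co)
                    s2MapSum[h * s2W + co] += s2map[i * s2H * s2W + h * s2W + co];

        for (int i = 0; i < c3CNum; ++i)
            for (int h = 0; h < c3H; ++h)
                for (int co = 0; co < c3W; ++co) {
                    double *curc3 = c3map + i * c3H * c3W + h * c3W + co;
                    *curc3 = 0.0;
                    for (int ch = 0; ch < 5; ++ch)
                        for (int cc = 0; cc < 5; ++cc)
                            *curc3 += s2MapSum[(h + ch) * s2W + co + cc]
                                    * c3conv[i * 5 * 5 + ch * 5 + cc];
                    *curc3 = activate(*curc3 + c3bias[i]);
                }
        free(s2MapSum);
    }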
S4 pooling layer: input c3map: c3CNum * c3H * c3W; output s4map: s4Num * s4H * s4W, where s4Num = c3CNum; pooling weights s4pooling: s4Num; biases s4bias: s4Num.
function ForwardS4:
    for i in range s4Num:
        for h in range s4H:
            for co in range s4W:
                set s4map[i, h, co] to the mean of the following four values, multiplied by the pooling weight s4pooling[i], plus the bias s4bias[i]:
                    c3map[i, h*2, co*2], c3map[i, h*2, co*2+1], c3map[i, h*2+1, co*2], c3map[i, h*2+1, co*2+1]
                feed s4map[i, h, co] into the activation function and assign the result back to it
            endfor co
        endfor h
    endfor i
endfunction ForwardS4
C5 convolution layer: input s4map: s4Num * s4H * s4W with s4H = 5, s4W = 5; output c5map: c5CNum * c5H * c5W with c5H = 1, c5W = 1; kernels c5conv: c5CNum * 5 * 5; biases c5bias: c5CNum.
function ForwardC5:
    initialize an array s4MapSum: s4H * s4W, all elements 0
    for h in range s4H:
        for co in range s4W:
            for i in range s4Num:
                add s4map[i, h, co] to s4MapSum[h, co]
            endfor i
        endfor co
    endfor h
    for i in range c5CNum:
        for h in range c5H:
            for co in range c5W:
                let curc5 = c5map + i * c5H * c5W + h * c5W + co
                set the value pointed to by curc5 to 0
                for ch in range 5:
                    for cc in range 5:
                        add s4MapSum[h + ch, co + cc] * c5conv[i, ch, cc] to the value pointed to by curc5
                    endfor cc
                endfor ch
                add c5bias[i] to the value pointed to by curc5, feed it into the activation function, and assign the result back to it
            endfor co
        endfor h
    endfor i
endfunction ForwardC5
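C5 follows the same pattern as C3, but the summed S4 map is exactly 5x5, so each kernel yields a single output value. A C sketch under that assumption (activate() as before):

    /* s4H = s4W = 5, so each kernel produces one output value (c5H = c5W = 1).
       Reuses activate() from the C1 sketch. */
    void ForwardC5(const double *s4map, int s4Num,
                   const double *c5conv, const double *c5bias,
                   double *c5map, int c5CNum)
    {
        double s4MapSum[5 * 5] = {0};
        for (int i = 0; i < s4Num; ++i)          /* sum all 5x5 S4 maps */
            for (int k = 0; k < 5 * 5; ++k)
                s4MapSum[k] += s4map[i * 5 * 5 + k];

        for (int i = 0; i < c5CNum; ++i) {       /* one 5x5 dot product per kernel */
            double v = 0.0;
            for (int k = 0; k < 5 * 5; ++k)
                v += s4MapSum[k] * c5conv[i * 5 * 5 + k];
            c5map[i] = activate(v + c5bias[i]);
        }
    }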
Fully connected output layer: input c5map: c5CNum * c5H * c5W; output outmap: outNum; fully connected weights outfullconn: c5CNum * outNum; biases outbias: outNum.
function ForwardOut:
    for i in range outNum:
        let curout = outmap + i
        set the value pointed to by curout to 0
        for ch in range c5CNum:
            add c5map[ch] * outfullconn[ch, i] to the value pointed to by curout
        endfor ch
        add outbias[i] to the value pointed to by curout, feed it into the activation function, and assign the result back to it
    endfor i
endfunction ForwardOut
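The output layer is an ordinary fully connected layer over the c5CNum values of c5map. A C sketch, again reusing activate():

    /* Fully connected output layer; reuses activate() from the C1 sketch. */
    void ForwardOut(const double *c5map, int c5CNum,
                    const double *outfullconn, const double *outbias,
                    double *outmap, int outNum)
    {
        for (int i = 0; i < outNum; ++i) {
            double v = 0.0;
            for (int ch = 0; ch < c5CNum; ++ch)
                v += c5map[ch] * outfullconn[ch * outNum + i];  /* weight [ch, i] */
            outmap[i] = activate(v + outbias[i]);
        }
    }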
Backward pass
Output layer backward:
function BackwardOut:
    for i in range outNum:
        the error of the ith output, outdelta[i], equals the ith output outmap[i] minus the ith expected value label[i], multiplied by the derivative of the output activation function
        the bias error outbiasdelta[i] equals the output error outdelta[i]
    endfor i
    for h in range c5CNum:
        for co in range outNum:
            fulldelta[h, co] equals the value of the hth C5 neuron multiplied by outdelta[co]
        endfor co
    endfor h
endfunction BackwardOut
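A C sketch of the output-layer gradients. Because tanh was assumed as the activation and outmap stores activated values, the derivative is written as 1 - y * y; that expression is part of the assumption, not something the post specifies.

    /* Derivative of the assumed tanh activation, in terms of the activated output y. */
    static double activate_deriv(double y) { return 1.0 - y * y; }

    void BackwardOut(const double *outmap, const double *label, int outNum,
                     const double *c5map, int c5CNum,
                     double *outdelta, double *outbiasdelta, double *fulldelta)
    {
        for (int i = 0; i < outNum; ++i) {
            /* error = (output - target) * f'(output) */
            outdelta[i] = (outmap[i] - label[i]) * activate_deriv(outmap[i]);
            outbiasdelta[i] = outdelta[i];
        }
        /* gradient of the fully connected weights: input activation times output error */
        for (int h = 0; h < c5CNum; ++h)
            for (int co = 0; co < outNum; ++co)
                fulldelta[h * outNum + co] = c5map[h] * outdelta[co];
    }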
C5 layer backward:
function BackwardC5:
    for h in range c5CNum:
        let the temporary variable curerr be 0
        for co in range outNum:
            add the output neuron error outdelta[co] multiplied by the fully connected weight outfullconn[h, co] to curerr
        endfor co
        the neuron error c5delta[h] equals the activation derivative of the c5map[h] neuron multiplied by curerr
        the bias error c5biasdelta[h] equals the neuron error c5delta[h]
        for ch in range s4H:
            for co in range s4W:
                the error of element (ch, co) of the hth kernel equals s4MapSum[ch, co] multiplied by the error of the hth C5 neuron
            endfor co
        endfor ch
    endfor h
endfunction BackwardC5
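A corresponding C sketch of the C5 gradients. s4MapSum is the 5x5 summed S4 map already built in the C5 forward pass, and activate_deriv() is the assumed tanh derivative from the previous sketch.

    void BackwardC5(const double *outdelta, int outNum,
                    const double *outfullconn,
                    const double *c5map, int c5CNum,
                    const double *s4MapSum,           /* 5x5 summed S4 map from the forward pass */
                    double *c5delta, double *c5biasdelta, double *c5convdelta)
    {
        for (int h = 0; h < c5CNum; ++h) {
            /* back-propagate through the fully connected weights */
            double curerr = 0.0;
            for (int co = 0; co < outNum; ++co)
                curerr += outdelta[co] * outfullconn[h * outNum + co];
            c5delta[h] = activate_deriv(c5map[h]) * curerr;
            c5biasdelta[h] = c5delta[h];
            /* kernel gradient: each 5x5 kernel saw s4MapSum as its input */
            for (int ch = 0; ch < 5; ++ch)
                for (int co = 0; co < 5; ++co)
                    c5convdelta[h * 5 * 5 + ch * 5 + co] = s4MapSum[ch * 5 + co] * c5delta[h];
        }
    }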
S4 layer backward:
function BackwardS4:
    initialize an array c5convSum: 5 * 5, all elements 0
    for i in range c5CNum:
        for ch in range 5:
            for cc in range 5:
                add c5conv[i, ch, cc] multiplied by the error of the ith C5 neuron to c5convSum[ch, cc]
            endfor cc
        endfor ch
    endfor i
    for i in range s4Num:
        for h in range s4H:
            for co in range s4W:
                the S4 neuron error s4delta[i, h, co] equals its activation derivative multiplied by c5convSum[h, co]
            endfor co
        endfor h
        the bias error is the mean of the neuron errors of this map
        the pooling weight error is the mean of (each S4 neuron error multiplied by the mean of the corresponding 2x2 window of C3 neurons)
    endfor i
endfunction BackwardS4
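A C sketch of the S4 gradients under the same assumptions (s4H = s4W = 5, tanh derivative as above). The c3map argument is needed for the pooling-weight gradient, since in the forward pass that weight multiplies the mean of each 2x2 C3 window.

    void BackwardS4(const double *c5conv, const double *c5delta, int c5CNum,
                    const double *s4map, int s4Num,        /* s4H = s4W = 5 assumed */
                    const double *c3map, int c3H, int c3W,
                    double *s4delta, double *s4biasdelta, double *s4poolingdelta)
    {
        /* Sum the C5 kernels weighted by the C5 neuron errors: gradient w.r.t. the summed S4 map. */
        double c5convSum[5 * 5] = {0};
        for (int i = 0; i < c5CNum; ++i)
            for (int k = 0; k < 5 * 5; ++k)
                c5convSum[k] += c5conv[i * 5 * 5 + k] * c5delta[i];

        for (int i = 0; i < s4Num; ++i) {
            double biasSum = 0.0, poolSum = 0.0;
            for (int h = 0; h < 5; ++h)
                for (int co = 0; co < 5; ++co) {
                    int idx = i * 5 * 5 + h * 5 + co;
                    s4delta[idx] = activate_deriv(s4map[idx]) * c5convSum[h * 5 + co];
                    biasSum += s4delta[idx];
                    /* the pooling weight multiplied the mean of this 2x2 C3 window */
                    const double *m = c3map + i * c3H * c3W;
                    double avg = (m[(h * 2) * c3W + co * 2] +
                                  m[(h * 2) * c3W + co * 2 + 1] +
                                  m[(h * 2 + 1) * c3W + co * 2] +
                                  m[(h * 2 + 1) * c3W + co * 2 + 1]) / 4.0;
                    poolSum += s4delta[idx] * avg;
                }
            s4biasdelta[i] = biasSum / 25.0;        /* mean of the map's neuron errors */
            s4poolingdelta[i] = poolSum / 25.0;     /* mean of (delta * window mean) */
        }
    }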
C3 layer backward:
function BackwardC3:
    initialize an array s2MapSum: s2H * s2W, all elements 0
    for i in range s2Num:
        for h in range s2H:
            for co in range s2W:
                add s2map[i, h, co] to s2MapSum[h, co]
            endfor co
        endfor h
    endfor i
    for i in range c3CNum:
        for h in range c3H:
            for co in range c3W:
                the C3 neuron error c3delta[i, h, co] equals its activation derivative multiplied by the corresponding pooling weight and by the S4 neuron error at the corresponding position
            endfor co
        endfor h
        the bias error is the mean of the neuron errors of this map
        for ch in range 5:
            for cc in range 5:
                let c3convdelta[i, ch, cc] be 0
                for sh in range c3H:
                    for sc in range c3W:
                        add s2MapSum[ch + sh, cc + sc] multiplied by c3delta[i, sh, sc] to c3convdelta[i, ch, cc]
                    endfor sc
                endfor sh
            endfor cc
        endfor ch
    endfor i
endfunction BackwardC3
S2 layer backward:
function BackwardS2:
    initialize an array c3convSum: s2H * s2W, all elements 0
    for i in range c3CNum:
        for h in range c3H:
            for co in range c3W:
                for ch in range 5:
                    for cc in range 5:
                        add c3conv[i, ch, cc] multiplied by the error of the neuron at (h, co) in the ith C3 map to c3convSum[h + ch, co + cc]
                    endfor cc
                endfor ch
            endfor co
        endfor h
    endfor i
    for i in range s2Num:
        for h in range s2H:
            for co in range s2W:
                the S2 neuron error s2delta[i, h, co] equals the activation derivative of the neuron at (h, co) in map i multiplied by c3convSum[h, co]
            endfor co
        endfor h
        the bias error is the mean of the neuron errors of this map
        for the pooling weight error of map i:
            for h in range s2H:
                for co in range s2W:
                    add to s2poolingdelta[i] the S2 neuron error s2delta[i, h, co] multiplied by the mean of the following four elements:
                        c1map[i, h*2, co*2], c1map[i, h*2, co*2+1], c1map[i, h*2+1, co*2], c1map[i, h*2+1, co*2+1]
                endfor co
            endfor h
    endfor i
endfunction BackwardS2
C1 layer backward:
function BackwardC1:
    for i in range c1CNum:
        for h in range c1H:
            for co in range c1W:
                the C1 neuron error equals its activation derivative multiplied by the corresponding pooling weight and by the corresponding S2 neuron error
            endfor co
        endfor h
        the bias error is the mean of the neuron errors of this map
        for ch in range 5:
            for cc in range 5:
                for h in range c1H:
                    for co in range c1W:
                        add inmap[h + ch, co + cc] multiplied by c1delta[i, h, co] to the kernel error c1convdelta[i, ch, cc]
                    endfor co
                endfor h
            endfor cc
        endfor ch
    endfor i
endfunction BackwardC1
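Finally, a C sketch of the C1 gradients: the error arrives from S2 through the pooling weight, and the kernel gradient correlates the input with each map's errors, with the tanh derivative assumed as before.

    void BackwardC1(const double *inmap, int inW,
                    const double *c1map, int c1CNum, int c1H, int c1W,
                    const double *s2pooling, const double *s2delta, int s2H, int s2W,
                    double *c1delta, double *c1biasdelta, double *c1convdelta)
    {
        for (int i = 0; i < c1CNum; ++i) {
            /* neuron errors: activation derivative * pooling weight * S2 error above this 2x2 block */
            double biasSum = 0.0;
            for (int h = 0; h < c1H; ++h)
                for (int co = 0; co < c1W; ++co) {
                    int idx = i * c1H * c1W + h * c1W + co;
                    c1delta[idx] = activate_deriv(c1map[idx]) * s2pooling[i]
                                 * s2delta[i * s2H * s2W + (h / 2) * s2W + co / 2];
                    biasSum += c1delta[idx];
                }
            c1biasdelta[i] = biasSum / (c1H * c1W);   /* mean of the map's neuron errors */

            /* kernel gradient: correlate the input with this map's errors */
            for (int ch = 0; ch < 5; ++ch)
                for (int cc = 0; cc < 5; ++cc) {
                    double g = 0.0;
                    for (int h = 0; h < c1H; ++h)
                        for (int co = 0; co < c1W; ++co)
                            g += inmap[(h + ch) * inW + co + cc]
                               * c1delta[i * c1H * c1W + h * c1W + co];
                    c1convdelta[i * 5 * 5 + ch * 5 + cc] = g;
                }
        }
    }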
A few of these steps originally contained mistakes caused by my not yet understanding the LeNet5 architecture; only after studying other blog posts and debugging the code did I gradually work them out.