It is fairly convenient to do this with the Keras framework.
First install Anaconda, then install Keras via pip (pip install keras).
# Import the modules and components we will use
from __future__ import absolute_import
from __future__ import print_function
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.advanced_activations import PReLU
from keras.optimizers import SGD, Adadelta, Adagrad
from keras.utils import np_utils, generic_utils
from six.moves import range
from data import load_data
import random
import numpy as np
# Load the data set, then shuffle samples and labels with the same random permutation
data, label = load_data()
index = [i for i in range(len(data))]
random.shuffle(index)
data = data[index]
label = label[index]
print(data.shape[0], ' samples')
# Convert the integer class labels into one-hot vectors for the 10 classes
label = np_utils.to_categorical(label, 10)
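For reference, np_utils.to_categorical simply turns integer class labels into one-hot rows; a quick standalone check reusing the imports above (the toy label values are made up purely for illustration):

toy_labels = np.array([0, 3, 9])                 # hypothetical labels, not from the real data set
print(np_utils.to_categorical(toy_labels, 10))   # three rows of length 10, each with a single 1 at the label's index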
###############
# Build the CNN model
# Create a Sequential model
model = Sequential()
# border_mode can be 'valid' or 'full'; see the Keras documentation for details.
# (The convolution and pooling layers themselves are not shown in this excerpt;
#  a complete minimal sketch is given below.)
# Use tanh as the activation function
model.add(Activation('tanh'))
model.add(Flatten())
# Softmax classifier with 10 output classes
model.add(Dense(10, init='normal'))
model.add(Activation('softmax'))
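The convolution and pooling layers that the border_mode comment refers to are missing from the snippet above. As a rough sketch only, here is one way the complete model could be assembled with the same old Keras 1.x API (Convolution2D, border_mode, init); the filter counts, kernel sizes and the channels-first 1x28x28 input shape are assumptions for illustration, not values from the original:

from keras.layers.convolutional import Convolution2D, MaxPooling2D

model = Sequential()
# 4 filters of size 5x5 on a 1-channel 28x28 input -- all of these numbers are illustrative
model.add(Convolution2D(4, 5, 5, border_mode='valid', input_shape=(1, 28, 28)))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# a second convolution / pooling stage
model.add(Convolution2D(8, 3, 3, border_mode='valid'))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# flatten and classify into 10 classes with softmax, as in the snippet above
model.add(Flatten())
model.add(Dense(10, init='normal'))
model.add(Activation('softmax'))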
#############
# Train the model
##############
# Use SGD + momentum as the optimizer
# (the learning-rate and decay values below are common defaults, not taken from the original)
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# The loss argument of model.compile is the loss (objective) function
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=["accuracy"])
# Calling fit runs the training loop; train for 10 epochs with a batch size of 100.
model.fit(data, label, batch_size=100, nb_epoch=10, shuffle=True, verbose=1)
"""
#使用data augmentation的方法
#一些参数和调用的方法,请看文档
datagen = ImageDataGenerator(
featurewise_center=True, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
zca_whitening=False, # apply ZCA whitening
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(data)
for e in range(nb_epoch):
print('Epoch', e)
print("Training...")
# batch train with realtime data augmentation
progbar = generic_utils.Progbar(data.shape[0])
for X_batch, Y_batch in datagen.flow(data, label):
loss,accuracy = model.train(X_batch, Y_batch,accuracy=True)
progbar.add(X_batch.shape[0], values=[("train loss", loss),("accuracy:", accuracy)] )
A plain NumPy implementation of three-dimensional (multi-channel, e.g. RGB) convolution:

def conv3d(image, filter):
    """
    Three-dimensional (multi-channel) convolution.
    :return: the single-channel output image
    """
    h, w, c = image.shape
    x, y, z = filter.shape
    height_new = h - x + 1  # output height
    width_new = w - y + 1   # output width
    image_new = np.zeros((height_new, width_new), dtype=np.float64)
    for i in range(height_new):
        for j in range(width_new):
            r = np.sum(image[i:i+x, j:j+y, 0] * filter[:, :, 0])
            g = np.sum(image[i:i+x, j:j+y, 1] * filter[:, :, 1])
            b = np.sum(image[i:i+x, j:j+y, 2] * filter[:, :, 2])
            image_new[i, j] = np.sum([r, g, b])
    return image_new
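A quick check of the function on random inputs (the shapes below are arbitrary, chosen only for illustration):

image = np.random.rand(8, 8, 3)     # a made-up 8x8 RGB image
kernel = np.random.rand(3, 3, 3)    # a made-up 3x3x3 filter
print(conv3d(image, kernel).shape)  # -> (6, 6), i.e. (8-3+1, 8-3+1)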
If the data cannot be written out in one go, write it in chunks.
Suppose the maximum length allowed for data_string is MAX_LENGTH; then simply split the binary string you want to write into pieces of at most MAX_LENGTH each and write them in a loop, as sketched below.
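A minimal sketch of that chunked write, assuming data_string is a bytes object and MAX_LENGTH is the per-write limit (the output file name and the helper's name are made up for the example):

def write_in_chunks(path, data_string, MAX_LENGTH):
    # write the binary string in slices no longer than MAX_LENGTH bytes
    with open(path, 'wb') as out_file:
        for start in range(0, len(data_string), MAX_LENGTH):
            out_file.write(data_string[start:start + MAX_LENGTH])

write_in_chunks('output.bin', data_string, MAX_LENGTH)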
conv performs convolution; it can also be used to multiply polynomials.
'full' is the default and returns the complete two-dimensional convolution;
'same' returns the central part of the convolution, the same size as A;
'valid' returns only the parts of the convolution computed without the zero-padded edges; when size(A) >= size(B), size(C) = [Ma-Mb+1, Na-Nb+1].
Example: given matrices A and B, C holds the convolution result under the chosen shape option (the concrete A, B and C values are omitted here).
You can also run help conv in MATLAB yourself; a Python sketch of the three shape options follows below.
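For readers working in Python rather than MATLAB, NumPy/SciPy expose the same three shape options; a small sketch with made-up inputs:

import numpy as np
from scipy.signal import convolve2d

# polynomial multiplication via 1-D convolution: (x+2)(x+3) = x^2 + 5x + 6
print(np.convolve([1, 2], [1, 3]))            # -> [1 5 6]

A = np.arange(16, dtype=float).reshape(4, 4)  # illustrative 4x4 input
B = np.ones((3, 3)) / 9.0                     # illustrative 3x3 averaging kernel
print(convolve2d(A, B, mode='full').shape)    # (6, 6): the complete convolution
print(convolve2d(A, B, mode='same').shape)    # (4, 4): central part, same size as A
print(convolve2d(A, B, mode='valid').shape)   # (2, 2): [Ma-Mb+1, Na-Nb+1]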
As for gggfconv and ggfconv, MATLAB does not ship with these functions; what you saw was most likely someone's own user-defined code.