Keras, how can I get the output of each layer?

I have trained a binary classification model with a CNN, and here is my code:

    model = Sequential()
    model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                            border_mode='valid',
                            input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=pool_size))  # (16, 16, 32)
    model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1]))
    model.add(Activation('relu'))
    model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1]))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=pool_size))  # (8, 8, 64) = (2048)
    model.add(Flatten())
    model.add(Dense(1024))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(2))  # define a binary classification problem
    model.add(Activation('softmax'))

    model.compile(loss='categorical_crossentropy',
                  optimizer='adadelta',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch,
              verbose=1, validation_data=(x_test, y_test))

Here, I want to get the output of each layer, just as I would in TensorFlow. How can I do that?

You can easily get the output of any layer by using: model.layers[index].output
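For example, a minimal sketch for evaluating a single layer's output (assuming `idx` is the index of the layer you want and `x` is a batch of inputs with the model's input shape):

    from keras import backend as K

    # build a function from the model's input to the chosen layer's output
    get_layer_output = K.function([model.input, K.learning_phase()],
                                  [model.layers[idx].output])

    # 0. = test mode (dropout off); 1. = training mode
    out = get_layer_output([x, 0.])[0]
    print(out.shape)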

For all layers, use this:

    import numpy as np
    from keras import backend as K

    inp = model.input                                            # input placeholder
    outputs = [layer.output for layer in model.layers]           # all layer outputs
    functors = [K.function([inp] + [K.learning_phase()], [out])
                for out in outputs]                               # evaluation functions

    # Testing
    test = np.random.random(input_shape)[np.newaxis, ...]
    layer_outs = [func([test, 1.]) for func in functors]
    print(layer_outs)

Note: to simulate Dropout, pass 1. for learning_phase when computing layer_outs; otherwise use 0.
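For instance, a small sketch that reuses the `functors` and `test` defined above to get test-time behaviour:

    # learning phase 0. = test mode: Dropout layers pass data through unchanged
    layer_outs_test = [func([test, 0.]) for func in functors]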

Edit: (based on the comments)

K.function creates Theano/TensorFlow tensor functions, which are later used to get the output from the symbolic graph given the input.

K.learning_phase() is required as an input because many Keras layers, such as Dropout and BatchNormalization, depend on it to change their behaviour between training and test time.

So if you remove the dropout layer from your code, you can simply use:

    import numpy as np
    from keras import backend as K

    inp = model.input                                    # input placeholder
    outputs = [layer.output for layer in model.layers]   # all layer outputs
    functors = [K.function([inp], [out]) for out in outputs]  # evaluation functions

    # Testing
    test = np.random.random(input_shape)[np.newaxis, ...]
    layer_outs = [func([test]) for func in functors]
    print(layer_outs)

Edit 2: more optimized

I just realized that the previous answer is not optimal: for each function evaluation, the data is transferred from CPU to GPU memory, and the tensor computations for the lower layers are repeated over and over.

Instead, this is a much better approach, because you don't need multiple functions, just a single function that gives you the list of all outputs:

    import numpy as np
    from keras import backend as K

    inp = model.input                                    # input placeholder
    outputs = [layer.output for layer in model.layers]   # all layer outputs
    functor = K.function([inp] + [K.learning_phase()], outputs)  # evaluation function

    # Testing
    test = np.random.random(input_shape)[np.newaxis, ...]
    layer_outs = functor([test, 1.])
    print(layer_outs)

I wrote this function for myself (in Jupyter); it was inspired by indraforyou's answer. It plots all the layer outputs automatically. Your images must have shape (x, y, 1), where 1 stands for one channel. You just call plot_layer_outputs(...) to plot.

    %matplotlib inline
    import matplotlib.pyplot as plt
    import numpy as np
    from keras import backend as K

    def get_layer_outputs():
        test_image = YOUR IMAGE GOES HERE!!!
        outputs = [layer.output for layer in model.layers]           # all layer outputs
        comp_graph = [K.function([model.input] + [K.learning_phase()], [output])
                      for output in outputs]                          # evaluation functions

        # Testing
        layer_outputs_list = [op([test_image, 1.]) for op in comp_graph]

        layer_outputs = []
        for layer_output in layer_outputs_list:
            print(layer_output[0][0].shape, end='\n-------------------\n')
            layer_outputs.append(layer_output[0][0])

        return layer_outputs

    def plot_layer_outputs(layer_number):
        layer_outputs = get_layer_outputs()

        x_max = layer_outputs[layer_number].shape[0]
        y_max = layer_outputs[layer_number].shape[1]
        n     = layer_outputs[layer_number].shape[2]

        L = []
        for i in range(n):
            L.append(np.zeros((x_max, y_max)))

        for i in range(n):
            for x in range(x_max):
                for y in range(y_max):
                    L[i][x][y] = layer_outputs[layer_number][x][y][i]

        for img in L:
            plt.figure()
            plt.imshow(img, interpolation='nearest')
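A hedged usage example, assuming you have replaced the test_image placeholder with an actual (x, y, 1) image and that index 2 refers to the second Convolution2D layer of the model in the question:

    # plot every channel of the output of layer 2
    plot_layer_outputs(2)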

The following looks very simple to me:

    model.layers[idx].output

The above is a tensor object, so you can modify it using any operation that can be applied to a tensor object.

For example, to get the shape: model.layers[idx].output.get_shape()

idx is the index of the layer, and you can find it from model.summary().
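A small sketch of how you might list the indices alongside layer names and output shapes, as an alternative to scanning model.summary() by eye:

    # print index, name, and output shape for every layer
    for idx, layer in enumerate(model.layers):
        print(idx, layer.name, layer.output.get_shape())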

Well, the other answers are very complete, but there is a very basic way to "see" the shapes rather than "get" them.

Just do a model.summary(). It will print all the layers and their output shapes.

https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer

One simple way is to create a new Model that will output the layers that you are interested in:

    from keras.models import Model

    model = ...  # create the original model

    layer_name = 'my_layer'
    intermediate_layer_model = Model(inputs=model.input,
                                     outputs=model.get_layer(layer_name).output)
    intermediate_output = intermediate_layer_model.predict(data)

Alternatively, you can build a Keras function that will return the output of a certain layer given a certain input, for example:

    from keras import backend as K

    # with a Sequential model
    get_3rd_layer_output = K.function([model.layers[0].input],
                                      [model.layers[3].output])
    layer_output = get_3rd_layer_output([x])[0]
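Building on the FAQ snippet above, here is a sketch (not from the FAQ itself) that collects every layer's output in a single predict call by giving the new Model multiple outputs; it assumes the model and x_test from the question:

    from keras.models import Model

    # a new Model whose outputs are the outputs of every layer of the original model
    all_layers_model = Model(inputs=model.input,
                             outputs=[layer.output for layer in model.layers])

    # list of arrays, one per layer, for the first test sample
    all_outputs = all_layers_model.predict(x_test[:1])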

From: https://github.com/philipperemy/keras-visualize-activations/blob/master/read_activations.py

    import keras.backend as K

    def get_activations(model, model_inputs, print_shape_only=False, layer_name=None):
        print('----- activations -----')
        activations = []
        inp = model.input

        model_multi_inputs_cond = True
        if not isinstance(inp, list):
            # only one input! let's wrap it in a list.
            inp = [inp]
            model_multi_inputs_cond = False

        outputs = [layer.output for layer in model.layers if
                   layer.name == layer_name or layer_name is None]  # all layer outputs

        funcs = [K.function(inp + [K.learning_phase()], [out]) for out in outputs]  # evaluation functions

        if model_multi_inputs_cond:
            list_inputs = []
            list_inputs.extend(model_inputs)
            list_inputs.append(0.)
        else:
            list_inputs = [model_inputs, 0.]

        # Learning phase. 0 = Test mode (no dropout or batch normalization)
        # layer_outputs = [func([model_inputs, 0.])[0] for func in funcs]
        layer_outputs = [func(list_inputs)[0] for func in funcs]
        for layer_activations in layer_outputs:
            activations.append(layer_activations)
            if print_shape_only:
                print(layer_activations.shape)
            else:
                print(layer_activations)
        return activations
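A hedged usage example, assuming the trained model and the x_test data from the question:

    # print only the shape of each layer's activations for the first test sample
    acts = get_activations(model, x_test[:1], print_shape_only=True)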