```python
# -*- coding: utf-8 -*-
'''Xception V1 model for Keras.

On ImageNet, this model gets to a top-1 validation accuracy of 0.790
and a top-5 validation accuracy of 0.945.

Do note that the input image format for this model is different than for
the VGG16 and ResNet models (299x299 instead of 224x224),
and that the input preprocessing function
is also different (same as Inception V3).

Also do note that this model is only available for the TensorFlow backend,
due to its reliance on `SeparableConvolution` layers.

# Reference:

- [Xception: Deep Learning with Depthwise Separable Convolutions](https://arxiv.org/abs/1610.02357)

'''
from __future__ import print_function
from __future__ import absolute_import

import warnings

import numpy as np

from keras.preprocessing import image
from keras.models import Model
from keras import layers
from keras.layers import Dense
from keras.layers import Input
from keras.layers import BatchNormalization
from keras.layers import Activation
from keras.layers import Conv2D
from keras.layers import SeparableConv2D
from keras.layers import MaxPooling2D
from keras.layers import GlobalAveragePooling2D
from keras.layers import GlobalMaxPooling2D
from keras.engine.topology import get_source_inputs
from keras.utils.data_utils import get_file
from keras import backend as K
from keras.applications.imagenet_utils import decode_predictions
from keras.applications.imagenet_utils import _obtain_input_shape


TF_WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.4/xception_weights_tf_dim_ordering_tf_kernels.h5'
TF_WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.4/xception_weights_tf_dim_ordering_tf_kernels_notop.h5'


def Xception(include_top=True, weights='imagenet',
             input_tensor=None, input_shape=None,
             pooling=None,
             classes=1000):
    """Instantiates the Xception architecture.

    Optionally loads weights pre-trained on ImageNet.
    This model is available for TensorFlow only,
    and can only be used with inputs following the TensorFlow
    data format `(width, height, channels)`.
    You should set `image_data_format="channels_last"` in your Keras config
    located at ~/.keras/keras.json.

    Note that the default input image size for this model is 299x299.

    # Arguments
        include_top: whether to include the fully-connected
            layer at the top of the network.
        weights: one of `None` (random initialization)
            or "imagenet" (pre-training on ImageNet).
        input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
            to use as image input for the model.
        input_shape: optional shape tuple, only to be specified
            if `include_top` is False (otherwise the input shape
            has to be `(299, 299, 3)`).
            It should have exactly 3 input channels,
            and width and height should be no smaller than 71.
            E.g. `(150, 150, 3)` would be one valid value.
        pooling: Optional pooling mode for feature extraction
            when `include_top` is `False`.
            - `None` means that the output of the model will be
                the 4D tensor output of the
                last convolutional layer.
            - `avg` means that global average pooling
                will be applied to the output of the
                last convolutional layer, and thus
                the output of the model will be a 2D tensor.
            - `max` means that global max pooling will
                be applied.
        classes: optional number of classes to classify images
            into, only to be specified if `include_top` is True, and
            if no `weights` argument is specified.

    # Returns
        A Keras model instance.

    # Raises
        ValueError: in case of invalid argument for `weights`,
            or invalid input shape.
        RuntimeError: If attempting to run this model with a
            backend that does not support separable convolutions.
    """
    if weights not in {'imagenet', None}:
        raise ValueError('The `weights` argument should be either '
                         '`None` (random initialization) or `imagenet` '
                         '(pre-training on ImageNet).')

    if weights == 'imagenet' and include_top and classes != 1000:
        raise ValueError('If using `weights` as imagenet with `include_top`'
                         ' as true, `classes` should be 1000')

    if K.backend() != 'tensorflow':
        raise RuntimeError('The Xception model is only available with '
                           'the TensorFlow backend.')
    if K.image_data_format() != 'channels_last':
        warnings.warn('The Xception model is only available for the '
                      'input data format "channels_last" '
                      '(width, height, channels). '
                      'However your settings specify the default '
                      'data format "channels_first" (channels, width, height). '
                      'You should set `image_data_format="channels_last"` in your Keras '
                      'config located at ~/.keras/keras.json. '
                      'The model being returned right now will expect inputs '
                      'to follow the "channels_last" data format.')
        K.set_image_data_format('channels_last')
        old_data_format = 'channels_first'
    else:
        old_data_format = None

    # Determine proper input shape
    input_shape = _obtain_input_shape(input_shape,
                                      default_size=299,
                                      min_size=71,
                                      data_format=K.image_data_format(),
                                      include_top=include_top)

    if input_tensor is None:
        img_input = Input(shape=input_shape)
    else:
        if not K.is_keras_tensor(input_tensor):
            img_input = Input(tensor=input_tensor, shape=input_shape)
        else:
            img_input = input_tensor

    x = Conv2D(32, (3, 3), strides=(2, 2), use_bias=False, name='block1_conv1')(img_input)
    x = BatchNormalization(name='block1_conv1_bn')(x)
    x = Activation('relu', name='block1_conv1_act')(x)
    x = Conv2D(64, (3, 3), use_bias=False, name='block1_conv2')(x)
    x = BatchNormalization(name='block1_conv2_bn')(x)
    x = Activation('relu', name='block1_conv2_act')(x)

    residual = Conv2D(128, (1, 1), strides=(2, 2),
                      padding='same', use_bias=False)(x)
    residual = BatchNormalization()(residual)

    x = SeparableConv2D(128, (3, 3), padding='same', use_bias=False, name='block2_sepconv1')(x)
    x = BatchNormalization(name='block2_sepconv1_bn')(x)
    x = Activation('relu', name='block2_sepconv2_act')(x)
    x = SeparableConv2D(128, (3, 3), padding='same', use_bias=False, name='block2_sepconv2')(x)
    x = BatchNormalization(name='block2_sepconv2_bn')(x)

    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same', name='block2_pool')(x)
    x = layers.add([x, residual])

    residual = Conv2D(256, (1, 1), strides=(2, 2),
                      padding='same', use_bias=False)(x)
    residual = BatchNormalization()(residual)

    x = Activation('relu', name='block3_sepconv1_act')(x)
    x = SeparableConv2D(256, (3, 3), padding='same', use_bias=False, name='block3_sepconv1')(x)
    x = BatchNormalization(name='block3_sepconv1_bn')(x)
    x = Activation('relu', name='block3_sepconv2_act')(x)
    x = SeparableConv2D(256, (3, 3), padding='same', use_bias=False, name='block3_sepconv2')(x)
    x = BatchNormalization(name='block3_sepconv2_bn')(x)

    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same', name='block3_pool')(x)
    x = layers.add([x, residual])

    residual = Conv2D(728, (1, 1), strides=(2, 2),
                      padding='same', use_bias=False)(x)
    residual = BatchNormalization()(residual)

    x = Activation('relu', name='block4_sepconv1_act')(x)
    x = SeparableConv2D(728, (3, 3), padding='same', use_bias=False, name='block4_sepconv1')(x)
    x = BatchNormalization(name='block4_sepconv1_bn')(x)
    x = Activation('relu', name='block4_sepconv2_act')(x)
    x = SeparableConv2D(728, (3, 3), padding='same', use_bias=False, name='block4_sepconv2')(x)
    x = BatchNormalization(name='block4_sepconv2_bn')(x)

    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same', name='block4_pool')(x)
    x = layers.add([x, residual])

    for i in range(8):
        residual = x
        prefix = 'block' + str(i + 5)

        x = Activation('relu', name=prefix + '_sepconv1_act')(x)
        x = SeparableConv2D(728, (3, 3), padding='same', use_bias=False, name=prefix + '_sepconv1')(x)
        x = BatchNormalization(name=prefix + '_sepconv1_bn')(x)
        x = Activation('relu', name=prefix + '_sepconv2_act')(x)
        x = SeparableConv2D(728, (3, 3), padding='same', use_bias=False, name=prefix + '_sepconv2')(x)
        x = BatchNormalization(name=prefix + '_sepconv2_bn')(x)
        x = Activation('relu', name=prefix + '_sepconv3_act')(x)
        x = SeparableConv2D(728, (3, 3), padding='same', use_bias=False, name=prefix + '_sepconv3')(x)
        x = BatchNormalization(name=prefix + '_sepconv3_bn')(x)

        x = layers.add([x, residual])

    residual = Conv2D(1024, (1, 1), strides=(2, 2),
                      padding='same', use_bias=False)(x)
    residual = BatchNormalization()(residual)

    x = Activation('relu', name='block13_sepconv1_act')(x)
    x = SeparableConv2D(728, (3, 3), padding='same', use_bias=False, name='block13_sepconv1')(x)
    x = BatchNormalization(name='block13_sepconv1_bn')(x)
    x = Activation('relu', name='block13_sepconv2_act')(x)
    x = SeparableConv2D(1024, (3, 3), padding='same', use_bias=False, name='block13_sepconv2')(x)
    x = BatchNormalization(name='block13_sepconv2_bn')(x)

    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same', name='block13_pool')(x)
    x = layers.add([x, residual])

    x = SeparableConv2D(1536, (3, 3), padding='same', use_bias=False, name='block14_sepconv1')(x)
    x = BatchNormalization(name='block14_sepconv1_bn')(x)
    x = Activation('relu', name='block14_sepconv1_act')(x)

    x = SeparableConv2D(2048, (3, 3), padding='same', use_bias=False, name='block14_sepconv2')(x)
    x = BatchNormalization(name='block14_sepconv2_bn')(x)
    x = Activation('relu', name='block14_sepconv2_act')(x)

    if include_top:
        x = GlobalAveragePooling2D(name='avg_pool')(x)
        x = Dense(classes, activation='softmax', name='predictions')(x)
    else:
        if pooling == 'avg':
            x = GlobalAveragePooling2D()(x)
        elif pooling == 'max':
            x = GlobalMaxPooling2D()(x)

    # Ensure that the model takes into account
    # any potential predecessors of `input_tensor`.
    if input_tensor is not None:
        inputs = get_source_inputs(input_tensor)
    else:
        inputs = img_input
    # Create model.
    model = Model(inputs, x, name='xception')

    # load weights
    if weights == 'imagenet':
        if include_top:
            weights_path = get_file('xception_weights_tf_dim_ordering_tf_kernels.h5',
                                    TF_WEIGHTS_PATH,
                                    cache_subdir='models')
        else:
            weights_path = get_file('xception_weights_tf_dim_ordering_tf_kernels_notop.h5',
                                    TF_WEIGHTS_PATH_NO_TOP,
                                    cache_subdir='models')
        model.load_weights(weights_path)

    if old_data_format:
        K.set_image_data_format(old_data_format)
    return model


def preprocess_input(x):
    x /= 255.
    x -= 0.5
    x *= 2.
    return x


if __name__ == '__main__':
    model = Xception(include_top=True, weights='imagenet')

    img_path = 'elephant.jpg'
    img = image.load_img(img_path, target_size=(299, 299))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    print('Input image shape:', x.shape)

    preds = model.predict(x)
    print(np.argmax(preds))
    print('Predicted:', decode_predictions(preds, 1))
```
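The `__main__` block above demonstrates classification on a single image. As a complement, here is a minimal sketch (not part of the original file) of using the same function as a feature extractor, relying only on arguments documented in its docstring (`include_top=False`, `pooling='avg'`). It assumes it runs in the same module as the code above (or after importing `Xception` and `preprocess_input` from it), and `elephant.jpg` is just the placeholder image name already used there.

```python
import numpy as np
from keras.preprocessing import image

# Drop the classifier and apply global average pooling, so the model
# returns one feature vector per image instead of class probabilities.
base = Xception(include_top=False, weights='imagenet', pooling='avg')

img = image.load_img('elephant.jpg', target_size=(299, 299))
x = np.expand_dims(image.img_to_array(img), axis=0)
features = base.predict(preprocess_input(x))
print(features.shape)  # expected: (1, 2048), the width of block14_sepconv2
```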
Xception: Deep Learning with Depthwise Separable Convolutions
Reference: Chollet, François. Xception: Deep Learning with Depthwise Separable Convolutions. arXiv preprint arXiv:1610.02357, 2016.
Xception is a model that improves upon the Inception V3 model [1].
What is a Depthwise Separable Convolution?
A depthwise separable convolution, commonly called “separable convolution” in deep learning frameworks such as TensorFlow and Keras, consists of a sequence of two operations (a minimal code sketch follows this list):

1. a depthwise convolution, i.e. a spatial convolution performed independently over each channel of the input, and
2. a pointwise convolution, i.e. a 1x1 convolution, projecting the channels output by the depthwise convolution onto a new channel space.
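To make the two-step structure concrete, here is a small Keras sketch (my own illustration, not code from the paper) showing that the explicit depthwise-then-pointwise form has exactly the same weights as the fused `SeparableConv2D` layer used throughout the implementation above. It assumes a Keras version that provides `DepthwiseConv2D` (the 2017 code reproduced above predates that layer).

```python
from keras.layers import Input, DepthwiseConv2D, Conv2D, SeparableConv2D
from keras.models import Model

inp = Input(shape=(299, 299, 64))

# Explicit two-step form:
# 1) depthwise: one 3x3 filter per input channel, no cross-channel mixing
d = DepthwiseConv2D((3, 3), padding='same', use_bias=False)(inp)
# 2) pointwise: 1x1 convolution mixing the 64 depthwise outputs into 128 channels
p = Conv2D(128, (1, 1), padding='same', use_bias=False)(d)

# Fused form, as used in the Xception code above
s = SeparableConv2D(128, (3, 3), padding='same', use_bias=False)(inp)

two_step = Model(inp, p)
fused = Model(inp, s)
print(two_step.count_params(), fused.count_params())  # identical: 8768 and 8768
```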
This is not to be confused with a spatially separable convolution, which is also commonly called “separable convolution” in the image processing community.
Why Separable Convolutions?
The author postulates that “the fundamental hypothesis behind Inception is that cross-channel correlations and spatial correlations are sufficiently decoupled that it is preferable not to map them jointly”. This decoupling is implemented in the Inception module by a 1x1 convolution followed by a 3x3 convolution (see fig. 1 and 2).
Furthermore, the author makes the following proposition: the Inception module lies in a discrete spectrum of formulations. At one extreme of this spectrum we find the regular convolution, which spans the entire channel space (a single segment). The reformulation of the simplified Inception module in figure 3 uses three segments. Figure 4 shows a module that uses a separate convolution for each channel, which is almost a depthwise separable convolution (more on this below).
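As a back-of-the-envelope illustration of this spectrum (my own arithmetic, not figures from the paper), one can count the weights of a 3x3 convolution over 728 channels, the width used in the middle flow of the Xception code above, when the channel space is split into a growing number of independent segments:

```python
def conv_params(c_in, c_out, k=3, segments=1):
    """Weights of a k x k convolution whose channels are split into
    `segments` independent groups (biases ignored). segments=1 is a
    regular convolution; segments=c_in is the per-channel extreme."""
    return segments * (c_in // segments) * (c_out // segments) * k * k

c = 728
for g in (1, 4, 728):
    print(g, conv_params(c, c, segments=g))
# 1   -> 4,769,856  regular 3x3 convolution (one segment)
# 4   -> 1,192,464  a few segments, Inception-like
# 728 -> 6,552      one segment per channel (the spatial part of a separable conv)
# A full depthwise separable convolution adds a 1x1 pointwise step on top of
# the g=728 case (728*728 = 529,984 extra weights), still far below g=1.
```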
There are two minor differences between a depthwise separable convolution and the module shown in figure 4: (1) the order of the two operations, and (2) the presence or absence of a non-linearity between them. The author argues that the first difference does not matter. The second does: the absence of a non-linearity between the two operations improves accuracy (see figure 10). The opposite result is reported by Szegedy et al. for Inception modules; Chollet offers an explanation in section 4.7.
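To see exactly where that optional non-linearity would sit, here is a small sketch (a hypothetical helper of mine, not code from the paper or from the file above, with the same `DepthwiseConv2D` availability caveat as earlier). Keras's `SeparableConv2D`, as used in the Xception implementation, corresponds to the variant without the intermediate activation.

```python
from keras.layers import DepthwiseConv2D, Conv2D, Activation

def separable_block(x, filters, intermediate_relu=False):
    # Spatial, per-channel convolution.
    x = DepthwiseConv2D((3, 3), padding='same', use_bias=False)(x)
    if intermediate_relu:
        # Non-linearity between the two operations; figure 10 of the
        # paper reports that this variant hurts accuracy.
        x = Activation('relu')(x)
    # Pointwise 1x1 convolution mapping to a new channel space.
    return Conv2D(filters, (1, 1), padding='same', use_bias=False)(x)
```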
The Xception Architecture
In short, the Xception architecture is a linear stack of depthwise separable convolution layers with residual connections. “Xception” means “Extreme Inception”, as this new model uses depthwise separable convolutions, which are at one extreme of the spectrum described above.
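A condensed sketch of that pattern (a stripped-down version of the downsampling blocks in the full implementation above, not an exact copy) could look like this:

```python
from keras import layers
from keras.layers import Activation, BatchNormalization, Conv2D, MaxPooling2D, SeparableConv2D

def xception_block(x, filters):
    """One downsampling block: two separable convolutions plus a strided
    1x1 convolution on the shortcut, as in blocks 2-4 of the code above."""
    shortcut = Conv2D(filters, (1, 1), strides=(2, 2), padding='same', use_bias=False)(x)
    shortcut = BatchNormalization()(shortcut)

    x = Activation('relu')(x)
    x = SeparableConv2D(filters, (3, 3), padding='same', use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = SeparableConv2D(filters, (3, 3), padding='same', use_bias=False)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
    return layers.add([x, shortcut])
```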
Experimental evaluation
Two datasets were used: ImageNet (1,000 classes) and JFT, an internal Google dataset comprising 350 million high-resolution images with labels from a set of 17,000 classes. For each dataset, an Inception V3 model and an Xception model were trained. Models trained on JFT were evaluated on a smaller auxiliary dataset, named FastEval14k. The networks were trained on 60 NVIDIA K80 GPUs each [2]. The ImageNet experiments took 3 days each, while the JFT experiments took over one month each. Note the considerable resources invested, in both time and money.
On ImageNet, Xception shows marginally better results than Inception V3. On JFT, Xception shows a 4.3% relative improvement on the FastEval14k MAP@100 metric (see table 1). The fact that both architectures have almost the same number of parameters indicates that the improvement seen on ImageNet and JFT does not come from added capacity but rather from a more efficient use of the model parameters (a quick way to check the counts in Keras is sketched after Table 1). The author also points out that Xception outperforms the ImageNet results reported by He et al. for ResNet-50, ResNet-101 and ResNet-152.
Table 1: Classification performance comparison on JFT (single crop, single model). The models with FC layers have two 4096-unit FC layers before the logistic regression layer at the top. MAP@100 is the mean average precision on the top 100 guesses.
| Model | JFT FastEval14k MAP@100 |
|---|---|
| Inception V3 - no FC layers | 6.36 |
| Inception V3 - with FC layers | 6.50 |
| Xception - no FC layers | 6.70 |
| Xception - with FC layers | 6.78 |
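Since the Keras implementation is reproduced at the top of this page, the near-identical capacity of the two models is easy to verify. The following is a small sketch of mine (not part of the original post); it assumes the `Xception` function above is in scope (or imported from `keras.applications.xception`) and that Keras provides its own `InceptionV3` application.

```python
# Count weights without downloading pre-trained parameters (weights=None).
from keras.applications.inception_v3 import InceptionV3

xception = Xception(include_top=True, weights=None)
inception_v3 = InceptionV3(include_top=True, weights=None)
print('Xception parameters:    ', xception.count_params())
print('Inception V3 parameters:', inception_v3.count_params())
# Both come out to roughly 23 million parameters, consistent with the
# paper's claim that the two architectures have near-identical capacity.
```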
Future directions
The author says that there is no reason to believe that depthwise separable convolutions are optimal. It may be that intermediate points on the spectrum, lying between regular Inception modules and depthwise separable convolutions, hold further advantages.
[1] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[2] Around $10,000 per GPU, for a total of $600,000.