R/createWideResNetModel.R
createWideResNetModel3D.Rd
Creates a keras model of the Wide ResNet deep learning architecture for image classification/regression. The paper is available here: https://arxiv.org/abs/1605.07146
createWideResNetModel3D(
  inputImageSize,
  numberOfClassificationLabels = 1000,
  depth = 2,
  width = 1,
  residualBlockSchedule = c(16, 32, 64),
  poolSize = c(8, 8, 8),
  dropoutRate = 0,
  weightDecay = 0.0005,
  mode = c("classification", "regression")
)
| Argument | Description |
|---|---|
| inputImageSize | Used for specifying the input tensor shape. The shape (or dimension) of that tensor is the image dimensions followed by the number of channels (e.g., red, green, and blue). The batch size (i.e., number of training images) is not specified a priori. |
| numberOfClassificationLabels | Number of classification labels. Default = 1000. |
| depth | Integer determining the depth of the network. Related to the actual number of layers by numberOfLayers = depth * 6 + 4. Default = 2 (i.e., 16 layers). |
| width | Integer determining the width of the network. Default = 1. |
| residualBlockSchedule | Vector determining the number of filters per convolutional block. Default = c(16, 32, 64). |
| poolSize | Pool size for the final average pooling layer. Default = c(8, 8, 8). |
| dropoutRate | Dropout rate. Default = 0.0. |
| weightDecay | Weight for the L2 regularizer in the convolution layers. Default = 0.0005. |
| mode | 'classification' or 'regression'. |
Returns a Wide ResNet keras model.
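For instance, a minimal 3-D instantiation following the signature above might look like the sketch below (the input size and label count are arbitrary placeholders, not values suggested by the documentation; ANTsRNet and keras are assumed to be installed):

library( ANTsRNet )
library( keras )

# Arbitrary 3-D input: 64^3 voxels with a single channel
inputImageSize <- c( 64, 64, 64, 1 )

model <- createWideResNetModel3D(
  inputImageSize = inputImageSize,
  numberOfClassificationLabels = 2,
  depth = 2,                               # numberOfLayers = depth * 6 + 4 = 16
  width = 1,
  residualBlockSchedule = c( 16, 32, 64 ),
  poolSize = c( 8, 8, 8 ),
  dropoutRate = 0.0,
  weightDecay = 0.0005,
  mode = "classification" )

# Inspect the returned keras model
summary( model )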
This particular implementation was influenced by the following Python implementation: https://github.com/titu1994/Wide-Residual-Networks
Author: Tustison NJ
if (FALSE) {
  library( ANTsRNet )
  library( keras )

  mnistData <- dataset_mnist()
  numberOfLabels <- 10

  # Extract a small subset for something that can run quickly
  X_trainSmall <- mnistData$train$x[1:10,,]
  X_trainSmall <- array( data = X_trainSmall, dim = c( dim( X_trainSmall ), 1 ) )
  Y_trainSmall <- to_categorical( mnistData$train$y[1:10], numberOfLabels )

  X_testSmall <- mnistData$test$x[1:10,,]
  X_testSmall <- array( data = X_testSmall, dim = c( dim( X_testSmall ), 1 ) )
  Y_testSmall <- to_categorical( mnistData$test$y[1:10], numberOfLabels )

  # We add a dimension of 1 to specify the channel size
  inputImageSize <- c( dim( X_trainSmall )[2:3], 1 )

  model <- createWideResNetModel2D( inputImageSize = inputImageSize,
    numberOfClassificationLabels = numberOfLabels )

  model %>% compile( loss = 'categorical_crossentropy',
    optimizer = optimizer_adam( lr = 0.0001 ),
    metrics = c( 'categorical_crossentropy', 'accuracy' ) )

  # Comment out the rest due to travis build constraints
  # track <- model %>% fit( X_trainSmall, Y_trainSmall, verbose = 1,
  #   epochs = 1, batch_size = 2, shuffle = TRUE, validation_split = 0.5 )

  # Now test the model
  # testingMetrics <- model %>% evaluate( X_testSmall, Y_testSmall )
  # predictedData <- model %>% predict( X_testSmall, verbose = 1 )
}
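The documented mode = "regression" option can be exercised analogously. The following sketch is illustrative only: the single output unit (numberOfClassificationLabels = 1) and the mse/adam compile settings are assumptions, not part of the documentation.

library( ANTsRNet )
library( keras )

# Regression mode: one continuous output per sample (illustrative settings)
regressionModel <- createWideResNetModel3D(
  inputImageSize = c( 64, 64, 64, 1 ),
  numberOfClassificationLabels = 1,
  mode = "regression" )

regressionModel %>% compile(
  loss = 'mse',
  optimizer = optimizer_adam( lr = 0.0001 ) )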