createDenseNetModel3D.Rd
Source: R/createDenseNetModel.R
Creates a keras model of the DenseNet deep learning architecture for image recognition, based on the paper cited in the references below.
createDenseNetModel3D(
  inputImageSize,
  numberOfClassificationLabels = 1000,
  numberOfFilters = 16,
  depth = 7,
  numberOfDenseBlocks = 1,
  growthRate = 12,
  dropoutRate = 0.2,
  weightDecay = 0.0001,
  mode = "classification"
)
Argument | Description
---|---
inputImageSize | Used for specifying the input tensor shape. The shape (or dimension) of that tensor is the image dimensions followed by the number of channels (e.g., red, green, and blue). The batch size (i.e., number of training images) is not specified a priori.
numberOfClassificationLabels | Number of classification labels (default = 1000).
numberOfFilters | Number of initial filters (default = 16).
depth | Number of layers; must be equal to 3 * N + 4, where N is an integer (default = 7).
numberOfDenseBlocks | Number of dense blocks to add to the end (default = 1).
growthRate | Number of filters to add for each dense block layer (default = 12).
dropoutRate | Dropout rate per dropout layer (default = 0.2).
weightDecay | Weight decay (default = 1e-4).
mode | 'classification' or 'regression' (default = 'classification').
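As a minimal orientation to these arguments, the sketch below constructs a small 3D classification model. The 64 x 64 x 64 single-channel input volume and the two-label output are illustrative assumptions, not values required by the function.

library( ANTsRNet )
library( keras )

# Assumed example values: a 64 x 64 x 64 volume with one channel and
# two classification labels.
inputImageSize <- c( 64, 64, 64, 1 )

# depth must satisfy depth = 3 * N + 4 for an integer N; the default
# depth = 7 corresponds to N = 1.
model <- createDenseNetModel3D( inputImageSize = inputImageSize,
  numberOfClassificationLabels = 2,
  numberOfFilters = 16,
  depth = 7,
  numberOfDenseBlocks = 1,
  growthRate = 12,
  dropoutRate = 0.2,
  weightDecay = 1e-4,
  mode = "classification" )

summary( model )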
A DenseNet keras model.
G. Huang, Z. Liu, K. Weinberger, and L. van der Maaten. Densely Connected Convolutional Networks. Available here: https://arxiv.org/abs/1608.06993
This particular implementation was influenced by the following python implementation:
https://github.com/tdeboissiere/DeepLearningImplementations/blob/master/DenseNet/densenet.py
Tustison NJ
if (FALSE) {
library( ANTsRNet )
library( keras )

mnistData <- dataset_mnist()
numberOfLabels <- 10

# Extract a small subset for something that can run quickly
X_trainSmall <- mnistData$train$x[1:10,,]
X_trainSmall <- array( data = X_trainSmall, dim = c( dim( X_trainSmall ), 1 ) )
Y_trainSmall <- to_categorical( mnistData$train$y[1:10], numberOfLabels )

X_testSmall <- mnistData$test$x[1:10,,]
X_testSmall <- array( data = X_testSmall, dim = c( dim( X_testSmall ), 1 ) )
Y_testSmall <- to_categorical( mnistData$test$y[1:10], numberOfLabels )

# We add a dimension of 1 to specify the channel size
inputImageSize <- c( dim( X_trainSmall )[2:3], 1 )

model <- createDenseNetModel2D( inputImageSize = inputImageSize,
  numberOfClassificationLabels = numberOfLabels )

model %>% compile( loss = 'categorical_crossentropy',
  optimizer = optimizer_adam( lr = 0.0001 ),
  metrics = c( 'categorical_crossentropy', 'accuracy' ) )

track <- model %>% fit( X_trainSmall, Y_trainSmall, verbose = 1,
  epochs = 1, batch_size = 2, shuffle = TRUE, validation_split = 0.5 )

# Now test the model
testingMetrics <- model %>% evaluate( X_testSmall, Y_testSmall )
predictedData <- model %>% predict( X_testSmall, verbose = 1 )
}