Wraps the EfficientAttentionLayer2D, modified from the Python implementation linked under Details.

layer_efficient_attention_2d(
  object,
  numberOfFiltersFG = 4L,
  numberOfFiltersH = 8L,
  kernelSize = 1L,
  poolSize = 2L,
  doConcatenateFinalLayers = FALSE,
  trainable = TRUE
)

Arguments

object

Object to compose the layer with. This is either a keras::keras_model_sequential to add the layer to, or another Layer which this layer will call.

numberOfFiltersFG

Number of filters for the F and G layers.

numberOfFiltersH

Number of filters for the H layer. If NA, only the F filter is used, for efficiency.

kernelSize

Kernel size used in the convolution layers.

poolSize

Pool size used in the max pooling layer.

doConcatenateFinalLayers

Whether to concatenate the final layer with the input (TRUE) or add the two (FALSE). Default = FALSE. See the sketch following this list.

trainable

Whether the layer weights are updated during training.
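
To illustrate the effect of doConcatenateFinalLayers, here is a minimal sketch; the exact output channel counts depend on the layer internals and are assumptions here, not documented behavior:

if (FALSE) {
library( keras )
library( ANTsRNet )

input <- layer_input( shape = c( 100, 100, 3 ) )

# Default (FALSE): the attention output is added to the input,
# so the output shape is expected to match the input shape
added <- input %>% layer_efficient_attention_2d( numberOfFiltersFG = 64L )

# TRUE: the attention output is concatenated with the input,
# so the channel dimension grows
concatenated <- input %>% layer_efficient_attention_2d(
  numberOfFiltersFG = 64L, doConcatenateFinalLayers = TRUE )
}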

Value

A keras layer tensor.

Details

Adapted from the Python implementation:

https://github.com/taki0112/Self-Attention-GAN-Tensorflow

which is based on the following paper:

https://arxiv.org/abs/1805.08318

Examples

if (FALSE) {
library( keras )
library( ANTsRNet )

# 2-D input with 3 channels
inputShape <- c( 100, 100, 3 )
input <- layer_input( shape = inputShape )

# apply efficient attention directly to the input
numberOfFiltersFG <- 64L
outputs <- input %>% layer_efficient_attention_2d( numberOfFiltersFG )

model <- keras_model( inputs = input, outputs = outputs )
}
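
A slightly fuller sketch embedding the layer in a small classification model; the pooling and dense head here are illustrative additions, not part of ANTsRNet:

if (FALSE) {
library( keras )
library( ANTsRNet )

input <- layer_input( shape = c( 100, 100, 3 ) )

# spell out the remaining parameters at their default values for clarity
outputs <- input %>%
  layer_efficient_attention_2d( numberOfFiltersFG = 64L,
    numberOfFiltersH = 8L, kernelSize = 1L, poolSize = 2L ) %>%
  layer_global_average_pooling_2d() %>%
  layer_dense( units = 1, activation = "sigmoid" )

model <- keras_model( inputs = input, outputs = outputs )
model %>% compile( optimizer = "adam", loss = "binary_crossentropy" )
}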