pub struct RnnConfig {
pub hidden_size: c_int,
pub num_layers: c_int,
pub sequence_length: c_int,
/* private fields */
}
Provides an interface for CUDNN’s RNN Descriptor
§Arguments
- `rnn_desc`: Previously created descriptor.
- `hidden_size`: Size of the hidden layer.
- `num_layers`: Number of layers.
- `dropout_desc`: Descriptor to a previously created & initialized dropout descriptor, applied between layers.
- `input_mode`: Specifies behaviour at the input to the first layer.
- `direction_mode`: Specifies the recurrence pattern, e.g. bidirectional.
- `rnn_mode`: Type of network used in the ForwardInference, ForwardTraining, BackwardData, and BackwardWeights routines. Can be ReLU, tanh, LSTM (Long Short-Term Memory), or GRU (Gated Recurrent Unit).
- `algo`: Only required in the v6 implementation. FIXME: Should this be checked in compilation?
- `data_type`: Math precision, default f32.
The LSTM network offered by CUDNN is a four-gate network that does not use peephole connections.
Greff et al. (2015) suggest that the particular LSTM variant makes little difference in practice, while
Jozefowicz et al. (2015) suggest that the forget and input gates matter most, followed by the output gate,
so the missing peephole connections are not a major concern.
A positive bias, as encouraged in the latter paper, can be achieved by setting bias_mode to
CUDNN_RNN_DOUBLE_BIAS (the default), CUDNN_RNN_SINGLE_INP_BIAS, or CUDNN_RNN_SINGLE_REC_BIAS.
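For reference, a sketch of the four-gate cell computed by the cuDNN LSTM, written in the double-bias form and without peephole terms (W_* act on the input x_t, R_* on the previous hidden state h_{t-1}):

```latex
% Four-gate cuDNN-style LSTM cell, double-bias form, no peephole connections.
% b_{W*} and b_{R*} are the two bias vectors implied by CUDNN_RNN_DOUBLE_BIAS.
\begin{aligned}
i_t &= \sigma(W_i x_t + R_i h_{t-1} + b_{Wi} + b_{Ri}) \\
f_t &= \sigma(W_f x_t + R_f h_{t-1} + b_{Wf} + b_{Rf}) \\
o_t &= \sigma(W_o x_t + R_o h_{t-1} + b_{Wo} + b_{Ro}) \\
\tilde{c}_t &= \tanh(W_c x_t + R_c h_{t-1} + b_{Wc} + b_{Rc}) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```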
Fields§
hidden_size: c_int
Size of Hidden Layer
num_layers: c_int
Number of Hidden Layers
sequence_length: c_int
Length of Sequence
Implementations§
impl RnnConfig
pub fn new(
rnn_desc: RnnDescriptor,
hidden_size: i32,
num_layers: i32,
sequence_length: i32,
dropout_desc: cudnnDropoutDescriptor_t,
input_mode: cudnnRNNInputMode_t,
direction_mode: cudnnDirectionMode_t,
rnn_mode: cudnnRNNMode_t,
algo: cudnnRNNAlgo_t,
data_type: cudnnDataType_t,
workspace_size: usize,
training_reserve_size: usize,
training_reserve: CudaDeviceMemory
) -> RnnConfig
Initialise an RNN Config
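A minimal construction sketch. It assumes the `RnnDescriptor`, dropout descriptor, and training-reserve buffer have already been created through the crate’s other APIs, that the workspace and reserve sizes were queried beforehand (e.g. via cuDNN’s `cudnnGetRNNWorkspaceSize` / `cudnnGetRNNTrainingReserveSize`), and that the `-sys` bindings expose the cuDNN enum variants under their C names; the hidden size, layer count, and sequence length shown are arbitrary illustrative values.

```rust
// Sketch only: `rnn_desc`, `dropout_desc`, `training_reserve` and the two sizes
// are assumed to have been produced by the crate's descriptor / memory / query APIs.
fn build_lstm_config(
    rnn_desc: RnnDescriptor,
    dropout_desc: cudnnDropoutDescriptor_t,
    workspace_size: usize,
    training_reserve_size: usize,
    training_reserve: CudaDeviceMemory,
) -> RnnConfig {
    RnnConfig::new(
        rnn_desc,
        512, // hidden_size
        2,   // num_layers
        128, // sequence_length
        dropout_desc,
        cudnnRNNInputMode_t::CUDNN_LINEAR_INPUT,    // dense input projection into the first layer
        cudnnDirectionMode_t::CUDNN_UNIDIRECTIONAL, // no bidirectional recurrence
        cudnnRNNMode_t::CUDNN_LSTM,                 // four-gate LSTM cell
        cudnnRNNAlgo_t::CUDNN_RNN_ALGO_STANDARD,    // algo is only consulted on the v6 path
        cudnnDataType_t::CUDNN_DATA_FLOAT,          // f32 math precision (the default)
        workspace_size,
        training_reserve_size,
        training_reserve,
    )
}
```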
pub fn rnn_workspace_size(&self) -> usize
Workspace Size required for RNN Operations
pub fn largest_workspace_size(&self) -> usize
Largest Workspace Size for RNN
pub fn training_reserve_size(&self) -> usize
Training Reserve Size for RNN
pub fn training_reserve(&self) -> &CudaDeviceMemory
Training Reserve Space on GPU for RNN
pub fn rnn_desc(&self) -> &RnnDescriptor
Accessor function for Rnn Descriptor
pub fn sequence_length(&self) -> &i32
Accessor function for Sequence Length
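A short usage sketch of the accessors above, assuming `config` is an already-initialised `RnnConfig`; it only reads back the sizes and handles that the RNN forward/backward routines need.

```rust
// Sketch only: `config` is assumed to be an already-initialised RnnConfig.
fn inspect(config: &RnnConfig) {
    // Scratch space the RNN forward/backward routines need for this configuration.
    let workspace_bytes: usize = config.rnn_workspace_size();
    // Upper bound across operations; size a shared workspace buffer with this.
    let largest_bytes: usize = config.largest_workspace_size();
    // Reserve space that must persist from ForwardTraining to the backward passes.
    let reserve_bytes: usize = config.training_reserve_size();
    // GPU-side reserve buffer and the underlying cuDNN RNN descriptor.
    let _reserve: &CudaDeviceMemory = config.training_reserve();
    let _desc: &RnnDescriptor = config.rnn_desc();

    println!(
        "workspace: {} B (largest: {} B), training reserve: {} B, sequence length: {}",
        workspace_bytes,
        largest_bytes,
        reserve_bytes,
        config.sequence_length()
    );
}
```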