Struct rcudnn::utils::RnnConfig

pub struct RnnConfig {
    pub hidden_size: c_int,
    pub num_layers: c_int,
    pub sequence_length: c_int,
    /* private fields */
}

Provides an interface for CUDNN’s RNN Descriptor

Arguments

  • rnn_desc - Previously created RNN descriptor
  • hidden_size - Size of the hidden layer
  • num_layers - Number of layers
  • dropout_desc - Previously created & initialized dropout descriptor, applied between layers
  • input_mode - Specifies behaviour at the input to the first layer
  • direction_mode - Specifies the recurrence pattern, e.g. bidirectional
  • rnn_mode - Type of network used in the ForwardInference, ForwardTraining, BackwardData, and BackwardWeights routines. Can be ReLU, tanh, LSTM (Long Short-Term Memory), or GRU (Gated Recurrent Unit).
  • algo - Only required in the v6 implementation. FIXME: Should this be checked in compilation?
  • data_type - Math precision - default f32

The LSTM network offered by CUDNN is a four-gate network that does not use peephole connections. Greff, et al. (2015) [1] suggest that the exact gate architecture makes little difference, while Jozefowicz, et al. (2015) [2] suggest that the forget and input gates matter most, followed by the output gate, so the absence of peephole connections is not a major concern. A positive bias, as encouraged in the paper, can be achieved by setting bias_mode to CUDNN_RNN_DOUBLE_BIAS (the default), CUDNN_RNN_SINGLE_INP_BIAS, or CUDNN_RNN_SINGLE_REC_BIAS.
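One practical consequence of direction_mode that this page does not spell out: when cuDNN runs a bidirectional RNN, it concatenates the forward-pass and backward-pass outputs, so the per-timestep output width is twice hidden_size. The sketch below illustrates that sizing rule only; `rnn_output_width` is a hypothetical helper for illustration, not part of the rcudnn API.

```rust
// Illustration only, not rcudnn code: with a bidirectional
// direction_mode, cuDNN concatenates the outputs of the forward and
// backward passes, doubling the per-timestep output width.
fn rnn_output_width(hidden_size: i32, bidirectional: bool) -> i32 {
    if bidirectional { 2 * hidden_size } else { hidden_size }
}

fn main() {
    // A unidirectional RNN with hidden_size 512 emits 512 values per step,
    println!("{}", rnn_output_width(512, false)); // prints 512
    // while a bidirectional one emits 1024.
    println!("{}", rnn_output_width(512, true)); // prints 1024
}
```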

Fields

hidden_size: c_int

Size of Hidden Layer

num_layers: c_int

Number of Hidden Layers

sequence_length: c_int

Length of Sequence

Implementations

impl RnnConfig

pub fn new(
    rnn_desc: RnnDescriptor,
    hidden_size: i32,
    num_layers: i32,
    sequence_length: i32,
    dropout_desc: cudnnDropoutDescriptor_t,
    input_mode: cudnnRNNInputMode_t,
    direction_mode: cudnnDirectionMode_t,
    rnn_mode: cudnnRNNMode_t,
    algo: cudnnRNNAlgo_t,
    data_type: cudnnDataType_t,
    workspace_size: usize,
    training_reserve_size: usize,
    training_reserve: CudaDeviceMemory,
) -> RnnConfig

Initialise an RNN Config


pub fn rnn_workspace_size(&self) -> usize

Workspace Size required for RNN Operations


pub fn largest_workspace_size(&self) -> usize

Largest Workspace Size for RNN
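The page does not explain how largest_workspace_size relates to the per-operation workspace requirements. A plausible reading (an assumption, not confirmed by this documentation) is that it reports the maximum requirement across the RNN operations, so one shared buffer can serve them all:

```rust
// Hypothetical sketch, not rcudnn code: if each RNN operation
// (e.g. forward, backward-data, backward-weights) reports its own
// workspace requirement in bytes, a single shared buffer must be
// sized to the largest of them.
fn largest_workspace_size(per_op_sizes: &[usize]) -> usize {
    per_op_sizes.iter().copied().max().unwrap_or(0)
}

fn main() {
    // Example per-operation requirements in bytes.
    let sizes = [1 << 20, 4 << 20, 2 << 20];
    println!("{}", largest_workspace_size(&sizes)); // prints 4194304
}
```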


pub fn training_reserve_size(&self) -> usize

Training Reserve Size for RNN


pub fn training_reserve(&self) -> &CudaDeviceMemory

Training Reserve Space on GPU for RNN


pub fn rnn_desc(&self) -> &RnnDescriptor

Accessor function for Rnn Descriptor


pub fn sequence_length(&self) -> &i32

Accessor function for Sequence Length

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.