pub struct Rnn<B: Rnn<f32>> { /* private fields */ }
Implementations
Trait Implementations
impl<B: IBackend + Rnn<f32>> ComputeInputGradient<f32, B> for Rnn<B>
fn compute_input_gradient(
    &self,
    backend: &B,
    weights_data: &[&SharedTensor<f32>],
    output_data: &[&SharedTensor<f32>],
    output_gradients: &[&SharedTensor<f32>],
    input_data: &[&SharedTensor<f32>],
    input_gradients: &mut [&mut SharedTensor<f32>]
)
Compute gradients with respect to the inputs and write them into input_gradients.
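Conceptually, this is the chain rule applied back through the cell's nonlinearity. A minimal sketch in plain Rust, not the juice API: a single scalar Elman-style cell, h = tanh(w_x·x + w_h·h_prev), stands in for the SharedTensor-based implementation.

```rust
// Illustrative only: scalar Elman cell, h = tanh(w_x * x + w_h * h_prev).
// Given the upstream gradient d_h = dL/dh, the gradient with respect to
// the input x is dL/dx = d_h * (1 - h^2) * w_x, since tanh' = 1 - tanh^2.
fn input_gradient(w_x: f32, w_h: f32, x: f32, h_prev: f32, d_h: f32) -> f32 {
    let h = (w_x * x + w_h * h_prev).tanh();
    d_h * (1.0 - h * h) * w_x
}

fn main() {
    let analytic = input_gradient(0.5, 0.25, 1.0, 0.0, 1.0);
    // Sanity check against a central finite difference in x.
    let eps = 1e-3_f32;
    let f = |x: f32| (0.5_f32 * x).tanh();
    let numeric = (f(1.0 + eps) - f(1.0 - eps)) / (2.0 * eps);
    println!("analytic = {analytic}, numeric = {numeric}");
    assert!((analytic - numeric).abs() < 1e-3);
}
```

The real method does the same per element over whole gradient tensors, dispatched to the backend (e.g. cuDNN's RNN backward-data path).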
impl<B: IBackend + Rnn<f32>> ComputeOutput<f32, B> for Rnn<B>
fn compute_output(
    &self,
    backend: &B,
    weights: &[&SharedTensor<f32>],
    input_data: &[&SharedTensor<f32>],
    output_data: &mut [&mut SharedTensor<f32>]
)
Compute the output for the given input and write it into output_data.
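What the forward step computes can be sketched in plain Rust. This is not the juice API; a single scalar Elman-style recurrence is assumed in place of the SharedTensor buffers and backend dispatch.

```rust
// Illustrative only: one Elman-style recurrence,
// h_t = tanh(w_x * x_t + w_h * h_{t-1}).
// In the real layer, weights and activations live in SharedTensor<f32>
// buffers and the step runs on the selected backend.
fn rnn_step(w_x: f32, w_h: f32, x: f32, h_prev: f32) -> f32 {
    (w_x * x + w_h * h_prev).tanh()
}

fn main() {
    let (w_x, w_h) = (0.5_f32, 0.25_f32);
    let mut h = 0.0_f32; // initial hidden state
    for &x in &[1.0_f32, -1.0, 0.5] {
        h = rnn_step(w_x, w_h, x, h); // output_data would receive each h
        println!("h = {h}");
    }
}
```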
impl<B: IBackend + Rnn<f32>> ComputeParametersGradient<f32, B> for Rnn<B>
fn compute_parameters_gradient(
    &self,
    backend: &B,
    output_data: &[&SharedTensor<f32>],
    output_gradients: &[&SharedTensor<f32>],
    input_data: &[&SharedTensor<f32>],
    parameters_gradients: &mut [&mut SharedTensor<f32>]
)
Compute gradients with respect to the parameters and write them into parameters_gradients.
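For the same hypothetical scalar Elman cell as above (not the juice API), the parameter gradients follow from the pre-activation gradient:

```rust
// Illustrative only: parameter gradients for the scalar cell
// h = tanh(w_x * x + w_h * h_prev). With d_h = dL/dh, the pre-activation
// gradient is d_pre = d_h * (1 - h^2); then dL/dw_x = d_pre * x and
// dL/dw_h = d_pre * h_prev.
fn parameter_gradients(w_x: f32, w_h: f32, x: f32, h_prev: f32, d_h: f32) -> (f32, f32) {
    let h = (w_x * x + w_h * h_prev).tanh();
    let d_pre = d_h * (1.0 - h * h);
    (d_pre * x, d_pre * h_prev)
}

fn main() {
    let (dw_x, dw_h) = parameter_gradients(0.5, 0.25, 1.0, 0.8, 1.0);
    println!("dL/dw_x = {dw_x}, dL/dw_h = {dw_h}");
}
```

In the real layer these per-step contributions are accumulated across the sequence into the parameters_gradients tensors.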
impl<B: IBackend + Rnn<f32>> ILayer<B> for Rnn<B>
fn exact_num_output_blobs(&self) -> Option<usize>
Returns the exact number of output blobs required by the layer, or None if no exact number is required.

fn exact_num_input_blobs(&self) -> Option<usize>
Returns the exact number of input blobs required by the layer, or None if no exact number is required.

fn auto_weight_blobs(&self) -> bool
Return whether weight blobs are created automatically for the layer.
fn reshape(
    &mut self,
    backend: Rc<B>,
    input_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
    input_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>,
    weights_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
    weights_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>,
    output_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
    output_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>
)
Adjust the shapes of the output blobs to fit the shapes of the input blobs.
Adjust the size of the shared workspace.
fn forward(
    &self,
    backend: &B,
    input_data: &[ArcLock<SharedTensor<f32>>],
    weights_data: &[ArcLock<SharedTensor<f32>>],
    output_data: &mut [ArcLock<SharedTensor<f32>>]
)
Compute the [feedforward](https://en.wikipedia.org/wiki/Feedforward_neural_network) layer output using the provided Backend.
fn backward_input(
    &self,
    backend: &B,
    weights_data: &[ArcLock<SharedTensor<f32>>],
    output_data: &[ArcLock<SharedTensor<f32>>],
    output_gradients: &[ArcLock<SharedTensor<f32>>],
    input_data: &[ArcLock<SharedTensor<f32>>],
    input_gradients: &mut [ArcLock<SharedTensor<f32>>]
)
Compute the [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) input gradient using the provided backend.
fn backward_parameters(
    &self,
    backend: &B,
    output_data: &[ArcLock<SharedTensor<f32>>],
    output_gradients: &[ArcLock<SharedTensor<f32>>],
    input_data: &[ArcLock<SharedTensor<f32>>],
    weights_gradients: &mut [ArcLock<SharedTensor<f32>>]
)
Compute the [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) parameters gradient using the provided backend.
fn auto_output_blobs(&self) -> bool
Return whether “anonymous” output blobs are created automatically for the layer.

fn min_output_blobs(&self) -> usize
Returns the minimum number of output blobs required by the layer, or 0 if no minimum number is required.

fn allow_force_backward(&self, input_id: usize) -> bool
Return whether to allow force_backward for a given input blob index.

fn sync_native(&self) -> bool
Return whether a simple native backend should be used to [sync](#method.sync) instead of the default backend.

fn compute_in_place(&self) -> bool
Return whether the computations of a layer should be done in-place (the output will be written where the input was read from).

fn is_container(&self) -> bool
Return whether the layer is a container.

fn loss_weight(&self, output_id: usize) -> Option<f32>
Return the associated loss weight for a given output blob index.
fn inputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the input tensors of the layer.

fn inputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the gradients of the input tensors of the layer.

fn outputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the output tensors of the layer.

fn outputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the gradients of the output tensors of the layer.

fn learnable_weights(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the learnable weights inside the layer.

fn learnable_weights_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the gradients for the learnable weights inside the layer.
Auto Trait Implementations
impl<B> RefUnwindSafe for Rnn<B>
impl<B> !Send for Rnn<B>
impl<B> !Sync for Rnn<B>
impl<B> Unpin for Rnn<B>
impl<B> UnwindSafe for Rnn<B>
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.