Trait juice::layer::ILayer

pub trait ILayer<B: IBackend>: ComputeOutput<f32, B> + ComputeInputGradient<f32, B> + ComputeParametersGradient<f32, B> {
    // 24 provided methods, documented below
}

A Layer in a Neural Network that can handle the forward and backward passes of a computation step.

Provided Methods§


fn init(&mut self, backend: Rc<B>)

Initialize the layer for computation.

Allows for layer-specific one time setup, e.g. precomputing constant values.
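As an illustration, a hypothetical layer might use init to precompute a value it reuses on every forward pass. The sketch below uses plain stand-in types (no backend, no SharedTensor plumbing) rather than juice's real API:

```rust
// Sketch of one-time setup in `init` (stand-in types; the real trait
// method takes `backend: Rc<B>` and works on shared tensors).
struct ScaleLayer {
    factor: f32,
    // Precomputed once in `init`, reused on every `forward` call.
    inv_factor: Option<f32>,
}

impl ScaleLayer {
    fn new(factor: f32) -> Self {
        ScaleLayer { factor, inv_factor: None }
    }

    // Mirrors `ILayer::init`: layer-specific one-time setup.
    fn init(&mut self) {
        self.inv_factor = Some(1.0 / self.factor);
    }

    fn forward(&self, input: &[f32]) -> Vec<f32> {
        let inv = self.inv_factor.expect("init must be called before forward");
        input.iter().map(|x| x * inv).collect()
    }
}
```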


fn reshape( &mut self, backend: Rc<B>, input_data: &mut Vec<ArcLock<SharedTensor<f32>>>, input_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>, weights_data: &mut Vec<ArcLock<SharedTensor<f32>>>, weights_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>, output_data: &mut Vec<ArcLock<SharedTensor<f32>>>, output_gradient: &mut Vec<ArcLock<SharedTensor<f32>>> )

Adjusts the shapes of the output blobs to fit the shapes of the input blobs.

Should be called during layer initialization, after init.

Caution: input_data should only be reshaped, but not resized.
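The reshape-vs-resize distinction can be sketched with a minimal stand-in tensor (not juice's SharedTensor): reshaping changes the logical shape but must preserve the element count, while resizing would change the allocation:

```rust
// Stand-in tensor illustrating why `input_data` may be reshaped but
// not resized: a reshape keeps the element count (and the allocation)
// fixed, only the logical shape changes.
struct Tensor {
    shape: Vec<usize>,
    data: Vec<f32>,
}

impl Tensor {
    fn new(shape: Vec<usize>) -> Self {
        let len = shape.iter().product();
        Tensor { shape, data: vec![0.0; len] }
    }

    // The kind of adjustment allowed on inputs during `ILayer::reshape`.
    fn reshape(&mut self, new_shape: Vec<usize>) {
        let new_len: usize = new_shape.iter().product();
        assert_eq!(new_len, self.data.len(), "reshape must not change the element count");
        self.shape = new_shape;
    }
}
```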


fn resize_shared_workspace( &mut self, backend: Rc<B>, workspace: Option<ArcLock<SharedTensor<u8>>> ) -> Option<ArcLock<SharedTensor<u8>>>

Adjust size of shared workspace.

Used by layers that need a workspace. The layer should either:

  • leave the workspace as is if it is bigger than required by this layer
  • resize the workspace to the required size if it is smaller
  • create the workspace if the workspace is None

The reference to the workspace should be saved in the layer.
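The three rules above can be sketched as follows, with Rc&lt;RefCell&lt;Vec&lt;u8&gt;&gt;&gt; standing in for ArcLock&lt;SharedTensor&lt;u8&gt;&gt; and a byte count standing in for the layer's real workspace requirement:

```rust
use std::cell::RefCell;
use std::rc::Rc;

type Workspace = Rc<RefCell<Vec<u8>>>;

// Sketch of the three workspace rules from `resize_shared_workspace`.
fn resize_shared_workspace(workspace: Option<Workspace>, required: usize) -> Workspace {
    match workspace {
        // Workspace exists and is big enough: leave it as is.
        Some(ws) if ws.borrow().len() >= required => ws,
        // Workspace exists but is too small: grow it to the required size.
        Some(ws) => {
            ws.borrow_mut().resize(required, 0);
            ws
        }
        // No workspace yet: create one of the required size.
        None => Rc::new(RefCell::new(vec![0; required])),
    }
}
```

The returned handle is what the layer would keep a reference to.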


fn forward( &self, backend: &B, input_data: &[ArcLock<SharedTensor<f32>>], weights_data: &[ArcLock<SharedTensor<f32>>], output_data: &mut [ArcLock<SharedTensor<f32>>] )

Computes the [feedforward](https://en.wikipedia.org/wiki/Feedforward_neural_network) layer output using the provided backend.

Acquires read locks for the input tensors and write locks for the output tensors to ensure sequential computation, and then passes them to the computation-method-specific function (such as forward_cpu).
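The locking pattern can be sketched with std's Arc&lt;RwLock&lt;…&gt;&gt; and plain Vec&lt;f32&gt; tensors standing in for ArcLock&lt;SharedTensor&lt;f32&gt;&gt;; the computation itself is a placeholder element-wise doubling, not a real backend kernel:

```rust
use std::sync::{Arc, RwLock};

type ArcLock<T> = Arc<RwLock<T>>;

// Sketch of the locking pattern in `forward`: read locks on all
// inputs, write locks on all outputs, then the computation runs on
// the borrowed data while the guards are held.
fn forward(input_data: &[ArcLock<Vec<f32>>], output_data: &mut [ArcLock<Vec<f32>>]) {
    let inputs: Vec<_> = input_data.iter().map(|t| t.read().unwrap()).collect();
    let mut outputs: Vec<_> = output_data.iter().map(|t| t.write().unwrap()).collect();
    for (inp, out) in inputs.iter().zip(outputs.iter_mut()) {
        // Placeholder computation standing in for the backend call.
        **out = inp.iter().map(|x| x * 2.0).collect();
    }
}
```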


fn backward_input( &self, backend: &B, weights_data: &[ArcLock<SharedTensor<f32>>], output_data: &[ArcLock<SharedTensor<f32>>], output_gradients: &[ArcLock<SharedTensor<f32>>], input_data: &[ArcLock<SharedTensor<f32>>], input_gradients: &mut [ArcLock<SharedTensor<f32>>] )

Computes the [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) input gradient using the provided backend.

Acquires write locks for the input blobs to ensure sequential computation, and then does a compute_input_gradient.


fn backward_parameters( &self, backend: &B, output_data: &[ArcLock<SharedTensor<f32>>], output_gradients: &[ArcLock<SharedTensor<f32>>], input_data: &[ArcLock<SharedTensor<f32>>], weights_gradients: &mut [ArcLock<SharedTensor<f32>>] )

Computes the [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) parameters gradient using the provided backend.

Acquires write locks for the input blobs to ensure sequential computation, and then does a compute_parameters_gradient.


fn auto_output_blobs(&self) -> bool

Return whether “anonymous” output blobs are created automatically for the layer.

If this method returns true, Network::init will create enough “anonymous” output blobs to fulfill the requirement specified by exact_num_output_blobs or min_output_blobs.
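A hypothetical helper (not part of juice's API) showing how an initializer could combine these methods to decide how many anonymous output blobs to create:

```rust
// Hypothetical sketch of the decision `Network::init` has to make:
// the exact count wins if set, otherwise the minimum applies, and
// blobs that already exist are counted against the requirement.
fn anonymous_blobs_needed(
    auto_output_blobs: bool,
    exact_num: Option<usize>,
    min_num: usize,
    existing: usize,
) -> usize {
    if !auto_output_blobs {
        // No automatic creation for this layer.
        return 0;
    }
    let required = exact_num.unwrap_or(min_num);
    required.saturating_sub(existing)
}
```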


fn min_output_blobs(&self) -> usize

Returns the minimum number of output blobs required by the layer, or 0 if no minimum number is required.

This method should be overridden to return a positive value if your layer expects some minimum number of output blobs.


fn exact_num_output_blobs(&self) -> Option<usize>

Returns the exact number of output blobs required by the layer, or None if no exact number is required.

This method should be overridden to return Some(n) if your layer expects an exact number of output blobs.


fn auto_weight_blobs(&self) -> bool

Return whether weight blobs are created automatically for the layer.

If this method returns true, Network::init will create a weight blob for every output blob.


fn exact_num_input_blobs(&self) -> Option<usize>

Returns the exact number of input blobs required by the layer, or None if no exact number is required.

This method should be overridden to return Some(n) if your layer expects an exact number of input blobs.


fn allow_force_backward(&self, input_id: usize) -> bool

Return whether to allow force_backward for a given input blob index.

If allow_force_backward(i) == false, we will ignore the force_backward setting and backpropagate to blob i only if it needs gradient information (as is done when force_backward == false).
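The decision rule described above can be written out as a small sketch, where needs_gradient stands in for the blob's own gradient requirement:

```rust
// Backpropagation reaches input blob `i` either because the blob
// needs gradient information anyway, or because `force_backward` is
// set *and* the layer allows forcing it for that input.
fn backpropagate_to(force_backward: bool, allow_force: bool, needs_gradient: bool) -> bool {
    needs_gradient || (force_backward && allow_force)
}
```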


fn sync_native(&self) -> bool

Return whether a simple native backend should be used to [sync][1] instead of the default backend. [1]: #method.sync

If false is returned the default backend will be used; otherwise a new native backend will be created and provided as an argument to sync.


fn compute_in_place(&self) -> bool

Return whether the computations of a layer should be done in-place (the output will be written where the input was read from).

Doing computations in place reduces the memory required for layers.

If false is returned the layer behaves as normal; otherwise, if a layer is provided identical “input” and “output” tensors, it will only be supplied an “output_data” when doing a compute_output.
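A minimal sketch of what in-place computation means: the same buffer serves as both input and output, so no second allocation is needed. Shown here with a ReLU rather than any particular juice layer:

```rust
// In-place computation: the output is written over the buffer the
// input was read from, halving the memory needed for this step.
fn relu_in_place(data: &mut [f32]) {
    for x in data.iter_mut() {
        if *x < 0.0 {
            *x = 0.0;
        }
    }
}
```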


fn is_container(&self) -> bool

Return whether the layer is a container.

This turns off certain behaviour for containers which would lead to problems:

  • RwLocks will not be acquired for forward/backward since it would lead to deadlocks.

fn loss_weight(&self, output_id: usize) -> Option<f32>

Return the associated loss weight for a given output blob index.

If loss_weight(i) == None, no loss will be calculated for the output blob.

This is usually overridden by loss layers.
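As a sketch, a hypothetical loss layer could override this so that only its first output carries a loss:

```rust
// Hypothetical override for a loss layer: output blob 0 carries the
// loss with weight 1.0; every other output contributes no loss.
fn loss_weight(output_id: usize) -> Option<f32> {
    if output_id == 0 { Some(1.0) } else { None }
}
```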


fn inputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the input tensors of the layer.

This should only be overridden by container layers, where the tensors are not easily exposable.


fn inputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the gradients of the input tensors of the layer.

This should only be overridden by container layers, where the tensors are not easily exposable.


fn outputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the output tensors of the layer.

This should only be overridden by container layers, where the tensors are not easily exposable.


fn outputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the gradients of the output tensors of the layer.

This should only be overridden by container layers, where the tensors are not easily exposable.


fn learnable_weights(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the learnable weights inside the layer.

This should only be overridden by container layers, where the weights are not easily exposable.


fn learnable_weights_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the gradients for the learnable weights inside the layer.

This should only be overridden by container layers, where the weights are not easily exposable.


fn learnable_weights_names(&self) -> Option<Vec<String>>

Return the names of the learnable weights inside the layer.

This should only be overridden by container layers, where the weights are not easily exposable.


fn learnable_weights_lr(&self) -> Option<Vec<Option<f32>>>

Return the learning rates for the learnable weights inside the layer.

This should only be overridden by container layers, where the weights are not easily exposable.

Trait Implementations§


impl<B: IBackend> Debug for dyn ILayer<B>


fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Implementors§