Struct juice::layers::common::convolution::Convolution
pub struct Convolution<B: Convolution<f32>> { /* private fields */ }
Convolution Layer
Implementations
impl<B: Convolution<f32>> Convolution<B>
pub fn from_config(config: &ConvolutionConfig) -> Convolution<B>
Create a Convolution layer from a ConvolutionConfig.
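For illustration, a minimal sketch of calling from_config directly is shown below. The ConvolutionConfig field names (num_output, filter_shape, padding, stride) and the import paths are assumptions about the crate's layer configuration, not something this page guarantees; in a complete network the config is normally attached to a LayerConfig while assembling the net rather than used like this.

```rust
// Hypothetical sketch; field names and paths are assumptions, check ConvolutionConfig.
use coaster_nn::Convolution as ConvOps;
use juice::layers::common::convolution::{Convolution, ConvolutionConfig};

fn build_conv_layer<B: ConvOps<f32>>() -> Convolution<B> {
    let config = ConvolutionConfig {
        num_output: 32,        // assumed field: number of output feature maps
        filter_shape: vec![3], // assumed field: 3x3 spatial kernel
        padding: vec![1],      // assumed field: zero-padding per spatial dim
        stride: vec![1],       // assumed field: filter step per spatial dim
    };
    Convolution::from_config(&config)
}
```

The single-entry vectors rely on the assumption that the FilterLayer helpers (see spatial_filter_dims below) broadcast a lone value across all spatial dimensions; otherwise supply one entry per dimension.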
Trait Implementations
impl<B: Clone + Convolution<f32>> Clone for Convolution<B>
fn clone(&self) -> Convolution<B>
Returns a copy of the value.
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl<B: IBackend + Convolution<f32>> ComputeInputGradient<f32, B> for Convolution<B>
fn compute_input_gradient(
    &self,
    backend: &B,
    weights_data: &[&SharedTensor<f32>],
    _output_data: &[&SharedTensor<f32>],
    output_gradients: &[&SharedTensor<f32>],
    input_data: &[&SharedTensor<f32>],
    input_gradients: &mut [&mut SharedTensor<f32>]
)
Compute gradients with respect to the inputs and write them into input_gradients.
impl<B: IBackend + Convolution<f32>> ComputeOutput<f32, B> for Convolution<B>
fn compute_output(
    &self,
    backend: &B,
    weights: &[&SharedTensor<f32>],
    input_data: &[&SharedTensor<f32>],
    output_data: &mut [&mut SharedTensor<f32>]
)
Compute the output for the given input and write it into output_data.
impl<B: IBackend + Convolution<f32>> ComputeParametersGradient<f32, B> for Convolution<B>
fn compute_parameters_gradient(
    &self,
    backend: &B,
    _output_data: &[&SharedTensor<f32>],
    output_gradients: &[&SharedTensor<f32>],
    input_data: &[&SharedTensor<f32>],
    parameters_gradients: &mut [&mut SharedTensor<f32>]
)
Compute gradients with respect to the parameters and write them into parameters_gradients.
impl<B: Debug + Convolution<f32>> Debug for Convolution<B>
impl<B: Convolution<f32>> FilterLayer for Convolution<B>
fn num_spatial_dims(&self, input_shape: &[usize]) -> usize
Calculates the number of spatial dimensions for the convolution operation.
fn calculate_output_shape(&self, input_shape: &[usize]) -> Vec<usize>
Calculates the output shape based on the filter shape, padding, stride, and input shape.
fn filter_shape(&self) -> &[usize]
The filter_shape that will be used by spatial_filter_dims.
fn calculate_spatial_output_dims(
    input_dims: &[usize],
    filter_dims: &[usize],
    padding: &[usize],
    stride: &[usize]
) -> Vec<usize>
Computes the shape of the spatial dimensions.
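Per spatial axis this is the standard convolution size formula, output = (input + 2 * padding - filter) / stride + 1. A stand-alone sketch of that calculation, shown as an illustration of the formula rather than the crate's implementation:

```rust
// Illustration of the standard convolution output-size formula that
// calculate_spatial_output_dims is documented to compute; not the crate's code.
fn spatial_output_dims(
    input_dims: &[usize],
    filter_dims: &[usize],
    padding: &[usize],
    stride: &[usize],
) -> Vec<usize> {
    input_dims
        .iter()
        .zip(filter_dims)
        .zip(padding.iter().zip(stride))
        .map(|((&i, &f), (&p, &s))| (i + 2 * p - f) / s + 1)
        .collect()
}

// A 224x224 input with a 3x3 filter, padding 1 and stride 1 keeps its size:
// spatial_output_dims(&[224, 224], &[3, 3], &[1, 1], &[1, 1]) == vec![224, 224]
```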
fn spatial_filter_dims(&self, num_spatial_dims: usize) -> Vec<usize>
Retrieves the spatial dimensions for the filter based on self.filter_shape() and the number of spatial dimensions.
impl<B: IBackend + Convolution<f32>> ILayer<B> for Convolution<B>
fn exact_num_output_blobs(&self) -> Option<usize>
Returns the exact number of output blobs required by the layer, or None if no exact number is required.
fn exact_num_input_blobs(&self) -> Option<usize>
Returns the exact number of input blobs required by the layer, or None if no exact number is required.
fn auto_weight_blobs(&self) -> bool
Return whether weight blobs are created automatically for the layer.
fn reshape(
    &mut self,
    backend: Rc<B>,
    input_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
    input_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>,
    weights_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
    weights_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>,
    output_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
    output_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>
)
Adjust the shapes of the output blobs to fit the shapes of the input blobs.
Adjust the size of the shared workspace.
fn forward(
    &self,
    backend: &B,
    input_data: &[ArcLock<SharedTensor<f32>>],
    weights_data: &[ArcLock<SharedTensor<f32>>],
    output_data: &mut [ArcLock<SharedTensor<f32>>]
)
Compute the [feedforward](https://en.wikipedia.org/wiki/Feedforward_neural_network) layer output using the provided Backend.
fn backward_input(
    &self,
    backend: &B,
    weights_data: &[ArcLock<SharedTensor<f32>>],
    output_data: &[ArcLock<SharedTensor<f32>>],
    output_gradients: &[ArcLock<SharedTensor<f32>>],
    input_data: &[ArcLock<SharedTensor<f32>>],
    input_gradients: &mut [ArcLock<SharedTensor<f32>>]
)
Compute the [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) input gradient using the provided backend.
fn backward_parameters(
    &self,
    backend: &B,
    output_data: &[ArcLock<SharedTensor<f32>>],
    output_gradients: &[ArcLock<SharedTensor<f32>>],
    input_data: &[ArcLock<SharedTensor<f32>>],
    weights_gradients: &mut [ArcLock<SharedTensor<f32>>]
)
Compute the [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) parameters gradient using the provided backend.
fn auto_output_blobs(&self) -> bool
Return whether “anonymous” output blobs are created automatically for the layer.
fn min_output_blobs(&self) -> usize
Returns the minimum number of output blobs required by the layer, or 0 if no minimum number is required.
fn allow_force_backward(&self, input_id: usize) -> bool
Return whether to allow force_backward for a given input blob index.
fn sync_native(&self) -> bool
Return whether a simple native backend should be used to [sync](#method.sync) instead of the default backend.
fn compute_in_place(&self) -> bool
Return whether the computations of a layer should be done in-place (the output will be written where the input was read from).
fn is_container(&self) -> bool
Return whether the layer is a container.
fn loss_weight(&self, output_id: usize) -> Option<f32>
Return the associated loss weight for a given output blob index.
fn inputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the input tensors of the layer.
fn inputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the gradients of the input tensors of the layer.
fn outputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the output tensors of the layer.
fn outputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the gradients of the output tensors of the layer.
fn learnable_weights(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the learnable weights inside the layer.
fn learnable_weights_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>
Return the gradients for the learnable weights inside the layer.
Auto Trait Implementations
impl<B> RefUnwindSafe for Convolution<B>
impl<B> !Send for Convolution<B>
impl<B> !Sync for Convolution<B>
impl<B> Unpin for Convolution<B>
impl<B> UnwindSafe for Convolution<B>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.