pub struct Linear { /* private fields */ }
Linear Layer
Implementations

impl Linear

pub fn from_config(config: &LinearConfig) -> Linear

Create a Linear layer from a LinearConfig.
Trait Implementations

impl<B: IBackend + LayerOps<f32>> ComputeInputGradient<f32, B> for Linear
fn compute_input_gradient(
    &self,
    backend: &B,
    weights_data: &[&SharedTensor<f32>],
    output_data: &[&SharedTensor<f32>],
    output_gradients: &[&SharedTensor<f32>],
    input_data: &[&SharedTensor<f32>],
    input_gradients: &mut [&mut SharedTensor<f32>]
)
Since we use row vectors instead of column vectors, xW^T = (Wx^T)^T. Taking the derivative with respect to x^T (a column vector of dimension (n, 1)) gives d((Wx^T)^T)/d(x^T) = W^T, of dims (n, m). In backpropagation with column vectors we would compute W^T * output_grad; in terms of row vectors that is output_grad^T * W, which produces a vector of dims (1, n).
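As a sanity check, here is a minimal, self-contained sketch of that rule using plain nested `Vec`s instead of `SharedTensor` (the function name and shapes are illustrative, not part of this crate): for y = xW^T, the input gradient is the output gradient multiplied by W.

```rust
// Input gradient for y = x * W^T in row-vector form: dL/dx = dL/dy * W.
// grad_y: (k, m), w: (m, n) -> grad_x: (k, n).
fn input_gradient(grad_y: &[Vec<f32>], w: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let n = w[0].len();
    grad_y
        .iter()
        .map(|g_row| {
            // grad_x[j] = sum_i grad_y[i] * W[i][j]
            (0..n)
                .map(|j| g_row.iter().zip(w).map(|(g, w_row)| g * w_row[j]).sum())
                .collect()
        })
        .collect()
}

fn main() {
    // k = 1, m = 2, n = 2: grad_y = [1, 1], W = [[1, 2], [3, 4]].
    // grad_x = [1*1 + 1*3, 1*2 + 1*4] = [4, 6].
    let grad_y = vec![vec![1.0f32, 1.0]];
    let w = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    assert_eq!(input_gradient(&grad_y, &w), vec![vec![4.0, 6.0]]);
}
```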
impl<B: IBackend + LayerOps<f32>> ComputeOutput<f32, B> for Linear

fn compute_output(
    &self,
    backend: &B,
    weights: &[&SharedTensor<f32>],
    input_data: &[&SharedTensor<f32>],
    output_data: &mut [&mut SharedTensor<f32>]
)
x has shape (k, n), where k is the batch size. Given W with shape (m, n), where m is the output vector length, we compute the output as xW^T, which gives a matrix of shape (k, m) containing the outputs.
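A minimal sketch of this formula with plain `Vec`s, outside the `SharedTensor`/backend machinery (names are illustrative, not this crate's API): each output row is the input row dotted against every row of W.

```rust
// Forward pass y = x * W^T.
// x: (k, n) batch of row vectors, w: (m, n) -> y: (k, m).
fn linear_forward(x: &[Vec<f32>], w: &[Vec<f32>]) -> Vec<Vec<f32>> {
    x.iter()
        .map(|row| {
            // y[i] = dot(row, W[i]) for each output unit i.
            w.iter()
                .map(|w_row| row.iter().zip(w_row).map(|(a, b)| a * b).sum())
                .collect()
        })
        .collect()
}

fn main() {
    // k = 1, n = 2, m = 2 with an identity weight matrix: output equals input.
    let x = vec![vec![1.0f32, 2.0]];
    let w = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    assert_eq!(linear_forward(&x, &w), vec![vec![1.0, 2.0]]);
}
```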
impl<B: IBackend + LayerOps<f32>> ComputeParametersGradient<f32, B> for Linear

fn compute_parameters_gradient(
    &self,
    backend: &B,
    output_data: &[&SharedTensor<f32>],
    output_gradients: &[&SharedTensor<f32>],
    input_data: &[&SharedTensor<f32>],
    parameters_gradients: &mut [&mut SharedTensor<f32>]
)
Compute gradients with respect to the parameters and write them into parameters_gradients.
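For intuition, here is a hedged sketch of the usual weight-gradient rule for y = xW^T, namely dL/dW = (dL/dy)^T * x of shape (m, n), again with plain `Vec`s (illustrative names, not this crate's API):

```rust
// Weight gradient for y = x * W^T: dL/dW = (dL/dy)^T * x.
// grad_y: (k, m), x: (k, n) -> grad_w: (m, n), summing over the batch.
fn weight_gradient(grad_y: &[Vec<f32>], x: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let m = grad_y[0].len();
    let n = x[0].len();
    (0..m)
        .map(|i| {
            // grad_w[i][j] = sum over the batch of grad_y[i] * x[j].
            (0..n)
                .map(|j| grad_y.iter().zip(x).map(|(g, xr)| g[i] * xr[j]).sum())
                .collect()
        })
        .collect()
}

fn main() {
    // Single example (k = 1): grad_y = [1, 2], x = [3, 4].
    // dW = outer(grad_y, x) = [[3, 4], [6, 8]].
    let grad_y = vec![vec![1.0f32, 2.0]];
    let x = vec![vec![3.0, 4.0]];
    assert_eq!(weight_gradient(&grad_y, &x), vec![vec![3.0, 4.0], vec![6.0, 8.0]]);
}
```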
impl<B: IBackend + LayerOps<f32>> ILayer<B> for Linear
fn auto_weight_blobs(&self) -> bool

Return whether weight blobs are created automatically for the layer.
fn reshape(
    &mut self,
    backend: Rc<B>,
    input_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
    input_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>,
    weights_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
    weights_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>,
    output_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
    output_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>
)

Adjusts the shapes of the output blobs to fit the shapes of the input blobs.
fn exact_num_output_blobs(&self) -> Option<usize>

Returns the exact number of output blobs required by the layer, or None if no exact number is required.
fn forward(
    &self,
    backend: &B,
    input_data: &[ArcLock<SharedTensor<f32>>],
    weights_data: &[ArcLock<SharedTensor<f32>>],
    output_data: &mut [ArcLock<SharedTensor<f32>>]
)

Compute the [feedforward][1] layer output using the provided Backend.

[1]: https://en.wikipedia.org/wiki/Feedforward_neural_network
fn backward_input(
    &self,
    backend: &B,
    weights_data: &[ArcLock<SharedTensor<f32>>],
    output_data: &[ArcLock<SharedTensor<f32>>],
    output_gradients: &[ArcLock<SharedTensor<f32>>],
    input_data: &[ArcLock<SharedTensor<f32>>],
    input_gradients: &mut [ArcLock<SharedTensor<f32>>]
)

Compute the [backpropagation][1] input gradient using the provided backend.

[1]: https://en.wikipedia.org/wiki/Backpropagation
fn backward_parameters(
    &self,
    backend: &B,
    output_data: &[ArcLock<SharedTensor<f32>>],
    output_gradients: &[ArcLock<SharedTensor<f32>>],
    input_data: &[ArcLock<SharedTensor<f32>>],
    weights_gradients: &mut [ArcLock<SharedTensor<f32>>]
)

Compute the [backpropagation][1] parameters gradient using the provided backend.

[1]: https://en.wikipedia.org/wiki/Backpropagation
fn auto_output_blobs(&self) -> bool

Return whether “anonymous” output blobs are created automatically for the layer.
fn min_output_blobs(&self) -> usize

Returns the minimum number of output blobs required by the layer, or 0 if no minimum number is required.
fn exact_num_input_blobs(&self) -> Option<usize>

Returns the exact number of input blobs required by the layer, or None if no exact number is required.

fn allow_force_backward(&self, input_id: usize) -> bool

Return whether to allow force_backward for a given input blob index.
fn sync_native(&self) -> bool

Return whether a simple native backend should be used to [sync][1] instead of the default backend.

[1]: #method.sync
fn compute_in_place(&self) -> bool

Return whether the computations of a layer should be done in-place (the output will be written where the input was read from).
fn is_container(&self) -> bool

Return whether the layer is a container.
fn loss_weight(&self, output_id: usize) -> Option<f32>

Return the associated loss weight for a given output blob index.

fn inputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the input tensors of the layer.

fn inputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the gradients of the input tensors of the layer.

fn outputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the output tensors of the layer.

fn outputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the gradients of the output tensors of the layer.

fn learnable_weights(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the learnable weights inside the layer.

fn learnable_weights_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the gradients for the learnable weights inside the layer.
Auto Trait Implementations
impl !RefUnwindSafe for Linear
impl !Send for Linear
impl !Sync for Linear
impl Unpin for Linear
impl !UnwindSafe for Linear
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.