@frozen
public struct Dense<Scalar> : Layer where Scalar : TensorFlowFloatingPoint
A densely-connected neural network layer.

Dense implements the operation activation(matmul(input, weight) + bias), where weight is a weight matrix, bias is a bias vector, and activation is an element-wise activation function.
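For example, a minimal usage sketch (the sizes, the relu activation, and the random input below are illustrative, not part of this API's contract):

import TensorFlow

// A layer mapping 3 input features to 2 output features with a ReLU activation.
let layer = Dense<Float>(inputSize: 3, outputSize: 2, activation: relu)
let input = Tensor<Float>(randomNormal: [4, 3])  // a batch of 4 examples
let output = layer(input)                        // shape: [4, 2]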
This layer also supports 3-D weight tensors with 2-D bias matrices. In this case, the first dimension of both is treated as the batch size that is aligned with the first dimension of input, and the batch variant of the matmul(_:_:) operation is used, thus using a different weight and bias for each element in the input batch.
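A sketch of that batched variant, assuming a per-example weight of shape [batchSize, inputSize, outputSize] and a bias of shape [batchSize, outputSize] as described above (the concrete sizes are illustrative):

let batchSize = 4
// One 3 x 2 weight matrix and one 2-element bias vector per batch element.
let weight = Tensor<Float>(randomNormal: [batchSize, 3, 2])
let bias = Tensor<Float>(zeros: [batchSize, 2])
let batched = Dense<Float>(weight: weight, bias: bias, activation: identity)
let x = Tensor<Float>(randomNormal: [batchSize, 3])
let y = batched(x)  // shape: [batchSize, 2], using a distinct weight and bias per example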
- The weight matrix.
Declaration
public var weight: Tensor<Scalar>
- The bias vector.
Declaration
public var bias: Tensor<Scalar>
- The element-wise activation function.
Declaration
@noDerivative public let activation: Activation
- Creates an instance from the given weight, optional bias, and activation function.
Note
Currently, weight is the only differentiability parameter. bias can be made a differentiability parameter after Optional conditionally conforms to Differentiable (TF-499).
Declaration
@differentiable(wrt: weight)
public init(
  weight: Tensor<Scalar>,
  bias: Tensor<Scalar>? = nil,
  activation: @escaping Activation
)
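For example, a sketch constructing a layer from explicit parameters (the shapes and the identity activation are illustrative; bias defaults to nil and may be omitted):

let weight = Tensor<Float>(randomNormal: [3, 2])
let bias = Tensor<Float>(zeros: [2])
let layer = Dense<Float>(weight: weight, bias: bias, activation: identity)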
- Creates a Dense layer with the specified input size, output size, and element-wise activation function. The weight matrix is created with shape [inputSize, outputSize] and the bias vector is created with shape [outputSize].
Declaration
public init(
  inputSize: Int,
  outputSize: Int,
  activation: @escaping Activation = identity,
  useBias: Bool = true,
  weightInitializer: ParameterInitializer<Scalar> = glorotUniform(),
  biasInitializer: ParameterInitializer<Scalar> = zeros()
)
Parameters
inputSize
The dimensionality of the input space.
outputSize
The dimensionality of the output space.
activation
The activation function to use. The default value is identity(_:).
weightInitializer
Initializer to use for weight.
biasInitializer
Initializer to use for bias.
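For example, a sketch using this initializer with its defaults and differentiating a loss through the layer (the data is illustrative, and the sketch assumes the library's meanSquaredError(predicted:expected:) loss and gradient(at:in:) are available):

// Defaults: identity activation, bias enabled, glorotUniform() weights, zeros() bias.
let model = Dense<Float>(inputSize: 2, outputSize: 1)
let x: Tensor<Float> = [[0, 1], [1, 0]]
let y: Tensor<Float> = [[1], [0]]
// Gradient of the loss with respect to the layer's stored parameters (weight and bias).
let grads = gradient(at: model) { model in
  meanSquaredError(predicted: model(x), expected: y)
}
print(grads.weight)  // same shape as model.weight: [2, 1]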