tf.contrib.layers.xavier_initializer
Returns an initializer performing "Xavier" initialization for weights.
View aliases

Main aliases: `tf.contrib.layers.xavier_initializer_conv2d`
    tf.contrib.layers.xavier_initializer(
        uniform=True, seed=None, dtype=tf.dtypes.float32
    )
This function implements the weight initialization from:
Xavier Glorot and Yoshua Bengio (2010): Understanding the difficulty of training deep feedforward neural networks. International Conference on Artificial Intelligence and Statistics. http://www.jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
This initializer is designed to keep the scale of the gradients roughly the
same in all layers. For the uniform distribution this works out to the range
`x = sqrt(6. / (in + out)); [-x, x]`, and for the normal distribution a
standard deviation of `sqrt(2. / (in + out))` is used.
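For intuition, here is a minimal sketch (not part of the API) that computes the two quantities above for a hypothetical fan-in/fan-out pair; the 784 x 256 layer size is an arbitrary illustrative choice:

    import math

    # Hypothetical fan-in/fan-out of a 784 -> 256 weight matrix
    # (an illustrative choice, not taken from the reference above).
    fan_in, fan_out = 784, 256

    # uniform=True: samples are drawn from [-limit, limit].
    limit = math.sqrt(6.0 / (fan_in + fan_out))    # ~0.076

    # uniform=False: samples are drawn with this standard deviation.
    stddev = math.sqrt(2.0 / (fan_in + fan_out))   # ~0.044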
| Args | |
|---|---|
| `uniform` | Whether to use uniform or normal distributed random initialization. |
| `seed` | A Python integer. Used to create random seeds. See `tf.compat.v1.set_random_seed` for behavior. |
| `dtype` | The data type. Only floating point types are supported. |
| Returns |
|---|
| An initializer for a weight matrix. |
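As a usage sketch, the returned initializer can be passed to a variable constructor. The example below assumes TensorFlow 1.x (where `tf.contrib` is available); the variable name and shape are placeholder choices:

    # Assumes TensorFlow 1.x; tf.contrib was removed in TensorFlow 2.x.
    import tensorflow as tf

    initializer = tf.contrib.layers.xavier_initializer(uniform=True, seed=42)

    # "weights" and the 784 x 256 shape are illustrative placeholders.
    weights = tf.get_variable(
        "weights", shape=[784, 256], initializer=initializer)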