tf.keras.initializers.lecun_normal
LeCun normal initializer.
tf.keras.initializers.lecun_normal(
seed=None
)
Draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(1 / fan_in), where fan_in is the number of input units in the weight tensor.
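The sampling rule can be sketched in plain NumPy. This is a hedged illustration of the math, not TensorFlow's implementation (which also rescales the stddev to compensate for truncation); it assumes the common convention that samples more than two standard deviations from the mean are re-drawn:

```python
import numpy as np

def lecun_normal_sample(shape, fan_in, seed=None):
    # Sketch of LeCun normal initialization:
    # truncated normal, mean 0, stddev = sqrt(1 / fan_in).
    # Samples beyond 2 stddevs are discarded and re-drawn.
    rng = np.random.default_rng(seed)
    stddev = np.sqrt(1.0 / fan_in)
    out = rng.normal(0.0, stddev, size=shape)
    bad = np.abs(out) > 2.0 * stddev
    while bad.any():
        out[bad] = rng.normal(0.0, stddev, size=int(bad.sum()))
        bad = np.abs(out) > 2.0 * stddev
    return out

# For a Dense layer with 256 inputs, fan_in = 256,
# so the target stddev is sqrt(1/256) = 0.0625.
w = lecun_normal_sample((256, 128), fan_in=256, seed=0)
```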
Arguments:
seed: A Python integer. Used to seed the random generator.

Returns:
An initializer.
References:
- Self-Normalizing Neural Networks, Klambauer et al., 2017 (https://papers.nips.cc/paper/6698-self-normalizing-neural-networks)
- Efficient Backprop, Lecun et al., 1998 (http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)
Last updated 2020-10-01 UTC.