tf.contrib.layers.apply_regularization
Returns the summed penalty by applying regularizer to the weights_list.
tf.contrib.layers.apply_regularization(
regularizer, weights_list=None
)
Adding a regularization penalty over the layer weights and embedding weights
can help prevent overfitting the training data. Regularization over layer
biases is less common/useful, but assuming proper data preprocessing/mean
subtraction, it usually shouldn't hurt much either.
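The summing behavior can be sketched in plain NumPy (an illustrative stand-in, not the TensorFlow implementation): given a regularizer function and a list of weight arrays, the result is the sum of the scalar penalty over every entry, with a ValueError when a penalty is non-scalar or no weights are supplied.

```python
import numpy as np

def l2_regularizer(scale):
    # Mirrors the conventional L2 penalty, scale * sum(w**2) / 2;
    # the name and formula here are illustrative assumptions.
    def penalty(weights):
        return scale * np.sum(np.square(weights)) / 2.0
    return penalty

def apply_regularization(regularizer, weights_list):
    # Sum the scalar penalty over every weight array in the list.
    if not weights_list:
        raise ValueError("No weights to regularize.")
    penalties = [regularizer(w) for w in weights_list]
    for p in penalties:
        if np.ndim(p) != 0:  # each penalty must be a scalar
            raise ValueError("regularizer must return a scalar output.")
    return sum(penalties)

weights = [np.array([[1.0, 2.0]]), np.array([3.0])]
total = apply_regularization(l2_regularizer(0.1), weights)
# (1 + 4 + 9) * 0.1 / 2 = 0.7
print(total)
```

In graph-mode TensorFlow the same pattern is typically driven off the GraphKeys.WEIGHTS collection, which is what the weights_list default falls back to.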
Args:
  regularizer: A function that takes a single Tensor argument and returns
    a scalar Tensor output.
  weights_list: List of weights Tensors or Variables to apply regularizer
    over. Defaults to the GraphKeys.WEIGHTS collection if None.
Returns:
  A scalar representing the overall regularization penalty.
Raises:
  ValueError: If regularizer does not return a scalar output, or if we
    find no weights.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.
View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/layers/python/layers/regularizers.py#L170-L209