Use this when your sparse features are in string or integer format, and you
want to distribute your inputs into a finite number of buckets by hashing.
For string-type input, the bucket id is computed as

    output_id = Hash(input_feature_string) % bucket_size

For int-type input, the value is first converted to its string representation and then hashed by the same formula.
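The bucketing rule above can be pictured with a minimal pure-Python sketch. This is illustrative only: TensorFlow's actual implementation uses a FarmHash-based 64-bit fingerprint (the same hash as `tf.strings.to_hash_bucket_fast`), not SHA-256, so the bucket ids below will not match TF's.

```python
import hashlib

def hash_bucket(value, bucket_size):
    """Map a string or int feature value to a bucket id in [0, bucket_size)."""
    # Int inputs are converted to their string representation first.
    feature_string = str(value)
    # Stand-in hash; TF uses a FarmHash-based 64-bit fingerprint instead.
    digest = hashlib.sha256(feature_string.encode("utf-8")).digest()
    hashed = int.from_bytes(digest[:8], "little")
    return hashed % bucket_size

bucket = hash_bucket("Tensorflow", 10000)
assert 0 <= bucket < 10000
# The same input always maps to the same bucket, and an int input
# lands in the same bucket as its string representation:
assert hash_bucket(42, 100) == hash_bucket("42", 100)
```

The key property, deterministic assignment without a vocabulary file, is what makes hashed columns convenient for high-cardinality features.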
For input dictionary `features`, `features[key]` is either `Tensor` or
`SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int
and `''` for string, which will be dropped by this feature column.
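The drop behavior for dense inputs can be sketched as follows (a hypothetical helper, not part of the API): entries equal to the sentinel value (`-1` for ints, `''` for strings) are simply excluded before hashing.

```python
def drop_missing(values, dtype):
    """Filter sentinel 'missing' entries out of a dense row of feature values.

    Illustrative sketch: -1 marks a missing int, '' marks a missing string.
    """
    sentinel = -1 if dtype is int else ''
    return [v for v in values if v != sentinel]

assert drop_missing(['Keras', '', 'RNN'], str) == ['Keras', 'RNN']
assert drop_missing([7, -1, 3], int) == [7, 3]
```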
key
A unique string identifying the input feature. It is used as the column
name and the dictionary key for feature parsing configs, feature `Tensor`
objects, and feature columns.
hash_bucket_size
An int > 1. The number of buckets.
dtype
The type of features. Only string and integer types are supported.
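Because distinct inputs can collide once hashed, `hash_bucket_size` trades table size against collision rate. A rough pure-Python estimate (Python's built-in `hash` stands in for TF's fingerprint hash, so exact figures differ, but the trend is the same):

```python
def collision_fraction(vocab, bucket_size):
    """Fraction of distinct inputs that share a bucket with another input."""
    buckets = {}
    for v in vocab:
        buckets.setdefault(hash(str(v)) % bucket_size, []).append(v)
    collided = sum(len(vs) for vs in buckets.values() if len(vs) > 1)
    return collided / len(vocab)

vocab = [f"token_{i}" for i in range(1000)]
# Far fewer buckets than distinct tokens guarantees heavy collisions...
assert collision_fraction(vocab, 100) > 0.9
# ...while many more buckets keeps collisions rare.
assert collision_fraction(vocab, 1_000_000) < 0.05
```

Colliding inputs share one weight (or embedding row), so an undersized `hash_bucket_size` silently merges unrelated feature values.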
---

# tf.feature_column.categorical_column_with_hash_bucket

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/feature_column/feature_column_v2.py#L1004-L1073)

Last updated 2024-04-26 UTC.

Represents sparse feature where ids are set by hashing. (deprecated)

**Warning:** `tf.feature_column` is not recommended for new code. Instead, feature preprocessing can be done directly using either [Keras preprocessing layers](https://www.tensorflow.org/guide/migrate/migrating_feature_columns) or through the one-stop utility [`tf.keras.utils.FeatureSpace`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/FeatureSpace) built on top of them. See the [migration guide](https://tensorflow.org/guide/migrate) for details.

**Compat alias for migration:** [`tf.compat.v1.feature_column.categorical_column_with_hash_bucket`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket)

    tf.feature_column.categorical_column_with_hash_bucket(
        key,
        hash_bucket_size,
        dtype=tf.dtypes.string
    )

**Deprecated:** THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use Keras preprocessing layers instead, either directly or via the [`tf.keras.utils.FeatureSpace`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/FeatureSpace) utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.

### Used in the notebooks

| Used in the guide | Used in the tutorials |
|---|---|
| [Estimators](https://www.tensorflow.org/guide/estimator) | [Classify structured data with feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns) |

#### Example:

    import tensorflow as tf
    keywords = tf.feature_column.categorical_column_with_hash_bucket("keywords",
                                                                     10000)
    columns = [keywords]
    features = {'keywords': tf.constant([
        ['Tensorflow', 'Keras', 'RNN', 'LSTM', 'CNN'],
        ['LSTM', 'CNN', 'Tensorflow', 'Keras', 'RNN'],
        ['CNN', 'Tensorflow', 'LSTM', 'Keras', 'RNN']])}
    linear_prediction, _, _ = tf.compat.v1.feature_column.linear_model(features,
                                                                       columns)

    # or
    keywords_embedded = tf.feature_column.embedding_column(keywords, 16)
    columns = [keywords_embedded]
    input_layer = tf.keras.layers.DenseFeatures(columns)
    dense_tensor = input_layer(features)

#### Returns

A `HashedCategoricalColumn`.

#### Raises

| Exception | Condition |
|---|---|
| `ValueError` | `hash_bucket_size` is not greater than 1. |
| `ValueError` | `dtype` is neither string nor integer. |
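What the `linear_model` example on this page computes can be pictured with a hedged pure-Python analogue: each row's keywords are hashed to bucket ids, and the model sums one learned weight per active bucket. All names below are illustrative, and Python's built-in `hash` stands in for TF's fingerprint hash, so bucket ids will not match TF's.

```python
BUCKET_SIZE = 10000

def linear_prediction(keywords, weights, bias=0.0):
    """bias + sum of per-bucket weights for a row's keywords (multi-hot linear model)."""
    # Python's hash() is a stand-in for TF's 64-bit fingerprint hash.
    return bias + sum(weights[hash(k) % BUCKET_SIZE] for k in keywords)

weights = [0.0] * BUCKET_SIZE  # one weight per hash bucket; learned in practice
row = ['Tensorflow', 'Keras', 'RNN', 'LSTM', 'CNN']
assert linear_prediction(row, weights) == 0.0  # untrained weights

# Training would adjust the weights of the buckets the keywords hash into:
weights[hash('Keras') % BUCKET_SIZE] = 2.0
assert linear_prediction(row, weights, bias=0.5) >= 2.5
```

The embedding variant in the example is the same idea with a 16-dimensional vector per bucket instead of a single scalar weight.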