[null,null,["Last updated 2024-04-26 UTC."],[],[],null,["# tf_agents.eval.metric_utils.eager_compute\n\n\u003cbr /\u003e\n\n|---------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/agents/blob/v0.19.0/tf_agents/eval/metric_utils.py#L122-L176) |\n\nCompute metrics using `policy` on the `environment`. \n\n tf_agents.eval.metric_utils.eager_compute(\n metrics,\n environment,\n policy,\n num_episodes=1,\n train_step=None,\n summary_writer=None,\n summary_prefix='',\n use_function=True\n )\n\n| **Note:** Because placeholders are not compatible with Eager mode we can not use python policies. Because we use tf_policies we need the environment time_steps to be tensors making it easier to use a tf_env for evaluations. Otherwise this method mirrors `compute` directly.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|------------------|------------------------------------------------------------------------------------------------------------------------------|\n| `metrics` | List of metrics to compute. |\n| `environment` | tf_environment instance. |\n| `policy` | tf_policy instance used to step the environment. |\n| `num_episodes` | Number of episodes to compute the metrics over. |\n| `train_step` | An optional step to write summaries against. |\n| `summary_writer` | An optional writer for generating metric summaries. |\n| `summary_prefix` | An optional prefix scope for metric summaries. |\n| `use_function` | Option to enable use of [`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function) when collecting the metrics. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| A dictionary of results {metric_name: metric_value} ||\n\n\u003cbr /\u003e"]]