tensorflow::ops::EncodeJpeg
#include <image_ops.h>
JPEG-encode an image.
Summary
`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.
The attr `format` can be used to override the color format of the encoded output. Values can be:

- `''`: Use a default format based on the number of channels in the image.
- `grayscale`: Output a grayscale JPEG image. The `channels` dimension of `image` must be 1.
- `rgb`: Output an RGB JPEG image. The `channels` dimension of `image` must be 3.

If `format` is not specified or is the empty string, a default format is picked based on the number of channels in `image`:

- 1: Output a grayscale image.
- 3: Output an RGB image.
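As a sketch of how the override is expressed in graph-construction code (the `ForceGrayscaleJpeg` wrapper and the `Placeholder` input are illustrative assumptions, not part of this class):

```cpp
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/standard_ops.h"

using namespace tensorflow;

// Sketch: build an EncodeJpeg node that overrides the default format.
// The Placeholder stands in for a [height, width, 1] uint8 image, so
// "grayscale" is a valid override here.
ops::EncodeJpeg ForceGrayscaleJpeg(const Scope& scope) {
  auto image = ops::Placeholder(scope, DT_UINT8);
  return ops::EncodeJpeg(scope, image,
                         ops::EncodeJpeg::Format("grayscale"));
}
```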
Args:

- scope: A Scope object.
- image: 3-D with shape `[height, width, channels]`.
Optional attributes (see Attrs):

- format: Per pixel image format.
- quality: Quality of the compression from 0 to 100 (higher is better and slower).
- progressive: If True, create a JPEG that loads progressively (coarse to fine).
- optimize_size: If True, spend CPU/RAM to reduce size with no quality change.
- chroma_downsampling: See http://en.wikipedia.org/wiki/Chroma_subsampling.
- density_unit: Unit used to specify `x_density` and `y_density`: pixels per inch (`'in'`) or centimeter (`'cm'`).
- x_density: Horizontal pixels per density unit.
- y_density: Vertical pixels per density unit.
- xmp_metadata: If not empty, embed this XMP metadata in the image header.
Returns:

- Output: 0-D. JPEG-encoded image.
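For illustration, a minimal end-to-end sketch (not taken from the generated reference; the include paths, the tiny constant test image, and the use of ClientSession are assumptions for the example) that builds and runs the op with default attributes:

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

using namespace tensorflow;

int main() {
  Scope root = Scope::NewRootScope();

  // A tiny 2x2 RGB test image; any [height, width, channels] uint8
  // tensor can be fed the same way.
  Tensor image(DT_UINT8, TensorShape({2, 2, 3}));
  image.flat<uint8>().setConstant(128);

  auto input = ops::Placeholder(root, DT_UINT8);
  // Default attributes: the format is inferred from the channel count.
  auto encoded = ops::EncodeJpeg(root, input);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  Status s = session.Run({{input, image}}, {encoded}, &outputs);
  // On success, outputs[0] is the 0-D string tensor holding the JPEG
  // bytes, accessible as outputs[0].scalar<tstring>()().
  return s.ok() ? 0 : 1;
}
```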
Constructors and Destructors

EncodeJpeg
EncodeJpeg(const ::tensorflow::Scope & scope, ::tensorflow::Input image)

EncodeJpeg
EncodeJpeg(const ::tensorflow::Scope & scope, ::tensorflow::Input image, const EncodeJpeg::Attrs & attrs)

Public attributes

contents
::tensorflow::Output contents

operation
Operation operation

Public functions

node
::tensorflow::Node * node() const

operator::tensorflow::Input
operator::tensorflow::Input() const

operator::tensorflow::Output
operator::tensorflow::Output() const
Public static functions
ChromaDownsampling
Attrs ChromaDownsampling(bool x)

DensityUnit
Attrs DensityUnit(StringPiece x)

Format
Attrs Format(StringPiece x)

OptimizeSize
Attrs OptimizeSize(bool x)

Progressive
Attrs Progressive(bool x)

Quality
Attrs Quality(int64 x)

XDensity
Attrs XDensity(int64 x)

XmpMetadata
Attrs XmpMetadata(StringPiece x)

YDensity
Attrs YDensity(int64 x)

Structs

tensorflow::ops::EncodeJpeg::Attrs
Optional attribute setters for EncodeJpeg.
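A brief sketch of how these setters compose (the `HighQualityJpeg` helper and its chosen attribute values are illustrative assumptions): each static function returns an Attrs whose member setters of the same names can be chained before the result is passed to the constructor overload that accepts attrs.

```cpp
#include "tensorflow/cc/framework/ops.h"
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/standard_ops.h"

using namespace tensorflow;

// Illustrative helper: encode with several optional attributes set.
ops::EncodeJpeg HighQualityJpeg(const Scope& scope, Input image) {
  // Each static setter returns an Attrs; its member setters chain.
  auto attrs = ops::EncodeJpeg::Quality(95)
                   .Progressive(true)
                   .ChromaDownsampling(false);
  return ops::EncodeJpeg(scope, image, attrs);
}
```

The same Attrs can also be built by default-constructing EncodeJpeg::Attrs and calling its member setters directly.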