Text classification with movie reviews


This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary (or two-class) classification, an important and widely applicable kind of machine learning problem.

You'll use the IMDB dataset, which contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.

This notebook uses tf.keras, a high-level API for building and training models in TensorFlow, and TensorFlow Hub, a library and platform for transfer learning. For a more advanced text classification tutorial using tf.keras, see the MLCC Text Classification Guide.

More models

Here you can find more expressive or performant models that you could use to generate the text embedding.

!pip install -U tf-hub-nightly

import numpy as np

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

import matplotlib.pyplot as plt

print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Version:  2.11.0
Eager mode:  True
Hub version:  0.12.0
GPU is available

Download the IMDB dataset

The IMDB dataset is available on TensorFlow Datasets. The following code downloads the IMDB dataset to your machine (or the colab runtime):

# batch_size=-1 loads each split as a single batch; as_supervised=True
# returns (example, label) pairs instead of a feature dictionary.
train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"], 
                                  batch_size=-1, as_supervised=True)

# Convert the tf.Tensors to NumPy arrays for easier slicing below.
train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)

Explore the data

Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 is a negative review and 1 is a positive review.

print("Training entries: {}, test entries: {}".format(len(train_examples), len(test_examples)))
Training entries: 25000, test entries: 25000

Let's print the first 10 examples.

train_examples[:10]
array([b"This was an absolutely terrible movie. Don't be lured in by Christopher Walken or Michael Ironside. Both are great actors, but this must simply be their worst role in history. Even their great acting could not redeem this movie's ridiculous storyline. This movie is an early nineties US propaganda piece. The most pathetic scenes were those when the Columbian rebels were making their cases for revolutions. Maria Conchita Alonso appeared phony, and her pseudo-love affair with Walken was nothing but a pathetic emotional plug in a movie that was devoid of any real meaning. I am disappointed that there are movies like this, ruining actor's like Christopher Walken's good name. I could barely sit through it.",
       b'I have been known to fall asleep during films, but this is usually due to a combination of things including, really tired, being warm and comfortable on the sette and having just eaten a lot. However on this occasion I fell asleep because the film was rubbish. The plot development was constant. Constantly slow and boring. Things seemed to happen, but with no explanation of what was causing them or why. I admit, I may have missed part of the film, but i watched the majority of it and everything just seemed to happen of its own accord without any real concern for anything else. I cant recommend this film at all.',
       b'Mann photographs the Alberta Rocky Mountains in a superb fashion, and Jimmy Stewart and Walter Brennan give enjoyable performances as they always seem to do. <br /><br />But come on Hollywood - a Mountie telling the people of Dawson City, Yukon to elect themselves a marshal (yes a marshal!) and to enforce the law themselves, then gunfighters battling it out on the streets for control of the town? <br /><br />Nothing even remotely resembling that happened on the Canadian side of the border during the Klondike gold rush. Mr. Mann and company appear to have mistaken Dawson City for Deadwood, the Canadian North for the American Wild West.<br /><br />Canadian viewers be prepared for a Reefer Madness type of enjoyable howl with this ludicrous plot, or, to shake your head in disgust.',
       b'This is the kind of film for a snowy Sunday afternoon when the rest of the world can go ahead with its own business as you descend into a big arm-chair and mellow for a couple of hours. Wonderful performances from Cher and Nicolas Cage (as always) gently row the plot along. There are no rapids to cross, no dangerous waters, just a warm and witty paddle through New York life at its best. A family film in every sense and one that deserves the praise it received.',
       b'As others have mentioned, all the women that go nude in this film are mostly absolutely gorgeous. The plot very ably shows the hypocrisy of the female libido. When men are around they want to be pursued, but when no "men" are around, they become the pursuers of a 14 year old boy. And the boy becomes a man really fast (we should all be so lucky at this age!). He then gets up the courage to pursue his true love.',
       b"This is a film which should be seen by anybody interested in, effected by, or suffering from an eating disorder. It is an amazingly accurate and sensitive portrayal of bulimia in a teenage girl, its causes and its symptoms. The girl is played by one of the most brilliant young actresses working in cinema today, Alison Lohman, who was later so spectacular in 'Where the Truth Lies'. I would recommend that this film be shown in all schools, as you will never see a better on this subject. Alison Lohman is absolutely outstanding, and one marvels at her ability to convey the anguish of a girl suffering from this compulsive disorder. If barometers tell us the air pressure, Alison Lohman tells us the emotional pressure with the same degree of accuracy. Her emotional range is so precise, each scene could be measured microscopically for its gradations of trauma, on a scale of rising hysteria and desperation which reaches unbearable intensity. Mare Winningham is the perfect choice to play her mother, and does so with immense sympathy and a range of emotions just as finely tuned as Lohman's. Together, they make a pair of sensitive emotional oscillators vibrating in resonance with one another. This film is really an astonishing achievement, and director Katt Shea should be proud of it. The only reason for not seeing it is if you are not interested in people. But even if you like nature films best, this is after all animal behaviour at the sharp edge. Bulimia is an extreme version of how a tormented soul can destroy her own body in a frenzy of despair. And if we don't sympathise with people suffering from the depths of despair, then we are dead inside.",
       b'Okay, you have:<br /><br />Penelope Keith as Miss Herringbone-Tweed, B.B.E. (Backbone of England.) She\'s killed off in the first scene - that\'s right, folks; this show has no backbone!<br /><br />Peter O\'Toole as Ol\' Colonel Cricket from The First War and now the emblazered Lord of the Manor.<br /><br />Joanna Lumley as the ensweatered Lady of the Manor, 20 years younger than the colonel and 20 years past her own prime but still glamourous (Brit spelling, not mine) enough to have a toy-boy on the side. It\'s alright, they have Col. Cricket\'s full knowledge and consent (they guy even comes \'round for Christmas!) Still, she\'s considerate of the colonel enough to have said toy-boy her own age (what a gal!)<br /><br />David McCallum as said toy-boy, equally as pointlessly glamourous as his squeeze. Pilcher couldn\'t come up with any cover for him within the story, so she gave him a hush-hush job at the Circus.<br /><br />and finally:<br /><br />Susan Hampshire as Miss Polonia Teacups, Venerable Headmistress of the Venerable Girls\' Boarding-School, serving tea in her office with a dash of deep, poignant advice for life in the outside world just before graduation. Her best bit of advice: "I\'ve only been to Nancherrow (the local Stately Home of England) once. I thought it was very beautiful but, somehow, not part of the real world." Well, we can\'t say they didn\'t warn us.<br /><br />Ah, Susan - time was, your character would have been running the whole show. They don\'t write \'em like that any more. Our loss, not yours.<br /><br />So - with a cast and setting like this, you have the re-makings of "Brideshead Revisited," right?<br /><br />Wrong! They took these 1-dimensional supporting roles because they paid so well. After all, acting is one of the oldest temp-jobs there is (YOU name another!)<br /><br />First warning sign: lots and lots of backlighting. They get around it by shooting outdoors - "hey, it\'s just the sunlight!"<br /><br />Second warning sign: Leading Lady cries a lot. When not crying, her eyes are moist. That\'s the law of romance novels: Leading Lady is "dewy-eyed."<br /><br />Henceforth, Leading Lady shall be known as L.L.<br /><br />Third warning sign: L.L. actually has stars in her eyes when she\'s in love. Still, I\'ll give Emily Mortimer an award just for having to act with that spotlight in her eyes (I wonder . did they use contacts?)<br /><br />And lastly, fourth warning sign: no on-screen female character is "Mrs." She\'s either "Miss" or "Lady."<br /><br />When all was said and done, I still couldn\'t tell you who was pursuing whom and why. I couldn\'t even tell you what was said and done.<br /><br />To sum up: they all live through World War II without anything happening to them at all.<br /><br />OK, at the end, L.L. finds she\'s lost her parents to the Japanese prison camps and baby sis comes home catatonic. Meanwhile (there\'s always a "meanwhile,") some young guy L.L. had a crush on (when, I don\'t know) comes home from some wartime tough spot and is found living on the street by Lady of the Manor (must be some street if SHE\'s going to find him there.) Both war casualties are whisked away to recover at Nancherrow (SOMEBODY has to be "whisked away" SOMEWHERE in these romance stories!)<br /><br />Great drama.',
       b'The film is based on a genuine 1950s novel.<br /><br />Journalist Colin McInnes wrote a set of three "London novels": "Absolute Beginners", "City of Spades" and "Mr Love and Justice". I have read all three. The first two are excellent. The last, perhaps an experiment that did not come off. But McInnes\'s work is highly acclaimed; and rightly so. This musical is the novelist\'s ultimate nightmare - to see the fruits of one\'s mind being turned into a glitzy, badly-acted, soporific one-dimensional apology of a film that says it captures the spirit of 1950s London, and does nothing of the sort.<br /><br />Thank goodness Colin McInnes wasn\'t alive to witness it.',
       b'I really love the sexy action and sci-fi films of the sixties and its because of the actress\'s that appeared in them. They found the sexiest women to be in these films and it didn\'t matter if they could act (Remember "Candy"?). The reason I was disappointed by this film was because it wasn\'t nostalgic enough. The story here has a European sci-fi film called "Dragonfly" being made and the director is fired. So the producers decide to let a young aspiring filmmaker (Jeremy Davies) to complete the picture. They\'re is one real beautiful woman in the film who plays Dragonfly but she\'s barely in it. Film is written and directed by Roman Coppola who uses some of his fathers exploits from his early days and puts it into the script. I wish the film could have been an homage to those early films. They could have lots of cameos by actors who appeared in them. There is one actor in this film who was popular from the sixties and its John Phillip Law (Barbarella). Gerard Depardieu, Giancarlo Giannini and Dean Stockwell appear as well. I guess I\'m going to have to continue waiting for a director to make a good homage to the films of the sixties. If any are reading this, "Make it as sexy as you can"! I\'ll be waiting!',
       b'Sure, this one isn\'t really a blockbuster, nor does it target such a position. "Dieter" is the first name of a quite popular German musician, who is either loved or hated for his kind of acting and thats exactly what this movie is about. It is based on the autobiography "Dieter Bohlen" wrote a few years ago but isn\'t meant to be accurate on that. The movie is filled with some sexual offensive content (at least for American standard) which is either amusing (not for the other "actors" of course) or dumb - it depends on your individual kind of humor or on you being a "Bohlen"-Fan or not. Technically speaking there isn\'t much to criticize. Speaking of me I find this movie to be an OK-movie.'],
      dtype=object)

Let's also print the first 10 labels.

train_labels[:10]
array([0, 0, 0, 1, 1, 1, 0, 0, 0, 0])

Build the model

The neural network is created by stacking layers. This requires three main architectural decisions:

  • How to represent the text?
  • How many layers to use in the model?
  • How many hidden units to use for each layer?

In this example, the input data consists of sentences. The labels to predict are either 0 or 1.

One way to represent the text is to convert sentences into embedding vectors. Use a pre-trained text embedding as the first layer, which will have two advantages:

  • You don't have to worry about text preprocessing.
  • You can benefit from transfer learning.

For this example you'll use a model from TensorFlow Hub called google/nnlm-en-dim50/2.

There are two other models to test for the sake of this tutorial.

Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. Note that the output shape of the produced embeddings is as expected: (num_examples, embedding_dimension).

model = "https://tfhub.dev/google/nnlm-en-dim50/2"
hub_layer = hub.KerasLayer(model, input_shape=[], dtype=tf.string, trainable=True)
hub_layer(train_examples[:3])
WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.data_structures has been moved to tensorflow.python.trackable.data_structures. The old module will be deleted in version 2.11.
<tf.Tensor: shape=(3, 50), dtype=float32, numpy=
array([[ 0.5423194 , -0.01190171,  0.06337537,  0.0686297 , -0.16776839,
        -0.10581177,  0.168653  , -0.04998823, -0.31148052,  0.07910344,
         0.15442258,  0.01488661,  0.03930155,  0.19772716, -0.12215477,
        -0.04120982, -0.27041087, -0.21922147,  0.26517656, -0.80739075,
         0.25833526, -0.31004202,  0.2868321 ,  0.19433866, -0.29036498,
         0.0386285 , -0.78444123, -0.04793238,  0.41102988, -0.36388886,
        -0.58034706,  0.30269453,  0.36308962, -0.15227163, -0.4439151 ,
         0.19462997,  0.19528405,  0.05666233,  0.2890704 , -0.28468323,
        -0.00531206,  0.0571938 , -0.3201319 , -0.04418665, -0.08550781,
        -0.55847436, -0.2333639 , -0.20782956, -0.03543065, -0.17533456],
       [ 0.56338924, -0.12339553, -0.10862677,  0.7753425 , -0.07667087,
        -0.15752274,  0.01872334, -0.08169781, -0.3521876 ,  0.46373403,
        -0.08492758,  0.07166861, -0.00670818,  0.12686071, -0.19326551,
        -0.5262643 , -0.32958236,  0.14394784,  0.09043556, -0.54175544,
         0.02468163, -0.15456744,  0.68333143,  0.09068333, -0.45327246,
         0.23180094, -0.8615696 ,  0.3448039 ,  0.12838459, -0.58759046,
        -0.40712303,  0.23061076,  0.48426905, -0.2712814 , -0.5380918 ,
         0.47016335,  0.2257274 , -0.00830665,  0.28462422, -0.30498496,
         0.04400366,  0.25025868,  0.14867125,  0.4071703 , -0.15422425,
        -0.06878027, -0.40825695, -0.31492147,  0.09283663, -0.20183429],
       [ 0.7456156 ,  0.21256858,  0.1440033 ,  0.52338624,  0.11032254,
         0.00902788, -0.36678016, -0.08938274, -0.24165548,  0.33384597,
        -0.111946  , -0.01460045, -0.00716449,  0.19562715,  0.00685217,
        -0.24886714, -0.42796353,  0.1862    , -0.05241097, -0.664625  ,
         0.13449019, -0.22205493,  0.08633009,  0.43685383,  0.2972681 ,
         0.36140728, -0.71968895,  0.05291242, -0.1431612 , -0.15733941,
        -0.15056324, -0.05988007, -0.08178931, -0.15569413, -0.09303784,
        -0.18971168,  0.0762079 , -0.02541647, -0.27134502, -0.3392682 ,
        -0.10296471, -0.27275252, -0.34078008,  0.20083308, -0.26644838,
         0.00655449, -0.05141485, -0.04261916, -0.4541363 ,  0.20023566]],
      dtype=float32)>

Let's now build the full model:

model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))

model.summary()
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 keras_layer (KerasLayer)    (None, 50)                48190600  
                                                                 
 dense (Dense)               (None, 16)                816       
                                                                 
 dense_1 (Dense)             (None, 1)                 17        
                                                                 
=================================================================
Total params: 48,191,433
Trainable params: 48,191,433
Non-trainable params: 0
_________________________________________________________________

The layers are stacked sequentially to build the classifier:

  1. The first layer is a TensorFlow Hub layer. This layer uses a pre-trained SavedModel to map a sentence into its embedding vector. The model you are using (google/nnlm-en-dim50/2) splits the sentence into tokens, embeds each token, and then combines the embeddings. The resulting dimensions are (num_examples, embedding_dimension).
  2. This fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units.
  3. The last layer is densely connected with a single output node. It outputs logits: the log-odds of the true class, according to the model (see the sketch after this list).
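
Since the final layer returns logits rather than probabilities, you can recover a probability by applying a sigmoid. A minimal illustrative sketch, not part of the original notebook, assuming the model and train_examples defined above:

# Logits -> probabilities: values above 0.5 mean the model predicts "positive".
example_logits = model(train_examples[:3])
print(tf.nn.sigmoid(example_logits))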

Hidden units

The above model has two intermediate or "hidden" layers between the input and the output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, it is the amount of freedom the network is allowed when learning an internal representation.

If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it also makes the network more computationally expensive and may lead to learning unwanted patterns: patterns that improve performance on the training data but not on the test data. This is called overfitting, and we'll explore it later.
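
For instance, a higher-capacity variant of the model above would simply widen or deepen the Dense stack. An illustrative sketch, not part of the original notebook (note that reusing hub_layer shares its weights with the first model):

# A larger model: more hidden units per layer and one extra hidden layer.
bigger_model = tf.keras.Sequential([
    hub_layer,
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)
])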

Loss function and optimizer

A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits (a single-unit layer with a linear activation), you'll use the binary_crossentropy loss function with from_logits=True.

This isn't the only choice for a loss function; you could, for instance, choose mean_squared_error. But, generally, binary_crossentropy is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
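
To make the "distance" intuition concrete, here is a quick illustrative check, not from the original notebook. Binary cross-entropy is -(y*log(p) + (1-y)*log(1-p)), so confident correct predictions are cheap and confident wrong ones are expensive:

bce = tf.keras.losses.BinaryCrossentropy()
print(bce([1.0], [0.9]).numpy())  # ~0.105: confident and correct -> small loss
print(bce([1.0], [0.1]).numpy())  # ~2.303: confident and wrong -> large loss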

Later, when you explore regression problems (say, predicting the price of a house), you'll see how to use another loss function called mean squared error.

Now, configure the model to use an optimizer and a loss function:

model.compile(optimizer='adam',
              loss=tf.losses.BinaryCrossentropy(from_logits=True),
              metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy')])

Create a validation set

When training, you want to check the accuracy of the model on data it hasn't seen before. Create a validation set by setting apart 10,000 examples from the original training data. (Why not use the testing set now? The goal is to develop and tune the model using only the training data, then use the test data just once to evaluate the accuracy.)

x_val = train_examples[:10000]
partial_x_train = train_examples[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

Train the model

Train the model for 40 epochs in mini-batches of 512 samples. This amounts to 40 iterations over all samples in the partial_x_train and partial_y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set.

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)
Epoch 1/40
30/30 [==============================] - 7s 195ms/step - loss: 0.6851 - accuracy: 0.5831 - val_loss: 0.6416 - val_accuracy: 0.6903
Epoch 2/40
30/30 [==============================] - 6s 187ms/step - loss: 0.5879 - accuracy: 0.7583 - val_loss: 0.5429 - val_accuracy: 0.7756
Epoch 3/40
30/30 [==============================] - 6s 190ms/step - loss: 0.4523 - accuracy: 0.8415 - val_loss: 0.4290 - val_accuracy: 0.8269
Epoch 4/40
30/30 [==============================] - 5s 178ms/step - loss: 0.3232 - accuracy: 0.8915 - val_loss: 0.3586 - val_accuracy: 0.8528
Epoch 5/40
30/30 [==============================] - 5s 156ms/step - loss: 0.2318 - accuracy: 0.9249 - val_loss: 0.3204 - val_accuracy: 0.8642
Epoch 6/40
30/30 [==============================] - 5s 159ms/step - loss: 0.1692 - accuracy: 0.9480 - val_loss: 0.3067 - val_accuracy: 0.8722
Epoch 7/40
30/30 [==============================] - 5s 159ms/step - loss: 0.1233 - accuracy: 0.9679 - val_loss: 0.3014 - val_accuracy: 0.8757
Epoch 8/40
30/30 [==============================] - 5s 156ms/step - loss: 0.0893 - accuracy: 0.9815 - val_loss: 0.3059 - val_accuracy: 0.8765
Epoch 9/40
30/30 [==============================] - 5s 156ms/step - loss: 0.0646 - accuracy: 0.9895 - val_loss: 0.3130 - val_accuracy: 0.8756
Epoch 10/40
30/30 [==============================] - 5s 162ms/step - loss: 0.0472 - accuracy: 0.9943 - val_loss: 0.3227 - val_accuracy: 0.8750
Epoch 11/40
30/30 [==============================] - 4s 142ms/step - loss: 0.0343 - accuracy: 0.9973 - val_loss: 0.3336 - val_accuracy: 0.8738
Epoch 12/40
30/30 [==============================] - 4s 144ms/step - loss: 0.0255 - accuracy: 0.9987 - val_loss: 0.3454 - val_accuracy: 0.8732
Epoch 13/40
30/30 [==============================] - 4s 130ms/step - loss: 0.0193 - accuracy: 0.9991 - val_loss: 0.3570 - val_accuracy: 0.8706
Epoch 14/40
30/30 [==============================] - 4s 123ms/step - loss: 0.0149 - accuracy: 0.9995 - val_loss: 0.3673 - val_accuracy: 0.8701
Epoch 15/40
30/30 [==============================] - 4s 140ms/step - loss: 0.0118 - accuracy: 0.9997 - val_loss: 0.3776 - val_accuracy: 0.8688
Epoch 16/40
30/30 [==============================] - 3s 112ms/step - loss: 0.0095 - accuracy: 0.9999 - val_loss: 0.3882 - val_accuracy: 0.8680
Epoch 17/40
30/30 [==============================] - 4s 124ms/step - loss: 0.0078 - accuracy: 0.9999 - val_loss: 0.3970 - val_accuracy: 0.8678
Epoch 18/40
30/30 [==============================] - 4s 133ms/step - loss: 0.0065 - accuracy: 0.9999 - val_loss: 0.4062 - val_accuracy: 0.8673
Epoch 19/40
30/30 [==============================] - 4s 146ms/step - loss: 0.0056 - accuracy: 0.9999 - val_loss: 0.4146 - val_accuracy: 0.8667
Epoch 20/40
30/30 [==============================] - 4s 118ms/step - loss: 0.0048 - accuracy: 0.9999 - val_loss: 0.4226 - val_accuracy: 0.8656
Epoch 21/40
30/30 [==============================] - 5s 156ms/step - loss: 0.0041 - accuracy: 0.9999 - val_loss: 0.4300 - val_accuracy: 0.8667
Epoch 22/40
30/30 [==============================] - 3s 96ms/step - loss: 0.0035 - accuracy: 1.0000 - val_loss: 0.4372 - val_accuracy: 0.8658
Epoch 23/40
30/30 [==============================] - 3s 106ms/step - loss: 0.0031 - accuracy: 1.0000 - val_loss: 0.4443 - val_accuracy: 0.8656
Epoch 24/40
30/30 [==============================] - 3s 111ms/step - loss: 0.0027 - accuracy: 1.0000 - val_loss: 0.4511 - val_accuracy: 0.8654
Epoch 25/40
30/30 [==============================] - 3s 106ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 0.4568 - val_accuracy: 0.8655
Epoch 26/40
30/30 [==============================] - 4s 122ms/step - loss: 0.0022 - accuracy: 1.0000 - val_loss: 0.4628 - val_accuracy: 0.8651
Epoch 27/40
30/30 [==============================] - 4s 127ms/step - loss: 0.0020 - accuracy: 1.0000 - val_loss: 0.4682 - val_accuracy: 0.8647
Epoch 28/40
30/30 [==============================] - 4s 128ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 0.4738 - val_accuracy: 0.8647
Epoch 29/40
30/30 [==============================] - 2s 80ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 0.4791 - val_accuracy: 0.8652
Epoch 30/40
30/30 [==============================] - 4s 121ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 0.4841 - val_accuracy: 0.8646
Epoch 31/40
30/30 [==============================] - 3s 105ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.4889 - val_accuracy: 0.8653
Epoch 32/40
30/30 [==============================] - 3s 100ms/step - loss: 0.0013 - accuracy: 1.0000 - val_loss: 0.4936 - val_accuracy: 0.8644
Epoch 33/40
30/30 [==============================] - 3s 94ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 0.4982 - val_accuracy: 0.8648
Epoch 34/40
30/30 [==============================] - 3s 84ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 0.5028 - val_accuracy: 0.8641
Epoch 35/40
30/30 [==============================] - 3s 106ms/step - loss: 9.9566e-04 - accuracy: 1.0000 - val_loss: 0.5068 - val_accuracy: 0.8639
Epoch 36/40
30/30 [==============================] - 2s 83ms/step - loss: 9.2888e-04 - accuracy: 1.0000 - val_loss: 0.5111 - val_accuracy: 0.8638
Epoch 37/40
30/30 [==============================] - 3s 88ms/step - loss: 8.6689e-04 - accuracy: 1.0000 - val_loss: 0.5151 - val_accuracy: 0.8636
Epoch 38/40
30/30 [==============================] - 3s 109ms/step - loss: 8.1107e-04 - accuracy: 1.0000 - val_loss: 0.5192 - val_accuracy: 0.8645
Epoch 39/40
30/30 [==============================] - 3s 94ms/step - loss: 7.6065e-04 - accuracy: 1.0000 - val_loss: 0.5230 - val_accuracy: 0.8636
Epoch 40/40
30/30 [==============================] - 2s 84ms/step - loss: 7.1425e-04 - accuracy: 1.0000 - val_loss: 0.5268 - val_accuracy: 0.8635

Evaluate the model

And let's see how the model performs. Two values will be returned: loss (a number representing the error; lower values are better) and accuracy.

results = model.evaluate(test_examples, test_labels)

print(results)
782/782 [==============================] - 3s 3ms/step - loss: 0.5948 - accuracy: 0.8467
[0.5948469042778015, 0.8467199802398682]

This fairly naive approach achieves an accuracy of about 85%. With more advanced approaches, the model should get closer to 95%.
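
As a quick sanity check, you can also run the trained model on a review of your own. This is an illustrative extra, not part of the original notebook, and the review text is made up:

# The hub layer accepts raw strings, so no manual preprocessing is needed.
custom_review = np.array(["This movie was fantastic. The cast was superb."])
logit = model.predict(custom_review)
print(tf.nn.sigmoid(logit).numpy())  # probability that the review is positive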

Create a graph of accuracy and loss over time

model.fit() returns a History object that contains a dictionary with everything that happened during training:

history_dict = history.history
history_dict.keys()
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])

There are four entries: one for each monitored metric during training and validation. You can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:

acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

[Figure: Training and validation loss]

plt.clf()   # clear figure

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()

[Figure: Training and validation accuracy]

In these plots, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.

Notice that the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.

This isn't the case for the validation loss and accuracy, which peak early (around epochs seven to eight in the run above) and then slowly degrade. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations specific to the training data that do not generalize to test data.

For this particular case, you could prevent overfitting by simply stopping the training once the validation metrics stop improving. Later, you'll see how to do this automatically with a callback; a sketch follows below.
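
For instance, tf.keras ships an EarlyStopping callback. A minimal sketch of how it could be wired into the fit call above; the patience value here is an assumption, not the original notebook's configuration:

# Stop once val_loss has not improved for 3 consecutive epochs, and
# roll the model back to the weights from the best epoch seen.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                              patience=3,
                                              restore_best_weights=True)
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    callbacks=[early_stop],
                    verbose=1)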

# MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.