Library imports

import tensorflow as tf
import pandas as pd

Data preparation

  • rows : 506
  • columns : 14
  • Boston housing prices (in units of $1,000) together with features that may influence them
  • CRIM : per-capita crime rate
  • ZN : proportion of residential land zoned for lots over 25,000 sq. ft.
  • INDUS : proportion of land occupied by non-retail business
  • CHAS : Charles River dummy variable (1 if the tract bounds the river, 0 otherwise)
  • NOX : nitric oxide concentration (parts per 10 million)
  • RM : average number of rooms per dwelling
  • AGE : proportion of owner-occupied units built before 1940
  • DIS : index of accessibility to five Boston employment centers
  • RAD : index of accessibility to radial highways
  • TAX : property-tax rate per $10,000
  • PTRATIO : pupil-teacher ratio by town
  • B : 1000(Bk - 0.63)^2, where Bk is the proportion of Black residents by town
  • LSTAT : percentage of lower-status population (%)
  • MEDV : median value of owner-occupied homes (in units of $1,000)

Source: https://ai-times.tistory.com/431 [ai-times]

파일경로 = 'https://raw.githubusercontent.com/blackdew/tensorflow1/master/csv/boston.csv'
보스턴 = pd.read_csv(파일경로)
print(보스턴.columns)
보스턴.head()
Index(['crim', 'zn', 'indus', 'chas', 'nox', 'rm', 'age', 'dis', 'rad', 'tax',
       'ptratio', 'b', 'lstat', 'medv'],
      dtype='object')
crim zn indus chas nox rm age dis rad tax ptratio b lstat medv
0 0.00632 18.0 2.31 0 0.538 6.575 65.2 4.0900 1 296 15.3 396.90 4.98 24.0
1 0.02731 0.0 7.07 0 0.469 6.421 78.9 4.9671 2 242 17.8 396.90 9.14 21.6
2 0.02729 0.0 7.07 0 0.469 7.185 61.1 4.9671 2 242 17.8 392.83 4.03 34.7
3 0.03237 0.0 2.18 0 0.458 6.998 45.8 6.0622 3 222 18.7 394.63 2.94 33.4
4 0.06905 0.0 2.18 0 0.458 7.147 54.2 6.0622 3 222 18.7 396.90 5.33 36.2
보스턴.tail()
crim zn indus chas nox rm age dis rad tax ptratio b lstat medv
501 0.06263 0.0 11.93 0 0.573 6.593 69.1 2.4786 1 273 21.0 391.99 9.67 22.4
502 0.04527 0.0 11.93 0 0.573 6.120 76.7 2.2875 1 273 21.0 396.90 9.08 20.6
503 0.06076 0.0 11.93 0 0.573 6.976 91.0 2.1675 1 273 21.0 396.90 5.64 23.9
504 0.10959 0.0 11.93 0 0.573 6.794 89.3 2.3889 1 273 21.0 393.45 6.48 22.0
505 0.04741 0.0 11.93 0 0.573 6.030 80.8 2.5050 1 273 21.0 396.90 7.88 11.9
보스턴.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 506 entries, 0 to 505
Data columns (total 14 columns):
 #   Column   Non-Null Count  Dtype  
---  ------   --------------  -----  
 0   crim     506 non-null    float64
 1   zn       506 non-null    float64
 2   indus    506 non-null    float64
 3   chas     506 non-null    int64  
 4   nox      506 non-null    float64
 5   rm       506 non-null    float64
 6   age      506 non-null    float64
 7   dis      506 non-null    float64
 8   rad      506 non-null    int64  
 9   tax      506 non-null    int64  
 10  ptratio  506 non-null    float64
 11  b        506 non-null    float64
 12  lstat    506 non-null    float64
 13  medv     506 non-null    float64
dtypes: float64(11), int64(3)
memory usage: 55.5 KB
보스턴.age.mean() # AGE: proportion of owner-occupied units built before 1940
68.57490118577078
보스턴[['crim']].mean()
crim    3.613524
dtype: float64
보스턴[['crim']].max()
crim    88.9762
dtype: float64
보스턴[['crim']].min()
crim    0.00632
dtype: float64
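
The mean, max, and min inspected above one by one can also be read off in a single call; a small convenience using pandas' built-in summary:

보스턴[['crim']].describe() # count, mean, std, min, quartiles, and max in one table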
보스턴.boxplot(column=['crim']) # heavy outliers visible; outlier removal is needed
<matplotlib.axes._subplots.AxesSubplot at 0x7f63e3ad5ad0>

Data preprocessing
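
This step was left empty in the original notebook. As the boxplot comment above notes, crim has heavy outliers; below is a minimal sketch of one common treatment (IQR-based filtering). The name 보스턴_iqr is hypothetical, and the modeling that follows keeps using the unfiltered 보스턴.

# Keep only rows whose 'crim' lies within 1.5 IQR of the quartiles.
q1, q3 = 보스턴['crim'].quantile([0.25, 0.75])
iqr = q3 - q1
mask = 보스턴['crim'].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
보스턴_iqr = 보스턴[mask]  # hypothetical filtered copy; not used below
print(len(보스턴), '->', len(보스턴_iqr))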


Modeling

독립 = 보스턴[['crim', 'zn', 'indus', 'chas', 'nox', 'rm', 'age', 'dis', 'rad', 'tax', 'ptratio', 'b', 'lstat']]
종속 = 보스턴[['medv']]
print(독립.shape, 종속.shape)
(506, 13) (506, 1)
X = tf.keras.layers.Input(shape=[13]) # number of feature columns in 독립
Y = tf.keras.layers.Dense(1)(X) # number of target columns in 종속
model = tf.keras.models.Model(X, Y)
model.compile(loss='mse')
model.fit(독립, 종속, epochs=10000, verbose=0) # long training run with logging suppressed
model.fit(독립, 종속, epochs=10) # a few more epochs with logs, to read off the loss
Epoch 1/10
16/16 [==============================] - 0s 1ms/step - loss: 23.3325
Epoch 2/10
16/16 [==============================] - 0s 1ms/step - loss: 23.6406
Epoch 3/10
16/16 [==============================] - 0s 1ms/step - loss: 24.2017
Epoch 4/10
16/16 [==============================] - 0s 1ms/step - loss: 22.9282
Epoch 5/10
16/16 [==============================] - 0s 1ms/step - loss: 23.6037
Epoch 6/10
16/16 [==============================] - 0s 1ms/step - loss: 23.7327
Epoch 7/10
16/16 [==============================] - 0s 1ms/step - loss: 22.9146
Epoch 8/10
16/16 [==============================] - 0s 1ms/step - loss: 23.8316
Epoch 9/10
16/16 [==============================] - 0s 1ms/step - loss: 23.7118
Epoch 10/10
16/16 [==============================] - 0s 1ms/step - loss: 23.2829
<tensorflow.python.keras.callbacks.History at 0x7f63db666d50>
# Predictions
print(model.predict(독립[0:5]))

# Actual values (check the dependent variable)
print(종속[0:5])
[[29.279457]
 [24.11077 ]
 [30.636097]
 [29.275276]
 [28.885345]]
   medv
0  24.0
1  21.6
2  34.7
3  33.4
4  36.2
예측값 = model.predict(독립)
실제값 = 종속
오차값 = (예측값 - 실제값)**2
오차값.head()
medv
0 28.382128
1 7.256643
2 16.445512
3 18.144550
4 56.123259
오차값.mean() # MSE (mean squared error)
medv    23.07116
dtype: float64
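
Since the model was compiled with loss='mse', the same figure can be read back from Keras directly; a quick cross-check using the trained model above:

print(model.evaluate(독립, 종속, verbose=0)) # ≈ 23.07, matching 오차값.mean()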
model.get_weights()
[array([[-0.09769897],
        [ 0.04758811],
        [-0.01014985],
        [ 2.6305165 ],
        [-5.6234183 ],
        [ 5.012733  ],
        [-0.00839883],
        [-1.1512003 ],
        [ 0.2200571 ],
        [-0.01184279],
        [-0.6021571 ],
        [ 0.01250572],
        [-0.47408003]], dtype=float32), array([13.713469], dtype=float32)]
예측값[0]
array([29.327488], dtype=float32)
독립.iloc[0]
crim         0.00632
zn          18.00000
indus        2.31000
chas         0.00000
nox          0.53800
rm           6.57500
age         65.20000
dis          4.09000
rad          1.00000
tax        296.00000
ptratio     15.30000
b          396.90000
lstat        4.98000
Name: 0, dtype: float64
'''
crim         0.00632 * -0.09769897
zn          18.00000 *  0.04758811
indus        2.31000 * -0.01014985
chas         0.00000 * 2.6305165
nox          0.53800 * -5.6234183
rm           6.57500 * 5.012733
age         65.20000 * -0.00839883
dis          4.09000 * -1.1512003
rad          1.00000 * 0.2200571
tax        296.00000 * -0.01184279
ptratio     15.30000 * -0.6021571
b          396.90000 * 0.01250572
lstat        4.98000 * -0.47408003 + 13.713469

[array([[-0.09769897],
        [ 0.04758811],
        [-0.01014985],
        [ 2.6305165 ],
        [-5.6234183 ],
        [ 5.012733  ],
        [-0.00839883],
        [-1.1512003 ],
        [ 0.2200571 ],
        [-0.01184279],
        [-0.6021571 ],
        [ 0.01250572],
        [-0.47408003]], dtype=float32), array([13.713469], dtype=float32)]
'''
0.00632 * -0.09769897 +\
18.00000 *  0.04758811 +\
2.31000 * -0.01014985 +\
0.00000 * 2.6305165 +\
0.53800 * -5.6234183 +\
6.57500 * 5.012733 +\
65.20000 * -0.00839883 +\
4.09000 * -1.1512003 +\
1.00000 * 0.2200571 +\
296.00000 * -0.01184279 +\
15.30000 * -0.6021571 +\
396.90000 * 0.01250572 +\
4.98000 * -0.47408003 + 13.713469
29.3274882042096
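
The hand-typed sum above can be reproduced with a single matrix product; a minimal sketch using the trained model and 독립 from above:

import numpy as np

w, b = model.get_weights() # w has shape (13, 1), b has shape (1,)
print(독립.iloc[0].values @ w + b) # ≈ [29.327488], the same linear combination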

Creating the CSV file

예측값 = model.predict(독립)
예측값
예측값 = pd.DataFrame(예측값) # wrap the NumPy predictions in a DataFrame
예측값.to_csv('result.csv') # write to CSV (row index included by default)
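
A variant that may be more convenient downstream (the file name result_with_actual.csv and the variable result are just examples): store the predictions next to the actual prices and drop the row index.

result = pd.DataFrame({'medv_pred': model.predict(독립).flatten(),
                       'medv': 종속['medv']})
result.to_csv('result_with_actual.csv', index=False)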

Refinement - 1

X = tf.keras.layers.Input(shape=[13]) # number of feature columns
H = tf.keras.layers.Dense(200, activation='swish')(X) # grow the node count gradually (2 to 200)
H = tf.keras.layers.Dense(5, activation='swish')(H) # comment out at first!
H = tf.keras.layers.Dense(5, activation='swish')(H) # comment out at first!
H = tf.keras.layers.Dense(5, activation='swish')(H) # comment out at first!
H = tf.keras.layers.Dense(5, activation='swish')(H) # comment out at first!
Y = tf.keras.layers.Dense(1)(H) # number of target columns
model = tf.keras.models.Model(X, Y)
model.compile(loss='mse') # MSE(Mean squared error)
model.summary()
Model: "model_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_4 (InputLayer)         [(None, 13)]              0         
_________________________________________________________________
dense_13 (Dense)             (None, 200)               2800      
_________________________________________________________________
dense_14 (Dense)             (None, 5)                 1005      
_________________________________________________________________
dense_15 (Dense)             (None, 5)                 30        
_________________________________________________________________
dense_16 (Dense)             (None, 5)                 30        
_________________________________________________________________
dense_17 (Dense)             (None, 5)                 30        
_________________________________________________________________
dense_18 (Dense)             (None, 1)                 6         
=================================================================
Total params: 3,901
Trainable params: 3,901
Non-trainable params: 0
_________________________________________________________________
from tensorflow.keras.utils import plot_model # not imported earlier in this notebook
plot_model(model)
model.fit(독립, 종속, epochs=1000, verbose=0) # long training run with logging suppressed
model.fit(독립, 종속, epochs=10)
Epoch 1/10
16/16 [==============================] - 0s 1ms/step - loss: 7.7405
Epoch 2/10
16/16 [==============================] - 0s 2ms/step - loss: 7.7418
Epoch 3/10
16/16 [==============================] - 0s 2ms/step - loss: 6.9789
Epoch 4/10
16/16 [==============================] - 0s 2ms/step - loss: 10.1850
Epoch 5/10
16/16 [==============================] - 0s 3ms/step - loss: 8.2013
Epoch 6/10
16/16 [==============================] - 0s 2ms/step - loss: 7.9208
Epoch 7/10
16/16 [==============================] - 0s 2ms/step - loss: 7.9824
Epoch 8/10
16/16 [==============================] - 0s 1ms/step - loss: 8.7595
Epoch 9/10
16/16 [==============================] - 0s 2ms/step - loss: 7.8617
Epoch 10/10
16/16 [==============================] - 0s 1ms/step - loss: 7.4242
<tensorflow.python.keras.callbacks.History at 0x7f63daf7c290>
# Predictions
print(model.predict(독립[0:5]))

# Actual values (check the dependent variable)
print(종속[0:5])
[[37.54553 ]
 [21.55653 ]
 [32.446022]
 [36.555557]
 [33.97718 ]]
   medv
0  24.0
1  21.6
2  34.7
3  33.4
4  36.2

Refinement - 2

import tensorflow as tf
from tensorflow.keras.datasets.boston_housing import load_data
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import plot_model

from sklearn.model_selection import train_test_split

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
import random

random.randint(1, 10) # returns an integer from 1 to 10 (inclusive)
8
import random

random.seed(5)
random.randint(1, 10) # same range, but now reproducible because the seed is fixed
10
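
The same idea carries over to TensorFlow: tf.random.set_seed (used in the next cell) seeds the global generator, so weight initialization and other TF-level randomness become repeatable. A quick check, assuming TF 2.x eager mode:

tf.random.set_seed(5)
print(tf.random.uniform([1]).numpy()) # some value
tf.random.set_seed(5)
print(tf.random.uniform([1]).numpy()) # the same value again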
tf.random.set_seed(111)

(x_train_full, y_train_full), (x_test, y_test) = load_data(path='boston_housing.npz',
                                                           test_split=0.2,
                                                           seed=111)
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/boston_housing.npz
57344/57026 [==============================] - 0s 0us/step
x_train_full # features (the independent variables for house prices)
array([[2.87500e-02, 2.80000e+01, 1.50400e+01, ..., 1.82000e+01,
        3.96330e+02, 6.21000e+00],
       [6.14700e-01, 0.00000e+00, 6.20000e+00, ..., 1.74000e+01,
        3.96900e+02, 7.60000e+00],
       [2.76300e-02, 7.50000e+01, 2.95000e+00, ..., 1.83000e+01,
        3.95630e+02, 4.32000e+00],
       ...,
       [8.15174e+00, 0.00000e+00, 1.81000e+01, ..., 2.02000e+01,
        3.96900e+02, 2.08500e+01],
       [3.11300e-02, 0.00000e+00, 4.39000e+00, ..., 1.88000e+01,
        3.85640e+02, 1.05300e+01],
       [1.10874e+01, 0.00000e+00, 1.81000e+01, ..., 2.02000e+01,
        3.18750e+02, 1.50200e+01]])
y_train_full # targets (the dependent variable: house price)
array([25. , 30.1, 30.8, 20.5, 48.5, 22.1, 35.1, 15.6, 22.7,  9.6, 25.2,
       50. , 24.4, 29. , 13.1, 22.6, 26.2, 21.7, 14.5, 22.5, 11.7, 22. ,
       23.8, 29.6, 11.9, 18.6, 48.3, 24.4, 23.1, 17.9, 31.1,  5. , 14.6,
       24.1, 44.8, 29.6,  7.2, 50. , 23.1, 15.1, 22.8, 27.5, 27.5, 22.9,
       21.7, 31. , 28.4, 14. , 50. , 16.7, 20.3, 36.2, 15.2, 20.6, 21.9,
       21.2, 19.9, 20.3, 10.4, 19.8, 20.6, 26.6, 22. , 20.1, 19.2, 24.6,
       28.2, 36. , 25. , 25. , 28.7, 15.3, 11.3, 14.3, 33.3, 21.4, 23.7,
       22.5, 22.2, 27.9, 13.8, 13.9,  9.7, 36.5, 15.4, 20.4, 50. , 29.1,
       18.3, 14.9, 12.6, 13.9, 24.4, 16.3, 18.5, 10.2, 19.5, 25. , 15.6,
       13.8, 20.9, 22. , 50. , 22.2, 23.4, 17.8, 11.7, 50. , 23.5, 12.3,
       18.4, 13.2, 25. , 20.1, 28.7, 19.9,  7.4, 20. , 29.1, 24. , 44. ,
       12. , 36.4, 16.5, 23. ,  7. , 34.7, 23.9, 23.8, 17.6, 16.6, 35.4,
       50. , 24.8, 22.8, 17. , 48.8, 31.6, 13.8, 22.2, 32.7, 15.6,  7.2,
       21.4, 11.9, 22. , 14.6, 20.2, 21.5, 19.3, 18.4, 24.3, 34.9, 26.5,
       10.5, 20.8, 23.2, 10.4, 37.9,  7.2, 33.2, 24.4, 19.4, 18.3, 18.9,
       21.4, 15. , 23.7, 15.7, 32.5, 17.2, 32.4, 23.8, 20.7, 31.5, 46. ,
       20. , 24.3, 10.2,  8.8, 16.1, 27.9,  8.3, 36.2, 26.6, 15.2, 21. ,
       21.9, 19.9, 13.4, 22. , 18.8, 22. , 19.3, 33.1, 13.6, 30.1, 33.2,
       39.8, 22.7, 19.2, 28.4, 19.7, 23.1, 19.5,  5.6, 21.7, 10.9, 29.8,
       12.7, 19. , 21. , 16.6, 19.4, 21.7, 21.8, 16. , 25. , 27.5, 24.7,
       37.2, 17.2, 13.4, 18.8, 10.9, 13.4, 23. , 18.5, 20. , 28.1, 10.8,
       24.7, 23.7, 20.6, 34.9, 20.6, 37.3, 22.9, 22.8, 14.9, 20.1, 35.4,
       23.4, 18.9, 50. , 22.2, 37. , 33.1, 42.8, 15. , 23.1, 19.4, 45.4,
       12.7, 29. , 21.1, 18.4,  8.5, 26.6, 19.4, 50. , 20.6, 13.1, 19.1,
       50. , 24.7, 18.5, 26.4, 19.5, 21.2, 23. , 18.2,  7.5, 17.8, 21.2,
       22.8, 17.7, 24.5, 23.6, 23.9, 14.2, 28.6, 15.6, 17.5, 31.6, 24.8,
       13.8, 13.4, 22.9, 17.8, 19.6, 19.1, 16.2, 20.6, 16.8, 50. , 18.9,
       31.5, 30.5, 23.2, 24.5, 23.1, 33.4, 17.1,  7. , 20.1, 50. , 23.2,
       11. , 18. , 23.3, 22.6, 20.8, 12.7, 29.4, 22.6, 11.8, 22.6, 23.1,
       21.6, 19.8, 34.6, 21.2, 20.3, 19.9, 18.2, 13.1, 24.5, 34.9, 24.8,
        8.4, 17.4, 23.3, 30.3, 19.3, 20. , 50. , 15.4, 29.9, 28.5, 23.3,
       21.7,  8.7, 13.3, 20. , 19.4, 11.8, 18.2, 14.3, 28.7,  8.1, 21.5,
       15.2, 12.1, 20.3, 14.4, 18.5,  5. , 41.7, 16.2, 14.5, 20.2, 19.4,
       23.9, 24.1, 15. , 23.2, 21.8, 17.2, 18.7, 14.8, 43.8, 22.4, 28. ,
       16.1, 16.5, 24.2, 20.5, 22.6, 12.8, 13.1, 18.7,  8.5, 30.1, 26.4,
       14.1,  8.4, 20.1, 21.7, 43.1, 16.8, 22.3, 17.5, 10.5,  6.3, 21.7,
       13.5, 36.1, 14.1, 19.8, 18.6, 11.5, 17.5, 16.7])
x_train_full.shape
(404, 13)
y_train_full.shape
(404,)
# Columns: 'crim', 'zn', 'indus', 'chas', 'nox', 'rm', 'age', 'dis', 'rad', 'tax', 'ptratio', 'b', 'lstat', 'medv'
# x_train_full : 'crim' (crime rate), 'zn', 'indus', 'chas', 'nox', 'rm', 'age', 'dis', 'rad', 'tax', 'ptratio', 'b', 'lstat'
# y_train_full : 'medv' (house price, in units of $1,000)
(x_test.shape, y_test.shape)
((102, 13), (102,))
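
404 training rows plus 102 test rows add back up to the 506 rows of the full dataset, consistent with test_split=0.2 (506 × 0.2 ≈ 102).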

Data preprocessing

arr = [[10, 20, 30],
       [3, 50, 5],
       [70, 80, 90],
       [100, 110, 120]]

print("Two Dimension array :", arr)
print("Mean with no axis :", np.mean(arr))
print("Mean with axis along column :", np.mean(arr, axis=0)) # row 평균을 구함, 값은 col으로 구해짐
print("Mean with axis aong row :", np.mean(arr, axis=1)) # col 평균을 구함, 값은 row으로 구해짐
Two-dimensional array : [[10, 20, 30], [3, 50, 5], [70, 80, 90], [100, 110, 120]]
Mean with no axis : 57.333333333333336
Mean with axis along column : [45.75 65.   61.25]
Mean with axis along row : [ 20.          19.33333333  80.         110.        ]
np.sum(arr) / 12 # 12 = 4 rows × 3 columns; matches np.mean(arr)
57.333333333333336
mean = np.mean(x_train_full, axis=0)
std = np.std(x_train_full, axis=0)

x_train_preprocessed = (x_train_full - mean) / std # standardize so every feature is on a comparable scale
x_test = (x_test - mean) / std # note: the test set is scaled with the *training* mean/std

x_train, x_val, y_train, y_val = train_test_split(x_train_preprocessed, y_train_full,
                                                  test_size=0.3, random_state=111)
x_train.shape, x_val.shape, y_train.shape, y_val.shape
((282, 13), (122, 13), (282,), (122,))
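
A quick sanity check on the scaling, using the arrays defined above: after standardization every training feature should have mean ≈ 0 and standard deviation ≈ 1.

print(np.round(np.mean(x_train_preprocessed, axis=0), 6)) # ~0 for all 13 features
print(np.round(np.std(x_train_preprocessed, axis=0), 6))  # ~1 for all 13 features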

Model construction

  • When the training data is very small, making the model deeper raises the chance of overfitting; one common mitigation, early stopping, is sketched after the training run below.
'''
# Previous hidden-layer model, kept for reference
X = tf.keras.layers.Input(shape=[13]) # number of feature columns
H = tf.keras.layers.Dense(200, activation='swish')(X) # grow the node count gradually (2 to 200)
H = tf.keras.layers.Dense(5, activation='swish')(H) # comment out at first!
H = tf.keras.layers.Dense(5, activation='swish')(H) # comment out at first!
H = tf.keras.layers.Dense(5, activation='swish')(H) # comment out at first!
H = tf.keras.layers.Dense(5, activation='swish')(H) # comment out at first!
Y = tf.keras.layers.Dense(1)(H) # number of target columns
model = tf.keras.models.Model(X, Y)
model.compile(loss='mse') # MSE (mean squared error)
'''

model = Sequential([Dense(100, activation='relu', input_shape=(13, ), name='dense1'),
                    Dense(64, activation='relu', name='dense2'),
                    Dense(32, activation='relu', name='dense3'),
                    Dense(1, name='output')])
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense1 (Dense)               (None, 100)               1400      
_________________________________________________________________
dense2 (Dense)               (None, 64)                6464      
_________________________________________________________________
dense3 (Dense)               (None, 32)                2080      
_________________________________________________________________
output (Dense)               (None, 1)                 33        
=================================================================
Total params: 9,977
Trainable params: 9,977
Non-trainable params: 0
_________________________________________________________________
plot_model(model)
model.compile(loss='mse', optimizer=Adam(learning_rate=1e-2), metrics=['mae'])
# MAE (mean absolute error) averages the absolute errors instead of squaring them as MSE does
# MSE vs. MAE: https://wooono.tistory.com/99
# Adam adapts the size of each update per parameter
# A more detailed write-up on Adam: https://hiddenbeginner.github.io/deeplearning/2019/09/22/optimization_algorithms_in_deep_learning.html
# "When in doubt, just use Adam": https://sacko.tistory.com/42

## Detailed notes on each compile option:
## https://wikidocs.net/36033
## https://onesixx.com/optimizer-loss-metrics/
history = model.fit(x_train, y_train, epochs=300, validation_data=(x_val, y_val))
Epoch 1/300
9/9 [==============================] - 1s 21ms/step - loss: 284.5112 - mae: 14.0540 - val_loss: 101.7117 - val_mae: 8.4467
Epoch 2/300
9/9 [==============================] - 0s 5ms/step - loss: 54.0324 - mae: 5.5914 - val_loss: 32.2761 - val_mae: 4.2198
Epoch 3/300
9/9 [==============================] - 0s 4ms/step - loss: 27.8157 - mae: 3.7876 - val_loss: 20.7668 - val_mae: 3.3478
Epoch 4/300
9/9 [==============================] - 0s 4ms/step - loss: 19.2665 - mae: 3.1608 - val_loss: 14.5887 - val_mae: 3.0283
Epoch 5/300
9/9 [==============================] - 0s 4ms/step - loss: 14.9836 - mae: 2.7218 - val_loss: 14.2842 - val_mae: 2.8176
Epoch 6/300
9/9 [==============================] - 0s 4ms/step - loss: 13.3089 - mae: 2.5932 - val_loss: 11.3557 - val_mae: 2.5460
Epoch 7/300
9/9 [==============================] - 0s 4ms/step - loss: 12.1528 - mae: 2.4447 - val_loss: 10.4115 - val_mae: 2.4610
Epoch 8/300
9/9 [==============================] - 0s 4ms/step - loss: 12.1903 - mae: 2.5271 - val_loss: 11.3119 - val_mae: 2.4856
Epoch 9/300
9/9 [==============================] - 0s 4ms/step - loss: 11.3173 - mae: 2.4005 - val_loss: 9.7402 - val_mae: 2.4195
Epoch 10/300
9/9 [==============================] - 0s 5ms/step - loss: 10.5424 - mae: 2.3713 - val_loss: 9.2937 - val_mae: 2.3253
Epoch 11/300
9/9 [==============================] - 0s 4ms/step - loss: 9.8692 - mae: 2.2519 - val_loss: 12.5149 - val_mae: 2.5708
Epoch 12/300
9/9 [==============================] - 0s 7ms/step - loss: 10.3525 - mae: 2.3773 - val_loss: 9.5581 - val_mae: 2.4654
Epoch 13/300
9/9 [==============================] - 0s 4ms/step - loss: 9.0872 - mae: 2.1746 - val_loss: 8.9033 - val_mae: 2.3131
Epoch 14/300
9/9 [==============================] - 0s 5ms/step - loss: 8.8682 - mae: 2.1138 - val_loss: 9.3241 - val_mae: 2.3729
Epoch 15/300
9/9 [==============================] - 0s 5ms/step - loss: 8.5467 - mae: 2.0509 - val_loss: 7.9374 - val_mae: 2.2252
Epoch 16/300
9/9 [==============================] - 0s 4ms/step - loss: 7.8166 - mae: 2.0610 - val_loss: 8.0780 - val_mae: 2.1683
Epoch 17/300
9/9 [==============================] - 0s 4ms/step - loss: 8.1478 - mae: 2.0726 - val_loss: 11.8896 - val_mae: 2.7745
Epoch 18/300
9/9 [==============================] - 0s 5ms/step - loss: 9.4028 - mae: 2.3156 - val_loss: 9.1295 - val_mae: 2.3945
Epoch 19/300
9/9 [==============================] - 0s 5ms/step - loss: 8.3677 - mae: 2.0804 - val_loss: 8.5966 - val_mae: 2.2688
Epoch 20/300
9/9 [==============================] - 0s 5ms/step - loss: 7.5441 - mae: 1.9226 - val_loss: 10.3885 - val_mae: 2.4832
Epoch 21/300
9/9 [==============================] - 0s 5ms/step - loss: 7.4639 - mae: 2.0313 - val_loss: 8.1804 - val_mae: 2.1646
Epoch 22/300
9/9 [==============================] - 0s 5ms/step - loss: 6.8445 - mae: 1.9647 - val_loss: 9.1997 - val_mae: 2.3798
Epoch 23/300
9/9 [==============================] - 0s 5ms/step - loss: 7.1956 - mae: 2.0406 - val_loss: 9.5129 - val_mae: 2.3559
Epoch 24/300
9/9 [==============================] - 0s 4ms/step - loss: 6.0134 - mae: 1.8190 - val_loss: 7.7562 - val_mae: 2.1807
Epoch 25/300
9/9 [==============================] - 0s 4ms/step - loss: 5.9984 - mae: 1.8414 - val_loss: 7.3719 - val_mae: 2.0447
Epoch 26/300
9/9 [==============================] - 0s 7ms/step - loss: 5.9729 - mae: 1.7940 - val_loss: 8.6394 - val_mae: 2.2846
Epoch 27/300
9/9 [==============================] - 0s 4ms/step - loss: 5.9945 - mae: 1.9084 - val_loss: 8.5585 - val_mae: 2.2463
Epoch 28/300
9/9 [==============================] - 0s 6ms/step - loss: 5.3401 - mae: 1.6995 - val_loss: 8.0528 - val_mae: 2.1966
Epoch 29/300
9/9 [==============================] - 0s 5ms/step - loss: 5.1501 - mae: 1.6918 - val_loss: 7.6480 - val_mae: 2.1288
Epoch 30/300
9/9 [==============================] - 0s 4ms/step - loss: 4.7328 - mae: 1.6866 - val_loss: 9.0202 - val_mae: 2.2249
Epoch 31/300
9/9 [==============================] - 0s 5ms/step - loss: 5.6758 - mae: 1.7797 - val_loss: 9.6781 - val_mae: 2.3580
Epoch 32/300
9/9 [==============================] - 0s 5ms/step - loss: 4.5855 - mae: 1.6315 - val_loss: 11.5347 - val_mae: 2.4384
Epoch 33/300
9/9 [==============================] - 0s 5ms/step - loss: 5.0049 - mae: 1.6514 - val_loss: 9.5657 - val_mae: 2.3244
Epoch 34/300
9/9 [==============================] - 0s 4ms/step - loss: 4.7753 - mae: 1.6492 - val_loss: 7.9665 - val_mae: 2.1009
Epoch 35/300
9/9 [==============================] - 0s 5ms/step - loss: 4.4168 - mae: 1.5755 - val_loss: 9.2074 - val_mae: 2.3016
Epoch 36/300
9/9 [==============================] - 0s 4ms/step - loss: 3.9583 - mae: 1.4918 - val_loss: 8.7889 - val_mae: 2.2092
Epoch 37/300
9/9 [==============================] - 0s 6ms/step - loss: 3.7794 - mae: 1.4789 - val_loss: 8.0425 - val_mae: 2.1058
Epoch 38/300
9/9 [==============================] - 0s 4ms/step - loss: 3.8976 - mae: 1.5078 - val_loss: 12.2869 - val_mae: 2.6816
Epoch 39/300
9/9 [==============================] - 0s 4ms/step - loss: 4.3649 - mae: 1.6286 - val_loss: 9.5141 - val_mae: 2.3699
Epoch 40/300
9/9 [==============================] - 0s 4ms/step - loss: 4.0611 - mae: 1.4942 - val_loss: 9.5984 - val_mae: 2.2645
Epoch 41/300
9/9 [==============================] - 0s 4ms/step - loss: 4.4848 - mae: 1.6271 - val_loss: 11.7578 - val_mae: 2.4464
Epoch 42/300
9/9 [==============================] - 0s 7ms/step - loss: 5.7391 - mae: 1.8263 - val_loss: 10.9567 - val_mae: 2.5576
Epoch 43/300
9/9 [==============================] - 0s 4ms/step - loss: 4.8053 - mae: 1.7116 - val_loss: 8.4805 - val_mae: 2.1905
Epoch 44/300
9/9 [==============================] - 0s 4ms/step - loss: 3.7710 - mae: 1.4910 - val_loss: 9.2685 - val_mae: 2.2773
Epoch 45/300
9/9 [==============================] - 0s 4ms/step - loss: 3.0423 - mae: 1.3324 - val_loss: 9.8588 - val_mae: 2.3870
Epoch 46/300
9/9 [==============================] - 0s 4ms/step - loss: 3.1118 - mae: 1.3160 - val_loss: 9.6013 - val_mae: 2.2718
Epoch 47/300
9/9 [==============================] - 0s 4ms/step - loss: 3.3541 - mae: 1.3715 - val_loss: 9.9317 - val_mae: 2.3633
Epoch 48/300
9/9 [==============================] - 0s 4ms/step - loss: 3.5051 - mae: 1.4657 - val_loss: 10.4556 - val_mae: 2.3612
Epoch 49/300
9/9 [==============================] - 0s 5ms/step - loss: 2.8366 - mae: 1.2479 - val_loss: 9.1685 - val_mae: 2.2768
Epoch 50/300
9/9 [==============================] - 0s 5ms/step - loss: 2.9141 - mae: 1.2856 - val_loss: 9.2170 - val_mae: 2.3147
Epoch 51/300
9/9 [==============================] - 0s 4ms/step - loss: 2.9130 - mae: 1.3059 - val_loss: 9.2830 - val_mae: 2.4074
Epoch 52/300
9/9 [==============================] - 0s 4ms/step - loss: 3.6576 - mae: 1.4440 - val_loss: 8.2930 - val_mae: 2.3096
Epoch 53/300
9/9 [==============================] - 0s 8ms/step - loss: 3.7554 - mae: 1.4417 - val_loss: 10.3064 - val_mae: 2.4746
Epoch 54/300
9/9 [==============================] - 0s 4ms/step - loss: 3.9983 - mae: 1.5461 - val_loss: 9.0305 - val_mae: 2.1781
Epoch 55/300
9/9 [==============================] - 0s 4ms/step - loss: 2.4590 - mae: 1.2253 - val_loss: 11.7878 - val_mae: 2.4756
Epoch 56/300
9/9 [==============================] - 0s 4ms/step - loss: 2.4868 - mae: 1.1852 - val_loss: 9.3828 - val_mae: 2.2327
Epoch 57/300
9/9 [==============================] - 0s 5ms/step - loss: 2.8161 - mae: 1.3143 - val_loss: 11.9237 - val_mae: 2.4437
Epoch 58/300
9/9 [==============================] - 0s 4ms/step - loss: 2.8071 - mae: 1.2740 - val_loss: 9.1914 - val_mae: 2.2249
Epoch 59/300
9/9 [==============================] - 0s 4ms/step - loss: 2.2587 - mae: 1.1530 - val_loss: 9.6559 - val_mae: 2.4365
Epoch 60/300
9/9 [==============================] - 0s 4ms/step - loss: 3.4021 - mae: 1.4175 - val_loss: 9.3767 - val_mae: 2.4723
Epoch 61/300
9/9 [==============================] - 0s 6ms/step - loss: 3.5084 - mae: 1.4200 - val_loss: 11.4666 - val_mae: 2.4462
Epoch 62/300
9/9 [==============================] - 0s 4ms/step - loss: 3.5914 - mae: 1.4945 - val_loss: 14.0612 - val_mae: 2.7399
Epoch 63/300
9/9 [==============================] - 0s 4ms/step - loss: 4.1327 - mae: 1.5373 - val_loss: 14.6667 - val_mae: 2.6739
Epoch 64/300
9/9 [==============================] - 0s 5ms/step - loss: 3.3784 - mae: 1.3880 - val_loss: 12.8577 - val_mae: 2.4854
Epoch 65/300
9/9 [==============================] - 0s 5ms/step - loss: 3.1059 - mae: 1.3413 - val_loss: 10.2083 - val_mae: 2.2411
Epoch 66/300
9/9 [==============================] - 0s 5ms/step - loss: 4.4544 - mae: 1.5783 - val_loss: 13.1504 - val_mae: 2.7282
Epoch 67/300
9/9 [==============================] - 0s 4ms/step - loss: 4.4461 - mae: 1.5080 - val_loss: 9.3300 - val_mae: 2.3887
Epoch 68/300
9/9 [==============================] - 0s 4ms/step - loss: 3.4763 - mae: 1.3835 - val_loss: 10.0616 - val_mae: 2.3925
Epoch 69/300
9/9 [==============================] - 0s 4ms/step - loss: 3.3870 - mae: 1.3969 - val_loss: 13.5585 - val_mae: 2.5634
Epoch 70/300
9/9 [==============================] - 0s 4ms/step - loss: 2.6638 - mae: 1.2294 - val_loss: 12.2245 - val_mae: 2.5543
Epoch 71/300
9/9 [==============================] - 0s 4ms/step - loss: 2.8323 - mae: 1.2967 - val_loss: 10.0222 - val_mae: 2.3171
Epoch 72/300
9/9 [==============================] - 0s 4ms/step - loss: 3.7947 - mae: 1.4285 - val_loss: 11.4458 - val_mae: 2.7129
Epoch 73/300
9/9 [==============================] - 0s 4ms/step - loss: 3.4621 - mae: 1.3901 - val_loss: 8.4725 - val_mae: 2.2448
Epoch 74/300
9/9 [==============================] - 0s 4ms/step - loss: 2.6856 - mae: 1.2645 - val_loss: 11.4067 - val_mae: 2.3572
Epoch 75/300
9/9 [==============================] - 0s 4ms/step - loss: 2.1707 - mae: 1.1376 - val_loss: 10.9015 - val_mae: 2.4353
Epoch 76/300
9/9 [==============================] - 0s 5ms/step - loss: 2.1772 - mae: 1.1470 - val_loss: 9.4474 - val_mae: 2.3484
Epoch 77/300
9/9 [==============================] - 0s 5ms/step - loss: 2.1045 - mae: 1.1069 - val_loss: 11.0634 - val_mae: 2.6222
Epoch 78/300
9/9 [==============================] - 0s 4ms/step - loss: 2.5436 - mae: 1.2070 - val_loss: 11.0566 - val_mae: 2.3459
Epoch 79/300
9/9 [==============================] - 0s 5ms/step - loss: 2.5658 - mae: 1.2122 - val_loss: 8.7894 - val_mae: 2.3246
Epoch 80/300
9/9 [==============================] - 0s 4ms/step - loss: 3.2659 - mae: 1.3651 - val_loss: 15.4604 - val_mae: 2.9006
Epoch 81/300
9/9 [==============================] - 0s 4ms/step - loss: 5.8932 - mae: 1.8734 - val_loss: 8.7928 - val_mae: 2.1848
Epoch 82/300
9/9 [==============================] - 0s 4ms/step - loss: 5.0979 - mae: 1.6791 - val_loss: 17.2314 - val_mae: 3.0984
Epoch 83/300
9/9 [==============================] - 0s 4ms/step - loss: 5.3734 - mae: 1.7512 - val_loss: 10.1699 - val_mae: 2.4246
Epoch 84/300
9/9 [==============================] - 0s 4ms/step - loss: 3.1025 - mae: 1.3791 - val_loss: 8.0583 - val_mae: 2.2012
Epoch 85/300
9/9 [==============================] - 0s 5ms/step - loss: 3.0591 - mae: 1.3608 - val_loss: 10.0700 - val_mae: 2.4356
Epoch 86/300
9/9 [==============================] - 0s 4ms/step - loss: 3.1022 - mae: 1.3196 - val_loss: 10.3131 - val_mae: 2.3257
Epoch 87/300
9/9 [==============================] - 0s 5ms/step - loss: 2.7406 - mae: 1.2745 - val_loss: 12.9390 - val_mae: 2.6948
Epoch 88/300
9/9 [==============================] - 0s 4ms/step - loss: 2.4159 - mae: 1.2248 - val_loss: 9.9760 - val_mae: 2.3511
Epoch 89/300
9/9 [==============================] - 0s 4ms/step - loss: 2.2531 - mae: 1.1365 - val_loss: 11.8934 - val_mae: 2.4585
Epoch 90/300
9/9 [==============================] - 0s 4ms/step - loss: 3.0078 - mae: 1.2853 - val_loss: 11.1644 - val_mae: 2.5048
Epoch 91/300
9/9 [==============================] - 0s 4ms/step - loss: 3.5025 - mae: 1.3570 - val_loss: 9.2513 - val_mae: 2.3425
Epoch 92/300
9/9 [==============================] - 0s 4ms/step - loss: 2.7118 - mae: 1.2411 - val_loss: 10.6357 - val_mae: 2.4711
Epoch 93/300
9/9 [==============================] - 0s 6ms/step - loss: 2.1579 - mae: 1.0975 - val_loss: 11.0048 - val_mae: 2.3693
Epoch 94/300
9/9 [==============================] - 0s 4ms/step - loss: 1.8383 - mae: 1.0101 - val_loss: 8.7624 - val_mae: 2.2079
Epoch 95/300
9/9 [==============================] - 0s 4ms/step - loss: 1.9034 - mae: 1.0183 - val_loss: 9.0126 - val_mae: 2.2160
Epoch 96/300
9/9 [==============================] - 0s 4ms/step - loss: 1.8063 - mae: 1.0272 - val_loss: 10.7558 - val_mae: 2.3465
Epoch 97/300
9/9 [==============================] - 0s 4ms/step - loss: 2.0504 - mae: 1.0847 - val_loss: 8.7077 - val_mae: 2.3058
Epoch 98/300
9/9 [==============================] - 0s 4ms/step - loss: 2.1829 - mae: 1.1103 - val_loss: 9.9883 - val_mae: 2.3958
Epoch 99/300
9/9 [==============================] - 0s 5ms/step - loss: 2.7802 - mae: 1.2599 - val_loss: 8.7747 - val_mae: 2.2187
Epoch 100/300
9/9 [==============================] - 0s 5ms/step - loss: 1.9790 - mae: 1.0758 - val_loss: 12.0451 - val_mae: 2.6189
Epoch 101/300
9/9 [==============================] - 0s 4ms/step - loss: 2.4906 - mae: 1.1831 - val_loss: 10.1801 - val_mae: 2.3198
Epoch 102/300
9/9 [==============================] - 0s 5ms/step - loss: 2.3685 - mae: 1.1637 - val_loss: 8.9193 - val_mae: 2.3021
Epoch 103/300
9/9 [==============================] - 0s 7ms/step - loss: 2.6865 - mae: 1.2885 - val_loss: 10.8707 - val_mae: 2.3570
Epoch 104/300
9/9 [==============================] - 0s 4ms/step - loss: 2.3778 - mae: 1.1594 - val_loss: 10.3086 - val_mae: 2.3925
Epoch 105/300
9/9 [==============================] - 0s 5ms/step - loss: 1.9299 - mae: 1.0588 - val_loss: 10.1214 - val_mae: 2.2951
Epoch 106/300
9/9 [==============================] - 0s 5ms/step - loss: 1.9407 - mae: 1.0413 - val_loss: 10.2044 - val_mae: 2.3418
Epoch 107/300
9/9 [==============================] - 0s 5ms/step - loss: 2.2013 - mae: 1.1615 - val_loss: 9.5291 - val_mae: 2.2889
Epoch 108/300
9/9 [==============================] - 0s 5ms/step - loss: 1.5356 - mae: 0.9307 - val_loss: 9.9544 - val_mae: 2.3796
Epoch 109/300
9/9 [==============================] - 0s 5ms/step - loss: 1.5832 - mae: 0.9541 - val_loss: 8.9253 - val_mae: 2.2070
Epoch 110/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4403 - mae: 0.9142 - val_loss: 9.4118 - val_mae: 2.2036
Epoch 111/300
9/9 [==============================] - 0s 4ms/step - loss: 1.2912 - mae: 0.8380 - val_loss: 10.5468 - val_mae: 2.3724
Epoch 112/300
9/9 [==============================] - 0s 4ms/step - loss: 1.5027 - mae: 0.9226 - val_loss: 9.0908 - val_mae: 2.2205
Epoch 113/300
9/9 [==============================] - 0s 5ms/step - loss: 1.2615 - mae: 0.8458 - val_loss: 9.8323 - val_mae: 2.2628
Epoch 114/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3027 - mae: 0.8540 - val_loss: 10.3418 - val_mae: 2.2923
Epoch 115/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3919 - mae: 0.8868 - val_loss: 8.4377 - val_mae: 2.1720
Epoch 116/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4686 - mae: 0.9039 - val_loss: 10.3558 - val_mae: 2.3306
Epoch 117/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3778 - mae: 0.8652 - val_loss: 10.9006 - val_mae: 2.3601
Epoch 118/300
9/9 [==============================] - 0s 5ms/step - loss: 1.2900 - mae: 0.8477 - val_loss: 9.7185 - val_mae: 2.3396
Epoch 119/300
9/9 [==============================] - 0s 4ms/step - loss: 1.5093 - mae: 0.9236 - val_loss: 9.7441 - val_mae: 2.3777
Epoch 120/300
9/9 [==============================] - 0s 5ms/step - loss: 2.1726 - mae: 1.1108 - val_loss: 10.0502 - val_mae: 2.4928
Epoch 121/300
9/9 [==============================] - 0s 5ms/step - loss: 2.2530 - mae: 1.1410 - val_loss: 8.5525 - val_mae: 2.3316
Epoch 122/300
9/9 [==============================] - 0s 4ms/step - loss: 2.3638 - mae: 1.1793 - val_loss: 9.6537 - val_mae: 2.4083
Epoch 123/300
9/9 [==============================] - 0s 5ms/step - loss: 3.1176 - mae: 1.3450 - val_loss: 10.4422 - val_mae: 2.5739
Epoch 124/300
9/9 [==============================] - 0s 5ms/step - loss: 2.4086 - mae: 1.2037 - val_loss: 10.6141 - val_mae: 2.3526
Epoch 125/300
9/9 [==============================] - 0s 4ms/step - loss: 2.1229 - mae: 1.1169 - val_loss: 11.8525 - val_mae: 2.3527
Epoch 126/300
9/9 [==============================] - 0s 4ms/step - loss: 2.2442 - mae: 1.1197 - val_loss: 8.9360 - val_mae: 2.4037
Epoch 127/300
9/9 [==============================] - 0s 4ms/step - loss: 2.5998 - mae: 1.2697 - val_loss: 9.5898 - val_mae: 2.2810
Epoch 128/300
9/9 [==============================] - 0s 4ms/step - loss: 2.2440 - mae: 1.1592 - val_loss: 10.1961 - val_mae: 2.3951
Epoch 129/300
9/9 [==============================] - 0s 4ms/step - loss: 1.9587 - mae: 1.0319 - val_loss: 8.6976 - val_mae: 2.3077
Epoch 130/300
9/9 [==============================] - 0s 5ms/step - loss: 2.2268 - mae: 1.1480 - val_loss: 9.9518 - val_mae: 2.4276
Epoch 131/300
9/9 [==============================] - 0s 5ms/step - loss: 3.0697 - mae: 1.3161 - val_loss: 9.0599 - val_mae: 2.1674
Epoch 132/300
9/9 [==============================] - 0s 4ms/step - loss: 2.1114 - mae: 1.0910 - val_loss: 12.9671 - val_mae: 2.6370
Epoch 133/300
9/9 [==============================] - 0s 4ms/step - loss: 2.5425 - mae: 1.2050 - val_loss: 11.9471 - val_mae: 2.5541
Epoch 134/300
9/9 [==============================] - 0s 4ms/step - loss: 2.7657 - mae: 1.2797 - val_loss: 9.1584 - val_mae: 2.3534
Epoch 135/300
9/9 [==============================] - 0s 4ms/step - loss: 1.7294 - mae: 0.9989 - val_loss: 9.0478 - val_mae: 2.2782
Epoch 136/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3861 - mae: 0.8861 - val_loss: 10.3818 - val_mae: 2.3478
Epoch 137/300
9/9 [==============================] - 0s 4ms/step - loss: 1.2833 - mae: 0.8509 - val_loss: 10.2006 - val_mae: 2.3078
Epoch 138/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4685 - mae: 0.8910 - val_loss: 10.6764 - val_mae: 2.4353
Epoch 139/300
9/9 [==============================] - 0s 5ms/step - loss: 1.7648 - mae: 1.0516 - val_loss: 9.9569 - val_mae: 2.2854
Epoch 140/300
9/9 [==============================] - 0s 5ms/step - loss: 1.5818 - mae: 0.9495 - val_loss: 10.6176 - val_mae: 2.2908
Epoch 141/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3158 - mae: 0.8663 - val_loss: 9.2190 - val_mae: 2.4115
Epoch 142/300
9/9 [==============================] - 0s 4ms/step - loss: 1.8697 - mae: 1.0759 - val_loss: 8.4910 - val_mae: 2.2363
Epoch 143/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3123 - mae: 0.8546 - val_loss: 9.5580 - val_mae: 2.2758
Epoch 144/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4274 - mae: 0.9029 - val_loss: 9.6794 - val_mae: 2.2475
Epoch 145/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4860 - mae: 0.9200 - val_loss: 9.7681 - val_mae: 2.2583
Epoch 146/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4636 - mae: 0.8893 - val_loss: 10.3608 - val_mae: 2.3677
Epoch 147/300
9/9 [==============================] - 0s 4ms/step - loss: 1.2619 - mae: 0.8384 - val_loss: 10.1053 - val_mae: 2.3468
Epoch 148/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3166 - mae: 0.8477 - val_loss: 9.5112 - val_mae: 2.3209
Epoch 149/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3242 - mae: 0.8631 - val_loss: 10.7832 - val_mae: 2.3820
Epoch 150/300
9/9 [==============================] - 0s 4ms/step - loss: 1.5037 - mae: 0.9514 - val_loss: 8.7991 - val_mae: 2.3230
Epoch 151/300
9/9 [==============================] - 0s 6ms/step - loss: 1.4840 - mae: 0.9361 - val_loss: 8.3615 - val_mae: 2.2355
Epoch 152/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4309 - mae: 0.9314 - val_loss: 8.5271 - val_mae: 2.1781
Epoch 153/300
9/9 [==============================] - 0s 5ms/step - loss: 1.2496 - mae: 0.8383 - val_loss: 9.4026 - val_mae: 2.3393
Epoch 154/300
9/9 [==============================] - 0s 4ms/step - loss: 1.5697 - mae: 0.8515 - val_loss: 9.9953 - val_mae: 2.3969
Epoch 155/300
9/9 [==============================] - 0s 7ms/step - loss: 1.5768 - mae: 0.9144 - val_loss: 8.7126 - val_mae: 2.2218
Epoch 156/300
9/9 [==============================] - 0s 6ms/step - loss: 1.4210 - mae: 0.8514 - val_loss: 9.0210 - val_mae: 2.2747
Epoch 157/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4513 - mae: 0.9029 - val_loss: 10.2513 - val_mae: 2.4694
Epoch 158/300
9/9 [==============================] - 0s 4ms/step - loss: 2.1007 - mae: 1.0810 - val_loss: 9.1315 - val_mae: 2.3760
Epoch 159/300
9/9 [==============================] - 0s 5ms/step - loss: 1.7019 - mae: 0.9969 - val_loss: 10.4056 - val_mae: 2.4281
Epoch 160/300
9/9 [==============================] - 0s 5ms/step - loss: 2.0384 - mae: 1.0796 - val_loss: 9.1242 - val_mae: 2.3538
Epoch 161/300
9/9 [==============================] - 0s 4ms/step - loss: 2.5954 - mae: 1.1834 - val_loss: 10.7179 - val_mae: 2.4016
Epoch 162/300
9/9 [==============================] - 0s 4ms/step - loss: 2.3765 - mae: 1.0934 - val_loss: 10.7456 - val_mae: 2.2903
Epoch 163/300
9/9 [==============================] - 0s 4ms/step - loss: 2.2380 - mae: 1.1480 - val_loss: 10.0621 - val_mae: 2.3263
Epoch 164/300
9/9 [==============================] - 0s 6ms/step - loss: 1.7771 - mae: 1.0122 - val_loss: 11.2053 - val_mae: 2.5333
Epoch 165/300
9/9 [==============================] - 0s 5ms/step - loss: 2.2161 - mae: 1.1461 - val_loss: 9.6989 - val_mae: 2.2865
Epoch 166/300
9/9 [==============================] - 0s 5ms/step - loss: 1.8663 - mae: 1.1001 - val_loss: 8.3490 - val_mae: 2.2128
Epoch 167/300
9/9 [==============================] - 0s 5ms/step - loss: 1.5458 - mae: 0.9676 - val_loss: 8.9728 - val_mae: 2.2055
Epoch 168/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0666 - mae: 0.7762 - val_loss: 8.0403 - val_mae: 2.2507
Epoch 169/300
9/9 [==============================] - 0s 5ms/step - loss: 0.9485 - mae: 0.7186 - val_loss: 8.3311 - val_mae: 2.1313
Epoch 170/300
9/9 [==============================] - 0s 5ms/step - loss: 0.8993 - mae: 0.7085 - val_loss: 9.1098 - val_mae: 2.2427
Epoch 171/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0048 - mae: 0.7440 - val_loss: 8.6925 - val_mae: 2.2814
Epoch 172/300
9/9 [==============================] - 0s 5ms/step - loss: 1.1166 - mae: 0.8039 - val_loss: 8.2941 - val_mae: 2.2678
Epoch 173/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0726 - mae: 0.7838 - val_loss: 9.5063 - val_mae: 2.2588
Epoch 174/300
9/9 [==============================] - 0s 4ms/step - loss: 0.8270 - mae: 0.6810 - val_loss: 9.8260 - val_mae: 2.2487
Epoch 175/300
9/9 [==============================] - 0s 5ms/step - loss: 0.8832 - mae: 0.7179 - val_loss: 9.1483 - val_mae: 2.3474
Epoch 176/300
9/9 [==============================] - 0s 5ms/step - loss: 0.8099 - mae: 0.6743 - val_loss: 8.7547 - val_mae: 2.1984
Epoch 177/300
9/9 [==============================] - 0s 4ms/step - loss: 0.8207 - mae: 0.6515 - val_loss: 8.4726 - val_mae: 2.2075
Epoch 178/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3277 - mae: 0.9031 - val_loss: 9.1474 - val_mae: 2.2600
Epoch 179/300
9/9 [==============================] - 0s 5ms/step - loss: 1.7676 - mae: 1.0496 - val_loss: 10.1643 - val_mae: 2.3928
Epoch 180/300
9/9 [==============================] - 0s 4ms/step - loss: 2.3427 - mae: 1.1998 - val_loss: 9.9066 - val_mae: 2.3269
Epoch 181/300
9/9 [==============================] - 0s 4ms/step - loss: 2.4786 - mae: 1.1890 - val_loss: 13.8454 - val_mae: 2.6728
Epoch 182/300
9/9 [==============================] - 0s 5ms/step - loss: 2.4081 - mae: 1.1965 - val_loss: 11.7348 - val_mae: 2.4666
Epoch 183/300
9/9 [==============================] - 0s 4ms/step - loss: 2.5278 - mae: 1.2875 - val_loss: 10.6881 - val_mae: 2.3248
Epoch 184/300
9/9 [==============================] - 0s 5ms/step - loss: 2.4543 - mae: 1.2171 - val_loss: 8.1384 - val_mae: 2.0511
Epoch 185/300
9/9 [==============================] - 0s 5ms/step - loss: 1.7103 - mae: 0.9993 - val_loss: 9.9495 - val_mae: 2.2941
Epoch 186/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4442 - mae: 0.8468 - val_loss: 9.5321 - val_mae: 2.2843
Epoch 187/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3422 - mae: 0.8833 - val_loss: 10.4890 - val_mae: 2.4036
Epoch 188/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3150 - mae: 0.8656 - val_loss: 10.6262 - val_mae: 2.3134
Epoch 189/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4160 - mae: 0.9032 - val_loss: 8.5405 - val_mae: 2.2157
Epoch 190/300
9/9 [==============================] - 0s 5ms/step - loss: 1.9008 - mae: 1.0881 - val_loss: 9.7380 - val_mae: 2.3269
Epoch 191/300
9/9 [==============================] - 0s 4ms/step - loss: 1.9524 - mae: 1.1132 - val_loss: 10.2444 - val_mae: 2.3343
Epoch 192/300
9/9 [==============================] - 0s 4ms/step - loss: 1.6260 - mae: 0.9849 - val_loss: 9.3994 - val_mae: 2.3264
Epoch 193/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0688 - mae: 0.7795 - val_loss: 9.2354 - val_mae: 2.3150
Epoch 194/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0556 - mae: 0.7776 - val_loss: 8.3169 - val_mae: 2.1814
Epoch 195/300
9/9 [==============================] - 0s 5ms/step - loss: 1.1207 - mae: 0.7991 - val_loss: 9.7100 - val_mae: 2.2739
Epoch 196/300
9/9 [==============================] - 0s 5ms/step - loss: 1.1259 - mae: 0.7845 - val_loss: 12.6399 - val_mae: 2.4150
Epoch 197/300
9/9 [==============================] - 0s 5ms/step - loss: 1.1511 - mae: 0.8128 - val_loss: 11.4426 - val_mae: 2.3466
Epoch 198/300
9/9 [==============================] - 0s 5ms/step - loss: 1.2447 - mae: 0.7989 - val_loss: 9.4152 - val_mae: 2.2920
Epoch 199/300
9/9 [==============================] - 0s 4ms/step - loss: 0.8338 - mae: 0.6755 - val_loss: 9.3051 - val_mae: 2.2597
Epoch 200/300
9/9 [==============================] - 0s 5ms/step - loss: 0.9948 - mae: 0.7324 - val_loss: 8.8952 - val_mae: 2.2650
Epoch 201/300
9/9 [==============================] - 0s 4ms/step - loss: 0.7528 - mae: 0.6482 - val_loss: 8.8380 - val_mae: 2.2227
Epoch 202/300
9/9 [==============================] - 0s 5ms/step - loss: 0.7648 - mae: 0.6370 - val_loss: 10.2099 - val_mae: 2.3074
Epoch 203/300
9/9 [==============================] - 0s 7ms/step - loss: 0.8848 - mae: 0.6935 - val_loss: 12.2827 - val_mae: 2.4238
Epoch 204/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0436 - mae: 0.7377 - val_loss: 10.2438 - val_mae: 2.3005
Epoch 205/300
9/9 [==============================] - 0s 5ms/step - loss: 1.1904 - mae: 0.8019 - val_loss: 9.2372 - val_mae: 2.2499
Epoch 206/300
9/9 [==============================] - 0s 5ms/step - loss: 1.1743 - mae: 0.8330 - val_loss: 8.1517 - val_mae: 2.3329
Epoch 207/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0371 - mae: 0.7742 - val_loss: 9.2403 - val_mae: 2.2577
Epoch 208/300
9/9 [==============================] - 0s 5ms/step - loss: 0.9461 - mae: 0.7179 - val_loss: 9.0389 - val_mae: 2.2408
Epoch 209/300
9/9 [==============================] - 0s 6ms/step - loss: 0.9368 - mae: 0.6680 - val_loss: 10.7048 - val_mae: 2.3385
Epoch 210/300
9/9 [==============================] - 0s 5ms/step - loss: 1.2368 - mae: 0.8708 - val_loss: 12.0716 - val_mae: 2.4437
Epoch 211/300
9/9 [==============================] - 0s 5ms/step - loss: 1.3079 - mae: 0.8882 - val_loss: 9.2968 - val_mae: 2.3130
Epoch 212/300
9/9 [==============================] - 0s 4ms/step - loss: 1.1586 - mae: 0.8214 - val_loss: 9.3256 - val_mae: 2.2696
Epoch 213/300
9/9 [==============================] - 0s 6ms/step - loss: 1.0216 - mae: 0.7334 - val_loss: 9.2118 - val_mae: 2.3230
Epoch 214/300
9/9 [==============================] - 0s 4ms/step - loss: 0.7468 - mae: 0.6578 - val_loss: 9.5151 - val_mae: 2.3314
Epoch 215/300
9/9 [==============================] - 0s 5ms/step - loss: 0.8789 - mae: 0.7021 - val_loss: 8.7394 - val_mae: 2.1720
Epoch 216/300
9/9 [==============================] - 0s 5ms/step - loss: 0.7111 - mae: 0.6178 - val_loss: 11.1819 - val_mae: 2.3482
Epoch 217/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6275 - mae: 0.5830 - val_loss: 10.1497 - val_mae: 2.3064
Epoch 218/300
9/9 [==============================] - 0s 5ms/step - loss: 0.5705 - mae: 0.5437 - val_loss: 8.4889 - val_mae: 2.2265
Epoch 219/300
9/9 [==============================] - 0s 5ms/step - loss: 0.5365 - mae: 0.5205 - val_loss: 9.8475 - val_mae: 2.2844
Epoch 220/300
9/9 [==============================] - 0s 5ms/step - loss: 0.7341 - mae: 0.6250 - val_loss: 8.4437 - val_mae: 2.2381
Epoch 221/300
9/9 [==============================] - 0s 4ms/step - loss: 0.9226 - mae: 0.6906 - val_loss: 10.5048 - val_mae: 2.4157
Epoch 222/300
9/9 [==============================] - 0s 4ms/step - loss: 1.3694 - mae: 0.8195 - val_loss: 9.6705 - val_mae: 2.3631
Epoch 223/300
9/9 [==============================] - 0s 7ms/step - loss: 1.0285 - mae: 0.7815 - val_loss: 7.8171 - val_mae: 2.1289
Epoch 224/300
9/9 [==============================] - 0s 5ms/step - loss: 1.1003 - mae: 0.7315 - val_loss: 10.0517 - val_mae: 2.4437
Epoch 225/300
9/9 [==============================] - 0s 5ms/step - loss: 0.7908 - mae: 0.6714 - val_loss: 10.8995 - val_mae: 2.3941
Epoch 226/300
9/9 [==============================] - 0s 5ms/step - loss: 0.8159 - mae: 0.6343 - val_loss: 9.7545 - val_mae: 2.3603
Epoch 227/300
9/9 [==============================] - 0s 7ms/step - loss: 0.8158 - mae: 0.6759 - val_loss: 8.7914 - val_mae: 2.2676
Epoch 228/300
9/9 [==============================] - 0s 5ms/step - loss: 0.7480 - mae: 0.6625 - val_loss: 9.2566 - val_mae: 2.3250
Epoch 229/300
9/9 [==============================] - 0s 5ms/step - loss: 0.7026 - mae: 0.6191 - val_loss: 8.9425 - val_mae: 2.2408
Epoch 230/300
9/9 [==============================] - 0s 4ms/step - loss: 0.6249 - mae: 0.5773 - val_loss: 10.0514 - val_mae: 2.3363
Epoch 231/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6351 - mae: 0.5736 - val_loss: 11.9069 - val_mae: 2.4519
Epoch 232/300
9/9 [==============================] - 0s 5ms/step - loss: 0.9321 - mae: 0.7543 - val_loss: 10.4653 - val_mae: 2.3294
Epoch 233/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0245 - mae: 0.7797 - val_loss: 10.1195 - val_mae: 2.3041
Epoch 234/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6885 - mae: 0.6225 - val_loss: 10.9759 - val_mae: 2.4265
Epoch 235/300
9/9 [==============================] - 0s 5ms/step - loss: 0.7106 - mae: 0.6502 - val_loss: 11.7651 - val_mae: 2.4692
Epoch 236/300
9/9 [==============================] - 0s 5ms/step - loss: 0.9171 - mae: 0.7488 - val_loss: 11.5120 - val_mae: 2.3870
Epoch 237/300
9/9 [==============================] - 0s 5ms/step - loss: 0.9904 - mae: 0.7531 - val_loss: 10.7283 - val_mae: 2.3640
Epoch 238/300
9/9 [==============================] - 0s 4ms/step - loss: 0.8144 - mae: 0.6991 - val_loss: 9.9591 - val_mae: 2.3440
Epoch 239/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6549 - mae: 0.6044 - val_loss: 9.0811 - val_mae: 2.2441
Epoch 240/300
9/9 [==============================] - 0s 5ms/step - loss: 0.7284 - mae: 0.6381 - val_loss: 11.3871 - val_mae: 2.4817
Epoch 241/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0762 - mae: 0.7693 - val_loss: 11.1542 - val_mae: 2.4379
Epoch 242/300
9/9 [==============================] - 0s 5ms/step - loss: 1.0528 - mae: 0.7532 - val_loss: 9.5840 - val_mae: 2.2619
Epoch 243/300
9/9 [==============================] - 0s 6ms/step - loss: 1.0288 - mae: 0.7926 - val_loss: 12.7300 - val_mae: 2.5774
Epoch 244/300
9/9 [==============================] - 0s 5ms/step - loss: 1.3034 - mae: 0.8801 - val_loss: 13.4818 - val_mae: 2.5218
Epoch 245/300
9/9 [==============================] - 0s 5ms/step - loss: 1.4451 - mae: 0.9204 - val_loss: 10.6946 - val_mae: 2.3779
Epoch 246/300
9/9 [==============================] - 0s 5ms/step - loss: 0.9316 - mae: 0.7591 - val_loss: 11.1190 - val_mae: 2.3469
Epoch 247/300
9/9 [==============================] - 0s 6ms/step - loss: 0.8024 - mae: 0.6910 - val_loss: 9.8753 - val_mae: 2.3424
Epoch 248/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6277 - mae: 0.6147 - val_loss: 10.4548 - val_mae: 2.4303
Epoch 249/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6574 - mae: 0.5853 - val_loss: 10.2268 - val_mae: 2.3086
Epoch 250/300
9/9 [==============================] - 0s 4ms/step - loss: 0.7134 - mae: 0.6378 - val_loss: 11.5265 - val_mae: 2.3930
Epoch 251/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6883 - mae: 0.6153 - val_loss: 10.1113 - val_mae: 2.3587
Epoch 252/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6121 - mae: 0.5958 - val_loss: 9.2886 - val_mae: 2.2743
Epoch 253/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6460 - mae: 0.5794 - val_loss: 10.9439 - val_mae: 2.4021
Epoch 254/300
9/9 [==============================] - 0s 4ms/step - loss: 1.2858 - mae: 0.9116 - val_loss: 15.3294 - val_mae: 2.7171
Epoch 255/300
9/9 [==============================] - 0s 5ms/step - loss: 5.0198 - mae: 1.6387 - val_loss: 15.7793 - val_mae: 2.7841
Epoch 256/300
9/9 [==============================] - 0s 5ms/step - loss: 4.0174 - mae: 1.6131 - val_loss: 12.8497 - val_mae: 2.4906
Epoch 257/300
9/9 [==============================] - 0s 5ms/step - loss: 3.5874 - mae: 1.4847 - val_loss: 10.9274 - val_mae: 2.4845
Epoch 258/300
9/9 [==============================] - 0s 5ms/step - loss: 2.3178 - mae: 1.1446 - val_loss: 12.7039 - val_mae: 2.6382
Epoch 259/300
9/9 [==============================] - 0s 5ms/step - loss: 2.7830 - mae: 1.2762 - val_loss: 10.2091 - val_mae: 2.5211
Epoch 260/300
9/9 [==============================] - 0s 4ms/step - loss: 2.9140 - mae: 1.3299 - val_loss: 12.8336 - val_mae: 2.4322
Epoch 261/300
9/9 [==============================] - 0s 4ms/step - loss: 2.1975 - mae: 1.1135 - val_loss: 12.9531 - val_mae: 2.4280
Epoch 262/300
9/9 [==============================] - 0s 5ms/step - loss: 2.2864 - mae: 1.1076 - val_loss: 12.6443 - val_mae: 2.4199
Epoch 263/300
9/9 [==============================] - 0s 5ms/step - loss: 1.6013 - mae: 0.9466 - val_loss: 11.7617 - val_mae: 2.4558
Epoch 264/300
9/9 [==============================] - 0s 6ms/step - loss: 1.5413 - mae: 0.9820 - val_loss: 9.0542 - val_mae: 2.2073
Epoch 265/300
9/9 [==============================] - 0s 5ms/step - loss: 1.7211 - mae: 0.9825 - val_loss: 7.8031 - val_mae: 2.2912
Epoch 266/300
9/9 [==============================] - 0s 5ms/step - loss: 1.7452 - mae: 0.9013 - val_loss: 14.6236 - val_mae: 2.6603
Epoch 267/300
9/9 [==============================] - 0s 4ms/step - loss: 1.2725 - mae: 0.8188 - val_loss: 8.4640 - val_mae: 2.1808
Epoch 268/300
9/9 [==============================] - 0s 6ms/step - loss: 1.5279 - mae: 0.8467 - val_loss: 16.4770 - val_mae: 2.6503
Epoch 269/300
9/9 [==============================] - 0s 5ms/step - loss: 1.8114 - mae: 0.9564 - val_loss: 9.9901 - val_mae: 2.3290
Epoch 270/300
9/9 [==============================] - 0s 5ms/step - loss: 1.1704 - mae: 0.7997 - val_loss: 10.1141 - val_mae: 2.3869
Epoch 271/300
9/9 [==============================] - 0s 5ms/step - loss: 0.8594 - mae: 0.7048 - val_loss: 9.0247 - val_mae: 2.2829
Epoch 272/300
9/9 [==============================] - 0s 5ms/step - loss: 0.7056 - mae: 0.6288 - val_loss: 10.5115 - val_mae: 2.3431
Epoch 273/300
9/9 [==============================] - 0s 5ms/step - loss: 0.5610 - mae: 0.5303 - val_loss: 10.2686 - val_mae: 2.3462
Epoch 274/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4554 - mae: 0.4985 - val_loss: 11.5764 - val_mae: 2.4010
Epoch 275/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4324 - mae: 0.4670 - val_loss: 11.1157 - val_mae: 2.3915
Epoch 276/300
9/9 [==============================] - 0s 4ms/step - loss: 0.4436 - mae: 0.4767 - val_loss: 10.5370 - val_mae: 2.3335
Epoch 277/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4064 - mae: 0.4313 - val_loss: 10.1817 - val_mae: 2.3414
Epoch 278/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4329 - mae: 0.4821 - val_loss: 10.7567 - val_mae: 2.3778
Epoch 279/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4314 - mae: 0.4633 - val_loss: 10.7639 - val_mae: 2.4035
Epoch 280/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4351 - mae: 0.4843 - val_loss: 10.7459 - val_mae: 2.4060
Epoch 281/300
9/9 [==============================] - 0s 5ms/step - loss: 0.5301 - mae: 0.5268 - val_loss: 10.6510 - val_mae: 2.4131
Epoch 282/300
9/9 [==============================] - 0s 5ms/step - loss: 0.5348 - mae: 0.5351 - val_loss: 10.4999 - val_mae: 2.3904
Epoch 283/300
9/9 [==============================] - 0s 5ms/step - loss: 0.5098 - mae: 0.5193 - val_loss: 11.7025 - val_mae: 2.4170
Epoch 284/300
9/9 [==============================] - 0s 6ms/step - loss: 0.5033 - mae: 0.5359 - val_loss: 9.6724 - val_mae: 2.3019
Epoch 285/300
9/9 [==============================] - 0s 7ms/step - loss: 0.3557 - mae: 0.4400 - val_loss: 10.7177 - val_mae: 2.4252
Epoch 286/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4347 - mae: 0.4871 - val_loss: 10.0554 - val_mae: 2.3571
Epoch 287/300
9/9 [==============================] - 0s 7ms/step - loss: 0.4490 - mae: 0.4666 - val_loss: 10.4033 - val_mae: 2.3652
Epoch 288/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4716 - mae: 0.4994 - val_loss: 11.1858 - val_mae: 2.4118
Epoch 289/300
9/9 [==============================] - 0s 5ms/step - loss: 0.6891 - mae: 0.6014 - val_loss: 11.1075 - val_mae: 2.3193
Epoch 290/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4984 - mae: 0.5201 - val_loss: 10.4326 - val_mae: 2.3350
Epoch 291/300
9/9 [==============================] - 0s 7ms/step - loss: 0.3945 - mae: 0.4692 - val_loss: 9.9435 - val_mae: 2.3339
Epoch 292/300
9/9 [==============================] - 0s 5ms/step - loss: 0.3879 - mae: 0.4472 - val_loss: 10.2581 - val_mae: 2.3418
Epoch 293/300
9/9 [==============================] - 0s 5ms/step - loss: 0.4337 - mae: 0.4703 - val_loss: 9.9980 - val_mae: 2.3395
Epoch 294/300
9/9 [==============================] - 0s 5ms/step - loss: 0.3705 - mae: 0.4370 - val_loss: 10.7709 - val_mae: 2.3609
Epoch 295/300
9/9 [==============================] - 0s 4ms/step - loss: 0.3976 - mae: 0.4482 - val_loss: 10.6427 - val_mae: 2.3920
Epoch 296/300
9/9 [==============================] - 0s 6ms/step - loss: 0.4513 - mae: 0.4976 - val_loss: 11.4199 - val_mae: 2.4365
Epoch 297/300
9/9 [==============================] - 0s 5ms/step - loss: 0.3964 - mae: 0.4550 - val_loss: 10.5426 - val_mae: 2.3301
Epoch 298/300
9/9 [==============================] - 0s 5ms/step - loss: 0.3931 - mae: 0.4445 - val_loss: 11.3062 - val_mae: 2.4102
Epoch 299/300
9/9 [==============================] - 0s 5ms/step - loss: 0.3621 - mae: 0.4354 - val_loss: 11.0344 - val_mae: 2.3798
Epoch 300/300
9/9 [==============================] - 0s 5ms/step - loss: 0.2954 - mae: 0.3820 - val_loss: 10.5709 - val_mae: 2.3950
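
The log shows the classic overfitting pattern flagged earlier: the training loss keeps falling (down to about 0.30 by epoch 300) while the validation loss flattens out in the 8-12 range after roughly 25 epochs. Below is a sketch of early stopping as one mitigation; it was not part of the original run, history_es is a hypothetical name, and calling fit again would continue from the already-trained weights rather than start fresh.

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=20,
                           restore_best_weights=True)
history_es = model.fit(x_train, y_train, epochs=300,
                       validation_data=(x_val, y_val),
                       callbacks=[early_stop], verbose=0)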

Model evaluation

model.evaluate(x_test, y_test)
4/4 [==============================] - 0s 3ms/step - loss: 11.6894 - mae: 2.3793
[11.689364433288574, 2.3793373107910156]
history.history.keys()
dict_keys(['loss', 'mae', 'val_loss', 'val_mae'])
history_dict = history.history

loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(loss) + 1)
fig = plt.figure(figsize=(12, 6))

ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(epochs, loss, color='blue', label='train_loss')
ax1.plot(epochs, val_loss, color='red', label='val_loss')
ax1.set_title('Train and Validation Loss')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.grid()
ax1.legend()

mae = history_dict['mae']
val_mae = history_dict['val_mae']

ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(epochs, mae, color='blue', label='train_mae')
ax2.plot(epochs, val_mae, color='red', label='val_mae')
ax2.set_title('Train and Validation MAE')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('MAE')
ax2.grid()
ax2.legend()

plt.show()
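
As a final spot check, a few individual test-set predictions can be compared against the actual prices (a sketch using the model and test split above):

y_pred = model.predict(x_test).flatten()
for pred, actual in zip(y_pred[:5], y_test[:5]):
    print(f'predicted {pred:6.2f} | actual {actual:5.1f}')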