Employee Attrition

Employee attrition, especially among good employees, is costly because of rehiring and training expenses. Classification methods can identify which employees are likely to leave. The following code uses a deep neural network: a multilayer perceptron.

attrition

This is an employee attrition program, using an IBM dataset designed for this problem.

Prepared by German Alfaro, 17/08/2017

In [1]:
import numpy as np
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
/usr/local/lib/python2.7/dist-packages/matplotlib/__init__.py:913: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
  warnings.warn(self.msg_depr % (key, alt_key))
In [2]:
# random seed for later reproducibility
seed = 7
np.random.seed(seed)

Load the dataset, which contains information such as age, business travel, salary, distance from home, education, working hours, overtime, job role, etc.

We look at the first two rows.

In [3]:
data = pd.read_csv("/home/german/Desktop/lambda_bayes/atrition/data/WA_Fn-UseC_-HR-Employee-Attrition.csv")
data.head(2)
Out[3]:
Age Attrition BusinessTravel DailyRate Department DistanceFromHome Education EducationField EmployeeCount EmployeeNumber ... RelationshipSatisfaction StandardHours StockOptionLevel TotalWorkingYears TrainingTimesLastYear WorkLifeBalance YearsAtCompany YearsInCurrentRole YearsSinceLastPromotion YearsWithCurrManager
0 41 Yes Travel_Rarely 1102 Sales 1 2 Life Sciences 1 1 ... 1 80 0 8 0 1 6 4 0 5
1 49 No Travel_Frequently 279 Research & Development 8 1 Life Sciences 1 2 ... 4 80 1 10 3 3 10 7 1 7

2 rows × 35 columns

In [4]:
data.columns
Out[4]:
Index([u'Age', u'Attrition', u'BusinessTravel', u'DailyRate', u'Department',
               u'DistanceFromHome', u'Education', u'EducationField', u'EmployeeCount',
               u'EmployeeNumber', u'EnvironmentSatisfaction', u'Gender', u'HourlyRate',
               u'JobInvolvement', u'JobLevel', u'JobRole', u'JobSatisfaction',
               u'MaritalStatus', u'MonthlyIncome', u'MonthlyRate',
               u'NumCompaniesWorked', u'Over18', u'OverTime', u'PercentSalaryHike',
               u'PerformanceRating', u'RelationshipSatisfaction', u'StandardHours',
               u'StockOptionLevel', u'TotalWorkingYears', u'TrainingTimesLastYear',
               u'WorkLifeBalance', u'YearsAtCompany', u'YearsInCurrentRole',
               u'YearsSinceLastPromotion', u'YearsWithCurrManager'],
              dtype='object')

Correlate the variables to look for ones that carry no useful information and to see which are most strongly related.

In [5]:
data.corr()
Out[5]:
Age DailyRate DistanceFromHome Education EmployeeCount EmployeeNumber EnvironmentSatisfaction HourlyRate JobInvolvement JobLevel ... RelationshipSatisfaction StandardHours StockOptionLevel TotalWorkingYears TrainingTimesLastYear WorkLifeBalance YearsAtCompany YearsInCurrentRole YearsSinceLastPromotion YearsWithCurrManager
Age 1.000000 0.010661 -0.001686 0.208034 NaN -0.010145 0.010146 0.024287 0.029820 0.509604 ... 0.053535 NaN 0.037510 0.680381 -0.019621 -0.021490 0.311309 0.212901 0.216513 0.202089
DailyRate 0.010661 1.000000 -0.004985 -0.016806 NaN -0.050990 0.018355 0.023381 0.046135 0.002966 ... 0.007846 NaN 0.042143 0.014515 0.002453 -0.037848 -0.034055 0.009932 -0.033229 -0.026363
DistanceFromHome -0.001686 -0.004985 1.000000 0.021042 NaN 0.032916 -0.016075 0.031131 0.008783 0.005303 ... 0.006557 NaN 0.044872 0.004628 -0.036942 -0.026556 0.009508 0.018845 0.010029 0.014406
Education 0.208034 -0.016806 0.021042 1.000000 NaN 0.042070 -0.027128 0.016775 0.042438 0.101589 ... -0.009118 NaN 0.018422 0.148280 -0.025100 0.009819 0.069114 0.060236 0.054254 0.069065
EmployeeCount NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
EmployeeNumber -0.010145 -0.050990 0.032916 0.042070 NaN 1.000000 0.017621 0.035179 -0.006888 -0.018519 ... -0.069861 NaN 0.062227 -0.014365 0.023603 0.010309 -0.011240 -0.008416 -0.009019 -0.009197
EnvironmentSatisfaction 0.010146 0.018355 -0.016075 -0.027128 NaN 0.017621 1.000000 -0.049857 -0.008278 0.001212 ... 0.007665 NaN 0.003432 -0.002693 -0.019359 0.027627 0.001458 0.018007 0.016194 -0.004999
HourlyRate 0.024287 0.023381 0.031131 0.016775 NaN 0.035179 -0.049857 1.000000 0.042861 -0.027853 ... 0.001330 NaN 0.050263 -0.002334 -0.008548 -0.004607 -0.019582 -0.024106 -0.026716 -0.020123
JobInvolvement 0.029820 0.046135 0.008783 0.042438 NaN -0.006888 -0.008278 0.042861 1.000000 -0.012630 ... 0.034297 NaN 0.021523 -0.005533 -0.015338 -0.014617 -0.021355 0.008717 -0.024184 0.025976
JobLevel 0.509604 0.002966 0.005303 0.101589 NaN -0.018519 0.001212 -0.027853 -0.012630 1.000000 ... 0.021642 NaN 0.013984 0.782208 -0.018191 0.037818 0.534739 0.389447 0.353885 0.375281
JobSatisfaction -0.004892 0.030571 -0.003669 -0.011296 NaN -0.046247 -0.006784 -0.071335 -0.021476 -0.001944 ... -0.012454 NaN 0.010690 -0.020185 -0.005779 -0.019459 -0.003803 -0.002305 -0.018214 -0.027656
MonthlyIncome 0.497855 0.007707 -0.017014 0.094961 NaN -0.014829 -0.006259 -0.015794 -0.015271 0.950300 ... 0.025873 NaN 0.005408 0.772893 -0.021736 0.030683 0.514285 0.363818 0.344978 0.344079
MonthlyRate 0.028051 -0.032182 0.027473 -0.026084 NaN 0.012648 0.037600 -0.015297 -0.016322 0.039563 ... -0.004085 NaN -0.034323 0.026442 0.001467 0.007963 -0.023655 -0.012815 0.001567 -0.036746
NumCompaniesWorked 0.299635 0.038153 -0.029251 0.126317 NaN -0.001251 0.012594 0.022157 0.015012 0.142501 ... 0.052733 NaN 0.030075 0.237639 -0.066054 -0.008366 -0.118421 -0.090754 -0.036814 -0.110319
PercentSalaryHike 0.003634 0.022704 0.040235 -0.011111 NaN -0.012944 -0.031701 -0.009062 -0.017205 -0.034730 ... -0.040490 NaN 0.007528 -0.020608 -0.005221 -0.003280 -0.035991 -0.001520 -0.022154 -0.011985
PerformanceRating 0.001904 0.000473 0.027110 -0.024539 NaN -0.020359 -0.029548 -0.002172 -0.029071 -0.021222 ... -0.031351 NaN 0.003506 0.006744 -0.015579 0.002572 0.003435 0.034986 0.017896 0.022827
RelationshipSatisfaction 0.053535 0.007846 0.006557 -0.009118 NaN -0.069861 0.007665 0.001330 0.034297 0.021642 ... 1.000000 NaN -0.045952 0.024054 0.002497 0.019604 0.019367 -0.015123 0.033493 -0.000867
StandardHours NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
StockOptionLevel 0.037510 0.042143 0.044872 0.018422 NaN 0.062227 0.003432 0.050263 0.021523 0.013984 ... -0.045952 NaN 1.000000 0.010136 0.011274 0.004129 0.015058 0.050818 0.014352 0.024698
TotalWorkingYears 0.680381 0.014515 0.004628 0.148280 NaN -0.014365 -0.002693 -0.002334 -0.005533 0.782208 ... 0.024054 NaN 0.010136 1.000000 -0.035662 0.001008 0.628133 0.460365 0.404858 0.459188
TrainingTimesLastYear -0.019621 0.002453 -0.036942 -0.025100 NaN 0.023603 -0.019359 -0.008548 -0.015338 -0.018191 ... 0.002497 NaN 0.011274 -0.035662 1.000000 0.028072 0.003569 -0.005738 -0.002067 -0.004096
WorkLifeBalance -0.021490 -0.037848 -0.026556 0.009819 NaN 0.010309 0.027627 -0.004607 -0.014617 0.037818 ... 0.019604 NaN 0.004129 0.001008 0.028072 1.000000 0.012089 0.049856 0.008941 0.002759
YearsAtCompany 0.311309 -0.034055 0.009508 0.069114 NaN -0.011240 0.001458 -0.019582 -0.021355 0.534739 ... 0.019367 NaN 0.015058 0.628133 0.003569 0.012089 1.000000 0.758754 0.618409 0.769212
YearsInCurrentRole 0.212901 0.009932 0.018845 0.060236 NaN -0.008416 0.018007 -0.024106 0.008717 0.389447 ... -0.015123 NaN 0.050818 0.460365 -0.005738 0.049856 0.758754 1.000000 0.548056 0.714365
YearsSinceLastPromotion 0.216513 -0.033229 0.010029 0.054254 NaN -0.009019 0.016194 -0.026716 -0.024184 0.353885 ... 0.033493 NaN 0.014352 0.404858 -0.002067 0.008941 0.618409 0.548056 1.000000 0.510224
YearsWithCurrManager 0.202089 -0.026363 0.014406 0.069065 NaN -0.009197 -0.004999 -0.020123 0.025976 0.375281 ... -0.000867 NaN 0.024698 0.459188 -0.004096 0.002759 0.769212 0.714365 0.510224 1.000000

26 rows × 26 columns
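To read the correlation matrix at a glance, here is a minimal sketch (not part of the original notebook) that plots it as a heatmap and lists the strongly correlated pairs:

# Sketch: visualize data.corr() and surface pairs with |r| > 0.7.
corr = data.corr()
plt.figure(figsize=(12, 10))
sns.heatmap(corr, cmap='coolwarm', center=0)
plt.show()

# Keep only the upper triangle so each pair appears once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
strong = upper.stack()
print(strong[strong.abs() > 0.7])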

Some example plots to visualize the problem.

In this case, monthly income by gender.

In [6]:
sns.distplot(data.MonthlyIncome[data.Gender == 'Male'], bins=np.linspace(0, 20000, 60))
sns.distplot(data.MonthlyIncome[data.Gender == 'Female'], bins=np.linspace(0, 20000, 60))
plt.legend(['Males', 'Females'])
Out[6]:
<matplotlib.legend.Legend at 0x7f66b958b850>
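Along the same lines, a hedged extra view (not in the original run): attrition counts split by overtime, which is often a strong signal. This must run before the encoding steps below, while OverTime and Attrition are still string columns.

# Sketch: how many employees left, split by whether they work overtime.
sns.countplot(x='OverTime', hue='Attrition', data=data)
plt.show()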

Drop unneeded columns and apply one-hot encoding.

In [7]:
# categorical columns that will be one-hot encoded (display only; no effect on data)
data[['BusinessTravel', 'Department', 'EducationField', 'Gender', 'JobRole', 'MaritalStatus', 'Over18', 'OverTime']]
data.columns
Out[7]:
Index([u'Age', u'Attrition', u'BusinessTravel', u'DailyRate', u'Department',
               u'DistanceFromHome', u'Education', u'EducationField', u'EmployeeCount',
               u'EmployeeNumber', u'EnvironmentSatisfaction', u'Gender', u'HourlyRate',
               u'JobInvolvement', u'JobLevel', u'JobRole', u'JobSatisfaction',
               u'MaritalStatus', u'MonthlyIncome', u'MonthlyRate',
               u'NumCompaniesWorked', u'Over18', u'OverTime', u'PercentSalaryHike',
               u'PerformanceRating', u'RelationshipSatisfaction', u'StandardHours',
               u'StockOptionLevel', u'TotalWorkingYears', u'TrainingTimesLastYear',
               u'WorkLifeBalance', u'YearsAtCompany', u'YearsInCurrentRole',
               u'YearsSinceLastPromotion', u'YearsWithCurrManager'],
              dtype='object')
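Over18, StandardHours, and EmployeeCount take a single value for every employee, which is why they show NaN correlations above. A small sketch (illustrative, not from the original) that finds such columns instead of hard-coding them:

# Sketch: list columns with at most one unique value.
constant_cols = [c for c in data.columns if data[c].nunique() <= 1]
print(constant_cols)   # expected here: ['EmployeeCount', 'Over18', 'StandardHours']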
In [8]:
# drop constant-valued columns
del data['Over18']
del data['StandardHours']
del data['EmployeeCount']
In [9]:
data.dtypes      # inspect which columns are object-typed (categorical)
data = pd.get_dummies(data, prefix=['BusinessTravel', 'Department', 'EducationField', 'Gender', 'JobRole',
                                    'MaritalStatus', 'OverTime'],
                      columns=['BusinessTravel', 'Department', 'EducationField', 'Gender', 'JobRole',
                               'MaritalStatus', 'OverTime'])
data.columns
Out[9]:
Index([u'Age', u'Attrition', u'DailyRate', u'DistanceFromHome', u'Education',
               u'EmployeeNumber', u'EnvironmentSatisfaction', u'HourlyRate',
               u'JobInvolvement', u'JobLevel', u'JobSatisfaction', u'MonthlyIncome',
               u'MonthlyRate', u'NumCompaniesWorked', u'PercentSalaryHike',
               u'PerformanceRating', u'RelationshipSatisfaction', u'StockOptionLevel',
               u'TotalWorkingYears', u'TrainingTimesLastYear', u'WorkLifeBalance',
               u'YearsAtCompany', u'YearsInCurrentRole', u'YearsSinceLastPromotion',
               u'YearsWithCurrManager', u'BusinessTravel_Non-Travel',
               u'BusinessTravel_Travel_Frequently', u'BusinessTravel_Travel_Rarely',
               u'Department_Human Resources', u'Department_Research & Development',
               u'Department_Sales', u'EducationField_Human Resources',
               u'EducationField_Life Sciences', u'EducationField_Marketing',
               u'EducationField_Medical', u'EducationField_Other',
               u'EducationField_Technical Degree', u'Gender_Female', u'Gender_Male',
               u'JobRole_Healthcare Representative', u'JobRole_Human Resources',
               u'JobRole_Laboratory Technician', u'JobRole_Manager',
               u'JobRole_Manufacturing Director', u'JobRole_Research Director',
               u'JobRole_Research Scientist', u'JobRole_Sales Executive',
               u'JobRole_Sales Representative', u'MaritalStatus_Divorced',
               u'MaritalStatus_Married', u'MaritalStatus_Single', u'OverTime_No',
               u'OverTime_Yes'],
              dtype='object')
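For readers new to one-hot encoding, a toy illustration (not part of the notebook) of what pd.get_dummies does to a single categorical column:

toy = pd.DataFrame({'OverTime': ['Yes', 'No', 'Yes']})
print(pd.get_dummies(toy, prefix=['OverTime'], columns=['OverTime']))
# prints something like:
#    OverTime_No  OverTime_Yes
# 0            0             1
# 1            1             0
# 2            0             1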
In [10]:
# data normalization: scale each column to zero mean and unit variance
def preprocess(raw_X):
    from sklearn import preprocessing
    X = preprocessing.scale(raw_X)
    return X
In [11]:
# convert the Yes/No target variable to 1/0
yes_no = lambda x: 1 if x == 'Yes' else 0
data['Attrition'] = data.Attrition.apply(yes_no)
y = data['Attrition']
del data['Attrition']
del data['EmployeeNumber']
        
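Before modeling, it is worth checking the class balance, since attrition is the minority class and that matters when reading accuracy numbers later. A one-line sketch (not in the original):

print(y.value_counts(normalize=True))   # roughly 84% stay vs 16% leave in this dataset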

Split the dataset into training and test sets, and check the shape of the data going into the model.

In [12]:
data = preprocess(data)
X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.33, random_state=seed)

X_train[3]
X_train.shape
X_test.shape
y_train.shape
y_test.shape      # only this final expression is echoed in Out[12]
Out[12]:
(486,)
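One methodological caveat: preprocess was applied to the full dataset before the split, so the test rows influence the scaling statistics. A hedged, leakage-free alternative (a sketch, not the notebook's method) fits the scaler on the training portion only:

# Sketch: fit scaling statistics on the training split only.
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

def split_and_scale(raw_X, y, test_size=0.33, random_state=7):
    X_tr, X_te, y_tr, y_te = train_test_split(raw_X, y, test_size=test_size,
                                              random_state=random_state)
    scaler = StandardScaler().fit(X_tr)   # mean/std from training data only
    return scaler.transform(X_tr), scaler.transform(X_te), y_tr, y_te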
In [13]:
# import the model module
import model

Loading model

Using TensorFlow backend.
In [14]:
from keras.models import Sequential
from keras.layers import Dense, Dropout

drop = 0.3

# create the model: a multilayer perceptron with two hidden layers and dropout
model = Sequential()
model.add(Dense(102, input_dim=51, kernel_initializer='uniform', activation='relu'))
model.add(Dropout(drop))
model.add(Dense(40, kernel_initializer='uniform', activation='relu'))
model.add(Dropout(drop))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
In [15]:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 102)               5304
_________________________________________________________________
dropout_1 (Dropout)          (None, 102)               0
_________________________________________________________________
dense_2 (Dense)              (None, 40)                4120
_________________________________________________________________
dropout_2 (Dropout)          (None, 40)                0
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 41
=================================================================
Total params: 9,465
Trainable params: 9,465
Non-trainable params: 0
_________________________________________________________________
        
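The parameter counts in the summary follow directly from (inputs + 1 bias) × units for each Dense layer:

print((51 + 1) * 102)   # dense_1:  5304
print((102 + 1) * 40)   # dense_2:  4120
print((40 + 1) * 1)     # dense_3:    41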
In [16]:
model.fit(X_train, y_train, epochs=100, batch_size=30, verbose=2)
Epoch 1/100
0s - loss: 0.5483 - acc: 0.8140
Epoch 2/100
0s - loss: 0.3749 - acc: 0.8394
Epoch 3/100
0s - loss: 0.3306 - acc: 0.8455
Epoch 4/100
0s - loss: 0.3010 - acc: 0.8811
Epoch 5/100
0s - loss: 0.2772 - acc: 0.8974
...
Epoch 98/100
0s - loss: 0.0152 - acc: 0.9959
Epoch 99/100
0s - loss: 0.0086 - acc: 0.9980
Epoch 100/100
0s - loss: 0.0128 - acc: 0.9970
Out[16]:
<keras.callbacks.History at 0x7f6688ca38d0>
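Training accuracy climbs to about 99.7% with nothing held out to monitor generalization. A hedged variant of the same fit call (not what was run above) reserves part of the training data for validation, so overfitting shows up as diverging validation loss:

history = model.fit(X_train, y_train, epochs=100, batch_size=30,
                    validation_split=0.2, verbose=2)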

Final accuracy of the model on the test set: 85.6%. Note the gap against the ~99.7% training accuracy above, a sign of overfitting.

In [17]:
metrics = model.evaluate(X_test, y_test, batch_size=128, verbose=2)

print 'accuracy:'
print metrics[1]*100
accuracy:
85.5967073774
        
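Given the class imbalance noted earlier, 85.6% accuracy is only modestly above the trivial all-"stay" baseline, so accuracy alone can mislead here. A sketch of a fuller evaluation (assumes the trained model and the split from above):

from sklearn.metrics import confusion_matrix, classification_report

# Threshold the sigmoid outputs at 0.5 to get hard class predictions.
y_pred = (model.predict(X_test) > 0.5).astype(int).ravel()
print('baseline (always predict "stay"): %.3f' % (1.0 - y_test.mean()))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))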