Fine Tune VGG16 example

The sample code found online may be a few Keras versions out of date and no longer run cleanly,

so I spent a lot of time searching for a version that works...
This assumes you have already done the second example and already have the trained weights saved; here we just turn the whole thing into a single model.

https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
https://gist.github.com/fchollet/f35fbc80e066a49d65f1688a7e99f069
https://gist.github.com/fchollet/7eb39b44eb9e16e59632d25fb3119975


The example uses model.add(VGG16), which no longer works because the functional Model class has no add() function,
but it turns out the author, fchollet, already shows a newer way to merge the models in another tutorial (sketched briefly after the link below):

https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes
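In miniature, the pattern from that page is: take the base model's output tensor, stack the new layers on it, then wrap input and output in Model. This sketch just mirrors the full code further down:

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
x = Flatten()(base.output)                # grab the base model's output tensor
x = Dense(1, activation='sigmoid')(x)     # stack new layers on it with the functional API
model = Model(base.input, x)              # one merged model, no model.add(VGG16) needed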
I collected 60 photos from the web; without a GPU, training took about an hour, but with this method it only takes 2-3 minutes.

First extract the (bottleneck) features, then save the weights.
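For reference, that feature-extraction / weight-saving step (the second example) looks roughly like this. It is only a sketch, loosely following the gists linked above; the data/train folder layout, batch size and epoch count are assumptions you should adapt to your own data:

from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import Flatten, Dropout, Dense
from keras.preprocessing.image import ImageDataGenerator

# run every image through the frozen VGG16 convolutional base once and keep the features
datagen = ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_directory('data/train',          # assumed folder layout
                                        target_size=(150, 150),
                                        batch_size=16,
                                        class_mode=None,
                                        shuffle=False)
base = VGG16(weights='imagenet', include_top=False)
features = base.predict_generator(generator, len(generator))
labels = generator.classes        # 0/1 labels follow the alphabetical folder order

# the small classifier is trained on the saved features only -- this is what makes it fast
top_model = Sequential()
top_model.add(Flatten(input_shape=features.shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
top_model.fit(features, labels, epochs=50, batch_size=16)
top_model.save_weights('bottleneck_fc_model.h5')   # the file loaded in the full code below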


============== full code below ====================


from keras.applications.vgg16 import VGG16
from keras.models import Model
import numpy as np
from keras.layers import Flatten, Dropout, Dense
from keras.preprocessing import image



# dimensions of our images.
img_width, img_height = 150, 150

top_model_weights_path = 'bottleneck_fc_model.h5'


# helper so that reading an image for a test prediction is a one-liner
def readimage(path):
    x = image.load_img(path, target_size=(img_width, img_height))
    x = image.img_to_array(x)
    x = np.expand_dims(x, axis=0)
    # rescale to [0, 1] to match the 1./255 rescaling used when the top model was
    # trained (assuming you kept the tutorial's ImageDataGenerator settings)
    return x / 255.




# VGG16 convolutional base pre-trained on ImageNet, without its fully connected top
tmp_model = VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))


# new classifier head, same architecture as the bottleneck top model from the previous example
t = tmp_model.output
t = Flatten()(t)
t = Dense(256, activation='relu')(t)
t = Dropout(0.5)(t)
t = Dense(1, activation='sigmoid')(t)

# attach the new top to the VGG16 base with the functional API (this replaces model.add(VGG16))
model = Model(inputs=tmp_model.input, outputs=t)





model.summary()

# load the top-model weights trained in the previous example -- training from scratch
# takes far too long; by_name=True only fills in layers whose names match the saved file
model.load_weights(top_model_weights_path, by_name=True)



# freeze the whole VGG16 base (the first 19 layers of the merged model) so that only
# the new Dense layers on top get updated during fine-tuning
for layer in model.layers[:19]:
    layer.trainable = False

model.compile(optimizer='rmsprop',loss='binary_crossentropy', metrics=['accuracy'])
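# Training step (sketch): with the whole VGG16 base frozen, only the small top is
# updated, so a few epochs finish in minutes even without a GPU. The data/train
# folder layout, batch size and epoch count below are assumptions, following the
# linked blog tutorial -- adjust them to your own images.
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory('data/train',
                                                    target_size=(img_width, img_height),
                                                    batch_size=16,
                                                    class_mode='binary')
model.fit_generator(train_generator,
                    steps_per_epoch=len(train_generator),
                    epochs=10)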


y=readimage('dog_test.jpg')




pred = model.predict(y)
print(pred)
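# The sigmoid output is a single probability. With flow_from_directory the class
# indices follow alphabetical folder order (for the tutorial's cats/dogs data:
# cat = 0, dog = 1), so a value above 0.5 means "dog" -- check
# train_generator.class_indices if unsure.
print('dog' if pred[0][0] > 0.5 else 'cat')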




model.save('vggfinetune.h5')
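# Later, the saved file can be restored in a single call (architecture, weights and
# optimizer state together), so the fine-tuned model can be reused without rebuilding it:
from keras.models import load_model

restored = load_model('vggfinetune.h5')
print(restored.predict(readimage('dog_test.jpg')))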
