
【升华】Learn the TensorFlow framework in two hours: TensorFlow usage steps (7 steps)

2024/10/25 0:33:19  Source: https://blog.csdn.net/dongjing991/article/details/142955862

TensorFlow is a deep learning framework, and its name is worth understanding: tensor refers to a tensor, the basic data object in TensorFlow, and flow refers to a dataflow graph.

TensorFlow therefore means a flow of tensors: the framework organises computation as a graph through which tensor data flows.
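As a minimal illustration of this idea (my own sketch, not from the original post), tensors are created and then flow through operations:

import tensorflow as tf

# Tensors are the basic data objects; operations on them form the "flow".
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.ones((2, 2))
c = tf.matmul(a, b) + 1.0   # matrix multiply, then elementwise add
print(c.numpy())            # [[4. 4.] [8. 8.]]

The post's opening example applies the same framework end to end on MNIST: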

import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

TensorFlow usage steps (7 steps)

  1. import packages
  2. import data
  3. build model
  4. compile
  5. callbacks (tune training to improve the model's accuracy)
  6. fit (train and validate)
  7. evaluate

1. Import the packages. This part is straightforward; you can make the imports more fine-grained, e.g. from ... import ...

#### PACKAGE IMPORTS ####
# Run this cell first to import all required packages.
# Do not make any imports elsewhere in the notebook.
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
import os
import numpy as np
import pandas as pd

# If you would like to make further imports from tensorflow, add them here

2. Load the data. Many datasets come with their own data loader; import the data you want to train on.

# Run this cell to import the Eurosat data
def load_eurosat_data():
    data_dir = 'data/'
    x_train = np.load(os.path.join(data_dir, 'x_train.npy'))
    y_train = np.load(os.path.join(data_dir, 'y_train.npy'))
    x_test  = np.load(os.path.join(data_dir, 'x_test.npy'))
    y_test  = np.load(os.path.join(data_dir, 'y_test.npy'))
    return (x_train, y_train), (x_test, y_test)

(x_train, y_train), (x_test, y_test) = load_eurosat_data()
x_train = x_train / 255.0
x_test = x_test / 255.0
# x_train = x_train[:500]   # optionally, train on a small subset
# y_train = y_train[:500]
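To confirm the arrays loaded correctly, a quick sanity check helps (my addition; the exact shapes depend on the provided .npy files, but EuroSAT RGB images are 64×64 with 3 channels and 10 classes):

# Sanity-check the loaded data
print(x_train.shape, x_test.shape)   # expect (num_train, 64, 64, 3) and (num_test, 64, 64, 3)
print(np.unique(y_train))            # expect integer class labels 0..9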

3. Build the model

#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.

def get_new_model(input_shape):
    """
    This function should build a Sequential model according to the above
    specification. Ensure the weights are initialised by providing the
    input_shape argument in the first layer, given by the function argument.
    Your function should also compile the model with the Adam optimiser,
    sparse categorical crossentropy loss function, and a single accuracy metric.
    """
    model = Sequential([
        Conv2D(16, (3, 3), activation='relu', padding='same',
               name='conv_1', input_shape=input_shape),
        # 16 is the number of filters, i.e. how many features are extracted;
        # the (3, 3) right after it is the kernel size. The activation is ReLU,
        # the padding mode is 'same', the layer is named 'conv_1', and the
        # input shape is supplied so the weights can be initialised.
        # You could also add initialisers here, e.g.
        # kernel_initializer='he_uniform', bias_initializer='ones'.
        Conv2D(8, (3, 3), activation='relu', padding='same', name='conv_2'),
        MaxPooling2D((8, 8), name='pool_1'),
        # The pooling layer uses a large (8, 8) window, and is given a name.
        Flatten(name='flatten'),
        # After convolution and pooling the data is not one-dimensional,
        # so it must be flattened before the fully connected layers.
        Dense(32, activation='relu', name='dense_1'),
        # 32 is the number of neurons.
        Dense(10, activation='softmax', name='dense_2')
        # This is a 10-class problem, so the last layer must have 10 neurons
        # with a softmax activation.
    ])
    # Compile the model inside this function, so that calling it later
    # returns a model that is ready to train.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        # Only the learning rate is set here; many more parameters are
        # available, see https://zhuanlan.zhihu.com/p/86261902
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        # This loss is used because the labels are plain integers; if the
        # labels were one-hot encoded, drop the 'Sparse' prefix.
        metrics=['accuracy']
        # Accuracy is the only metric here; metrics are recorded during
        # training and are what callbacks monitor to adjust training.
    )
    return model
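With the function defined, you can build the model from the shape of a single training image and inspect the architecture (a usage sketch I have added):

# Build and inspect the model; x_train[0].shape gives the input image shape
model = get_new_model(x_train[0].shape)
model.summary()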

Note that step 4, compile, was already handled inside get_new_model above. Callbacks (step 5) are what you use to adjust the model during training, for example saving checkpoints or stopping early:

#### GRADED CELL ####
# Complete the following functions.
# Make sure to not change the function names or arguments.

def get_checkpoint_every_epoch():
    """
    This function should return a ModelCheckpoint object that:
    - saves the weights only at the end of every epoch
    - saves into a directory called 'checkpoints_every_epoch' inside the
      current working directory
    - generates filenames in that directory like 'checkpoint_XXX' where
      XXX is the epoch number formatted to have three digits, e.g. 001, 002, 003, etc.
    """
    checkpoint_path = 'checkpoints_every_epoch/checkpoint_{epoch:03d}'
    checkpoint = ModelCheckpoint(
        filepath=checkpoint_path,
        save_weights_only=True,   # save only the weights, not the whole model
        save_freq='epoch',        # save at the end of every epoch
        verbose=1                 # print a message each time a save happens
    )
    return checkpoint

def get_checkpoint_best_only():
    """
    This function should return a ModelCheckpoint object that:
    - saves only the weights that generate the highest validation (testing) accuracy
    - saves into a directory called 'checkpoints_best_only' inside the
      current working directory
    - generates a file called 'checkpoints_best_only/checkpoint'
    """
    checkpoint_path = 'checkpoints_best_only/checkpoint'
    checkpoint = ModelCheckpoint(
        filepath=checkpoint_path,
        save_weights_only=True,
        monitor='val_accuracy',   # monitor accuracy on the validation set
        save_best_only=True,      # keep only the weights of the best model so far
        verbose=1
    )
    return checkpoint
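The training cell below also references an early_stopping callback that is not defined in the excerpt above. A minimal sketch, assuming it monitors validation accuracy with a patience of 3 epochs (both assumptions of mine), along with creating the two checkpoint callbacks:

def get_early_stopping():
    # Assumed helper: stop training once validation accuracy has not
    # improved for 3 consecutive epochs.
    return EarlyStopping(monitor='val_accuracy', patience=3)

checkpoint_every_epoch = get_checkpoint_every_epoch()
checkpoint_best_only = get_checkpoint_best_only()
early_stopping = get_early_stopping()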

6. Training

# Train model using the callbacks you just created
callbacks = [checkpoint_every_epoch, checkpoint_best_only, early_stopping]
model.fit(
    x_train, y_train,                  # the training data
    epochs=20,                         # train for up to 20 epochs
    validation_data=(x_test, y_test),  # use the test set for validation; validation_split=0.15
                                       # would also work, but since the validation set is never
                                       # trained on, using the test set directly is fine here
    callbacks=callbacks                # finally, attach the callbacks
)
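That covers fit; the last of the 7 steps, evaluate, closes the loop just as in the MNIST quickstart at the top (the formatted print is my addition):

# 7. evaluate: measure loss and accuracy on the held-out test set
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"Test accuracy: {test_acc:.3f}")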
