First, use Anaconda to create a Python 3.5 environment.
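For reference, creating and activating the environment looks roughly like this (the environment name deepg matches the interpreter path in the run output further down; any name works):

conda create -n deepg python=3.5
conda activate deepg    (or "activate deepg" on older conda versions)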
Then install the GPU build with pip install -i https://pypi.tuna.tsinghua.edu.cn/simple tf-nightly-gpu. The version installed here is tf_nightly_gpu-1.14.1.dev20190525. Installing into a fresh environment this way avoids package conflicts.
Make sure the NVIDIA driver version is at least 425, then install CUDA 10.0. It must not be 10.1, because tensorflow-gpu looks for cufft64_100.dll, and with any other CUDA version that DLL will not be found.
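You can verify both versions from the command line (nvidia-smi ships with the driver, nvcc with the CUDA toolkit):

nvidia-smi        (the header shows the installed driver version)
nvcc --version    (should report "release 10.0")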
Next install cuDNN; version 7.5 is fine. The download link is in the earlier post on installing common libraries. Installing cuDNN only requires copying three files into the CUDA directory.
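Concretely, the three files in the cuDNN zip typically map to the matching subfolders of the CUDA install directory (paths below assume the default CUDA 10.0 install location on Windows):

bin\cudnn64_7.dll  ->  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin
include\cudnn.h    ->  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include
lib\x64\cudnn.lib  ->  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\lib\x64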
Once the above steps are done, TensorFlow is ready to run.
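Before running the full example below, a quick sanity check confirms TensorFlow can actually see the GPU. This is a minimal sketch using the TF 1.x API:

import tensorflow as tf
from tensorflow.python.client import device_lib

# Prints True if a CUDA-capable GPU is visible to TensorFlow
print(tf.test.is_gpu_available())
# Should list /device:GPU:0 alongside the CPU
print(device_lib.list_local_devices())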
Full environment:
Package              Version
-------------------- --------------------
absl-py              0.7.1
astor                0.8.0
certifi              2018.8.24
gast                 0.2.2
google-pasta         0.1.6
grpcio               1.21.1
h5py                 2.9.0
Keras-Applications   1.0.7
Keras-Preprocessing  1.0.9
Markdown             3.1.1
numpy                1.16.3
pip                  10.0.1
protobuf             3.7.1
setuptools           40.2.0
six                  1.12.0
tb-nightly           1.14.0a20190525
termcolor            1.1.0
tf-estimator-nightly 1.14.0.dev2019052401
tf-nightly-gpu       1.14.1.dev20190525
Werkzeug             0.15.4
wheel                0.31.1
wincertstore         0.2
wrapt                1.11.1

To upgrade the package:
pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple tf-nightly-gpu

To upgrade the Python environment:

conda install -c anaconda python
Once everything is installed, run the following code. If your results look similar to the output below, the setup is working.
import tensorflow as tf
import numpy as np
import datetime

starttime = datetime.datetime.now()

# Generate training data: y = 2x plus Gaussian noise
train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3  # y = 2x + b

# Training parameters
train_epochs = 200
display_step = 4

# Moving average, for smoothing displayed model parameters
def moving_average(a, w=10):
    if len(a) < w:
        return a[:]
    return [val if i < w else sum(a[(i - w):i]) / w for i, val in enumerate(a)]

# config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
# config.gpu_options.allow_growth = True

with tf.Session() as sess:
    # with tf.device("/gpu:0"):
    # Build the model
    X = tf.placeholder("float")
    Y = tf.placeholder("float")
    W = tf.Variable(tf.random_normal([1]), name="weight")
    b = tf.Variable(tf.zeros([1]), name='bias')
    z = tf.multiply(X, W) + b
    cost = tf.reduce_mean(tf.square(Y - z))  # loss function (mean squared error)
    learning_rate = 0.01  # smaller learning rate: higher precision, slower training
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)  # gradient descent
    pltdata = {'batchsize': [], "loss": []}  # store batch indices and loss values
    sess.run(tf.global_variables_initializer())

    # Feed data into the model
    for epoch in range(train_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})
            if epoch % display_step == 0:
                loss = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
                print("Epoch", epoch + 1, "cost", loss, "W=", sess.run(W), "b=", sess.run(b))
                if not (loss == 'NA'):
                    pltdata['batchsize'].append(epoch)
                    pltdata['loss'].append(loss)

    print('Done.')
    print('cost=', sess.run(cost, feed_dict={X: train_X, Y: train_Y}), 'W=', sess.run(W), 'b=', sess.run(b))
    # plt.show()

    # Use the model to predict
    print(sess.run(z, feed_dict={X: 0.2}))

endtime = datetime.datetime.now()
print((endtime - starttime).seconds)

Run output:
C:\ProgramData\Anaconda3\envs\deepg\python.exe C:/projects/p520/p1.py
WARNING: Logging before flag parsing goes to stderr.
W0526 10:26:26.659331  2308 deprecation_wrapper.py:119] From C:/projects/p520/p1.py:23: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2019-05-26 10:26:26.668002: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library nvcuda.dll
2019-05-26 10:26:26.752551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.755
pciBusID: 0000:01:00.0
2019-05-26 10:26:26.752719: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-05-26 10:26:26.753032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-05-26 10:26:26.753378: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-05-26 10:26:26.754949: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.755
pciBusID: 0000:01:00.0
2019-05-26 10:26:26.755085: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-05-26 10:26:26.755354: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-05-26 10:26:27.334236: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-26 10:26:27.334359: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0
2019-05-26 10:26:27.334435: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N
2019-05-26 10:26:27.334946: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8694 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)
W0526 10:26:27.336520  2308 deprecation_wrapper.py:119] From C:/projects/p520/p1.py:26: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0526 10:26:27.337517  2308 deprecation_wrapper.py:119] From C:/projects/p520/p1.py:28: The name tf.random_normal is deprecated. Please use tf.random.normal instead.
W0526 10:26:27.345496  2308 deprecation_wrapper.py:119] From C:/projects/p520/p1.py:34: The name tf.train.GradientDescentOptimizer is deprecated. Please use tf.compat.v1.train.GradientDescentOptimizer instead.
W0526 10:26:27.371427  2308 deprecation_wrapper.py:119] From C:/projects/p520/p1.py:37: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.
Epoch 1 cost 0.7211578 W= [0.63478076] b= [-0.03276369]
Epoch 1 cost 0.70252275 W= [0.6578704] b= [-0.05632939]
Epoch 1 cost 0.68547904 W= [0.6809186] b= [-0.08034799]
Epoch 1 cost 0.6711159 W= [0.70232224] b= [-0.10313256]
Epoch 1 cost 0.6598311 W= [0.72094744] b= [-0.12339516]
Epoch 1 cost 0.6521096 W= [0.7350165] b= [-0.13904506]
Epoch 1 cost 0.64300734 W= [0.75364625] b= [-0.1602444]
Epoch 1 cost 0.63379073 W= [0.776071] b= [-0.18636262]
Epoch 1 cost 0.6308198 W= [0.78463596] b= [-0.19657867]
Epoch 1 cost 0.6242468 W= [0.8084542] b= [-0.22568986]
Epoch 1 cost 0.62169486 W= [0.8217175] b= [-0.24231094]
Epoch 1 cost 0.62036306 W= [0.83193773] b= [-0.25545126]
Epoch 1 cost 0.6200135 W= [0.8361488] b= [-0.26100987]
...
Epoch 153 cost 0.09066088 W= [1.9745536] b= [0.00509782]
Epoch 153 cost 0.09065532 W= [1.9748045] b= [0.00535933]
Epoch 153 cost 0.09061071 W= [1.9772507] b= [0.007856]
Epoch 153 cost 0.0905913 W= [1.978773] b= [0.00937827]
Epoch 157 cost 0.09057157 W= [1.984189] b= [0.00396216]
Epoch 157 cost 0.0905696 W= [1.9820896] b= [0.00610489]
Epoch 157 cost 0.09056988 W= [1.9819486] b= [0.00625191]
Epoch 157 cost 0.09056974 W= [1.9820172] b= [0.00617894]
Epoch 157 cost 0.090572976 W= [1.9810274] b= [0.00725581]
Epoch 157 cost 0.09061596 W= [1.9770781] b= [0.01164888]
Epoch 165 cost 0.09081806 W= [1.987512] b= [-0.00785136]
Epoch 165 cost 0.09068552 W= [1.9866278] b= [-0.00324412]
Epoch 165 cost 0.0906076 W= [1.9859699] b= [0.00058711]
Epoch 165 cost 0.09073182 W= [1.9868168] b= [-0.0050025]
Epoch 165 cost 0.09069343 W= [1.9866265] b= [-0.00355392]
Epoch 165 cost 0.09054514 W= [1.9855735] b= [0.00592261]
Epoch 165 cost 0.090541765 W= [1.9855192] b= [0.00652042]
Epoch 165 cost 0.0905363 W= [1.9854151] b= [0.00799263]
Epoch 165 cost 0.090542965 W= [1.9854988] b= [0.00633499]
Epoch 165 cost 0.09055714 W= [1.9855549] b= [0.00448342]
Epoch 165 cost 0.09053434 W= [1.9855] b= [0.00992332]
Epoch 165 cost 0.09056041 W= [1.9855462] b= [0.01449925]
Epoch 165 cost 0.09053523 W= [1.9853697] b= [0.00867187]
Epoch 173 cost 0.090757035 W= [1.9922602] b= [-0.00646905]
Epoch 173 cost 0.09082645 W= [1.9939458] b= [-0.00858147]
Epoch 173 cost 0.090811804 W= [1.9936193] b= [-0.00816173]
Epoch 173 cost 0.09062753 W= [1.988243] b= [-0.0010649]
Epoch 173 cost 0.0906094 W= [1.9874208] b= [5.0043625e-05]
Epoch 173 cost 0.09056159 W= [1.9824991] b= [0.00691267]
Epoch 173 cost 0.09057003 W= [1.9846733] b= [0.00379327]
Epoch 173 cost 0.09056097 W= [1.9826927] b= [0.00671967]
Epoch 173 cost 0.0905957 W= [1.9855791] b= [0.00144337]
Epoch 173 cost 0.09056565 W= [1.9849229] b= [0.00404223]
Epoch 173 cost 0.09060622 W= [1.9856844] b= [0.00076465]
Epoch 173 cost 0.09081806 W= [1.987512] b= [-0.00785136]
Epoch 173 cost 0.09068552 W= [1.9866278] b= [-0.00324412]
Epoch 173 cost 0.0906076 W= [1.9859699] b= [0.00058711]
Epoch 173 cost 0.09073182 W= [1.9868168] b= [-0.0050025]
Epoch 173 cost 0.09069343 W= [1.9866265] b= [-0.00355392]
Epoch 173 cost 0.09054514 W= [1.9855735] b= [0.00592261]
Epoch 173 cost 0.090541765 W= [1.9855192] b= [0.00652042]
Epoch 173 cost 0.0905363 W= [1.9854151]