Have readers been busy shopping this Double Eleven (Singles' Day)? In the deep learning community, Google delivered a gift of its own that day: Colab can now use the Tesla P100 GPU for free.

Editor's note: This article is from the WeChat official account "Machine Heart" (ID: almosthuman2014); author: Synced; contributors: Yat Ming, Siyuan.

Recently, a developer on Reddit noticed that his Colab environment was not the same as before when running a training task: Colab reported that the device in use was an NVIDIA Tesla P100 GPU, specifically the 16 GB PCIe version.

After the user posted about it on Reddit, others confirmed that Colab was indeed allocating free P100 GPUs.

Machine Heart has previously covered how to make use of the computing resources on Colab. In April of this year, Colab upgraded its GPU from the antique K80 to the Tesla T4. That new Turing-architecture GPU is well suited to low-precision inference, and training on it is much faster than on the K80. Now Colab has opened up the P100 as well, making two hardware upgrades this year.

Machine Heart verified this immediately: when we selected GPU acceleration, the device printed was indeed a Tesla P100.

[Image: Colab reporting a Tesla P100 when GPU acceleration is selected]
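One quick way to check which GPU you have been allocated is to query the driver directly. Here is a minimal sketch (it assumes the `nvidia-smi` tool is available on the instance, which it is on Colab GPU runtimes; elsewhere it simply reports nothing):

```python
import subprocess

def gpu_name():
    """Return the name of the allocated NVIDIA GPU, or None if none is visible."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None

print(gpu_name() or "No NVIDIA GPU visible")
```

On a lucky Colab session this prints something like "Tesla P100-PCIE-16GB".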

How strong is the P100?

In deep learning workloads, the T4 strikes a good balance between training and inference needs, and it costs far less than the V100. But this time Colab has upgraded the available compute to the P100, which is arguably a top-tier GPU.

[Image: Google Cloud GPU pricing]

And all of this is free. Renting this kind of compute normally is not cheap: as shown above, Machine Heart looked up the current GPU pricing on Google Cloud. According to the table, a T4 costs $1.03 per GPU per hour, while a P100 runs as high as $1.60 per GPU per hour.

That may not sound like much, but consider that training a ResNet-50 on ImageNet with a single P100 takes close to a day (see DAWNBench). Left to run uninterrupted, that comes to more than $40. On Colab, that money stays in your pocket.
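The back-of-the-envelope arithmetic, using the hourly rate quoted above (the 25-hour run length is an illustrative assumption based on the "close to a day" figure):

```python
# Rough cost of one ImageNet ResNet-50 training run on a single P100.
p100_rate_usd_per_hour = 1.60   # Google Cloud rate quoted in the table above
hours = 25                      # assumed: a little over a day, per DAWNBench
cost = p100_rate_usd_per_hour * hours
print(f"~${cost:.2f}")          # → ~$40.00
```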

How strong is the P100's compute? It can fairly be called one of the most cost-effective GPUs for deep learning. Since its release in 2016, the P100 has become a standard choice for model training at many research institutions and companies. Compared with GPUs such as the K80, it has a clear performance advantage.

[Image: performance comparison of the K80 and P100, from NVIDIA's website]

[Image: P100 specifications]

Although the T4 was already a big improvement over the K80, the P100 is considerably stronger than both, which shows that this round of Colab's generosity is well worth taking advantage of.

Colab, far better than you think

Many developers complain that Colab sessions terminate from time to time, and that all installed packages and files are wiped when a session ends. In fact, aside from needing a way to reach Google's services in the first place, most of these problems can be solved; Colab is far stronger than many assume. It supports all the major frameworks, offers generous free compute on hardware such as TPUs and GPUs, and can persist data through Google Drive.

First, the biggest worry: will Colab disconnect? This editor has used it many times, and almost every session, as long as the page stays open, can run for ten hours or more without trouble. In our experience it is best to start a run around 9:00 a.m. Beijing time, because it is just past midnight in North America then and sessions tend to last longer. For a GPU like the T4 or P100, ten-plus hours of free use is a great deal, and even a fairly complex model can get through its initial training.

And what if the session does break? This is where Google Drive comes in. One of Colab's best features is its integration with Google Drive: after training for some number of epochs, you can save the model to Drive, enabling persistent training. Whenever a Colab session dies, we can load the saved model from Drive and continue training.

[Image: the two lines of code that mount Google Drive]

The two lines of code above mount Google Drive at the remote instance's "/content/drive" directory. All subsequent model and dataset operations can be done under this directory, so even if Colab drops the connection, everything is preserved on Google Drive.
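As a sketch of this pattern: the mount call is Colab's standard `google.colab.drive` API, while the checkpoint path and the pickle-based save/load helpers below are illustrative assumptions, not the article's own code.

```python
import os
import pickle

def mount_drive():
    """Mount Google Drive when running inside Colab; no-op elsewhere."""
    try:
        from google.colab import drive  # only available in the Colab runtime
        drive.mount('/content/drive')
        return True
    except ImportError:
        return False

# Hypothetical checkpoint location on the mounted Drive.
CKPT = '/content/drive/My Drive/checkpoints/model.pkl'

def save_checkpoint(state, path=CKPT):
    """Persist training state (e.g. epoch counter and weights) to Drive."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, 'wb') as f:
        pickle.dump(state, f)

def load_checkpoint(path=CKPT):
    """Resume from the last checkpoint, or start fresh if none exists."""
    if os.path.exists(path):
        with open(path, 'rb') as f:
            return pickle.load(f)
    return {'epoch': 0, 'weights': None}
```

Call `save_checkpoint` every few epochs; after a disconnect, `load_checkpoint` picks up where training left off, since the mounted directory lives on Google's side rather than on the ephemeral instance.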

With these two tips, Colab becomes very practical. And if the GPU you are allocated turns out to be a K80, you can restart the Colab runtime a few times, which releases the memory and local files; the GPU is reassigned on each restart, so you can "wait" your way to a P100.

Beyond the all-important framework and compute support, Colab has many other interesting features: calling TensorBoard with the "%" magic, a dark code theme, file browsing and manipulation on the instance, and the recently added Pandas DataFrame visualization.

[Image: Colab's data table extension, which supports interactive sorting and filtering of Pandas DataFrames]
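The table view is switched on via Colab's `google.colab.data_table` module. A minimal sketch, written so the call degrades gracefully outside the Colab runtime:

```python
def enable_interactive_tables():
    """Turn on Colab's interactive DataFrame display, if available."""
    try:
        from google.colab import data_table  # Colab-only module
        data_table.enable_dataframe_formatter()
        return True
    except ImportError:
        return False

print("interactive tables:", enable_interactive_tables())
```

Once enabled, displaying any Pandas DataFrame in a cell renders the sortable, filterable table shown above instead of plain text.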

As Colab gains ever more computing power, features, and components, it is becoming an excellent free tool for beginners and students who have difficulty getting access to enough compute.

Reference link: https://www.reddit.com/r/MachineLearning/comments/duds5d/d_colab_has_p100_gpus/

The cover image is from Pexels.