Introduction to TensorFlow

This article gives a brief introduction to TensorFlow.

TensorFlow is an open-source software library for machine learning and deep learning applications such as neural networks. It uses dataflow programming across a wide range of tasks and serves as a symbolic math library used in both research and production.

TensorFlow supports general-purpose computing on graphics processing units and can run on multiple CPUs and GPUs, unlike the reference implementation, which runs on a single device. It is available on 64-bit Linux, macOS, and Windows, as well as on mobile platforms such as Android and iOS. The name TensorFlow derives from the operations that neural networks perform on multidimensional data arrays, which are called tensors.

Also Read: Introduction to Deep Learning

Also Read: Introduction to Neural Networks

TensorFlow offers multiple application programming interfaces (APIs). The lowest-level API, TensorFlow Core, provides complete programming control. TensorFlow Core is generally used by machine learning researchers and others who need fine-grained control over their models. The higher-level APIs make repetitive tasks easier and keep behavior consistent between different users.

The higher-level APIs are easier to use than TensorFlow Core and are built on top of it.
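As a minimal sketch of the contrast (the model below is hypothetical, not taken from the article): the high-level Keras API bundled with TensorFlow defines and compiles a small classifier in a few lines, whereas TensorFlow Core would require building the graph and training loop by hand.

```python
import tensorflow as tf

# Hypothetical two-layer classifier built with the high-level Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Running a dummy batch through the model builds its weights.
outputs = model(tf.zeros((2, 4)))
print(outputs.shape)  # (2, 3): one probability vector per input row
```

With TensorFlow Core, each weight matrix, matrix multiplication, and gradient-descent step in this model would have to be written out explicitly.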

Tensors

A tensor is the central unit of data in TensorFlow: a set of primitive values shaped into a multidimensional array. TensorFlow generalizes vectors and matrices to higher dimensions; a tensor is represented as an n-dimensional array of a base data type, where n is the rank of the tensor. In TensorFlow programming, tf.Tensor is the main object that is manipulated and passed around. A tf.Tensor represents a partially computed value: first a graph of tf.Tensor objects is built, describing how each tensor is computed from the others.

A tf.Tensor has two properties: a data type and a shape. Every element of a tensor has the same, known data type, while the shape (the number of dimensions and the size of each) may be only partially known.
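These properties can be illustrated with a few small tensors of increasing rank; every element of a given tensor shares one data type, and the shape gives the size along each dimension.

```python
import tensorflow as tf

# Tensors of increasing rank, created from Python literals.
scalar = tf.constant(3.0)                # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])    # rank 1, shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])   # rank 2, shape (2, 2)

print(vector.dtype)   # float32
print(matrix.shape)   # (2, 2)
```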

How TensorFlow Interacts with GPUs

By default, TensorFlow maps nearly all of the GPU memory visible to the process. It does this to use the devices more efficiently by reducing fragmentation of the relatively scarce GPU memory. TensorFlow provides two session configuration options to control memory growth, so that the process allocates only the subset of memory it actually needs.

The first configuration option is allow_growth, which allocates GPU memory based on runtime allocations: it starts by allocating very little memory and then grows the region as the TensorFlow process needs more. Note that it never releases memory back, since doing so could lead to even worse memory fragmentation. To turn this option on, set it in the ConfigProto passed to the session:
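The article's ConfigProto snippet appears to have been lost; the following is a sketch of how this option is typically set, assuming the TF 1.x session API implied by the mention of ConfigProto (TF 2.x replaces this with tf.config.experimental.set_memory_growth).

```python
import tensorflow as tf

# TF 1.x style, matching the ConfigProto mentioned above.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)

# TF 2.x equivalent (enable growth per visible GPU):
# for gpu in tf.config.experimental.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```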

The second configuration option is per_process_gpu_memory_fraction, which determines the fraction of the total memory of each visible GPU that the process is allowed to allocate.
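A sketch of setting this option, again assuming the TF 1.x ConfigProto API; the 0.4 fraction is an illustrative value, not one given in the article.

```python
import tensorflow as tf

# Allow this process to allocate at most 40% of each visible
# GPU's total memory (TF 1.x style).
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config)
```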

About Author:

Shubham Sharma is currently working as an Analytics Engineer in the data science domain, with around two years of experience in data science. He is skilled in Python, Pandas, Anaconda, TensorFlow, Keras, scikit-learn, NumPy, SciPy, Microsoft Excel, SQL, Cassandra, statistical data analysis, Hadoop, Hive, Pig, Spark, and PySpark. Connect with him at shubhamsharma1318@gmail.com.

Comment below if you have any queries related to the above introduction to TensorFlow.
