In November 2015, Google released TensorFlow, and by now it has become extremely high profile. At this point, most people in tech have likely heard of it and think to themselves, "TensorFlow? That's the Google library for machine learning, right?" But those of us, myself included, who aren't involved in machine learning likely don't know much more than that.

Recently, as part of a project exploring interesting programming techniques for building programs that play the card game Bridge, I have started learning about machine learning and TensorFlow. In this post, I will share some of the insight I've gained so far.

I think a useful analogy for describing TensorFlow is this: machine learning is to TensorFlow as Netflix is to AWS. AWS works great for Netflix, and Netflix is its most high-profile customer, but AWS is a much more general-purpose tool. Similarly, TensorFlow works great for machine learning, but it is more general than that. The TensorFlow website itself acknowledges this, saying in the header "TensorFlow is an Open Source Software Library for Machine Intelligence," but in the larger explanation of the library it says "TensorFlow™ is an open source software library for numerical computation using data flow graphs." TensorFlow is an excellent library for performing linear-algebra-based computation. (It just so happens that one of the most common applications of this sort of computation is machine learning.)

When working with TensorFlow, there are two steps. The first step is the responsibility of the programmer: building a computation graph. Using the Python or C/C++ APIs, the programmer initializes nodes describing constants, variables, calculations, and so on. In the second step, TensorFlow processes the graph and evaluates the results. As an example, here is the code to multiply two numbers using TensorFlow.

```
import tensorflow as tf

# Build the graph: two constant nodes and a node that multiplies them.
x1 = tf.constant(4.0)
x2 = tf.constant(3.0)
result = tf.multiply(x1, x2)  # named tf.mul in releases before TF 1.0

# Run the graph in a session.
sess = tf.Session()
sess.run(result)
## 12.0
```

In this sample, we build up a computation graph, declaring two nodes that are constants and a third node that is the result of multiplying them. We then create a TensorFlow session and run the computation.
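The build-then-run idea can be sketched in a few lines of plain Python (this is a toy illustration, not TensorFlow's implementation): node objects are created first, and no arithmetic happens until the graph is explicitly run.

```python
# Toy sketch of a deferred computation graph (plain Python, not TensorFlow).
class Const:
    """A node holding a fixed value."""
    def __init__(self, value):
        self.value = value
    def run(self):
        return self.value

class Mul:
    """A node multiplying the results of two other nodes."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def run(self):
        # Nothing is computed until run() is called on the graph.
        return self.a.run() * self.b.run()

x1 = Const(4.0)
x2 = Const(3.0)
result = Mul(x1, x2)   # builds the graph; no multiplication has happened yet
print(result.run())    # evaluating the graph prints 12.0
```

Separating graph construction from evaluation is what lets a real engine like TensorFlow inspect the whole graph before running it.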

Alternatively, we could declare the nodes as variables, which tells TensorFlow that their values can change. This would be useful, for example, if we wanted TensorFlow to run an optimization on a computation.

```
import tensorflow as tf

# Build the graph, this time with mutable variable nodes.
x1 = tf.Variable(4.0)
x2 = tf.Variable(3.0)
result = tf.multiply(x1, x2)  # named tf.mul in releases before TF 1.0

# Variables must be initialized before the graph can be run.
sess = tf.Session()
init = tf.global_variables_initializer()  # tf.initialize_all_variables() in older releases
sess.run(init)
sess.run(result)
## 12.0
```
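To see why mutable nodes matter for optimization, here is a rough plain-Python sketch (not TensorFlow's actual API) of what an optimizer does: it repeatedly updates a variable in place to reduce a loss, which would be impossible if every node were a constant. The loss function and learning rate below are arbitrary choices for illustration.

```python
# Minimize the loss (x - 3)^2 by gradient descent; its gradient is 2*(x - 3).
x = 10.0                       # the "variable" node, free to change
learning_rate = 0.1

for _ in range(100):
    grad = 2 * (x - 3.0)       # gradient of the loss at the current x
    x = x - learning_rate * grad  # update the variable in place

print(round(x, 4))             # converges to 3.0, the minimum of the loss
```

TensorFlow's optimizers do essentially this, but compute the gradients automatically from the graph.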

Because the entire computation graph is built up front, and because mutable versus immutable data is declared explicitly, TensorFlow can execute the computation very efficiently. It can distribute the computation across multiple machines (graphs are serialized using Google's Protocol Buffers to facilitate this). It can also perform the computation on GPUs using CUDA.

I've found the small number of things I've done so far with TensorFlow to be pretty intuitive, but I definitely haven't pushed very far yet. I will continue to use it as part of my Bridge project and will report back as I learn more.