This is our new platform, provided by Google. It is amazing because it is so accurate!


In our deep learning project for digital marketing, we need superior tools to perform data analysis and deep learning. I have been watching “TensorFlow”, open-source software provided by Google, since it was released in November 2015. According to one of the latest surveys by KDnuggets, TensorFlow is the top-ranked tool for deep learning (H2O, which our company uses as its main AI engine, is also becoming popular) (1).

I tried to perform an image recognition task with TensorFlow to see how it works. These are the results of my experiment. MNIST, a dataset of handwritten digits from 0 to 9, is used for the experiment, and I chose a convolutional network for the task. How well can TensorFlow classify the digits?

[Figure: sample handwritten digits from MNIST]

I set up the TensorFlow program in a Jupyter notebook as shown below. It comes from the TensorFlow tutorials.

[Screenshot: the TensorFlow program in the Jupyter notebook]
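As the screenshot is hard to read here, the following is a condensed sketch of that tutorial-style convolutional network, assuming the TensorFlow 1.x API of the time. It is an approximation of the tutorial code, not the exact program in my notebook, and the training settings (batch size, number of steps) are illustrative.

# A condensed sketch of the tutorial-style convolutional network for MNIST
# (assumes a TensorFlow 1.x installation; not the exact code from the screenshot).
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])   # 28x28 pixels, flattened
y_ = tf.placeholder(tf.float32, [None, 10])   # one-hot digit labels
x_image = tf.reshape(x, [-1, 28, 28, 1])

def weight(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

# Two convolution + max-pooling stages: 32 then 64 feature maps of 5x5 filters.
h_conv1 = tf.nn.relu(tf.nn.conv2d(x_image, weight([5, 5, 1, 32]),
                                  strides=[1, 1, 1, 1], padding="SAME") + bias([32]))
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
h_conv2 = tf.nn.relu(tf.nn.conv2d(h_pool1, weight([5, 5, 32, 64]),
                                  strides=[1, 1, 1, 1], padding="SAME") + bias([64]))
h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

# Fully connected layer with dropout, then a 10-way output layer.
h_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_flat, weight([7 * 7 * 64, 1024])) + bias([1024]))
keep_prob = tf.placeholder(tf.float32)
y = tf.matmul(tf.nn.dropout(h_fc1, keep_prob), weight([1024, 10])) + bias([10])

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
accuracy = tf.reduce_mean(
    tf.cast(tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)), tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(20000):                    # illustrative number of training steps
        batch = mnist.train.next_batch(50)
        sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                        y_: mnist.test.labels, keep_prob: 1.0}))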

 

This is the result, obtained after 80 minutes of training. My machine is a MacBook Air 11 (1.4 GHz Intel Core i5, 4 GB memory).

[Screenshot: training output of the TensorFlow model]

Can you see the accuracy rate? The accuracy is 0.9929, so the error rate is just 0.71%! It is amazing!

[Screenshot: the final test accuracy of 0.9929]

Based on my experiment, TensorFlow is an awesome tool for deep learning. I found that many other algorithms, such as LSTM and reinforcement learning, are also available in TensorFlow. The more algorithms we have, the more flexible our strategies for digital marketing solutions can be.

 

Now we have this awesome tool to perform deep learning, and from now on we can analyze a lot of data with TensorFlow. I will provide good insights from the data in the project to promote digital marketing. As I said before, “TensorFlow” is open-source software, so it is free to use in our businesses. No fees are required. This is a big advantage for us!

I cannot say TensorFlow is a tool for beginners, as it is a programming framework for deep learning (H2O can be operated through a GUI without programming). If you are familiar with Python or similar languages, it is for you! You can download and use it without paying any fees, so you can try it by yourself. This is my strong recommendation!

 

TensorFlow: Large-scale machine learning on heterogeneous systems

1: R, Python Duel As Top Analytics, Data Science software – KDnuggets 2016 Software Poll Results

http://www.kdnuggets.com/2016/06/r-python-top-analytics-data-mining-data-science-software.html

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.

 

“DEEP LEARNING PROJECT” starts now. I believe it will work in digital marketing and economic analysis


As the new year starts, I would like to set up a new project for my company. This is beneficial not only for my company but also for readers of this article, because the project will provide good examples of predictive analytics and of implementing new tools and platforms. The new project is called the “Deep Learning project” because “Deep Learning” is used as its core calculation engine. Through the project, I would like to create a “predictive analytics environment”. Let me explain the details.

 

1. What is the goal of the project?

There are three goals of the project.

  • Obtain knowledge and expertise of predictive analytics
  • Obtain solutions for data-driven management
  • Obtain basic knowledge of Deep Learning

As big data becomes more and more available, we need to know how to consume it and extract insights from it so that we can make better business decisions. Predictive analytics is a key to data-driven management, as it can predict “what comes next” based on data. I hope you can obtain expertise in predictive analytics by reading my articles about the project. I believe this is important for us, as we are in the digital economy now and will be in the future.

 

2. Why is “Deep Learning” used in the project?

Since November last year, I have tried “Deep Learning” many times to perform predictive analytics, and I have found that it is very accurate. It is sometimes said that it requires too much time to solve problems, but in my case I could solve many problems within three hours, so I consider that “Deep Learning” can solve problems within a reasonable time.

In the project I would like to develop the skill of tuning parameters effectively, as “Deep Learning” requires several parameter settings, such as the number of hidden layers. I would like to focus on how the number of layers, the number of neurons, the activation functions, regularization and drop-out can be set according to the dataset; I think these are the keys to developing predictive models with good accuracy (a sketch of such settings follows below). I have challenged the MNIST handwritten-digit classification, and my error rate has improved to 1.9%. This was done with H2O, an awesome analytics tool, and a MacBook Air 11, which is just a normal laptop. I would like to set up a cluster on AWS in order to improve the error rate further. “Spark”, which is open source, is one of the candidates for setting up the cluster.

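As a concrete illustration of these tuning knobs, here is a minimal sketch using H2O’s Python API (the project may equally use H2O Flow or R). Every value shown is a hypothetical setting for illustration, not a tuned recommendation.

# A minimal sketch of the parameter settings discussed above, using H2O's Python API.
# All values are hypothetical illustrations, not tuned recommendations.
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

model = H2ODeepLearningEstimator(
    hidden=[200, 200, 200],                 # number of hidden layers and neurons per layer
    activation="RectifierWithDropout",      # activation function (ReLU with drop-out)
    hidden_dropout_ratios=[0.5, 0.5, 0.5],  # drop-out ratio for each hidden layer
    input_dropout_ratio=0.1,                # drop-out applied to the input features
    l1=1e-5,                                # L1 regularization
    l2=1e-5,                                # L2 regularization
    epochs=10,                              # passes over the training data
)
# model.train(x=feature_columns, y=label_column, training_frame=train_frame)
# would then fit the network to an H2OFrame.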

3. What businesses can benefit from introducing “Deep Learning”?

“Deep Learning” is very flexible, so it can be applied to many problems across industries. Healthcare, finance, retail, travel, and food and beverage might benefit from introducing “Deep Learning”. Governments could benefit, too. In the project, I would like to focus on the following areas.

  • Digital marketing
  • Economic analysis

First, I would like to create a database to store the data to be analyzed. Once it is created, I will perform predictive analytics on “digital marketing” and “economic analysis”. Best practices will be shared with you here to reach our goal of “obtaining knowledge and expertise of predictive analytics”. Deep Learning is relatively new for both of these problems, so I expect new insights to be obtained. For digital marketing, I would like to focus on social media and on measuring the effectiveness of digital marketing strategies. “Natural language processing” has been developing recently at astonishing speed, so I believe there could be a good way to analyze text data. If you have any suggestions on predictive analytics in digital marketing, could you let me know? They are always welcome!

 

I use open-source software to create the predictive analytics environment. Therefore, it is very easy for you to create a similar environment on your own system or cloud. I believe open source is a key to developing superior predictive models, as everyone can participate in the project. You do not need to pay any fees to introduce the tools used in the project, because they are open source. Ownership of the problems should be ours, rather than the software vendors’. Why don’t you join us and enjoy it! If you want to receive updates on the project, could you sign up here?


This is my first “Deep learning” with “R+H2O”. It is beyond my expectations!


Last Sunday, I tried “deep learning” in H2O because I need this method of analysis in many cases. H2O can be called from R, so it is easy to integrate H2O into an R workflow. The result is completely beyond my expectations. Let us look at the details now!

1. Data

The data used in the analysis is “The MNIST database of handwritten digits”. It is well known among data scientists because it is frequently used to validate the performance of statistical models. The handwritten digits look like this (1).

[Figure: sample handwritten digits from the MNIST database (1)]

Each row of the data contains the 28^2 = 784 raw grayscale pixel values, from 0 to 255, of a digitized digit (0 to 9). The original MNIST data set is as follows.

  • Training set of 60,000 examples
  • Test set of 10,000 examples
  • Number of features: 784 (28×28 pixels)

The data used in this analysis can be obtained from the website (a training set of 19,000 examples and a test set of 10,000 examples).
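For reference, here is a minimal sketch of loading such pixel-level CSV files into H2O through its Python API. The file names are hypothetical placeholders, and the label is assumed to be the last column.

# A minimal sketch of loading the MNIST CSVs into H2O (Python API).
# File names are hypothetical; the label is assumed to be the last column.
import h2o

h2o.init()  # start or connect to a local H2O instance

train = h2o.import_file("mnist_train_19k.csv")   # 19,000 rows: 784 pixel columns + 1 label
test = h2o.import_file("mnist_test_10k.csv")     # 10,000 rows

label = train.columns[-1]
train[label] = train[label].asfactor()   # treat the digits 0-9 as classes, not numbers
test[label] = test[label].asfactor()

print(train.dim, test.dim)               # expect [19000, 785] and [10000, 785]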

2. Developing models

Statistical models learn from the training set and predict what each digit is on the test set. The error rate is obtained as “number of wrong predictions / 10,000”. The world record is 0.83% for models without convolutional layers, data augmentation (distortions) or unsupervised pre-training (2). This means that the model makes only 83 wrong predictions on the 10,000 test samples.

This is a screenshot of RStudio, an IDE for R. I called H2O from R and wrote the code “h2o.deeplearning()”. The details are shown in the blue box below. I developed a model with 2 hidden layers of 50 units each. The error rate is 15.29% (in the red box), so the model needs more improvement.

[Screenshot: RStudio session showing an error rate of 15.29%]

Then I increased the number of layers and their sizes. This time, I developed a model with 3 hidden layers of 1024, 1024, and 2048 units. The error rate is 3.22%, much better than before (in the red box). Training took about 23 minutes, so there is no need for more powerful machines or clusters so far (I used only my MacBook Air 11 in this analysis). I think I can improve the model further if I tune the parameters carefully.

[Screenshot: RStudio session showing an error rate of 3.22%]
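The screenshots show the R session. To keep the code examples in this article in one language, here is a rough Python-API equivalent of those h2o.deeplearning() calls, reusing the hypothetical file names from the data section. It is a sketch under the assumption that only the hidden-layer sizes differ between the two models; it is not the exact code from the screenshots.

# A rough Python-API equivalent of the two h2o.deeplearning() models described above.
# File names, column handling and epochs are assumptions for illustration.
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()
train = h2o.import_file("mnist_train_19k.csv")
test = h2o.import_file("mnist_test_10k.csv")
label = train.columns[-1]
train[label] = train[label].asfactor()
test[label] = test[label].asfactor()
features = [c for c in train.columns if c != label]

# First model: 2 hidden layers of 50 units each, as in the 15.29% run above.
small = H2ODeepLearningEstimator(hidden=[50, 50], epochs=10)
small.train(x=features, y=label, training_frame=train)

# Second model: 3 hidden layers of 1024, 1024 and 2048 units, as in the 3.22% run above.
large = H2ODeepLearningEstimator(hidden=[1024, 1024, 2048], epochs=10)
large.train(x=features, y=label, training_frame=train)

# Printing the test-set metrics shows the confusion matrix, including the overall error rate.
print(small.model_performance(test_data=test))
print(large.model_performance(test_data=test))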

Usually, programming deep learning is a little complicated, but H2O enables us to use deep learning without programming when the graphical user interface “H2O Flow” is used. When you would like to use R, the deep learning command that calls H2O is similar to the commands for a linear model (lm) or a generalized linear model (glm) in R. Therefore, it is easy to use H2O with R.

This is my first deep learning with R+H2O. I found that it could be used for a wide variety of data analysis cases. When I am not satisfied with traditional methods, such as logistic regression, I can use deep learning without difficulty. Although it needs a little parameter tuning, such as the number and sizes of layers, it might bring better results, as shown in my experiment. I would like to try “R+H2O” in Kaggle competitions, where many experts compete for the best results in predictive analytics.

 

P.S.

The strongest competitor to H2O appeared on 9 November 2015: “TensorFlow” from Google. Next week, I will report on this open-source software.

 

Source

1. The image from GitHub  cazala/mnist

https://github.com/cazala/mnist

2. The Definitive Performance Tuning Guide for H2O Deep Learning, Arno Candel, February 26, 2015

http://h2o.ai/blog/2015/02/deep-learning-performance/

 

Note: Toshifumi Kuga’s opinions and analyses are personal views and are intended to be for informational purposes and general interest only, and should not be construed as individual investment advice or a solicitation to buy, sell or hold any security or to adopt any investment strategy. The information in this article is rendered as at the publication date and may change without notice, and it is not intended as a complete analysis of every material fact regarding any country, region, market or investment.

Data from third-party sources may have been used in the preparation of this material, and I, the author of the article, have not independently verified or validated such data. I and TOSHI STATS SDN. BHD. accept no liability whatsoever for any loss arising from the use of this information, and reliance upon the comments, opinions and analyses in the material is at the sole discretion of the user.