What is the marketing strategy in the age of “everything digital”?


In July, I researched TensorFlow, a deep learning library from Google, and performed several classification tasks. Although it is open-source software and free for everyone, its performance is incredible, as I said in my last article.

When I performed image classification tasks with TensorFlow, I found that computers can see our world better and better as deep learning algorithms improve dramatically. In particular, they are getting better at extracting “features”, which are what we need to classify images.

To computers, images are just sequences of numbers. Some of the features they extract are therefore difficult for us to interpret, but computers can work with them anyway. This means computers might see things in images that we cannot. This is amazing!

Open CV


Open CV2

This is an example of how images are represented as sequences of numbers. You can see many numbers above (these are just a small part of them). These numbers can be converted into the image above, which we can see. But computers cannot see the image directly; they can only see it through these numbers. We, on the other hand, cannot understand the sequence of numbers at all, as it is far too complicated. It is interesting.
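As a toy illustration of this idea (using a made-up 4×4 patch, not the actual image above), here is how a grayscale image is just a grid of numbers to the computer:

```python
# A hypothetical 4x4 grayscale patch: each number is a pixel brightness
# from 0 (black) to 255 (white). The computer "sees" only these values.
patch = [
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [  0,   0, 255, 255],
    [128, 128, 128, 128],
]

flat = [v for row in patch for v in row]    # flatten to one sequence of numbers
normalized = [v / 255.0 for v in flat]      # scale to 0.0-1.0, as models often expect

print(len(flat))        # 16 numbers for a 4x4 image
print(max(normalized))  # 1.0
```

A real photo works the same way, just with far more pixels (and three such grids for red, green, and blue).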

In marketing, when images of products are provided, computers might see what is needed to improve the products and sell more of them, because computers understand these products in a different way than we do. This might give us a new way to think about marketing strategy. Let us take T-shirts as an example. We usually consider things like color, shape, texture, the drawings on them, and price. These are examples of “features” of T-shirts, because T-shirts can be represented by them. But computers might extract more from images of T-shirts than we do; they might create their own features of T-shirts.


With that in mind, I would like to point out three things to consider for a new marketing strategy.

1. Computers might extract more information than we do from the same images

As I explained, computers can see images in a different way than we do. The same can be said of other data, such as text or voice mail, as they are also just sequences of numbers to computers. Therefore, as deep learning algorithms improve, computers might come to understand our customers’ behavior from customer-related data better than we do. We might not always understand how, because computers treat text and speech as sequences of numbers and produce many features that are difficult for us to explain.


2. Computers can see many kinds of data in the massive amounts generated by customers

Not only images but also other data, such as text or voice mail, are available to computers, as they are all just sequences of numbers. Now everything from images to voice messages is going digital. I would like to make computers understand all of it with deep learning. We cannot say in advance what features computers will use when they see images or text, but I believe some useful and beneficial things will be found.


3. Computers can work on a real-time basis

As you know, computers can work 24 hours a day, 365 days a year, so they can operate in real time. When new data comes in, an answer can be obtained immediately. This answer can trigger the next actions by customers, and these actions can also be recorded digitally and fed into computers again. Therefore, a great deal of digital data will be generated as computers operate without stopping, and the interactions with customers might trigger chain reactions. I would like to call this “digital on digital”.


Images, social media, e-mails from customers, voice mail, sentences in promotions, and sensor data from customers are all “digital”. So there are many things that computers can see. Computers may find many features with which to understand customer behavior and preferences in real time. We need system infrastructures that enable computers to see this data and tell us the insights found in it. Do you agree?




Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.



This is my first “deep learning” with “R+H2O”. It is beyond my expectations!


Last Sunday, I tried “deep learning” in H2O, because I need this method of analysis in many cases. H2O can be called from R, so it is easy to integrate H2O into R. The result is completely beyond my expectations. Let us look at it in detail!

1. Data

The data used in this analysis is “The MNIST database of handwritten digits”. It is well known among data scientists because it is frequently used to validate statistical model performance. The handwritten digits look like this (1).


Each row of the data contains the 28^2 = 784 raw grayscale pixel values, from 0 to 255, of the digitized digits (0 to 9). The original MNIST data set is as follows.

  • Training set of 60,000 examples
  • Test set of 10,000 examples
  • Number of features: 784 (28 × 28 pixels)

The data used in this analysis can be obtained from the website (training set of 19,000 examples, test set of 10,000 examples).
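The layout described above can be sketched in a few lines of plain Python (using dummy zero values, not real MNIST pixels): each row holds 784 = 28 × 28 numbers, which can be reshaped into a 28 × 28 grid for viewing or modelling.

```python
# One flattened MNIST-style example: 784 grayscale values (dummy zeros here).
ROWS, COLS = 28, 28
row = [0] * (ROWS * COLS)

# Reshape the flat row back into a 28x28 grid, as a viewer or model would.
image = [row[r * COLS:(r + 1) * COLS] for r in range(ROWS)]

print(len(row))                   # 784 features per example
print(len(image), len(image[0]))  # 28 28
```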



2. Developing models

Statistical models learn from the training set and predict what each digit is on the test set. The error rate is obtained as “number of wrong predictions / 10,000”. The world record is 0.83% for models without convolutional layers, data augmentation (distortions), or unsupervised pre-training (2). This means the record-holding model makes only 83 wrong predictions in 10,000 samples.
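The error-rate arithmetic is simple enough to check directly; with 10,000 test examples, the record rate of 0.83% corresponds to exactly 83 wrong predictions:

```python
# Error rate = wrong predictions / test set size, so
# wrong predictions = test set size * error rate.
TEST_SIZE = 10_000

def wrong_predictions(error_rate_percent):
    """Number of misclassified examples implied by an error rate (in %)."""
    return round(TEST_SIZE * error_rate_percent / 100)

print(wrong_predictions(0.83))   # 83  (the cited world record)
```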

This is an image of RStudio, the IDE for R. I called H2O from R and wrote the code “h2o.deeplearning( )”. The details are shown in the blue box below. I developed a model with 2 hidden layers of 50 units each. The error rate is 15.29% (in the red box). The model needs more improvement.

DL 15.2

Then I increased the number of layers and their sizes. This time, I developed a model with 3 hidden layers of 1024, 1024, and 2048 units. The error rate is 3.22%, much better than before (in the red box). It took about 23 minutes to complete, so there is no need for more powerful machines or clusters so far (I used only my MacBook Air 11 in this analysis). I think I can improve the model further if I tune the parameters carefully.

DL 3.2

Usually, deep learning programming is a little complicated. But H2O enables us to use deep learning without programming at all when the graphical user interface “H2O Flow” is used. If you would like to use R, the command that calls deep learning in H2O is similar to the commands for linear models (lm) or generalized linear models (glm) in R. Therefore, it is easy to use H2O with R.



This is my first deep learning with R+H2O. I found that it could be used for a wide variety of data analysis cases. When I am not satisfied with traditional methods such as logistic regression, I can use deep learning without difficulty. Although it needs a little parameter tuning, such as the number of layers and their sizes, it might bring better results, as my experiment showed. I would like to try “R+H2O” in Kaggle competitions, where many experts compete for the best results in predictive analytics.



The strongest competitor to H2O appeared on 9 Nov 2015: “TensorFlow” from Google. Next week, I will report on this open-source software.



1. The image is from GitHub: cazala/mnist


2. The Definitive Performance Tuning Guide for H2O Deep Learning, Arno Candel, February 26, 2015



Note: Toshifumi Kuga’s opinions and analyses are personal views and are intended to be for informational purposes and general interest only and should not be construed as individual investment advice or solicitation to buy, sell or hold any security or to adopt any investment strategy.  The information in this article is rendered as at publication date and may change without notice and it is not intended as a complete analysis of every material fact regarding any country, region market or investment.

Data from third-party sources may have been used in the preparation of this material and I, Author of the article has not independently verified, validated such data. I and TOSHI STATS.SDN.BHD. accept no liability whatsoever for any loss arising from the use of this information and relies upon the comments, opinions and analyses in the material is at the sole discretion of the user. 

Do-it-yourself image recognition programming. It works!

Recently, Facebook, Pinterest, and Instagram have become very popular. A lot of pictures and images are generated and shared by users. From human faces to landscapes, there is a great variety of pictures on these services. In order to enhance their services, image recognition technology has been developed at an astonishing rate. With this technology, computers can understand what the objects in images are. Today, I would like to re-create simple image recognition by just following tutorials on the web.

Image recognition can be done with the state-of-the-art technique “deep learning”. This is one of the latest developments in computer programming. It sounds so complicated that business people may not want to do it themselves. However, since frameworks for deep learning are provided as open source and good tutorials are available on the web, it is possible for business people to program simple image recognition themselves, even if they have no expertise in computer science. Let me tell you about my experience.


1. Choose programming languages

There are several programming frameworks for deep learning. I chose “Torch”, provided by Facebook Artificial Intelligence Research, as it became open source at the beginning of this year. I think it is easy for beginners to learn.


2.  Find good tutorials for the theory

In order to understand the theory behind image recognition, I found excellent tutorials and lectures provided by the Computer Science Department of the University of Oxford (1). They are a good reference for understanding what deep learning is and what its applications are. Even though the theory is not always required for programming, I recommend watching the tutorials before programming in order to grasp the broad picture of image recognition.


3.  Let us program image recognition and see what the computer says

The program itself is provided by the tutorial (2). The tutorial uses an image dataset with ten classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, and ‘truck’. So the computer should classify each image into one of these 10 classes. I just copied and pasted the programs provided in the tutorial; it took less than 10 minutes. I ran the program and obtained the results. Then I chose three of the results to see what the computer says. The names of the objects above the images are the correct answers. The computer provides its answers as a probability for each class; therefore, the sum of the 10 numbers below each image is close to 1.
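Why do the 10 numbers sum to about 1? Classifiers like the tutorial’s typically end with a softmax, which turns the network’s raw per-class scores into probabilities. A minimal sketch in Python (the scores here are made up, not taken from the Torch model):

```python
import math

classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
scores = [0.1, 0.4, -1.2, 2.0, 0.0, 0.3, 2.5, -0.5, 0.2, 0.8]  # hypothetical

# Softmax: exponentiate each score, then normalize so they sum to 1.
exps = [math.exp(s) for s in scores]
total = sum(exps)
probs = [e / total for e in exps]

print(round(sum(probs), 6))              # 1.0
print(classes[probs.index(max(probs))])  # frog (the highest-scoring class)
```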

Screenshot 2015-08-04 15.59.42

In this result, the correct answer is “frog”. In the computer’s answer, frog has the highest probability, 0.4749.... So the computer made a good guess!


Screenshot 2015-08-04 15.58.41

In this result, the correct answer is “cat”. In the computer’s answer, cat has the highest probability, 0.3508.... So the computer made a good guess!


Screenshot 2015-08-04 16.00.08

In this result, the correct answer is “automobile”. In the computer’s answer, automobile has the highest probability, 0.3622.... So the computer made a good guess! Although this program is not perfect in terms of accuracy on the whole test set, it is a reasonable way to learn the programming of image recognition.
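The rule I used to read the three results above is simply “pick the class with the highest probability”. A small sketch in Python — the top values (0.4749, 0.3508, 0.3622) are from the results above, while the remaining entries in each dictionary are made-up fillers:

```python
def predict(probs):
    """Return the class name with the highest probability."""
    return max(probs, key=probs.get)

# Top probabilities from the three results; other entries are hypothetical.
frog_result = {'frog': 0.4749, 'cat': 0.21, 'deer': 0.12}
cat_result  = {'cat': 0.3508, 'dog': 0.30, 'frog': 0.10}
auto_result = {'automobile': 0.3622, 'truck': 0.25, 'ship': 0.05}

for r in (frog_result, cat_result, auto_result):
    print(predict(r))  # frog, cat, automobile - matching the correct answers
```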


You may not be a computer scientist. However, it is good to program this image recognition yourself, because it enables you to understand how it works based on state-of-the-art deep learning. Once you do it, you no longer need to treat image recognition as a “black box”. That is beneficial in the age of the digital economy.

Yes, Torch and the tutorials are free. No fee is required. Why not try it as a hobby?



1.  Machine Learning: 2014-2015, Nando de Freitas, the Computer Science Department of the University of Oxford, https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/

2.  Deep Learning with Torch – A 60-minute blitz





This new toy looks so bright! Do you know why?


Last week I found that a new toy for infants called “CogniToys” will be developed through a project on Kickstarter, one of the biggest crowdfunding platforms. The developer is Elemental Path, one of the three winners of the IBM Watson competition. Let us see why it is so bright!

According to the company’s website, this toy is connected to the internet. When a child talks to the toy, it can reply, because it can understand what the child says and answer the child’s questions. It usually takes less than one second to answer, because the IBM Watson-powered system is powerful enough to calculate answers quickly.


Let us look at the company’s description of its technology.

“The Elemental Path technology is built to easily license and integrate into existing product lines. Our dialog engine is able to utilize some of the most advanced language processing algorithms available driving the personalization of our platform, and keeping the interaction going between toy and child.”

The key words are: 1. Dialog   2. Language processing   3. Personalization


1. Dialog

This toy communicates with children through conversation rather than programming. Therefore, a technology called “speech recognition” is needed. This technology is also applied in real-time machine translation, such as in Microsoft’s Skype.


2. Language processing

In the field of machine learning, this is called “natural language processing”. Based on the structure of sentences and phrases, the toy understands what children say. IBM Watson is an expert in natural language processing, because Watson had to understand the meaning of questions in the Jeopardy! contest.


3. Personalization

It is beneficial if, when children talk to this toy, it already knows their preferences. This technology is called “personalization”. Through interactions between children and the toy, it can learn what children like. This technology is often used by retailers and services such as Amazon and Netflix. As far as I know, there is no disclosure about the method of personalization used here. I am very interested in how the personalization mechanism works.
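Since the company has not disclosed its method, here is only a toy sketch of one of the simplest personalization ideas: count which topics a child asks about most, and prefer those topics next time. The conversation log below is entirely hypothetical:

```python
from collections import Counter

# Hypothetical history of topics a child has asked the toy about.
conversation_log = [
    'dinosaurs', 'space', 'dinosaurs', 'music', 'dinosaurs', 'space',
]

# Count how often each topic appears and pick the most frequent one.
topic_counts = Counter(conversation_log)
favorite_topic = topic_counts.most_common(1)[0][0]

print(favorite_topic)  # dinosaurs
```

Real systems are far more sophisticated (collaborative filtering, learned embeddings, and so on), but the core idea of learning preferences from interaction history is the same.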


In short, machine learning enables this toy to work and to be smart. Machine learning functions are provided as services by big IT companies such as IBM and Microsoft. Therefore, more applications of this kind are expected to come to market in the future. This is amazing, isn’t it? I imagine future versions of the toy will be able to see images, identify what they are, and share them with children, because image recognition is also offered as a service by big companies.

I ordered one CogniToy through Kickstarter. It is expected to be delivered in November this year. I will report on how it works when I get it!


Note:IBM, IBM Watson Analytics, the IBM logo are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. 

What can computers do now? They look very smart!


Lately I have found that several companies, such as Microsoft and IBM, provide services powered by machine learning. Let us see what is going on now.

These new services are based on recent progress in machine learning. For example, a machine translation service between English and Spanish is provided by Microsoft’s Skype. It uses natural language processing based on machine learning. Although it only started in December 2014, the quality of the service is expected to improve quickly, as many people use it and the computer can learn from their data.


It is worth explaining what computers can do lately, so that you can imagine new services in the future. First, computers can see images and videos and identify what is in them; this is image recognition. Second, they can listen to our speech and interpret what we mean; this is speech recognition. They can also translate one language into another; this is machine translation. Third, computers can search based on concepts rather than keywords. Fourth, they can calculate the best choice among potential options; this is optimization. In short, computers can see, listen, read, speak, and think.

These functions are utilized in many products and services, although you may not notice it. For example, IBM Watson Analytics provides these functions to developers through a platform as a service.


I expect these functions will enable computers to behave just like us. In the initial phase, they may not be very good, just like a baby. However, machine learning allows computers to learn from experience. This means that computers will eventually perform better than we do in many fields. As you know, in Shogi, one of the most popular Japanese board games, artificial players can already beat teams of human professionals. This is amazing!

Going forward, I recommend that you understand how computers are progressing in terms of the functions above. Many companies, such as Google and Facebook, invest a great deal of money in this field. Therefore, many services are anticipated to be released in the near future. Some of these new services may greatly impact our jobs, education, and society. Some of them may give rise to new industries.


Some day, when you are in a room, a computer will identify you by computer vision and then ask if you want a cup of coffee. The computer holds a lot of data, such as the temperature, weather, time, season, and your preferences, and brews the best coffee for you. If you want to know how this coffee was made, the computer provides you with a detailed report. All settings are done automatically. It is the ultimate coffee maker, driven by powerful computer algorithms. Do you want one?




Mobile services will be enhanced by machine learning dramatically in 2015


Merry Christmas! The end of 2014 is approaching. It is a good time to consider what will happen in the fields of machine learning and mobile services in 2015. This week we consider machine translation and image recognition; next week, recommender systems and the Internet of Things, as well as mobile services powered by machine learning. I hope you enjoy it!


1.  Machine translation / text mining

Skype is a top innovator in this field. Microsoft has already announced that machine translation between English and Spanish is available through Skype. So in 2015, it should become possible to translate between English and other languages as well. Text translation is already available in 40 languages in its chat service. So language barriers are getting lower and lower. It is still difficult for computers to answer questions automatically, but that is also gradually improving. Mizuho Bank announced that it will use IBM Watson, one of the most famous artificial intelligence systems, to assist call center operators. These technologies make global services easier to develop, as manuals and frequently asked questions are translated from one language to another automatically. I love that, because my educational programs can be expanded all over the world!


2. Image recognition

Since computers first identified images of cats automatically by deep learning, image recognition technology has progressed dramatically. SoftBank announced that Pepper, its new consumer robot, will be able to read human emotions. In my view, the most important factor in reading emotions must be image recognition of human facial expressions; Pepper could be very good at this, and therefore able to read human emotions. Image recognition suits us well, as every smartphone has a good camera and it is easy for people to take pictures and send them to the cloud and social media. Image recognition can enable us to analyze the massive number of images sent over the internet. That data must be a treasure for us.


These machine learning technologies will be connected to each customer’s mobile phone in 2015. This means mobile services will be enhanced dramatically by machine learning. All the information around us will be collected through the internet and sent to machine learning systems in real time, and machine learning will return the best answer for each individual. This will become the standard model for mobile services, as the speed of computation and communication increases rapidly.

Next week we will consider recommender systems, the Internet of Things, and investment technology. See you next week!