BERT also works very well as a feature extractor in NLP!

Two years ago, I developed car classification models with ResNet. I used transfer learning to develop the models, as I could prepare only a small number of images. The base model had already been pre-trained on a huge dataset such as ImageNet, so I could extract features from each car image and train a classification model on top of them. It worked very well. If you are interested, please see that article.

Now I am wondering how BERT (1) works as a feature extractor. If it works well, it can be applied to many downstream tasks with ease. Let us try the experiment here. BERT is one of the best Natural Language Processing (NLP) models, developed by Google. I wrote about how BERT works in an earlier article. It is amazing!

Let me explain features a little. A feature here means “how a text can be represented as a vector”. Each word is converted to a number before being input to BERT, and then the whole sentence is converted to a 768-dimensional vector by BERT. In this experiment, feature extraction is done with the BERT module on TensorFlow Hub. Let us look at its website. It says there are two kinds of outputs from BERT…

It means that when text data is input to BERT, the model returns two types of vectors: “one vector for each sentence” and “a sequence of vectors for each sentence” (one per token). For this task we need “one vector for each sentence”, because it is a classification task and a single vector per sentence is enough as input to a classification model. The first 3 vectors out of 3,503 samples are shown below.
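To make this concrete, here is a minimal sketch of pulling both outputs from a multilingual BERT on TensorFlow Hub. It is an illustration, not my exact notebook: the module handles below are the current TF2 versions, and the sentences are placeholders.

    import tensorflow as tf
    import tensorflow_hub as hub
    import tensorflow_text  # noqa: F401, registers ops used by the preprocessing model

    # Assumed handles: current multilingual BERT on TF Hub, not necessarily
    # the exact module version used in this experiment.
    preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3")
    bert = hub.KerasLayer(
        "https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/4",
        trainable=False)  # weights stay fixed: pure feature extraction

    sentences = tf.constant(["This is a news title.", "This is another one."])
    outputs = bert(preprocess(sentences))

    pooled = outputs["pooled_output"]      # one 768-dimensional vector per sentence
    sequence = outputs["sequence_output"]  # one 768-dimensional vector per token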

Here is the training result of the classification model. Accuracy is 82.99% at epoch 105. Although this is reasonable, it is worse than the 88.58% reported in the last article. The difference can be attributed to the advantage of fine-tuning: in this experiment the weights of BERT are fixed, so there is no fine-tuning. If you need more accuracy, try fine-tuning, just like the experiment in the last article.
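For reference, the whole model of this kind of experiment can be sketched as below: a frozen BERT plus a small classification head, where only the head is trained (the handles and hyperparameters are again assumptions for illustration).

    import tensorflow as tf
    import tensorflow_hub as hub
    import tensorflow_text  # noqa: F401

    preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3")
    bert = hub.KerasLayer(
        "https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/4",
        trainable=False)  # no fine-tuning: only the small head below is trained

    inputs = tf.keras.Input(shape=(), dtype=tf.string)
    features = bert(preprocess(inputs))["pooled_output"]  # fixed 768-d sentence vector
    x = tf.keras.layers.Dropout(0.1)(features)
    outputs = tf.keras.layers.Dense(5, activation="softmax")(x)  # 5 news categories

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_titles, train_labels, epochs=105)  # integer class labels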

BERT stands for “Bidirectional Encoder Representations from Transformers”, and it looks like a good tool for feature extraction. In particular, the multilingual model covers 104 languages. It is amazing!

I will perform other experiments with BERT in future articles. Stay tuned!

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Google AI Language, 11 Oct 2018

Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software


BERT performs very well in the classification task in Japanese, too!

As I promised in the last article, I have performed experiments on classifying news titles in Japanese. The result is very good, as I expected. Let me explain the details.

I use the “livedoor news corpus” (2) for this experiment. There are five classes of news titles in this experiment: life, movie, sports, chats, and electronics. Here are the details of the classes. I would like to classify each news title into the correct class.

Then I trained the BERT (1) model on a sample of news titles written in Japanese. Here is the result. The BERT model I used is the multilingual model; all I had to do was fine-tune it for my task. As you can see below, the accuracy is about 88%. That is very good, given that I used a very small sample (3,503 titles for training, 876 for test). Training took less than one minute on colab with a GPU.

With 3 epochs, I confirmed that the accuracy is over 88%.
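If you want to reproduce this kind of fine-tuning, the sketch below shows the idea; compared with pure feature extraction, the substantive changes are trainable=True and a much smaller learning rate. The TF Hub handles and hyperparameters are illustrative assumptions, not my exact setup.

    import tensorflow as tf
    import tensorflow_hub as hub
    import tensorflow_text  # noqa: F401

    preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3")
    bert = hub.KerasLayer(
        "https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/4",
        trainable=True)  # fine-tuning: BERT weights are updated together with the head

    inputs = tf.keras.Input(shape=(), dtype=tf.string)
    x = bert(preprocess(inputs))["pooled_output"]
    outputs = tf.keras.layers.Dense(5, activation="softmax")(x)  # 5 livedoor classes

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(3e-5),  # small LR for fine-tuning
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_titles, train_labels, epochs=3)  # 3 epochs, as in this experiment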

Let me take 10 samples for validation and look at each of them. These samples were not used for training, so they are new to the model. Nine out of ten are classified correctly. That is pretty good, isn’t it?

The beauty is that the pre-trained model is not specific to Japanese. As it is a multilingual model, it should work in many languages with the same kind of fine-tuning I did for Japanese. Therefore it should work in your language, too!

How about this experiment? I will continue to experiment with BERT on many natural language tasks and update my articles soon. Stay tuned!

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Google AI Language, 11 Oct 2018
  2. livedoor news corpus, CC BY-ND 2.1 JP

Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software

BERT performs near the state of the art in question answering! I have confirmed it

Today I am writing about BERT, the new natural language model, again, because it works so well on the question answering task. In my last article I explained how BERT works, so if you are new to BERT, please read it first.

For this experiment, I use the SQuAD v1.1 data, as it is very famous in the field of question answering. Here is the dataset’s own explanation.

“Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowd workers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.” (This is from SQuAD2.0, a new version of Q&A data)

This is a very challenging task for computers to answer correctly. How does BERT do on this task? As you can see below, BERT recorded an F1 score of 90.70 after one hour of training on a TPU on colab in our experiment. That is amazing: based on the SQuAD 1.1 leaderboard below, this would rank third or fourth among entries from top universities and companies, although the leaderboard setting may differ from our experiment. Note also that this is about as good as a human!


I tried both the Base model and the Large model with different batch sizes. The Large model is better than the Base model by around 3 points. The Large model takes around 60 minutes to complete training, while the Base model takes around 30 minutes. I used a TPU on Google colab for training. Here is the result. EM means “exact match”.
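To clarify the two metrics, here is a small sketch of how EM and F1 are computed for a single predicted answer, following the logic of the official SQuAD evaluation script (simplified: the official script also takes the maximum over multiple reference answers).

    import collections
    import re
    import string

    def normalize(text):
        """Lowercase, drop punctuation and articles, squeeze whitespace (SQuAD-style)."""
        text = text.lower()
        text = "".join(ch for ch in text if ch not in set(string.punctuation))
        text = re.sub(r"\b(a|an|the)\b", " ", text)
        return " ".join(text.split())

    def exact_match(prediction, reference):
        return float(normalize(prediction) == normalize(reference))

    def f1_score(prediction, reference):
        pred_tokens = normalize(prediction).split()
        ref_tokens = normalize(reference).split()
        common = collections.Counter(pred_tokens) & collections.Counter(ref_tokens)
        num_same = sum(common.values())
        if num_same == 0:
            return 0.0
        precision = num_same / len(pred_tokens)
        recall = num_same / len(ref_tokens)
        return 2 * precision * recall / (precision + recall)

    print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1.0: articles are dropped
    print(round(f1_score("in Paris, France", "Paris"), 2))  # 0.5: partial token overlap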

Question answering can be applied to many business tasks, such as information extraction from documents and automation of customer centers. It will be exciting when we can apply BERT to businesses in the near future.

 

Next, I would like to perform text classification of news titles in Japanese, because BERT has a multilingual model which works in 104 languages. As I live in Tokyo now, it is easy to find good data for this experiment. I will update my article soon. So stay tuned!


@article{devlin2018bert,
  title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
  author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1810.04805},
  year={2018}
}

Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software

 

“BERT” can be a game changer to accelerate digital transformation!

As Q1 of 2019 comes to an end, I would like to talk about one of the biggest innovations of deep learning in Natural Language Processing (NLP). It is called “BERT”, presented by Google AI in Oct 2018. As far as I know, it is the first model to perform very well on many language tasks, such as sentiment analysis and question answering, without any change to the model itself. It is amazing! Let us start now.

1. How does BERT work?

The secrets of BERT are its structure and its method of training. BERT introduces the transformer as its main building block. I mentioned the transformer before; it is a new structure for extracting information from sequential data. The key is the attention mechanism, which measures “how much attention we should pay to each word in the sentence”. If you want to know more, this is a good reference. Then let us move on to how BERT is trained. BERT stands for “Bidirectional Encoder Representations from Transformers”. For example, the word “bank” has different meanings in “bank account” and “bank of the river”. When a model can learn from data in the forward direction only, it is difficult to distinguish between these meanings of “bank”. But if it can learn in both the forward and backward directions, the model can tell them apart. That is the secret that lets BERT reach the state of the art in many NLP tasks without modification. This is the chart from the research paper (1).
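To give a feel for the attention mechanism itself, here is a toy sketch of scaled dot-product attention in plain NumPy. It is a minimal illustration of the basic formula, not BERT’s actual multi-head implementation, and the 4-word “sentence” is random data.

    import numpy as np

    def scaled_dot_product_attention(q, k, v):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
        d_k = q.shape[-1]
        scores = q @ k.T / np.sqrt(d_k)   # similarity of every word with every other word
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
        return weights @ v, weights

    # Toy example: a "sentence" of 4 words, each represented by an 8-dimensional vector.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    context, attn = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
    print(attn.round(2))  # row i shows how much word i pays attention to each word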

2. How can we apply BERT to our own tasks?

BERT is so large that it needs a lot of data and computing resources such as GPUs/TPUs, so training BERT from scratch takes time and money. But there is no need to worry: Google has released a number of pre-trained BERT models. This is great, because we can use them as base models, and all we have to do is a small amount of additional training to adapt them to our own tasks, such as text classification. This is called “fine-tuning”. The pre-trained models are open source and available to everyone. If you want to know more, please see the blog. The beauty is that one of the pre-trained models is a multilingual model which works in 104 languages without any modification. It is amazing! So it works in your language, too!

3. Can BERT accelerate digital transformation in our daily lives?

I think “yes”, because we are surrounded by a massive amount of documents, such as contracts, customer reports, emails, financial reports, regulatory instructions, newspapers, and so on. It is impossible to read everything and extract the information we need in real time. With BERT, we can develop much better applications for handling large amounts of text data and extracting the needed information efficiently. It is very exciting to think how many applications can be created with BERT in the near future.

I hope you enjoy this article. I am now researching BERT intensively and will update my articles soon. Stay tuned!

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Google AI Language, 11 Oct 2018


Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software

How can we develop machine intelligence with only a little data in text analysis?


Whenever you want to create a machine intelligence model, the first question to ask is “Where is my data?”. It is usually difficult to find good data for building models, because collecting it is time-consuming and can be costly. Unless you work at a company like Google or Facebook, it might be a headache for you. But fortunately, there is a good way to solve this problem: “transfer learning”. Let us find out!

1. Transfer learning

When we train machine intelligence models, we usually use “supervised learning”. It means that we need “teachers” who can tell the model which answer is right. For example, when we want to classify “is this a cat or a dog?”, we need to tell the computer “this is a cat and that is a dog”. It is a powerful method of learning that achieves high accuracy, so most current AI applications are developed with supervised learning. But a problem arises here: there is little data ready for supervised learning. While we have many images on our smartphones, each image carries no label saying what it is, so we have to add this information to each image manually. That takes a long time, because a massive number of images is needed for training. I explained this a little for computer vision in my blog before.

We can say the same thing about text analysis and natural language processing. We have many tweets on the internet, but no one tells you which have positive sentiment and which have negative sentiment, so we would have to label each tweet “positive” or “negative” ourselves. No one wants to do that. This is where “transfer learning” comes in. You do not need to train from scratch; you just transfer someone else’s results into your own model, because someone did similar training before you did! The beauty of transfer learning is that we need only a little data for our own training. No need for a massive amount of data anymore. It makes preparing data far easier for us!


2. “Transformer”

This model (1) is one of the most sophisticated models for machine translation as of 2017. It was created by Google Brain, and as you may know, it achieved state-of-the-art accuracy in neural machine translation at the time it was published. The key architecture of the Transformer is “self-attention”. It can tell us where the model should pay attention among all the words in a sentence, regardless of their positions, using a “Query, Key, and Value” mechanism. The research paper “Attention Is All You Need” is available here. The self-attention mechanism takes time to explain in detail; if you want to know more, this blog is strongly recommended. I just want to say that the self-attention mechanism might be a game changer for developing machine intelligence in the future.

3. Transfer learning based on “Transformer”

It has been more than one year since the Transformer was published, and there are now several variations based on it. I found a good model for the “transfer learning” I mentioned earlier in this article: the “Universal Sentence Encoder” (2). On its website, we can find a good explanation of what it is.

“The Universal Sentence Encoder encodes text into high dimensional vectors that can be used for text classification, semantic similarity, clustering and other natural language tasks.”

The model takes sentences, phrases, or short paragraphs and outputs vectors to be fed into the next process. “universal-sentence-encoder-large” is trained with the Transformer (the lighter version is trained with a different model). The beauty is that the Universal Sentence Encoder has already been trained by Google, and the trained results are available for us to perform transfer learning ourselves. This is great! This chart tells you how it works.

The team at Google claimed that “With transfer learning via sentence embeddings, we observe surprisingly good performance with minimal amounts of supervised training data for a transfer task.” So let me confirm how it works with a little data. I performed a small experiment based on this awesome article: I modified the classification model and changed the number of training samples. With only 100 training samples, I could achieve 79.2% accuracy; with 300 samples, 95.8% accuracy. This is great! I believe these results come from the power of transfer learning with the Universal Sentence Encoder.
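Here is a minimal sketch of that kind of experiment. The hub handle is the current TF2 version of the encoder, and the two-sentence “dataset” is a hypothetical stand-in for the 100-sample setting, so treat it as an illustration of the pipeline rather than the original code.

    import numpy as np
    import tensorflow as tf
    import tensorflow_hub as hub

    # The Universal Sentence Encoder maps each sentence to a 512-dimensional vector.
    embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

    train_texts = ["great movie, loved it", "terrible and boring"]  # stand-in samples
    train_labels = np.array([1, 0])                                 # 1 = positive

    features = embed(train_texts).numpy()  # shape (2, 512); the encoder is not retrained

    # A tiny classifier head on top of the fixed sentence embeddings.
    clf = tf.keras.Sequential([tf.keras.layers.Dense(2, activation="softmax")])
    clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    clf.fit(features, train_labels, epochs=10, verbose=0)
    print(clf.predict(embed(["what a wonderful film"]).numpy()))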


In this article, I introduced transfer learning and performed a small experiment with the latest model, the “Universal Sentence Encoder”. It looks very promising so far. I would like to continue transfer learning experiments and update the results here. Stay tuned!

 

When you need AI consulting, please visit the TOSHI STATS website.


  1. Attention Is All You Need, Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, Google, 12 June 2017
  2. Universal Sentence Encoder, Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil, Google, 29 March 2018

 

Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software

 

This is my first machine intelligence model. It looks good so far!


There are many images on the internet; a lot of people upload selfies to Instagram every day. There is also a lot of text data on the internet, because not only professional writers but many ordinary people express their opinions in blogs and tweets. No one can see every image and every text on the internet, as the volume is huge. In addition, images and texts are sometimes related to each other: for example, people upload images and attach explanations to them. So I have always wondered how we can analyze both images and text at once. There are several methods to do that. I chose the image-captioning model among them, as it is easy to understand how it works.

 

1. What is an image-captioning model?

Before starting the image-captioning project, I had performed computer vision projects and natural language projects independently. Computer vision means, for example, classifying cats and dogs, or detecting a specific type of car and distinguishing it from other types. I have also developed natural language models, such as sentiment analysis of movie reviews. An image-captioning model is a combination of the two: a computer vision model and a natural language model. Let us see the chart below.


The computer takes a picture as input. The encoder then extracts features from the picture; a “feature” means a characteristic of the object. Based on these features, the decoder generates a sentence that describes what the picture shows. This is how our image-captioning model works.

 

2. How can we find a template for the image-captioning model and modify it?

I found a good environment for developing our image-captioning models: “colab”, provided by Google. It is free to use, it offers many templates to start projects from, and a GPU is available for research and experimentation, so it provides the computational power required for developing image-captioning models. I found the original image-captioning template in colab. The template is awesome, as “the attention mechanism” is implemented in it. It uses InceptionV3 as the encoder and a GRU as the decoder. But I would like to try other methods, so I modified the template a little, changing InceptionV3 to DenseNet121 and the GRU to an LSTM. Let us see how it works in my experiment!
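For illustration, here is a highly simplified Keras sketch of that encoder/decoder pairing after the modification: DenseNet121 features feeding an LSTM decoder. This toy version omits the attention mechanism that the actual colab template implements, and the vocabulary size and dimensions are arbitrary assumptions.

    import tensorflow as tf

    VOCAB_SIZE = 5000  # assumed vocabulary size
    EMBED_DIM = 256
    UNITS = 512

    # Encoder: DenseNet121 pre-trained on ImageNet, pooled to one 1024-d feature vector.
    encoder = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", pooling="avg")

    # Decoder: the image feature initializes the LSTM state; words are predicted step by step.
    img_feat = tf.keras.Input(shape=(1024,))               # DenseNet121 pooled feature
    tokens = tf.keras.Input(shape=(None,), dtype="int32")  # caption so far, as token IDs
    h0 = tf.keras.layers.Dense(UNITS, activation="tanh")(img_feat)
    c0 = tf.keras.layers.Dense(UNITS, activation="tanh")(img_feat)
    x = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tokens)
    x = tf.keras.layers.LSTM(UNITS, return_sequences=True)(x, initial_state=[h0, c0])
    logits = tf.keras.layers.Dense(VOCAB_SIZE)(x)          # next-word scores at each step

    decoder = tf.keras.Model([img_feat, tokens], logits)
    decoder.compile(optimizer="adam",
                    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))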

 

3. The results after 3 hours of training

Here is one of the outputs from the experiment with our image-captioning model. It says “a couple of two sugar covered in chocolate frosting are laid on top of a wooden table”. Although it is not perfect, it works quite well. With more data and more computation time, it should become more accurate.


 

This is a first step toward machine intelligence. Of course, there is a long way to go. But by combining images and texts, I believe we can develop many cool applications in the future. In addition, I found that “the attention mechanism” is very powerful for extracting relevant information. I would like to focus on this mechanism to improve our algorithms going forward. Stay tuned!

 

(1) Olah & Carter, “Attention and Augmented Recurrent Neural Networks”, Distill, 2016.

 

When you need AI consulting, please visit the TOSHI STATS website.

Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software


We are starting an AI Lab in our company to research the “attention mechanism” in deep learning

As I said before, I completed the online course “deeplearning.ai”. It is an awesome course that I recommend to everyone. There are many topics to learn in the course, and one of the most interesting for me is the “attention mechanism” in neural translation. So I would like to explain it in detail. Do not worry, I will not use mathematics in this article. Let us start.

 

The definition of the attention mechanism is: “The attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step.” This is natural when we consider how we ourselves translate from one language to another. Human beings pay more attention to specific objects than to others when those objects are more interesting to them. When we are hungry, we tend to look for the sign “restaurant” or “food court” and do not care about the sign “library”, right?

We want computers to do the same thing in translation. Let me consider it again. When we translate English into our mother tongue, such as Japanese, we look at the whole sentence first, then decide which words are most important to us. We do not translate word by word; in other words, we pay more attention to specific words than to others. So we want to introduce the same method when computers perform neural translation.

 

The attention mechanism was originally introduced (1) in Sep 2014, and many variations have been introduced since then. One of the strongest attention models is the “Transformer”, by Google Brain, from June 2017. I think you use Google Translate every day, and it performs very well; but the Transformer is better than the model used in Google Translate. This chart shows the difference between GNMT (Google Translate) and the Transformer (2).

Fortunately, Google provides a framework to facilitate this kind of AI research. It is called “Tensor2Tensor (T2T)”. It is open source and can be used without any fees, which means you can try it yourself! I have decided to set up an “AI Lab” in my company and introduce this framework to research the attention mechanism. There are many pre-trained models in it, including the Transformer. Why don’t you join us?

 

I used translation as the example to explain how the attention mechanism works, but it can be applied to many other fields, such as object detection, which is used in face recognition and self-driving cars. It is exciting to consider what can be achieved with the attention mechanism. I will keep you updated on our progress. So stay tuned!


When you need AI consulting, do not hesitate to contact TOSHI STATS.

 

(1) Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, KyungHyun Cho, Yoshua Bengio, Sep 2014

(2) Attention Is All You Need, Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin, June 2017


Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.

Is this a real human voice? It is amazing that it was generated by a computer


As I shared in an article this week, I found an exciting system that generates voices by computer. When I heard the voice, I was very surprised, as it sounds so real. I recommend listening to the samples on the website here; there are English and Mandarin versions. It was created by DeepMind, which has one of the best artificial intelligence research teams in the world. What makes this possible? Let us see.

 

1. Computers learn our voices in more and more depth

According to DeepMind’s explanation, they use “WaveNet, a deep neural network for generating raw audio waveforms”. They also mention “PixelRNN and PixelCNN”, which they invented earlier this year (that research won a best paper award at ICML 2016, one of the biggest international conferences on machine learning). By applying the ideas of PixelRNN and PixelCNN to voice generation, computers can learn voice waveforms in far more detail than with previous methods, which enables them to generate more natural voices. That is how WaveNet was born.

As a result of learning raw audio waveforms, the computer can generate voices that sound very real. Look at the metrics below: the score of WaveNet is not so different from the score of human speech (1). It is amazing!
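The core building block behind this is the stack of dilated causal convolutions, which lets the network condition on thousands of past audio samples with only a few layers. Below is a bare Keras skeleton of that idea, assuming 8-bit (256-class) quantized samples; it is a sketch of the concept, not DeepMind’s implementation, which also uses gated activations and residual connections.

    import tensorflow as tf

    # Each layer doubles the dilation rate, so the receptive field grows exponentially.
    inputs = tf.keras.Input(shape=(None, 1))  # raw waveform: one sample per time step
    x = inputs
    for dilation in [1, 2, 4, 8, 16, 32]:
        x = tf.keras.layers.Conv1D(
            filters=32, kernel_size=2, dilation_rate=dilation,
            padding="causal",                 # outputs depend only on past samples
            activation="relu")(x)
    outputs = tf.keras.layers.Conv1D(256, 1, activation="softmax")(x)  # next-sample classes

    wavenet_sketch = tf.keras.Model(inputs, outputs)
    wavenet_sketch.summary()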


2. Computers can generate both male and female voices

As computers can learn the waveforms of our voices in more detail, they can create both male and female voices. You can listen to each of them on the web, too. DeepMind says, “Similarly, we could provide additional inputs to the model, such as emotions or accents” (2). I would like to listen to those, too!

 

3. Computers can generate not only voice but also music!

In addition, WaveNet can create music, too. I listened to the piano music generated by WaveNet and I like it very much, as it sounds so real. You can try it on the web, too. When we consider music and voice as just audio waveform data, it is natural that WaveNet can generate not only voices but also music.

 

If we can use WaveNet in digital marketing, it will be awesome! Every promotion, instruction, and piece of guidance for customers could be delivered in a WaveNet voice, and customers might not even recognize that “it is a computer voice”. Background music could be optimized for each customer by WaveNet, too! In my view, this algorithm could also be applied to many other problems, such as detecting cyber-security attacks, anomaly detection in engine vibrations, and earthquake analysis, as long as the data takes the form of a “wave”. I want to try many things myself!

Did you listen to the voice generated by WaveNet? I believe that in the near future, computers will be able to learn how I speak and generate my voice just as I would say it. It will be exciting!


1, 2. WaveNet: A Generative Model for Raw Audio, DeepMind blog,
https://deepmind.com/blog/wavenet-generative-model-raw-audio/


Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software

“DEEP LEARNING PROJECT” starts now. I believe it will work in digital marketing and economic analysis


As the new year starts, I would like to set up a new project in my company. It will be beneficial not only for my company but also for readers of these articles, because the project will provide good examples of predictive analytics and of implementing new tools and platforms. The new project is called the “Deep Learning project”, because “Deep Learning” is used as the core calculation engine in it. Through the project, I would like to create a “predictive analytics environment”. Let me explain the details.

 

1. What is the goal of the project?

There are three goals of the project.

  • Obtain knowledge and expertise of predictive analytics
  • Obtain solutions for data-driven management
  • Obtain basic knowledge of Deep Learning

As big data becomes more and more available, we need to know how to consume it and extract insight from it so that we can make better business decisions. Predictive analytics is a key to data-driven management, as it can make predictions, answering “what comes next?” based on data. I hope you can build expertise in predictive analytics by reading my articles about the project. I believe this is important for us, as we are in the digital economy now and will be even more so in the future.

 

2. Why is “Deep Learning” used in the project?

Since November last year, I have tried “Deep Learning” many times for predictive analytics, and I have found that it is very accurate. It is sometimes said that it requires too much time to solve problems, but in my case I could solve many problems within 3 hours, so I consider that deep learning can solve problems within a reasonable time. In the project, I would like to develop skills for tuning parameters effectively, as deep learning requires several parameter settings, such as the number of hidden layers. I will focus on how the number of layers, the number of neurons, activation functions, regularization, and dropout should be set according to the dataset; I think these are key to developing predictive models with good accuracy. I have already challenged MNIST hand-written digit classification, and the error rate has improved to 1.9%. This was done with H2O, an awesome analytics tool, on a MacBook Air 11, which is just a normal laptop. I would like to set up a cluster on AWS to improve the error rate further; “Spark”, which is open source, is one of the candidates for the cluster.
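As a concrete example, a model like the MNIST one can be specified in a few lines with H2O’s Python API. The sketch below shows the kinds of parameters mentioned above (hidden layers, activation, regularization, dropout); the file path, column names, and values are illustrative assumptions, not my tuned setting.

    import h2o
    from h2o.estimators import H2ODeepLearningEstimator

    h2o.init()

    # MNIST in CSV form: 784 pixel columns plus a label column (path is a placeholder).
    train = h2o.import_file("mnist_train.csv")
    train["C785"] = train["C785"].asfactor()  # treat the label column as categorical

    model = H2ODeepLearningEstimator(
        hidden=[1024, 1024],                  # number of layers and neurons per layer
        activation="RectifierWithDropout",    # ReLU with dropout
        input_dropout_ratio=0.2,              # dropout on the input layer
        hidden_dropout_ratios=[0.5, 0.5],
        l1=1e-5,                              # L1 regularization
        epochs=10)
    model.train(x=list(range(784)), y="C785", training_frame=train)
    print(model.model_performance(train))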


3. What businesses can benefit from introducing “Deep Learning”?

“Deep Learning” is very flexible, so it can be applied to many problems across industries. Healthcare, finance, retail, travel, and food and beverage might all benefit from introducing it. Governments could benefit, too. In the project, I would like to focus on the following areas:

  • Digital marketing
  • Economic analysis

First, I would like to create a database to store the data to be analyzed. Once it is created, I will perform predictive analytics on “digital marketing” and “economic analysis”. Best practices will be shared with you here, to reach our goal of “obtaining knowledge and expertise of predictive analytics”. Deep learning is relatively new in both of these problem areas, so I expect new insights to be obtained. For digital marketing, I would like to focus on social media and on measuring the effectiveness of digital marketing strategies. “Natural language processing” has been developing recently at an astonishing speed, so I believe there could be a good way to analyze text data. If you have any suggestions on predictive analytics in digital marketing, please let me know. They are always welcome!

 

I use open-source software to create the predictive analytics environment, so it is very easy for you to create a similar environment on your own system or cloud. I believe open source is a key to developing superior predictive models, as everyone can participate in the project. You do not need to pay any fees for the tools used in the project, as they are open source, and ownership of the problems stays with us rather than with software vendors. Why don’t you join us and enjoy it! If you want to receive updates on the project, please sign up here.

 

 

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.

How will “Deep Learning” change our daily lives in 2016?


“Deep Learning” is one of the major technologies of artificial intelligence. In April 2013, two and a half years ago, MIT Technology Review selected “Deep Learning” as one of its 10 breakthrough technologies of 2013. Since then it has developed so rapidly that it is not a dream anymore. This is the final article of 2015, so I would like to look back at the progress of “Deep Learning” this year and consider how it will change our daily lives in 2016.

 

How has “Deep Learning” progressed in 2015?

1. “Deep Learning” moved from laboratories to software developers in the real world

In 2014, the major breakthroughs in deep learning occurred in the laboratories of big IT companies and universities, because deep learning required complex programming and huge computational resources; to do it effectively, massive computational assets and many machine learning researchers were needed. But in 2015, many deep learning programs and software packages jumped out of the laboratory into the real world. Torch, Chainer, H2O, and TensorFlow are examples. Anyone can develop apps with them, as they are open source, and they are also expected to be used in production. For example, H2O can export trained models automatically as POJOs (Plain Old Java Objects), and this code can be implemented directly in a production system. There are therefore fewer barriers between development and production, which will accelerate the development of apps in practice.

 

2. “Deep Learning” started to understand language gradually

Most people use more than one social network, such as Facebook, LinkedIn, Twitter, or Instagram, and these services hold a lot of text data. They would be a treasure trove if we could understand what they say immediately. In reality, there is too much data for people to read item by item. Then the question comes: can computers read text data instead of us? Many top researchers are challenging this area, which is called “Natural Language Processing”. For short sentences, computers can now understand the meaning, and an app doing this already appeared in late 2015: “Smart Reply” by Google. It can generate candidate replies based on the text of a received mail. Behind this app, “LSTM (Long Short-Term Memory)”, one of the deep learning algorithms, is used. In 2016, computers might understand longer sentences and paragraphs and answer questions based on their understanding. It means that computers can step closer to us in our daily lives.

 

3. Cloud services support “Deep Learning” effectively.

Once big data is obtained, infrastructure such as computational resources, storage, and networking is needed. If we want to try deep learning, it is better to have fast computational resources, such as Spark. Amazon Web Services, Microsoft Azure, Google Cloud Platform, and IBM Bluemix provide many services for implementing deep learning at scale, so it is getting much easier to start implementing deep learning in a system. Most cloud services are “pay as you go”, so there is no up-front cost to start using them. That is good especially for small companies and startups, as they usually have only limited budgets for infrastructure.


How will “Deep Learning” change our daily lives in 2016? 

Based on the development of deep learning in 2015, many consumer apps with deep learning are likely to appear in the market in 2016. The difference between consumer apps with and without deep learning is that apps with it can behave differently depending on the user and the situation. For example, you and your colleagues might see completely different home screens even though you use the same app, because deep learning enables the app to optimize itself to maximize customer satisfaction. In retail apps, the top page can differ by customer according to each customer’s preferences. In education apps, learners can see different contents and questions as they progress through their courses. In navigation apps, a route might appear automatically based on your specific schedule, such as the route to the airport on the day of a business trip. These are just examples; the same idea can be applied across industries.

In addition, an app can become more sophisticated and accurate the longer you use it, because it can learn your behavior rapidly and continuously update itself to maximize customer satisfaction. This means we will not need to choose what we want one item at a time, because computers will do that for us; buttons and navigation menus will be less needed in such apps. All you will have to do is keep your latest schedule in your computer, and everything can be optimized based on the updated information. Are people getting lazy? Maybe yes, if apps become as sophisticated as expected. But it should be good for all of us: we may be free to do what we really want!


Actually, I quit an investment bank in Tokyo to set up my start-up at the same time that MIT Technology Review released the 10 breakthrough technologies of 2013. Initially I knew the phrase “Deep Learning”, but I could not understand how important it was for us, because it was completely new to me. Now, however, I am confident enough to always say that deep learning is changing the landscape of jobs, industries, and societies. Do you agree? I imagine everyone will agree by the end of 2016!


Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.