“Keras” is a great library for building AI models. Now is the time to start learning it!

I started learning “Keras” in 2016 because it is easy to get started with. It is open-source software, so there is no cost to use it. Keras provides many pre-defined layers and models, so you do not need a PhD in computer science to start using the library. That is why I recommend “Keras” to everyone who is interested in AI today. Let me explain how awesome “Keras” is!

1. “Keras” is a friend to AI beginners

When beginners start programming AI, they want to produce impressive results from the models they create. This is important for staying motivated to keep learning. “Keras” enables them to do that: we can start AI programming with simple models, without deep knowledge of AI, which makes it the best entry point for anyone interested in the field. I remember how glad I was when I built my first computer vision model with “Keras” to classify digits from 0 to 9. Now it is your turn!
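For anyone curious what such a first model can look like, below is a minimal sketch of a digit classifier in Keras, trained on the MNIST dataset that ships with the library. It is only an illustration of the style of code, not the exact model I built back then.

# Minimal sketch: classify handwritten digits 0-9 with Keras.
# Illustrative only; not the exact model from my experiment.
import tensorflow as tf
from tensorflow import keras

# MNIST ships with Keras: 60,000 training and 10,000 test images of digits.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784-length vector
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation="softmax"),  # one probability per digit
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))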

2. We can write state-of-the-art AI models with less code

While it is easy to create simple AI models with “Keras”, we can create more complex models with it, too. Currently I am researching the variational autoencoder (VAE) with Keras. It is one of the most famous generative models in the world, and Keras provides a great template to start from. If you are interested in this, please check it out here.
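To give a flavor of what that looks like, here is a compressed sketch of a variational autoencoder in Keras on MNIST, loosely following the official Keras VAE example. The layer sizes and the latent dimension are illustrative choices of mine, not the exact settings of my research.

# Compressed sketch of a variational autoencoder (VAE) in Keras on MNIST.
# Layer sizes and latent_dim are illustrative; see the official Keras VAE
# example for a complete, tuned implementation.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 2

class Sampling(layers.Layer):
    # Reparameterization trick: draw z = mean + sigma * epsilon.
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder: image -> (z_mean, z_log_var, sampled z)
enc_in = keras.Input(shape=(28, 28, 1))
x = layers.Flatten()(enc_in)
x = layers.Dense(256, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(enc_in, [z_mean, z_log_var, z], name="encoder")

# Decoder: latent vector -> reconstructed image
dec_in = keras.Input(shape=(latent_dim,))
x = layers.Dense(256, activation="relu")(dec_in)
x = layers.Dense(28 * 28, activation="sigmoid")(x)
dec_out = layers.Reshape((28, 28, 1))(x)
decoder = keras.Model(dec_in, dec_out, name="decoder")

class VAE(keras.Model):
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder

    def train_step(self, data):
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder(data)
            recon = self.decoder(z)
            # Loss = reconstruction error + KL divergence to the standard normal prior.
            recon_loss = tf.reduce_mean(tf.reduce_sum(
                keras.losses.binary_crossentropy(data, recon), axis=(1, 2)))
            kl_loss = tf.reduce_mean(tf.reduce_sum(
                -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)), axis=1))
            loss = recon_loss + kl_loss
        grads = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {"loss": loss}

(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0

vae = VAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam())
vae.fit(x_train, epochs=5, batch_size=128)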

3. We can deploy AI models built with “Keras” on Google Cloud

Once you have created your own AI models, you may want to turn them into a microservice. Here is my example: a sentiment analysis model built with “Keras on TensorFlow” and deployed on Google Cloud Platform as a microservice. When someone sends me an email (left), a reply email (right) is sent back to them automatically. The reply reports whether the message is positive or negative as a “probability of positive” sentiment. It works very well.
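To give an idea of how small such a microservice can be, here is a hedged sketch: load a saved Keras sentiment model and expose one prediction endpoint. Flask, the file name sentiment_model.h5, the /predict route and the JSON field names are all my assumptions for illustration, not the exact stack of my Google Cloud deployment.

# Hedged sketch of a sentiment-scoring microservice.
# Flask, "sentiment_model.h5", the /predict route and the JSON field names
# are assumptions for illustration, not my exact Google Cloud setup.
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)
# Assumes the saved model accepts raw strings (i.e. it contains its own text preprocessing).
model = tf.keras.models.load_model("sentiment_model.h5")

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json()["text"]             # email body sent by the connector
    prob = float(model.predict([text])[0][0])     # probability of positive sentiment
    return jsonify({"probability_of_positive": prob})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)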

Actually, 99% of the AI models in my articles are created with “Keras”. Although Keras used to support several backend libraries, it now has a single backend, “TensorFlow”, which is supported by Google. “Keras” is tightly integrated into “TensorFlow”, which means it has more and more opportunities to expand its capabilities within the “TensorFlow” ecosystem. I think it is time for you to start learning AI with “Keras”.

Why don’t you start learning “Keras” with us today? I will keep updating my articles about “Keras”. Stay tuned!

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software

This is my favorite NLP model. It is small but works very well!

Since BERT was released by Google in October 2018, many models have been proposed that improve on the original BERT. Recently I found ALBERT, which was released by Google Research last year, so I performed a small experiment with it: building a sentiment analysis model using the IMDB data. IMDB is a movie review dataset that includes each review’s text and its sentiment label. I prepared 25,000 training and 3,000 test samples. The result is very good. Let us look at the details.

1. It is easy to train the model because it has fewer parameters than BERT

BERT is very famous because it performs well on NLP (natural language processing) tasks. But for me, it is a little too big: BERT has around 101 million parameters, which means training takes a long time and sometimes exceeds the GPU memory. ALBERT, on the other hand, has around 11 million parameters, so it is much easier to train. It takes only about one hour to reach 90% accuracy on an NVIDIA Tesla P100 GPU. It is awesome!

In this experiment, max_length is 256 for each sample.

2. It is very accurate even with little data

ALBERT is not only fast to train but also very accurate. I used the pre-trained ALBERT base model from TensorFlow Hub. Because it is pre-trained, ALBERT remains accurate with less data: for example, with only 500 training samples, its accuracy is over 80%! This matters when we apply it to real problems, as we do not always have enough training data in practice.

max_length is 128 for each sample.

3. It is easily integrated with TensorFlow and Keras

Finally, I would like to point out that ALBERT is easy to integrate with TensorFlow and Keras, the deep learning framework. All we have to do is import ALBERT as a “Keras layer”. If you want to know more, check TensorFlow Hub, which explains how to do it. I use TensorFlow and Keras every day, so ALBERT naturally becomes one of my favorite models.
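As a rough sketch, this is what importing ALBERT as a Keras layer can look like. The two TensorFlow Hub handles, the learning rate and the single sigmoid output are my assumptions for the IMDB sentiment task; please check tfhub.dev for the exact, current URLs.

# Rough sketch: ALBERT base from TensorFlow Hub as a Keras layer.
# The hub handles and hyperparameters are assumptions; check tfhub.dev
# for the current URLs and the recommended settings.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # registers the ops used by the preprocessing model

preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/albert_en_preprocess/3")
albert = hub.KerasLayer("https://tfhub.dev/tensorflow/albert_en_base/3",
                        trainable=True)  # trainable=True allows fine-tuning

text_in = tf.keras.Input(shape=(), dtype=tf.string)     # raw review text
encoder_inputs = preprocess(text_in)                    # tokenization and padding
pooled = albert(encoder_inputs)["pooled_output"]        # one vector per review
prob = tf.keras.layers.Dense(1, activation="sigmoid")(pooled)

model = tf.keras.Model(text_in, prob)
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(imdb_train_texts, imdb_train_labels, epochs=3)  # IMDB data as in the experiment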

As I said, ALBERT is released on TensorFlow Hub and is free to use, so everyone can start using it easily. This is good for democratising “artificial intelligence”. I want to apply ALBERT to many real-world applications. Stay tuned!

Cheers Toshi

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software

This is my own AI, developed while staying home. It works well!

What do you do when you have to stay home under lockdown? Cooking? Yoga? Reading books? For me, it is developing my own AI. Here is a recipe for doing that.

main goal : to understand the sentiment of each e-mail, positive or negative

interface : Gmail, as it is my favorite

AI library : TensorFlow, as it is my favorite

Infrastructure : Google Cloud, as it is easy to deploy AI models there

connector between AI and email : Microsoft Power Automate, as it is easy to use

1. Develop the AI model

Developing an AI model requires a massive amount of data. I use the “IMDB review data” because it is publicly available and easy to use. Although the reviews are written in English, the model can accept 16 languages, including English, Japanese and Chinese! It is great. If you want to know how that is possible, I recommend reading this paper.
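Here is a rough sketch of that idea: encode each review with a multilingual sentence encoder from TensorFlow Hub and train a small Keras head on top of it. The hub handle and the layer sizes are my assumptions, not the exact model of this recipe.

# Rough sketch of step 1: a multilingual sentence encoder from TensorFlow Hub
# plus a small Keras head trained on the IMDB reviews.
# The hub handle and layer sizes are assumptions; check tfhub.dev.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # required by the multilingual encoder

encoder = hub.KerasLayer(
    "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3",
    trainable=False)  # the encoder itself stays frozen

text_in = tf.keras.Input(shape=(), dtype=tf.string)
embedding = encoder(text_in)                              # 512-dim multilingual embedding
x = tf.keras.layers.Dense(64, activation="relu")(embedding)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # probability of positive
model = tf.keras.Model(text_in, out)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(imdb_texts, imdb_labels, epochs=3)
# The IMDB reviews are English, but the encoder maps 16 languages into the
# same space, so the trained head also works for the other languages.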

2. Connect email to the AI model

Once the model is developed, let us connect it to email so that messages can be sent to the AI model and the prediction sent back by email. There are several tools for that. I chose Microsoft Power Automate, as Microsoft products are used everywhere in the world. Setting it up takes only 2-3 hours, even if you are a beginner with this tool, because it requires only a few lines of code.

3. Let us see the AI’s predictions

This is an example of how my AI works. When I receive a message by e-mail, it is sent to the AI model automatically. The AI model receives the message, calculates the probability of “positive sentiment”, and this probability is sent back to an email address that is set up in advance.

I have used my AI for several days and confirmed that its predictions are fairly reasonable for advertisements and announcements. It is very interesting! I would like to develop the next version of my AI in the near future and report on it here. Stay tuned!

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software

The program “AI and innovation” will start on 10th April. Our goal is to reach 1,000 people by the end of this year!

As I said a few days ago, I am starting the program “AI and innovation” to share basic knowledge of AI and how to create innovation with it. It is for everyone who is interested in AI, from CEOs to young business people. No programming experience or math is needed, as few technical terms and formulas are used. The contents are below.

1. What is artificial intelligence (AI)?

2. The history of AI evolution

3. Applications of AI into businesses

4. AI and innovation

5. How to create internal AI team

These are the basics for considering how we can create innovation with AI. Once you understand each of them, you can apply them to your own tasks.

If you are interested in “How can we create innovations with AI in our businesses?”, this program is for you. It is a paid online session with Q&A. An early-bird discount is available for the first 30 participants from this link.

https://organizer.runtheworld.today/invitation/1222

This is the beginning of our journey toward innovation with AI. Let us get on board and move forward in the AI era! This is my message from Tokyo to the whole world.

Regards Toshi

I want to start “Challenge 1000 with AI” to be ready for the new era!

Currently I am preparing a small online program on “artificial intelligence and innovation”. It is aimed not only at engineers but also at business people and CxOs, as it uses few technical terms and little math. “Artificial intelligence and innovation” is for everyone who wants to be ready for the new era, so I hope everyone can enjoy the program while staying home.

As you know, many people and businesses have been forced to stop their activities because of COVID-19. The problem is that no one can be sure when things will return to normal. When I was hit by the big earthquake in Kobe, Japan in 1995, the most serious problem was not the water supply itself but not knowing when water would be supplied again. In such situations, we need hope and mental strength to move forward, not backward. So what gives us hope now? Unlike in 1995, we are connected to each other online and can use open-source artificial intelligence software. I think that is our hope, even though some big cities are currently locked down.

From top management to new graduates, everyone can access the knowledge of “AI and innovation” through this program. It is based on the “democratization of AI”, the mission of my startup, and I am very glad to start it now. Since AI has huge power to change our daily lives, I am very excited to consider what innovations can be created with it here.

“Challenge 1000 with AI” means I want to deliver this program to 1,000 people all over the world, so that each person can think about “artificial intelligence and innovation” for their own tasks. Of course, I cannot cover everything in a small program, but based on feedback from participants I can consider how to build a follow-up program if needed. I will let you know when it is ready. See you soon!

This is our cross-lingual intelligent system. It is smarter than I am as it recognizes 16 languages!

When I lived in Kuala Lumpur, Malaysia, I always felt I was in a multilingual environment. Most people speak Bahasa Malaysia, but when they talk to me, they speak English. Some of them understand Japanese. Chinese Malaysians speak Mandarin or Cantonese, and Indian Malaysians speak Hindi or other languages. Yes, Asia is certainly a “multilingual environment”. Since then I have been wondering how we could develop a system that accepts many languages as input. Now I have found it.

This is the first cross-lingual intelligent system by TOSHI STATS. It accepts 16 languages and performs sentiment analysis. Let me explain the details.


1. Inputs in 16 languages

Because we use the MUE(1) model from TensorFlow Hub, the system can accept 16 languages (see the list below). Usually, a Japanese system cannot take English input and an English system cannot accept Japanese, but this system handles both and works well. This is amazing! The first screenshot of our system shows input in English and the second shows input in Japanese.

[Screenshots: the system with English input (first) and Japanese input (second)]

We do not need a separate system for each language. The secret is that the system maps every sentence into the same embedding space, even though the sentences are written in different languages.
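To make the “same space” point concrete, here is a tiny sketch using a multilingual encoder from TensorFlow Hub: an English sentence and its Japanese translation end up close to each other in the embedding space, while an unrelated sentence does not. The hub handle is my assumption; please check tfhub.dev for the exact model we used.

# Tiny sketch: sentences in different languages land near each other
# in the shared embedding space. The hub handle is an assumption;
# check tfhub.dev for the exact model used in our system.
import numpy as np
import tensorflow_hub as hub
import tensorflow_text  # registers the ops required by the model

encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

sentences = [
    "This movie was really wonderful.",       # English
    "この映画は本当に素晴らしかった。",          # the same sentence in Japanese
    "The weather will be cloudy tomorrow.",   # unrelated English sentence
]
emb = encoder(sentences).numpy()

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("EN vs its JA translation:", cosine(emb[0], emb[1]))  # high similarity
print("EN vs unrelated EN      :", cosine(emb[0], emb[2]))  # much lower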


2. Transfer learning from English to other languages

I think this is the biggest breakthrough in the system. Because the languages share the same space, we can train the model in English and transfer its knowledge to other languages. For example, there is plenty of text data for training models in English but only a little in Japanese. In such a case it is difficult to train models effectively in Japanese, but we can train a model in English and use it in Japanese. It is great! Of course, we can also train the model in another language and transfer it to others. It is extraordinary, as it lets us transfer knowledge and expertise from one language to another.


3. Experiment and result

I chose one news title(2) from The Japan Times and performed sentiment analysis with the system. The title is “Naomi Osaka cruises to victory in Pan Pacific Open final to capture first title on Japanese soil”. I think it should be positive.

This English sentence is translated into the other 15 languages with Google Translate. Each sentence is then input to the system and we measure the “probability of positive sentiment”. Here is the result: 90% of the scores are over 0.8, which means that in most languages the system recognizes the sentence as clearly “positive”. This is amazing! It works pretty well across 16 languages even though the model was trained only in English.

[Chart: probability of positive sentiment for the title in each of the 16 languages]


While developing this cross-lingual intelligent system, I realized it is already smarter than I am, since I cannot read 14 of the 16 languages; I only understand Japanese and English. Based on this method, we can build many intelligent systems that would have been difficult to develop even a year ago. I will keep you updated on the progress of our system. Stay tuned!


1. Multilingual Universal Sentence Encoder for Semantic Retrieval, Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil, Google, July 9, 2019

2. Naomi Osaka cruises to victory in Pan Pacific Open final to capture first title on Japanese soil, The Japan Times, Sep 22, 2019.

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software

BERT also works very well as a feature extractor in NLP!

Two years ago, I developed car classification models with ResNet. I used transfer learning because I could prepare only a small number of images: the base model was already pre-trained on a huge dataset such as ImageNet, so I could extract features from each car image and train a classification model on top of them. It worked very well. If you are interested, please see that article.

So I wondered how BERT(1) works as a feature extractor. If it works well, it can be applied to many downstream tasks with ease. Let us try the experiment here. BERT is one of the best natural language processing (NLP) models, developed by Google; I explained how BERT works in an earlier article. It is amazing!

Let me explain features a little. A feature here means “how texts are represented as vectors”. Each word is converted to a number before being input to BERT, and then the whole sentence is converted into 768-length vectors by BERT. In this experiment, feature extraction is done with the BERT module on TensorFlow Hub. Its page says there are two kinds of outputs from BERT…

This means that when text data is input to BERT, the model returns two types of vectors: “one vector for each sentence” and “a sequence of vectors for each sentence”. For this task we need “one vector for each sentence”, because it is a classification task and one vector per sentence is enough to feed the classification model. The first 3 vectors out of 3,503 samples are shown below.
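Below is a hedged sketch of this feature-extraction step with the TF2-style BERT modules on TensorFlow Hub. The hub handles (the multilingual cased BERT base) are my assumptions, so please check tfhub.dev for the exact URLs; the original experiment may have used the older hub interface.

# Hedged sketch: extract the "one vector for each sentence" (pooled_output)
# from BERT via TensorFlow Hub. The hub handles are assumptions; check
# tfhub.dev for the current URLs.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # needed by the BERT preprocessing model

preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3")
bert = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/4",
    trainable=False)  # weights stay fixed: pure feature extraction, no fine-tuning

texts = tf.constant(["This is the first news title.",
                     "これは二つ目のニュースタイトルです。"])
outputs = bert(preprocess(texts))

pooled = outputs["pooled_output"]      # shape (batch, 768): one vector per sentence
sequence = outputs["sequence_output"]  # shape (batch, seq_len, 768): one vector per token
print(pooled.shape, sequence.shape)
# The 768-length pooled vectors can now be fed to any separate classifier,
# for example a small Keras Dense model.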

This is the training result of the classification model. Accuracy is 82.99% at epoch 105. Although that is reasonable, it is worse than the 88.58% from the last article. The difference can be attributed to the advantage of fine-tuning: in this experiment the BERT weights are fixed and there is no fine-tuning. So if you need more accuracy, try fine-tuning, just like the experiment in the last article.

BERT stands for “Bidirectional Encoder Representations from Transformers”, and it looks like a good tool for feature extraction. In particular, this is the multilingual model, so we can use it for 104 languages. It is amazing!

I will perform other experiments with BERT in future articles. Stay tuned!

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    11 Oct 2018, Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Google AI Language

Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software

BERT performs very well in the classification task in Japanese, too!

As I promised in the last article, I have performed an experiment on classifying news titles in Japanese. The result is very good, as I expected. Let me explain the details.

I use the “livedoor news corpus” (2) for this experiment. There are five classes of news titles: life, movies, sports, chats, and electronics. Here are the details of the classes. I would like to classify each news title into the correct class.

Then I trained the BERT(1) model on a sample of news titles written in Japanese. Here is the result. The BERT model I used is the multilingual model, so all I had to do was fine-tune it for my task. As you can see below, the accuracy is about 88%. That is very good given the very small sample (3,503 titles for training, 876 for test). Training took less than one minute on Colab with a GPU.

With 3 epochs, I confirmed that the accuracy is over 88%.
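For reference, here is a hedged sketch of this fine-tuning setup: multilingual BERT from TensorFlow Hub with a five-class softmax head and all weights trainable. The hub handles, the dropout rate and the learning rate are my assumptions, not the exact script I ran on Colab.

# Hedged sketch of the fine-tuning setup: multilingual BERT with a
# 5-class head, all weights trainable. Hub handles and hyperparameters
# are assumptions, not the exact Colab script.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # needed by the BERT preprocessing model

preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3")
bert = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/4",
    trainable=True)  # trainable=True is what makes this fine-tuning

titles = tf.keras.Input(shape=(), dtype=tf.string)  # raw Japanese news titles
pooled = bert(preprocess(titles))["pooled_output"]
pooled = tf.keras.layers.Dropout(0.1)(pooled)
logits = tf.keras.layers.Dense(5)(pooled)  # life, movie, sports, chats, electronics

model = tf.keras.Model(titles, logits)
model.compile(
    optimizer=tf.keras.optimizers.Adam(2e-5),  # small learning rate, typical for fine-tuning
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
# model.fit(train_titles, train_labels, validation_data=(test_titles, test_labels), epochs=3)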

Let me take 10 samples for validation and look at each of them. These samples are not used for training, so they are new to the model. Nine out of ten are classified correctly. That is pretty good, isn’t it?

The beauty is that the pre-trained model is not specific to Japanese. As it is a multilingual model, it should work in many languages with the same kind of fine-tuning I did for Japanese. Therefore it should work in your language, too!

What do you think of this experiment? I will continue experimenting with BERT on many natural language tasks and update my articles soon. Stay tuned!

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    11 Oct 2018, Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Google AI Language
  2. livedoor news corpus, CC BY-ND 2.1 JP

Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software

BERT performs near the state of the art in question answering! I have now confirmed it.

Today I am writing about BERT, the new natural language model, again, because it works so well on the question answering task. In my last article I explained how BERT works, so if you are new to BERT, please read that first.

For this experiment, I use the SQuAD v1.1 data, as it is very famous in the field of question answering. Here is their own explanation.

“Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowd workers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.” (This is from SQuAD2.0, a new version of Q&A data)

This is a very challenging task for computers to answer correctly. How does BERT do on it? As you can see below, BERT recorded an F1 of 90.70 after one hour of training on a TPU on Colab in our experiment. That is amazing: based on the SQuAD 1.1 leaderboard below, it would rank third or fourth among top universities and companies, although the leaderboard setting may differ from our experiment. It is also worth noting that BERT is about as good as a human!


I tried both the Base model and the Large model with different batch sizes. The Large model beats the Base model by around 3 points. The Large model takes around 60 minutes to train, while the Base model takes around 30 minutes. I use a TPU on Google Colab for training. Here is the result; EM means “exact match”.
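Since EM and F1 drive the whole comparison, here is a small sketch of how the two scores are computed for a single predicted answer. It is a simplified version of the official SQuAD evaluation logic: the real script also strips punctuation and articles during normalization.

# Simplified sketch of the two SQuAD metrics (EM and F1).
# The official evaluation script also strips punctuation and articles;
# this version only lower-cases and splits on whitespace.
from collections import Counter

def exact_match(prediction, ground_truth):
    # EM: 1 if the predicted span equals the gold span after normalization.
    return int(prediction.strip().lower() == ground_truth.strip().lower())

def f1_score(prediction, ground_truth):
    # F1: token-level overlap between the predicted and gold answer spans.
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Denver Broncos", "Denver Broncos"))              # 1
print(round(f1_score("the Denver Broncos", "Denver Broncos"), 2))   # 0.8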

Question answering can be applied to many business tasks, such as information extraction from documents and automation of customer centers. It will be exciting to apply BERT to businesses in the near future.


Next, I would like to perform text classification of news titles in Japanese, because BERT has a multilingual model that works in 104 languages. As I live in Tokyo now, it is easy to find good data for this experiment. I will update my article soon, so stay tuned!


@article{devlin2018bert,
  title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
  author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1810.04805},
  year={2018}
}

Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software


“BERT” can be a game changer to accelerate digital transformation!

As Q1 of 2019 comes to an end, I would like to talk about one of the biggest deep learning innovations in natural language processing (NLP). It is called “BERT” and was presented by Google AI in October 2018. As far as I know, it is the first model to perform very well on many language tasks, such as sentiment analysis and question answering, without any change to the model itself. It is amazing! Let us start now.

1. How does BERT work?

The secrets of BERT are its structure and its training method. BERT uses the transformer as its main building block. I have mentioned the transformer before; it is a new structure for extracting information from sequential data. The key is the attention mechanism, which measures “how much we should pay attention to each word in the sentence”. If you want to know more, this is a good reference. Now let us move on to how BERT is trained. BERT stands for “Bidirectional Encoder Representations from Transformers”. For example, the word “bank” has different meanings in “bank account” and “bank of the river”. If the model can only learn from data in the forward direction, it is difficult to distinguish these meanings of “bank”; but if it can learn in both the forward and backward directions, it can. This is the secret that lets BERT reach the state of the art on many NLP tasks without modifications. The chart below is from the research paper (1).
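To make the attention idea a little more concrete, here is a tiny sketch of scaled dot-product attention, the operation inside each transformer block. The shapes and the toy data are purely illustrative.

# Tiny sketch of scaled dot-product attention, the core operation inside
# each transformer block of BERT. Shapes and toy data are illustrative.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, depth); each row corresponds to one token.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # how strongly each token attends to every other token
    scores = scores - scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the sentence
    return weights @ V, weights              # weighted sum of values + attention map

# Toy example: a "sentence" of 4 tokens, each with an 8-dimensional representation.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
output, attn = scaled_dot_product_attention(Q, K, V)
print(output.shape)  # (4, 8): a new, context-aware representation for each token
print(attn[0])       # how the first token attends to all 4 tokens (rows sum to 1)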

2. How can we apply BERT to our own tasks?

BERT is so large that it needs a lot of data and computing resources such as GPUs or TPUs, so training BERT from scratch takes time and money. But there is no need to worry: Google has released a number of pre-trained BERT models. This is great because we can use them as base models, and all we have to do is a small amount of additional training to adapt them to our own tasks, such as text classification. This is called “fine-tuning”. These pre-trained models are open source and available to everyone; if you want to know more, please see the blog post. The beauty is that one of the pre-trained models is a multilingual model that works in 104 languages without any modifications. It is amazing! So it works in your language, too!

3. Can BERT accelerate digital transformation in our daily lives?

I think “yes”, because we are surrounded by a massive amount of documentation: contracts, customer reports, emails, financial reports, regulatory instructions, newspapers, and so on. It is impossible to read everything and extract the needed information in real time. With BERT, we can develop much better applications that handle large amounts of text data and extract the needed information efficiently. It is very exciting to consider how many applications could be created with BERT in the near future.

I hope you enjoyed this article. I am now researching BERT intensively and will update my articles soon. Stay tuned!

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    11 Oct 2018, Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Google AI Language


Notice: Toshi Stats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. Toshi Stats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on Toshi Stats Co., Ltd. and me to correct any errors or defects in the codes and the software