“DEEP LEARNING PROJECT for Digital Marketing” starts today. Here I present the probability of visiting the store


At the beginning of this year, I set up a new project at my company. It is called the “Deep Learning project” because “Deep Learning” is used as its core calculation engine. Now that I have set up a predictive system that predicts customer response to a direct-mailing campaign, I would like to start a sub-project called “DEEP LEARNING PROJECT for Digital marketing”. I think the results of the project can be applied across industries, such as healthcare, finance, retail, travel and hotels, food and beverage, and entertainment. First, I would like to explain how we obtain the probability that each customer will visit the store in our project.

 

1. What is the progress of the project so far?

We have made progress in several areas:

  • Developing the model to obtain the probability of visiting the store
  • Developing the scoring process to assign the probability to each customer
  • Implementing the predictive system, using Excel as an interface

Let me explain our predictive system. We built it on the Microsoft Azure Machine Learning Studio platform. The beauty of the platform is that Excel, which everyone uses, can serve as the interface for inputting and outputting data. This is the interface of our predictive system in online Excel. Logistic regression in MS Azure Machine Learning is used as our predictive model.

The second row (highlighted) is the window to input customer data.

[Screenshot: Azure ML 1]

Once customer data are input, the probability that the customer will visit the store is output (see the red characters and numbers below). In this case (sample data No. 1), the customer is unlikely to visit the store, as the scored probability is very low (0.06).

[Screenshot: Azure ML 3]

 

On the other hand, in this case (sample data No. 5), the customer is likely to visit the store, as the scored probability is relatively high (0.28). If you want to know how it works, please watch the video.

[Screenshot: Azure ML 2]

[Screenshot: Azure ML 4]
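The scoring step described above can be sketched in plain Python. This is a minimal illustration using scikit-learn's logistic regression rather than the actual Azure ML model; the feature names and toy training data below are invented for the example.

```python
# Sketch of logistic-regression scoring: train on past campaign outcomes,
# then output a "visit probability" for each new customer.
# The features [recency_days, past_visits, email_clicks] and the data
# are made-up placeholders, not our real campaign data.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.array([
    [120, 1, 0],
    [90,  2, 1],
    [10,  8, 5],
    [5,  10, 7],
    [60,  3, 2],
    [15,  6, 4],
])
y_train = np.array([0, 0, 1, 1, 0, 1])   # 1 = visited the store

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# predict_proba returns P(visit) in column 1, one score per customer
new_customers = np.array([[100, 1, 0], [8, 9, 6]])
scores = model.predict_proba(new_customers)[:, 1]
for i, p in enumerate(scores, 1):
    print(f"Customer {i}: scored probability of visiting = {p:.2f}")
```

In Azure ML the same idea runs behind the Excel interface: each input row is scored by the trained model and the probability is written back.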

 

2. What is next in our project?

Having created the model and implemented the predictive system, we are moving to the next stage to tackle more advanced topics:

  • More marketing cases with a variety of data
  • Better accuracy by using many models, including Deep Learning
  • How to implement data-driven management

 

Our predictive system should become more flexible and accurate. In order to achieve that, we will perform many experiments going forward.

 

3. What data is used in the project?

Several kinds of data can be used for digital marketing. I would like to use this data for our project.

When we are satisfied with the results of our predictions from this data, the next data set can be used for the project.

 

 

Digital marketing is becoming more important to many industries, from retail to financial services. I will update this article about our project on a monthly basis. Why don't you join us and enjoy it? If you have comments or opinions, please do not hesitate to send them to us!

If you want to receive updates on the project or learn more about the predictive system, please sign up here.

 

 

 

Microsoft, Excel and AZURE are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.

Will self-driving cars come to us in 2020?

Since last year, the development of self-driving cars has accelerated rapidly. When I wrote about it last year, some people may not have been convinced that self-driving cars would become reality. But now no one can doubt self-driving cars anymore. The question is when they will arrive in front of us. I would like to consider several key points in developing the technology of self-driving cars.

 

1. Data from experiments

Data is key to developing self-driving cars effectively, because self-driving cars need artificial intelligence to drive by themselves without human intervention. As you know, artificial intelligence works like our brains. When we are born, our brains are almost empty, but as we grow we learn many things through experience. The same is true for artificial intelligence: it needs massive amounts of data to learn. Recently, Google and Fiat Chrysler Automobiles NV announced that they will cooperate to advance the development of self-driving cars. According to an article on Bloomberg, “The carmaker plans to develop about 100 self-driving prototypes based on the Chrysler Pacifica hybrid-powered minivan that will be used by Google to test its self-driving technology.”(1) The more cars are used in the experiments, the more data they can obtain, which enables Google to accelerate the development of self-driving cars even further.

 

2. Algorithm of artificial intelligence

With data from experiments, artificial intelligence will become more sophisticated. The algorithms of artificial intelligence, called “Deep Learning”, should become even more effective from now on. Because driving generates sequences of data and requires sequential decision-making, such as stop, go, turn right, accelerate and so on, we need algorithms that can handle these situations. In my view, the combination of deep learning and reinforcement learning can be useful here. This kind of technology is being developed in research centers such as Google DeepMind, which is famous for its artificial-intelligence Go player. DeepMind says this technology can be used for robotics, medical research and economics. So why not for self-driving cars?
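The idea of sequential decision making from reward signals can be illustrated with a toy example. Below is a minimal tabular Q-learning sketch on an invented five-cell “road” with stop/go actions; real self-driving research combines deep neural networks with reinforcement learning, so this is only the core idea, not a realistic system.

```python
# Tabular Q-learning on a toy road: states 0..4, reach the goal (state 4).
# Actions: 0 = stop (stay put), 1 = go (move forward one cell).
# The agent learns from a reward given only at the goal.
import random

random.seed(0)
N_STATES = 5
ACTIONS = [0, 1]
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    """Move forward on 'go'; reward 1 only when the goal is reached."""
    next_state = min(state + action, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(200):
    state = 0
    while state < N_STATES - 1:
        if random.random() < epsilon:           # explore occasionally
            action = random.choice(ACTIONS)
        else:                                   # otherwise act greedily
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward = step(state, action)
        # Q-learning update rule
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state])
                                     - Q[state][action])
        state = next_state

# After training, "go" should be preferred in every non-goal state
print([("go" if q[1] > q[0] else "stop") for q in Q[:-1]])
```

Replacing the Q-table with a deep neural network is, roughly speaking, what “deep reinforcement learning” does for much larger state spaces such as camera images.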

 

3. Interactions with human drivers

It seems very difficult to decide who is responsible for driving the car. Initially, self-driving cars might appear with a steering wheel and brakes, which means that humans can intervene in the operation of the car. When accidents happen, who is responsible, the human or the machine? When self-driving cars without a steering wheel and brakes become available, the machine is responsible, as humans can no longer control the car, so the machine is 100% responsible for accidents. It is very difficult to decide which is better, self-driving cars with or without a steering wheel and brakes. It depends on the development of technologies and regulations.

 

The impact on society will be huge when self-driving cars are introduced. Buses, taxis and trucks could be replaced with self-driving cars. Not only drivers but also road-maintenance companies, car-insurance companies, roadside shops, traffic-light makers, railway companies, highway operators, car-maintenance companies and car-parking providers will be heavily affected. Governments should consider how we can implement self-driving cars in our societies effectively. I do not think we have spare time to consider it. Let us start today!

 

(1) http://www.bloomberg.com/news/articles/2016-05-03/fiat-google-said-to-plan-partnership-on-self-driving-minivans

 

Note: Toshifumi Kuga’s opinions and analyses are personal views and are intended to be for informational purposes and general interest only and should not be construed as individual investment advice or solicitation to buy, sell or hold any security or to adopt any investment strategy. The information in this article is rendered as at publication date and may change without notice, and it is not intended as a complete analysis of every material fact regarding any country, region, market or investment.

Data from third-party sources may have been used in the preparation of this material and I, the author of the article, have not independently verified or validated such data. I accept no liability whatsoever for any loss arising from the use of this information, and reliance upon the comments, opinions and analyses in the material is at the sole discretion of the user.

Will the age of “Brain as a Service” come to us in the near future?


On 15 March 2016, I found two things that may change the world in the future. The former is the artificial-intelligence Go player “AlphaGo” and the latter is an automated marketing system, the “Google Analytics 360 Suite“. Both of them came from Google. Let me explain why I think the age of “Brain as a Service” is coming, based on these two innovations.

1. AlphaGo

You may know what AlphaGo achieved on 15 March 2016. At the Google DeepMind Challenge, the artificial-intelligence Go player played five games against a top professional Go player. It beat Lee Sedol, one of the strongest Go players in the world, 4 to 1. Go is one of the oldest games, played mainly in China, Korea, Japan and Taiwan. At the beginning of the challenge, few people thought AlphaGo could win the games, as it was always said that Go is so complex that computers would not beat professional Go players for at least another 10 years. The result, however, was completely the opposite. Other professional Go players, artificial-intelligence researchers and even people who do not play Go must have been stunned to hear the news. AlphaGo is powered by algorithms called “deep learning” and “reinforcement learning“. It can learn from the massive number of Go patterns created by human beings over a long time. Therefore, we do not need to program it specifically, one rule at a time, because the computer can learn by itself. It works like our brains: we are born without any knowledge and learn many things as we grow, until finally we are sophisticated enough to be “adults”. Yes, we can see AlphaGo as a brain. It can learn by itself at an astonishing speed, as it does not need to rest. It is highly likely that Google will use this brain to improve many of its products in the future.

 

2. Google Analytics 360 Suite

Data is king. But it is very difficult to feed data into computers effectively. Some data are stored on servers; others are stored on local PCs. No one knows how to organize data well enough to obtain insights from it. Google is strong in the consumer market: Gmail, Android and Google search were initially very popular among consumers. But the situation is gradually changing. Data and algorithms have no borders between consumers and enterprises, so it is natural for Google to pursue the enterprise market more and more. One example is the “Google Analytics 360 Suite”. Although I have not tried it yet, it is very interesting to me because it can work as a perfect interface to customers. Customers may request many things, ask questions and make complaints about your services. It is very difficult to gather these data effectively when systems are not seamlessly united. But with the “Google Analytics 360 Suite”, customer data can be tracked in a timely and effective manner. For example, data from Google Analytics 360 may flow into Google Audience Center 360, which is a data management platform (DMP). This means the data is available for any analyses that marketers want. Google Audience Center 360 can also collect data from other sources or third-party data providers, so many kinds of data can be made ready to be fed into computers effectively.

 

3. Data is gasoline for “Artificial intelligence”

AlphaGo can be considered “artificial intelligence”. Artificial intelligence is like our brain: there is no knowledge in it initially; it has only the structures needed to learn. In order to become “intelligent”, it must learn a lot from data, which means massive amounts of data must be fed into computers. Without data, artificial intelligence can do nothing. Now data management platforms like Google Audience Center 360 are in progress, and data are becoming well organized enough to be fed into computers. A centralized data-management system can collect data automatically from many systems, making it easier to feed massive amounts of data into computers and enabling them to learn from it. These things could be the trigger that changes the landscape of our businesses, societies and lives, because computers may suddenly become sophisticated enough to work just like our brains. AlphaGo teaches us that this may happen even when few people expect it. Yes, this is why I think the age of “Brain as a Service” will come in the near future. What do you think?

 

 


Data from third-party sources may have been used in the preparation of this material and I, the author of the article, have not independently verified or validated such data. I and TOSHI STATS SDN. BHD. accept no liability whatsoever for any loss arising from the use of this information, and reliance upon the comments, opinions and analyses in the material is at the sole discretion of the user.

 

This is the No. 1 open online course on “Deep Learning”. It is a New Year's present from Google!


I am very happy to have found this awesome course on “Deep Learning”. It is provided by Google through Udacity(1), one of the biggest MOOC platforms in the world, so I would like to share it with anyone who is interested in “Deep Learning”.

It is the first course that explains Deep Learning, from logistic regression to recurrent neural networks (RNNs), in a unified manner on a MOOC platform, as far as I know. I looked at it and was very surprised at how high the quality of the course is. Let me explain in more detail.

 

1. We can learn everything from logistic regression to RNNs seamlessly

This course covers many important topics, such as logistic regression, neural networks, regularization, dropout, convolutional networks, RNNs and Long Short-Term Memory (LSTM). These topics have previously been treated in separate articles; it is very rare for all of them to be explained at once, in the same place. The course reads like a story of the development of Deep Learning, so even beginners of Deep Learning can follow it. Please look at the path of the course, taken from the course video of L1 Machine Learning to Deep Learning.

[Image: DL path]

In particular, the explanations of RNNs are very easy to understand. So if you do not have enough time to take the whole course, I recommend watching just the videos on RNNs and related topics. I am sure it is worth it.
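For readers who want a feel for what the RNN videos cover, here is a minimal NumPy sketch of a vanilla RNN forward pass; the sizes and random weights are arbitrary and purely illustrative, not code from the course.

```python
# Vanilla RNN forward pass: the same weights are applied at every time
# step, and the hidden state carries information from earlier steps.
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 3, 4, 5

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
b_h = np.zeros(hidden_size)

def rnn_forward(inputs):
    """Run a sequence through the RNN, returning all hidden states."""
    h = np.zeros(hidden_size)
    states = []
    for x_t in inputs:
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)  # the recurrence
        states.append(h)
    return np.array(states)

sequence = rng.normal(size=(seq_len, input_size))
hidden_states = rnn_forward(sequence)
print(hidden_states.shape)  # (5, 4): one hidden state per time step
```

The course then shows why this plain recurrence struggles with long sequences and how LSTM cells address it.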

 

2. A little math is required, but it is not an obstacle to taking this course

This is a computer-science course. The more math you understand, the more insight you can gain from it. However, if you are not so familiar with mathematics, all you have to do is review basic knowledge of vectors, matrices and derivatives. I do not think you need to give up on the course for lack of math knowledge. Just recall your high-school math, and you can start this awesome course!

 

3. “Deep Learning” can be implemented with “TensorFlow“, an open-source library provided by Google

This is the most exciting part of the course if you are a developer or programmer. TensorFlow is a Python-based library, so many developers and programmers can become familiar with it easily. In the programming assignments, participants work from a simple neural network up to a sequence-to-sequence network with TensorFlow. It must be good! While I have not tried TensorFlow programming yet, I would like to in the near future. It is worth doing even if you are not a programmer. Let us take up the challenge!

 

 

In my view, Deep Learning for sequence data is becoming more important, as time-series data are frequently used in economic analysis, customer management and the Internet of Things. Therefore, not only data scientists but also business personnel and company executives can benefit from this course. It is free and self-paced when you watch the videos; if you need a credential, a small fee is required. Why don't you try this awesome course?

 

 

(1) Deep Learning on Udacity

https://www.udacity.com//course/viewer#!/c-ud730/l-6370362152/m-6379811815

 

 

 


 

 

“DEEP LEARNING PROJECT” starts now. I believe it works in digital marketing and economic analysis


As the new year starts, I would like to set up a new project at my company. This is beneficial not only for my company but also for readers of this article, because the project will provide good examples of predictive analytics and of implementing new tools and platforms. The new project is called the “Deep Learning project” because “Deep Learning” is used as its core calculation engine. Through the project, I would like to create a “predictive analytics environment”. Let me explain the details.

 

1. What is the goal of the project?

There are three goals of the project.

  • Obtain knowledge and expertise of predictive analytics
  • Obtain solutions for data-driven management
  • Obtain basic knowledge of Deep Learning

As big data become more and more available, we need to know how to consume them to gain insights so that we can make better business decisions. Predictive analytics is key to data-driven management, as it can answer “What comes next?” based on data. I hope you can gain expertise in predictive analytics by reading my articles about the project. I believe this is important for us, as we live in the digital economy now and will in the future.

 

2. Why is “Deep Learning” used in the project?

Since November last year, I have tried “Deep Learning” many times for predictive analytics and found it to be very accurate. It is sometimes said that it requires too much time to solve problems, but in my case I can solve many problems within 3 hours, so I consider that “Deep Learning” solves problems within a reasonable time. In the project, I would like to develop the skill of tuning parameters effectively, as “Deep Learning” requires setting several parameters, such as the number of hidden layers. I would like to focus on how the number of layers, the number of neurons, activation functions, regularization and dropout can be set according to the dataset. I think these are key to developing predictive models with good accuracy. I have already challenged MNIST hand-written digit classification, and our error rate has improved to 1.9%. This was done with H2O, an awesome analytics tool, and a MacBook Air 11, which is just a normal laptop. I would like to set up a cluster on AWS in order to improve our error rate further. “Spark”, an open-source framework, is one of the candidates for setting up a cluster.

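As an illustration of one of these tuning knobs, here is a minimal NumPy sketch of (inverted) dropout; the layer size and keep probability are arbitrary assumptions for the example, not values from our MNIST experiment.

```python
# Inverted dropout: at training time, each activation is kept with
# probability keep_prob and scaled by 1/keep_prob so the expected
# activation is unchanged; at test time the layer is left untouched.
import numpy as np

def dropout(activations, keep_prob, rng):
    mask = rng.random(activations.shape) < keep_prob  # True = keep the unit
    return activations * mask / keep_prob

rng = np.random.default_rng(42)
layer_output = np.ones((1000, 100))   # dummy activations, all 1.0
dropped = dropout(layer_output, keep_prob=0.8, rng=rng)

# Roughly 20% of units are zeroed, yet the mean activation stays near 1.0
print(round(dropped.mean(), 3), round((dropped == 0).mean(), 3))
```

Randomly silencing units this way prevents the network from relying too heavily on any single neuron, which is why dropout acts as a regularizer.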

3. What businesses can benefit from introducing “Deep Learning”?

“Deep Learning” is very flexible, so it can be applied to many problems across industries. Healthcare, finance, retail, travel, and food and beverage might benefit from introducing it. Governments could benefit, too. In the project, I would like to focus on the following areas:

  • Digital marketing
  • Economic analysis

First, I would like to create a database to store the data to be analyzed. Once it is created, I will perform predictive analytics on digital marketing and economic analysis. Best practices will be shared with you here, to reach our goal of “obtaining knowledge and expertise of predictive analytics”. Deep Learning is relatively new to both of these problem areas, so I expect new insights to be obtained. For digital marketing, I would like to focus on social media and on measuring the effectiveness of digital-marketing strategies. Natural language processing has developed at an astonishing speed recently, so I believe there could be a good way to analyze text data. If you have any suggestions on predictive analytics in digital marketing, please let me know. They are always welcome!

 

I use open-source software to create the predictive-analytics environment, so it is very easy for you to create a similar environment on your own system or cloud. I believe open source is key to developing superior predictive models, as everyone can participate in the project. You do not need to pay any fees to introduce the tools used in the project, as they are open source. Ownership of the problems should be ours, rather than the software vendors'. Why don't you join us and enjoy it? If you want to receive updates on the project, please sign up here.

 

 


It is an awesome course to start learning digital marketing in 2016!

Happy New Year! This is the first article of 2016, so I would like to recommend a course to everyone who wants to learn digital marketing.

“Social Media in Public Relations“ is provided by Dr. Tracy Loh, Visiting Fellow at the Department of Communications and New Media, National University of Singapore, through Coursera, one of the biggest MOOC platforms.

It is a good starting point for learning digital marketing in a theoretical manner. I would like to introduce several interesting points from the course, as they are useful for business personnel interested in marketing and public relations. These points come from week 3, “Content Creation and Management”.

 

1. Levels of content

There is a lot of content in social media, so it helps to classify it effectively. Dr. Tracy Loh gives us levels of content based on its value, as follows:

Filler: information that is copied from other sources

Basic content: original content, but relatively simple

Authority-building content: original content that positions the organization as an authority in a particular area of relevance to the organization

Pillar content: educational content that readers use over time, save and share with others

Flagship: seminal works that set the tone on an issue and which people refer back to for a long time to come

I think this classification is very useful when we consider a portfolio of our content in terms of marketing and public-relations strategies. We can analyze our own content portfolio based on these levels of content. My articles may be classified as “authority-building content”. I would like to write “flagship” content in the future, even though it is very difficult. Yes, you can aim for “flagship” too. It should be noted that all of this content should be used to reach our marketing and public-relations goals as a whole.

 

2. Social currency

To create viral content, it is important that the content has “social currency”. Dr. Tracy Loh explains that social currency can be found in content that contains a level of “inner remarkability”. For example, when you share new information that has not yet been shared in your circle, your social currency increases. This is one of three aspects of social currency; the others are explained in the course.

 

3. Trigger

A “trigger” is important for making content viral, because daily-life events can become associated with certain products. The two examples below are famous because everyone knows they are associated with everyday events. Let us watch these short videos.

Have a break, have a Kit Kat

What time is it?   It’s Tiger Time

Dr. Tracy Loh quotes this phrase: “Social currency gets people talking, but triggers keep them talking. Top of mind means tip of tongue.” (Jonah Berger, 2013)

 

 

I think we can apply the points above to our marketing strategies effectively, because they are theory-driven but not complicated. It is easy to gain insights based on the points I have described.

I have mentioned just a part of the course by way of introduction. The course has many interesting topics and provides much knowledge about social media. It is free to simply watch the course; if you need the certificate, a fee must be paid. I recommend that you look over the course first and, if you like it, upgrade to the certificate when it is available. Let us enjoy this course in 2016!

How will “Deep Learning” change our daily lives in 2016?


“Deep Learning” is one of the major technologies of artificial intelligence. In April 2013, two and a half years ago, MIT Technology Review selected “Deep Learning” as one of its 10 Breakthrough Technologies 2013. Since then it has developed so rapidly that it is not a dream anymore. This is the final article of 2015, so I would like to look back at the progress of “Deep Learning” this year and consider how it will change our daily lives in 2016.

 

How has “Deep Learning” progressed in 2015?

1. “Deep Learning” moved from laboratories to software developers in the real world

In 2014, the major breakthroughs in deep learning occurred in the big laboratories of major IT companies and universities, because deep learning required complex programming and huge computational resources. To do it effectively, massive computational assets and many machine-learning researchers were required. But in 2015, many deep-learning programs and software packages jumped out of the laboratory into the real world. Torch, Chainer, H2O and TensorFlow are examples. Anyone can develop apps with this software, as it is open source, and it is also expected to be used in production. For example, H2O can automatically export models as POJOs (Plain Old Java Objects), and this code can be implemented in a production system. Therefore, there are fewer barriers between development and production anymore, which will accelerate the development of apps in practice.

 

2. “Deep Learning” started understanding languages gradually

Most people use more than one social network, such as Facebook, LinkedIn, Twitter and Instagram, and these contain a lot of text data. They would be a treasure trove if we could understand what they say immediately. In reality, there is too much data for people to read item by item. Then the question comes: can computers read text data instead of us? Many top researchers are working on this area, often called “Natural Language Processing“. For short sentences, computers can now understand the meaning. Such an app already appeared in late 2015: “Smart Reply” by Google. It can generate candidate replies based on the text of a received email. Behind this app, “LSTM (Long Short-Term Memory)”, one of the deep-learning algorithms, is used. In 2016, computers might understand longer sentences and paragraphs and answer questions based on their understanding. It means that computers are stepping closer to us in our daily lives.
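For readers curious about the algorithm, here is a minimal NumPy sketch of a single LSTM cell step. The weights are random placeholders; this is not Google's implementation, only the standard gating structure that lets the cell keep or forget information over a sequence.

```python
# One LSTM cell step: forget (f), input (i) and output (o) gates decide
# how the cell state c is updated and what hidden state h is emitted.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
input_size, hidden_size = 3, 4
# One weight matrix and bias per gate: f, i, o, and candidate g
W = {g: rng.normal(scale=0.1, size=(hidden_size, input_size + hidden_size))
     for g in "fiog"}
b = {g: np.zeros(hidden_size) for g in "fiog"}

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    f = sigmoid(W["f"] @ z + b["f"])   # forget gate
    i = sigmoid(W["i"] @ z + b["i"])   # input gate
    o = sigmoid(W["o"] @ z + b["o"])   # output gate
    g = np.tanh(W["g"] @ z + b["g"])   # candidate cell state
    c = f * c_prev + i * g             # new cell state
    h = o * np.tanh(c)                 # new hidden state
    return h, c

h = np.zeros(hidden_size)
c = np.zeros(hidden_size)
for x in rng.normal(size=(6, input_size)):   # a 6-step input sequence
    h, c = lstm_step(x, h, c)
print(h.shape, c.shape)  # (4,) (4,)
```

In an application like Smart Reply, cells like this read the incoming email word by word and the final states condition the generation of candidate replies.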

 

3. Cloud services support “Deep Learning” effectively.

Once big data are obtained, infrastructure such as computational resources, storage and networks is needed. If we want to try deep learning, it is better to have fast computational resources, such as Spark. Amazon Web Services, Microsoft Azure, Google Cloud Platform and IBM Bluemix provide many services for implementing deep learning at scale. Therefore, it is getting much easier to start implementing “Deep Learning” in a system. Most cloud services are “pay as you go”, so there is no up-front cost to start using them. This is good, especially for small companies and startups, as they usually have only limited budgets for infrastructure.

 

 

How will “Deep Learning” change our daily lives in 2016? 

Based on the development of “Deep Learning” in 2015, many consumer apps with “Deep Learning” might appear in the market in 2016. The difference between consumer apps with and without “Deep Learning” is that such apps can behave differently for each user and condition. For example, you and your colleagues might see completely different home screens even though you use the same app, because “Deep Learning” enables the app to optimize itself to maximize customer satisfaction. In retail apps, the top page can differ by customer according to customer preferences. In education apps, learners can see different content and questions as they progress through the courses. In navigation apps, a route might appear automatically based on your specific schedule, such as the route to the airport on the day of a business trip. These are just examples; it can be applied across industries. In addition, an app can become more sophisticated and accurate the longer you use it, because it can learn your behavior rapidly. It can always be updated to maximize customer satisfaction. This means we do not need to choose what we want one by one, because computers do that for us. Buttons and navigation menus are less necessary in such apps; all you have to do is input your latest schedule, and everything can be optimized based on the updated information. Are people getting lazy? Maybe yes, if apps become as sophisticated as expected. It must be good for all of us. We may be free to do what we want!

 

 

Actually, I quit an investment bank in Tokyo to set up my startup at the same time that MIT Technology Review released the 10 Breakthrough Technologies 2013. Initially I knew the term “Deep Learning”, but I could not understand how important it was to us, because it was completely new to me. However, I am so confident now that I always say “Deep Learning” is changing the landscape of jobs, industries and societies. Do you agree? I imagine everyone will agree by the end of 2016!

 

 

 


These are small Christmas presents for you. Thanks for your support this year!


I started the group “Big data and digital economy” on LinkedIn on 15 April this year. Now there are over 300 participants! This is beyond my initial expectation, so I would like to thank all of you for your support.

I have prepared several small Christmas presents here. If you are interested, please let me know. I will do my best!

 

1. Your themes for my weekly letter

As you know, I write the weekly letter “big data and digital economy” every week and publish it on LinkedIn. If you are interested in specific themes, I would like to research and write about them as far as I can. Anything related to the digital economy is fine. Please let me know!

 

2.  Applications of data analysis in 2016

In 2016, I would like to develop applications using data analysis and make them public on the internet. As long as the data is public, we can analyze it freely. Therefore, if you would like to see an analysis based on public data, could you let me know what you are interested in? Here are example applications built with “Shiny”, a tool that is very popular among data scientists.

http://shiny.rstudio.com/gallery/

 

3. Announcement of the R-programming platform project

This is one of my company’s projects for 2016. To help business people learn R programming, I would like to set up a platform where participants can learn R programming interactively and with ease. Good content is essential for keeping participants motivated, so if you have specific themes you want to learn, could you let me know? These themes may be included as programs on the platform going forward! Here is an introductory video of the platform.

http://www.toshistats.net/r-programming-platform/

 

Thanks for your support in 2015 and let us enjoy predictive analytics in 2016!

Do we need “snow” to celebrate Christmas in December?

15285

There are many Christmas trees in shopping malls, and they make us a little happier. Children must be expecting big presents on Christmas Eve. I am also waiting for my presents, although I do not know where my Santa Claus is now.

This is my second Christmas living in KL, and it feels a little strange because the Christmas season here is hot. In Japan, December is cold and sometimes brings heavy snow. Whenever I saw Christmas trees in Japan, it was always cold. But now it is hot in KL! Most ASEAN countries have no snow, so there are few opportunities to experience it.

The picture above was taken in KL. There is snow on the roof of the house, but I do not see snow on the trees; the white balls look like mere decorations to me. That must be fine, as there is no snow in KL. On the other hand, the picture below was taken in Japan. There are many symbols of snow on the Christmas trees.

23182

Some of you may have been to Hokkaido, the northern part of Japan, to enjoy snow in winter. The whole landscape there is sometimes covered with snow in winter, so everything looks white and it is very quiet; no sound is heard because the noise is absorbed by the thick snow on the ground. In such a place, Christmas trees must have real snow on them. So it may differ from location to location.

I do not have any statistics on how many trees have “snow” on them in shopping malls around the world, but the question interests me because it tells us how weather and climate affect our behavior. Because Japan has four seasons (spring, summer, autumn and winter), climate predictions are very important for companies as well. A hotter summer means more sales of juice, ice cream and air conditioners, and vice versa. If winter is milder than usual, sweaters and coats do not sell well. A mild winter also means less flu, which is good for children and senior people, but not so good for the pharmaceutical industry. In this way, weather and climate have huge impacts on our behavior and the economy.

Weather and climate data have so far been relatively underused in business decision-making. But as more of this data becomes available and predictions become more accurate, it is now worthwhile to use weather and climate data in business. I would like to present examples of weather and climate analysis going forward.
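As a first toy example of this kind of analysis, we can fit a simple linear model of sales against temperature. The numbers below are invented purely for illustration; a real study would use actual weather records and sales figures.

```python
import numpy as np

# Toy data (invented for illustration): average monthly
# temperature (degrees C) vs. ice-cream units sold.
temp  = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 33.0])
sales = np.array([20.0, 28.0, 40.0, 55.0, 70.0, 90.0, 98.0])

# Least-squares fit: sales ~ slope * temp + intercept
slope, intercept = np.polyfit(temp, sales, 1)

# A positive slope quantifies "hotter summer means more sales":
# roughly this many extra units sold per extra degree.
print(round(slope, 1))
```

The same two-line fit could be repeated for sweaters or coats, where we would expect a negative slope against temperature.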

 

Anyway, Merry Christmas to all of you!

Can computers write the sentences of documents to support you in the future?

49cad23354bef871147702f5880a45c6_s

This is amazing! It is one of the most incredible applications I have seen this year! I am very excited about it. Let me share it with you so you can use it, too.

This is “Smart Reply” in Inbox, an e-mail application from Google. It was announced on 3rd November. I tried it today.

For example, I got an e-mail from Hiro. He asked me to have lunch tomorrow. On the screen, three candidate replies appear automatically: 1. Yes, what time? 2. Yes, what’s up? 3. No, sorry. These candidates are created after the computer understands what Hiro said in the e-mail, so each of them feels very natural to me.

mail1

So all I have to do is choose the first candidate and send it to Hiro. It is easy!

mail2

According to Google, the state-of-the-art technology “Long Short-Term Memory” (LSTM) is used in this application.
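Google has not published Smart Reply’s exact model here, but the core idea of an LSTM, a recurrent unit with gates that decide what to remember and what to forget as it reads a sequence, can be sketched in plain NumPy. The dimensions and random weights below are purely illustrative, not Google’s.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold the stacked parameters
    for the input, forget and output gates and the candidate cell."""
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # shape (4 * hidden,)
    i = sigmoid(z[0:hidden])              # input gate: what to write
    f = sigmoid(z[hidden:2 * hidden])     # forget gate: what to keep
    o = sigmoid(z[2 * hidden:3 * hidden]) # output gate: what to expose
    g = np.tanh(z[3 * hidden:4 * hidden]) # candidate cell content
    c = f * c_prev + i * g                # new cell state (long-term memory)
    h = o * np.tanh(c)                    # new hidden state (short-term output)
    return h, c

# Toy dimensions: 3-dim input words, 2-dim hidden state.
rng = np.random.default_rng(0)
hidden, inp = 2, 3
W = rng.normal(size=(4 * hidden, inp))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)

h, c = np.zeros(hidden), np.zeros(hidden)
for x in rng.normal(size=(5, inp)):       # run over a 5-step toy sequence
    h, c = lstm_step(x, h, c, W, U, b)

print(h.shape)  # the final hidden state is a fixed-size summary of the sequence
```

The point of the gating is that the cell state `c` can carry information across many steps, which is why LSTMs handle whole sentences better than plain recurrent networks.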

I always wonder how computers understand the meaning of words and sentences. In this application, sentences are represented as fixed-size vectors; that is, each sentence is converted into a sequence of numbers. If two sentences have the same meaning, their vectors should be similar to each other even though the original sentences look different.
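The encoder itself is not available to us, but the idea that similar sentences get similar vectors can be illustrated with a crude stand-in: average hand-made word vectors into a sentence vector and compare sentences by cosine similarity. The tiny vocabulary and its vectors below are entirely made up for illustration.

```python
import numpy as np

# Tiny hand-made word vectors (illustrative only, 4 dimensions).
emb = {
    "lunch":    np.array([1.0, 0.0, 0.0, 0.0]),
    "eat":      np.array([0.9, 0.1, 0.0, 0.0]),
    "meeting":  np.array([0.0, 1.0, 0.0, 0.0]),
    "tomorrow": np.array([0.0, 0.0, 1.0, 0.0]),
    "shall":    np.array([0.0, 0.0, 0.0, 1.0]),
    "we":       np.array([0.0, 0.0, 0.0, 1.0]),
}

def sentence_vector(sentence, embeddings, dim=4):
    """Average the word vectors of a sentence: a crude stand-in for
    the fixed-size vector an LSTM encoder would produce."""
    words = [w for w in sentence.lower().split() if w in embeddings]
    if not words:
        return np.zeros(dim)
    return np.mean([embeddings[w] for w in words], axis=0)

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = sentence_vector("Shall we eat lunch tomorrow", emb)
b = sentence_vector("We eat tomorrow", emb)
c = sentence_vector("Meeting tomorrow", emb)

# The two lunch-related sentences sit closer together in vector
# space than the lunch sentence and the meeting sentence.
print(cosine(a, b) > cosine(a, c))
```

A real system learns these vectors from data instead of writing them by hand, but the comparison step, measuring distance between fixed-size vectors, is the same idea.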

 

This technology is a form of machine learning. Therefore, the more people use it, the more sophisticated it becomes, because it can learn by itself. For now it applies to relatively short texts like e-mail, but I am sure it will be applied to longer texts, such as official business documents. I wonder when that will happen. Prof. Geoffrey Hinton is expected to research this area intensively. If it happens, computers will be able to understand what documents mean and create sentences based on their understanding. I do not know how industries will change when that happens.

This kind of technology is sometimes referred to as “Natural Language Processing” or “NLP”. I want to focus on this area as a main research topic of my company in 2016. I will share progress through my weekly letter here.

 

I recommend you try Smart Reply in Inbox and enjoy it! Let me know your impressions. Cheers!

Note: Toshifumi Kuga’s opinions and analyses are personal views and are intended to be for informational purposes and general interest only and should not be construed as individual investment advice or solicitation to buy, sell or hold any security or to adopt any investment strategy.  The information in this article is rendered as at publication date and may change without notice and it is not intended as a complete analysis of every material fact regarding any country, region market or investment.

Data from third-party sources may have been used in the preparation of this material, and I, the author of the article, have not independently verified or validated such data. I and TOSHI STATS SDN. BHD. accept no liability whatsoever for any loss arising from the use of this information, and reliance upon the comments, opinions and analyses in the material is at the sole discretion of the user.