We might need less energy, as artificial intelligence enables us to use it more efficiently


When I heard the news about the reduction of energy consumption in Google's data centers (1), I was very surprised, because these facilities have already been optimized for a long time. That makes further efficiency gains very difficult.

The work was done by Google DeepMind, which has been developing general artificial intelligence. Google DeepMind is an expert in deep learning, one of the major technologies of artificial intelligence. Its deep learning models reduced the energy consumption of Google's data centers dramatically. Many kinds of data are collected in a data center, such as temperatures, power, pump speeds and so on, and the models use them to control energy consumption more efficiently. This is amazing. If you are interested in the details, you can read DeepMind's own blog post from the link below.
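DeepMind has not published the details of its models yet, but the basic idea of learning an efficiency metric from sensor data can be sketched in a few lines. This is only my own toy illustration with scikit-learn and made-up sensor columns, not DeepMind's system:

```python
# A minimal sketch (my own illustration, NOT DeepMind's system): learn to predict
# a data-center efficiency metric such as PUE from sensor readings, then use the
# model to compare candidate control settings.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical sensor log: temperature (C), IT power (kW), pump speed (%)
X = rng.uniform([18, 200, 30], [30, 400, 90], size=(1000, 3))
# Hypothetical PUE that worsens with temperature and improves with pump speed
y = 1.1 + 0.01 * X[:, 0] - 0.001 * X[:, 2] + rng.normal(0, 0.01, 1000)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# Score two candidate operating points and pick the more efficient one
candidates = np.array([[26, 300, 40], [22, 300, 80]])
print(model.predict(candidates))  # lower predicted PUE = less energy wasted
```

In the real system the control decisions are far more complex, but the loop is the same: sensors produce data, a model learns from it, and the predictions guide more efficient settings.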

 

It is easy to imagine that there is much more room for efficiency gains outside Google's data centers. There are many huge systems, such as factories, airports, power plants, hospitals, schools and shopping malls. But few of them have the kind of control that Google DeepMind provides. I think they can become more efficient, based on the points below.

1. More data will be available from devices, sensors and social media

Most people have their own mobile devices and use them every day. Sensors are getting cheaper, and there are many of them in factories, in engines on airplanes and automobiles, in power plants and so on. People use social media and generate their own content every day. This means a massive amount of data is being generated, and the volume is increasing dramatically. The more data is available, the more chances we have to improve energy consumption.

 

2. Computing resources are available anywhere, anytime

Data by itself says nothing until we analyze it. When a massive amount of data is available, a massive amount of computing resources is needed. But do not worry: we now have the cloud. Without buying our own computing resources, such as servers, we can start analyzing data in the cloud. The cloud uses a "pay as you go" model, which means we do not need a huge initial investment to start understanding data; we can just start today. Cloud providers such as Amazon Web Services, Microsoft Azure and Google Cloud Platform make massive amounts of computing resources available to us. Fast computational resources, such as GPUs (graphics processing units), are also available. So we can make the most of massive amounts of data.

 

3. Algorithms will improve at an astonishing speed

I have heard that more than 1,000 research papers are submitted to a single major international machine learning conference. That means many researchers are developing their own models and improving the algorithms every day, and there are many such conferences every year. I cannot imagine how many algorithmic innovations will appear in the future.

 

At the end of its blog post, Google DeepMind says:

“We are planning to roll out this system more broadly and will share how we did it in an upcoming publication, so that other data centre and industrial system operators — and ultimately the environment — can benefit from this major step forward.”
So let us see what they say in the next publication. Then we can discuss how to apply their technology to our own problems. It must be exciting!

 

 

(1) DeepMind AI Reduces Google data center cooling bill by 40%,  21st July 2016

https://deepmind.com/blog

 

 

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software


What is “deep learning”? How can we understand it?


"What is deep learning?" This question is asked frequently, because deep learning is one of the hottest topics across industries. If you are not an expert in the field, the answer below from Andrew Ng is one of the best I know of.
“It’s a learning technology that works by loosely simulating the brain. Your brain and mine work by having massive amounts of neurons, jam-packed, talking to each other. And deep learning works by having a loose simulation of neurons — hundreds of thousands of millions of neurons — simulating the computer, talking to each other.”(1)
Yes, that is right: deep learning is explained by comparison with the brain. But there is a problem. Do you understand how your brain works? It is very difficult, because we cannot see it directly and there is no visible movement in it; electrical signals are just exchanged very frequently. We cannot form a clear picture of how our brain works, and the same is true of deep learning.
So let me change my strategy and take a purpose-oriented explanation rather than a technological one. Deep learning serves the purpose of understanding how human beings think, feel and behave. When we sit down in front of computers, they can see us, listen to us and understand what we want. Deep learning enables computers to do that. Computers are therefore no longer just calculators; they are starting to understand us through the technology called deep learning.

Then we can understand these terms of computer science with ease:
Power to see the world: Computer vision
Power to read the text: Natural language processing (NLP)
Power to understand what you say: Speech recognition

Yes, these sound like human abilities. Although it is at an early stage, computers are starting to understand us, slowly but steadily. If you are still curious about how it works, you can go into the world of math and programming, where we can understand it more precisely. "TF club" is named after TensorFlow, one of the most famous libraries for deep learning. You can see an image of TensorFlow in this article. I hope you can join us on this journey!

Deep Learning
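If you would like a first small taste of that world, a single layer of a neural network is just a matrix multiplication followed by a simple nonlinearity. Here is a toy sketch of my own with NumPy (not TensorFlow itself):

```python
# A toy "deep learning" building block: each layer multiplies inputs by weights,
# adds a bias and applies a nonlinearity. Stacking such layers gives a deep network.
import numpy as np

def relu(x):
    return np.maximum(0, x)

x = np.array([0.5, -1.2, 3.0])                 # one input example with 3 features
W1, b1 = np.random.randn(3, 4), np.zeros(4)    # first layer: 3 -> 4 units
W2, b2 = np.random.randn(4, 2), np.zeros(2)    # second layer: 4 -> 2 outputs

hidden = relu(x @ W1 + b1)                     # "neurons talking to each other"
output = hidden @ W2 + b2
print(output)                                  # raw scores; training would adjust W1, W2
```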

 

 

 

(1) Andrew Ng, the Stanford computer scientist behind Google’s deep learning “Brain” team and now Baidu’s chief scientist. Deep-Learning AI Is Taking Over Tech. What Is It?, Re/Code, July 15, 2015

 

 

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software

How can we track our mobile e-commerce? Google Analytics Academy is a good place to start learning!


Last week, I found that Alibaba, the biggest e-commerce company in China, announced its financial results for Q2 2016. One of the things that caught my attention is that 75% of sales come from mobile devices rather than PCs.

This is amazing, and much bigger than I expected. Considering that many younger people use mobile devices as their main devices, this rate can be expected to increase steadily going forward.

That made me wonder how we can easily track customer behavior in mobile e-commerce, because it is getting more important as more customers come to your online shop from mobile devices. What do you think?

 

I found that Google Analytics Academy, which teaches how to use Google Analytics, provides awesome online courses for free. Even if you are not a Google Analytics user, it is very beneficial because it shares the ideas and concepts behind mobile e-commerce. If you want to know which marketing generates the most valuable users, it is worth learning. Let me explain several takeaways.

 

1. “High-value user” vs “Low-value user”

When we have many users in our mobile e-commerce shop, we find that some users buy more products or subscriptions than others. They are "high-value users". On the other hand, some users rarely buy anything; they are "low-value users". This idea is useful for preparing target lists for new campaigns and for prioritizing among many customers. Our goal is therefore to increase the number of high-value users effectively.
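As a rough illustration, this is how such a split might be computed from raw order data. It is only a sketch with made-up columns, not an export from Google Analytics:

```python
# A minimal sketch (hypothetical data, not Google Analytics output): label users
# as high-value or low-value by their total revenue.
import pandas as pd

orders = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "revenue": [120.0, 80.0, 5.0, 60.0, 90.0, 150.0],
})

revenue_per_user = orders.groupby("user_id")["revenue"].sum()
threshold = revenue_per_user.quantile(0.75)          # e.g. the top 25% of spenders
labels = (revenue_per_user >= threshold).map({True: "high-value", False: "low-value"})
print(labels)
```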

 

2. Customer segmentation is critically important

Segmentation means preparing the right subsets of data to get insights from it. It is popular and widely used across industries. When we analyze data, creating appropriate user segments is critically important. For example, you may want to create a segment of "users who buy" versus "users who do not buy" and find out what factors influence people to buy. There are many segmentations you can imagine, and you can create your own segments in Google Analytics!

 

3. How to measure customer behavior

It is also important to track the behavior of each customer. There is a lot of data that can be obtained, for example which screens each customer visits and what actions they take, how many minutes they stay on each screen and how much they spend on products. The former kind of data is "categorical" and the latter "numerical". Note that this data should be relevant to identifying and increasing the number of high-value users, as that is our goal. Once you identify good candidate data, you can add it to your own segments and analyze it more deeply to extract insights.
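To make this concrete, here is a small sketch with invented columns that mixes a categorical field (the screen a user visited) with numerical ones (minutes in the app, amount spent) to build a simple "buyers" segment:

```python
# A minimal sketch (hypothetical columns, not a real analytics export): build a
# "buyers" segment and compare categorical and numerical behaviour data.
import pandas as pd

sessions = pd.DataFrame({
    "user_id":        [1, 2, 3, 4],
    "screen":         ["home", "product", "checkout", "product"],  # categorical
    "minutes_on_app": [3.2, 8.5, 12.0, 1.1],                       # numerical
    "spend":          [0.0, 25.0, 80.0, 0.0],                      # numerical
})

buyers = sessions[sessions["spend"] > 0]          # the "buy-users" segment
print(buyers["screen"].value_counts())            # which screens buyers visit
print(buyers["minutes_on_app"].mean())            # how long buyers stay
```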

 

In addition to the online courses, Google Analytics makes real data from its own e-commerce shop, the "Google Merchandise Store", available to everyone who wants to learn, for free. It is called the "Google Analytics demo account". This is also an amazing service, because real-world e-commerce data was rarely available to us before. I would like to dig deeper and extract insights from it in the near future, and of course I will share them here, as they will be beneficial to everyone. Please see one of the awesome reports from the Google Analytics demo account below.

Google analytics DA

 

Do you like it? I recommend starting to learn with Google Analytics Academy. Once you are familiar with mobile e-commerce data, it is much easier to move on to more advanced data analytics, such as machine learning. The course is free, so you can access many awesome contents without paying any fee. Let us try it and enjoy!

 

 

 

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.

 

 

What is the marketing strategy in the age of "everything digital"?


In July, I researched TensorFlow, a deep learning library by Google, and performed several classification tasks. Although it is open-source software and free for everyone, its performance is incredible, as I said in my last article.

While performing an image classification task with TensorFlow, I found that computers can see our world better and better as deep learning algorithms improve dramatically. In particular, they are getting better at extracting "features", which is what we need to classify images.

To a computer, an image is just a sequence of numbers. Some features are therefore difficult for us to interpret, but computers can work with them. That means computers might see things in images that we cannot. This is amazing!

Open CV

 

Open CV2

This is an example of how images are represented as a sequence of numbers. You can see many numbers above (these are just a small part of all of them). These numbers can be converted into the image above, which we can see. But a computer cannot see the image directly; it can only see it through those numbers. We, on the other hand, cannot make sense of the sequence of numbers at all, as it is too complicated. It is an interesting contrast.
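You can reproduce this observation yourself in a few lines. This is a minimal sketch of my own with Pillow and NumPy, not the OpenCV code behind the screenshots above, and the file name is just a placeholder:

```python
# A minimal sketch: to a computer, an image is just an array of numbers.
import numpy as np
from PIL import Image

# Hypothetical file name; replace with any image on your machine.
image = Image.open("tshirt.jpg").convert("L")      # grayscale for simplicity
pixels = np.array(image)

print(pixels.shape)      # e.g. (480, 640): height x width
print(pixels[:3, :8])    # the top-left corner as raw numbers between 0 and 255
```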

In marketing, when images of products are provided, computers might see what is needed to improve the products so that they sell better, because computers understand these products in a different way than we do. That might give us a new way to think about marketing strategy. Let us take T-shirts as an example. We usually consider things like color, shape, texture, the drawing on it, and price. Yes, these are examples of "features" of T-shirts, because T-shirts can be represented by them. But computers might extract more from the images of T-shirts than we do; they might create their own features of T-shirts.

 

With that in mind, I would like to point out three things to consider for a new marketing strategy.

1. Computers might extract more information than we do from the same images

As I explained, computers can see images in a different way than we do. The same is true for other data, such as text or voice mail, since they are also just sequences of numbers to a computer. Therefore, once deep learning algorithms improve further, computers might understand our customers' behavior from customer-related data better than we do. We may not always understand how computers reach their conclusions, because they treat text and speech as sequences of numbers and produce many features that are difficult for us to explain.
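To see what such machine-made features look like, we can run an image through a pretrained convolutional network and keep only its internal representation. The sketch below uses MobileNetV2 from tf.keras purely as an example of a feature extractor; it is my own illustration, not part of the original experiment:

```python
# A minimal sketch (my own illustration): use a pretrained CNN as a feature
# extractor, so an image becomes a vector of learned "features".
import numpy as np
import tensorflow as tf

# Hypothetical input: any RGB image resized to 224x224; a random array stands in here.
image = np.random.rand(1, 224, 224, 3).astype("float32")

# MobileNetV2 without its classification head returns a 1280-dimensional feature vector.
extractor = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights="imagenet"
)
features = extractor.predict(image)
print(features.shape)  # (1, 1280)
```

The 1,280 numbers it returns are meaningful to the model but not directly interpretable by us, which is exactly the point of this section.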

 

2. Computers might see many kinds of data, as customers generate massive amounts of it

Not only images but also other data, such as text or voice mail, are available to computers, since they too are just sequences of numbers. Now everything from images to voice messages is going digital. I would like to make computers understand all of it with deep learning. We cannot say in advance which features computers will use when they look at images or text, but I believe useful and beneficial things will be found.

 

3. Computers can work on a real-time basis

As you know, computers can work 24 hours a day, 365 days a year, so they can operate on a real-time basis. When new data comes in, an answer can be produced in real time. That answer can trigger the customer's next action, which is also recorded digitally and fed into the computers again. So, when computers operate without stopping or resting, a lot of digital data is generated, and the interactions with customers might trigger chain reactions. I would like to call this "digital on digital".

 

Images, social media, e-mails from customers, voice mail, the wording of promotions and sensor data from customers are all "digital". So there are many things computers can see. They may find many features for understanding customer behavior and preferences in real time. We need system infrastructure that enables computers to see this data and report the insights from it. Do you agree?

 

 

 

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.

 

This is our new platform, provided by Google. It is amazing because it is so accurate!


In the deep learning project for digital marketing, we need superior tools to perform data analysis and deep learning. I have been watching TensorFlow, open-source software provided by Google, since it was published in November 2015. According to one of the latest surveys by KDnuggets, TensorFlow is the top-ranked tool for deep learning (H2O, which our company uses as its main AI engine, is also getting popular) (1).

I performed an image recognition task with TensorFlow to see how it works. Below are the results of my experiment. MNIST, a dataset of handwritten digits from 0 to 9, is used for the experiment, and I chose a convolutional network to perform it. How well can TensorFlow classify the digits?

MNIST

I set up the TensorFlow program in Jupyter like this; it comes from the TensorFlow tutorials.

MNIST 0.81
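I cannot paste the whole notebook here, but the spirit of the tutorial can be sketched as below. This is a compact modern-style version written with tf.keras, not the exact tutorial code shown in the screenshot, so the accuracy you get may differ slightly from mine:

```python
# A minimal sketch of an MNIST convolutional network with tf.keras
# (not the exact tutorial code in the screenshot above).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0     # shape (60000, 28, 28, 1), scaled to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_data=(x_test, y_test))
print(model.evaluate(x_test, y_test))    # around 99% test accuracy is typical
```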

 

This is the result. It was obtained after 80 minutes of training. My machine is a MacBook Air 11 (1.4 GHz Intel Core i5, 4 GB memory).

MNIST 0.81 3

Can you see the accuracy? The accuracy rate is 0.9929, so the error rate is just 0.71%. It is amazing!

MNIST 0.81 2r

Based on my experiment, TensorFlow is an awesome tool for deep learning. I found that many other algorithms, such as LSTM and reinforcement learning, are also available in TensorFlow. The more algorithms we have, the more flexible our strategy for digital marketing solutions can be.

 

We now have this awesome tool for deep learning, and from now on we can analyze a lot of data with TensorFlow. I will share good insights from the data in the project to promote digital marketing. As I said before, TensorFlow is open-source software: it is free to use in our businesses, with no fees to pay. This is a big advantage for us!

I cannot say TensorFlow is a tool for beginners, as it is a programming framework for deep learning (H2O, by contrast, can be operated without programming through a GUI). But if you are familiar with Python or a similar language, it is for you! You can download and use it without paying any fees, so you can try it yourself. This is my strong recommendation!

 

TensorFlow: Large-scale machine learning on heterogeneous systems

(1) R, Python Duel As Top Analytics, Data Science Software – KDnuggets 2016 Software Poll Results

http://www.kdnuggets.com/2016/06/r-python-top-analytics-data-mining-data-science-software.html

 

 

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.

 

"DEEP LEARNING PROJECT for Digital Marketing" starts today. Here I present the probability of visiting the store


At the beginning of this year, I set up a new project in my company, called the "Deep Learning project" because deep learning is used as its core calculation engine. Now that I have set up a predictive system that predicts customer response to a direct mailing campaign, I would like to start a sub-project called "DEEP LEARNING PROJECT for Digital Marketing". I think the results of the project can be applied across industries such as healthcare, finance, retail, travel and hotels, food and beverage, entertainment and so on. First, I would like to explain how we obtain, for each customer, the probability of visiting the store.

 

1. What is the progress of the project so far?

There has been progress on several fronts:

  • Developing the model to obtain the probability of visiting the store
  • Developing the scoring process to assign the probability to each customer
  • Implementing the predictive system, using Excel as an interface

Let me explain our predictive system. We built it on Microsoft Azure Machine Learning Studio. The beauty of this platform is that Excel, which everyone uses, can serve as the interface for inputting and outputting data. Below is our interface to the predictive system through online Excel. Logistic regression in Azure Machine Learning is used as our predictive model.

The second row (highlighted) is the window to input customer data.

Azure ML 1

Once customer data is entered, the probability that the customer will visit the store is output (see the red characters and number below). In this case (sample data No. 1), the customer is unlikely to visit the store, as the scored probability is very low (0.06).

Azure ML 3

 

On the other hand, in this case (sample data No. 5), the customer is likely to visit the store, as the scored probability is relatively high (0.28). If you want to know how it works, please watch the video.

Azure ML 2

Azure ML 4
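If you are curious what the scoring step looks like outside Azure ML, here is a minimal sketch with scikit-learn and made-up customer features (days since last visit and number of past purchases); it is not the actual model behind the screenshots:

```python
# A minimal sketch (hypothetical features, not the Azure ML model): a logistic
# regression turns customer attributes into a "scored probability" of visiting.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: days since last visit, number of past purchases
X_train = np.array([[60, 0], [45, 1], [30, 2], [10, 5], [5, 8], [2, 10]])
y_train = np.array([0, 0, 0, 1, 1, 1])          # 1 = visited the store

model = LogisticRegression()
model.fit(X_train, y_train)

new_customers = np.array([[50, 1], [7, 6]])
print(model.predict_proba(new_customers)[:, 1])  # scored probabilities of visiting
```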

 

2. What is next in our project?

Once we have created the model and implemented the predictive system, we will move on to more advanced topics:

  • More marketing cases with a variety of data
  • More accuracy by using many models including Deep Learning
  • How to implement data-driven management

 

Our predictive system should be more flexible and accurate. In order to achieve that, we will perform many experiments going forward.

 

3. What data is used in the project?

There are several datasets that can be used for digital marketing. I would like to use this data for our project.

When we are satisfied with the results of our predictions on this data, the next dataset can be used in the project.

 

 

Digital marketing is getting more important for many industries, from retail to finance. I will update this article about our project on a monthly basis. Why don't you join us and enjoy it? If you have comments or opinions, please do not hesitate to send them to us!

If you want to receive updates on the project or want to know more about the predictive system, could you sign up here?

 

 

 

Microsoft, Excel and AZURE are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.

Will self-driving cars come to us in 2020?

Since last year, the development of self-driving cars has accelerated rapidly. When I wrote about it last year, some people may not have been convinced that self-driving cars would become real. But now no one can doubt them anymore; the question is when they will arrive in front of us. I would like to consider several key points in developing self-driving car technology.

 

1. Data from experiments

Data is the key to developing self-driving cars effectively, because a self-driving car needs artificial intelligence in it to drive without human intervention. As you know, artificial intelligence is a bit like our brain: when we are born, our brain is almost empty, but as we grow we learn many things through experience. The same is true for artificial intelligence; it needs massive amounts of data to learn. Recently, Google and Fiat Chrysler Automobiles NV announced that they will cooperate to advance the development of self-driving cars. According to an article on Bloomberg, "The carmaker plans to develop about 100 self-driving prototypes based on the Chrysler Pacifica hybrid-powered minivan that will be used by Google to test its self-driving technology." (1) The more cars are used in the experiments, the more data can be obtained, which enables Google to develop self-driving cars more rapidly.

 

2. Algorithms of artificial intelligence

With data from experiments, artificial intelligence will become more sophisticated. The algorithms, which belong to the family called "deep learning", should become even more effective from here on. Because driving generates sequences of data and requires sequential decision making, such as stop, go, turn right, accelerate and so on, we need algorithms that can handle these situations. In my view, the combination of deep learning and reinforcement learning can be useful here. This kind of technology is being developed in research centers such as Google DeepMind, which is famous for its artificial intelligence Go player, and it says the technology can be used for robotics, medical research and economics. So why not for self-driving cars?
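To give a flavour of what sequential decision making looks like in code, here is a tiny tabular Q-learning sketch on a made-up driving-like task. It is only my own toy example; real self-driving research replaces the table with a deep network:

```python
# A toy Q-learning sketch: learn which action (0 = stop, 1 = go) is best in each
# of a few states by trial, error and reward. "Deep reinforcement learning"
# replaces this table with a neural network.
import numpy as np

n_states, n_actions = 3, 2          # e.g. states: red light, green light, obstacle
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

def reward(state, action):
    # Hypothetical rewards: "go" on green is good; "go" on red or into an obstacle is bad
    return 1.0 if (state == 1) == (action == 1) else -1.0

for _ in range(5000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)                      # explore randomly
    r = reward(s, a)
    s_next = rng.integers(n_states)                  # toy random transition
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print(Q.argmax(axis=1))   # learned policy: best action for each state
```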

 

3. Interactions with human drivers

It seems very difficult to decide who is responsible for driving the car. Initially, self-driving cars might appear with a steering wheel and brakes, which means a human can intervene in the car's operations. When accidents happen, who is responsible, the human or the machine? When self-driving cars without a steering wheel and brakes become available, the machine is responsible, because the human cannot control the car anymore; the machine is 100% responsible for accidents. It is very hard to decide which is better, self-driving cars with or without a steering wheel and brakes. It depends on how the technologies and regulations develop.

 

The impact on society will be huge when self-driving cars are introduced. Buses, taxis and trucks could be replaced by them. Not only drivers but also road maintenance companies, car insurance companies, roadside shops, traffic light makers, railway companies, highway operators, car maintenance companies and car parking providers will be heavily affected. Governments should consider how to introduce self-driving cars into our societies effectively. I do not think we have spare time to waste; let us start today!

 

(1) http://www.bloomberg.com/news/articles/2016-05-03/fiat-google-said-to-plan-partnership-on-self-driving-minivans

 

Note: Toshifumi Kuga’s opinions and analyses are personal views and are intended to be for informational purposes and general interest only and should not be construed as individual investment advice or solicitation to buy, sell or hold any security or to adopt any investment strategy.  The information in this article is rendered as at publication date and may change without notice and it is not intended as a complete analysis of every material fact regarding any country, region market or investment.

Data from third-party sources may have been used in the preparation of this material and I, Author of the article has not independently verified, validated such data. I accept no liability whatsoever for any loss arising from the use of this information and relies upon the comments, opinions and analyses in the material is at the sole discretion of the user. 

Will the age of "Brain as a Service" come in the near future?


On 15 March 2016, I found two things that may change the world in the future: the artificial intelligence Go player AlphaGo, and an automated marketing system, the Google Analytics 360 Suite. Both of them came from Google. Let me explain why these two innovations make me think the age of "Brain as a Service" is coming.

1. AlphaGo

You may know what AlphaGo achieved on 15 March 2016. At the Google DeepMind Challenge, where the artificial intelligence Go player played five games against a top professional, it beat Lee Sedol, one of the strongest Go players in the world, 4 to 1. Go is one of the oldest games, played mainly in China, Korea, Japan and Taiwan. At the beginning of the challenge, few people thought AlphaGo could win, as it was always said that Go is so complex that computers would not beat professional players for at least another ten years. The result, however, was completely the opposite. Other professional Go players, artificial intelligence researchers and even people who do not play Go must have been shocked by the news. AlphaGo is powered by algorithms called deep learning and reinforcement learning. It can learn from the massive number of Go patterns created by human beings over a long time, so we do not need to program it specifically, case by case; the computer can learn by itself. It looks like our brain: we are born without knowledge and learn many things as we grow, until finally we are sophisticated enough to be adults. Yes, we can see AlphaGo as a brain. It can learn by itself at an astonishing speed, as it does not need to rest. It is highly likely that Google will use this brain to improve many of its products in the future.

 

2. Google Analytics 360 Suite

Data is king, but it is very difficult to feed it into computers effectively. Some data is stored on servers, some on local PCs, and no one knows how to organize it well enough to extract insights. Google is strong in the consumer space: Gmail, Android and Google Search were initially very popular among consumers. But the situation is gradually changing. Data and algorithms have no borders between consumers and enterprises, so it is natural for Google to pursue the enterprise market more and more. One example is the Google Analytics 360 Suite. Although I have not tried it yet, it is very interesting to me because it can work as a complete interface to customers. Customers may request many things, ask questions and make complaints about your services, and it is very difficult to gather this data effectively when systems are not seamlessly connected. With the Google Analytics 360 Suite, customer data could be tracked in a timely and effective manner. For example, data from Google Analytics 360 can flow into Google Audience Center 360, a data management platform (DMP), which means the data is available for whatever analyses marketers want. Google Audience Center 360 can also collect data from other sources and third-party data providers, so many kinds of data could be made ready to be fed into computers effectively.

 

3. Data is gasoline for “Artificial intelligence”

AlphaGo can be considered artificial intelligence, and artificial intelligence is like our brain: there is no knowledge in it initially, only structures for learning. In order to become "intelligent", it has to learn a lot from data, which means massive amounts of data have to be fed into computers. Without data, artificial intelligence can do nothing. Now data management tools like Google Audience Center 360 are making progress, and data is getting organized well enough to be fed into computers. A centralized data management system can collect data automatically from many systems, which makes it easier to feed computers massive amounts of data and lets them learn from it. These things could be the trigger that changes the landscape of our businesses, societies and lives, because computers may suddenly become sophisticated enough to work just like our brains. AlphaGo teaches us that this may happen while few people expect it. Yes, this is why I think the age of "Brain as a Service" will come in the near future. What do you think?

 

 

Note: Toshifumi Kuga’s opinions and analyses are personal views and are intended to be for informational purposes and general interest only and should not be construed as individual investment advice or solicitation to buy, sell or hold any security or to adopt any investment strategy.  The information in this article is rendered as at publication date and may change without notice and it is not intended as a complete analysis of every material fact regarding any country, region market or investment.

Data from third-party sources may have been used in the preparation of this material and I, Author of the article has not independently verified, validated such data. I and TOSHI STATS.SDN.BHD. accept no liability whatsoever for any loss arising from the use of this information and relies upon the comments, opinions and analyses in the material is at the sole discretion of the user. 

 

This is the No. 1 open online course on deep learning. It is a New Year's present from Google!


I am very happy to have found this awesome course on deep learning. It is provided by Google through Udacity (1), one of the biggest MOOC platforms in the world, so I would like to share it with anyone who is interested in deep learning.

As far as I know, it is the first course on a MOOC platform that explains deep learning, from logistic regression to recurrent neural networks (RNNs), in a unified manner. I looked at it and was very surprised at how good the quality of the course is. Let me explain in more detail.

 

1. We can learn everything from logistic regression to RNNs seamlessly

This course covers many important topics such as logistic regression, neural networks, regularization, dropout, convolutional nets, RNNs and long short-term memory (LSTM). These topics have appeared in separate articles before, but it is very rare to see each of them explained in one place. The course reads like a story of the development of deep learning, so even beginners can follow it. Please look at the path of the course; it is taken from the course video "L1 Machine Learning to Deep Learning".

DL path

The explanations of RNNs in particular are very easy to understand. So if you do not have enough time to take the whole course, I recommend at least watching the videos on RNNs and related topics. I am sure it is worth it.

 

2. A little math is required, but it is not an obstacle to taking this course

This is a computer science course, and the more you understand the math, the more insight you can get from it. However, if you are not so familiar with mathematics, all you have to do is review basic knowledge of vectors, matrices and derivatives. I do not think you need to give up the course for lack of math; just recall your high school math, and you can start this awesome course!
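If you want to check whether your math is "good enough", the three ingredients look like this in code. This is just a small NumPy refresher of my own, not material from the course:

```python
# Vectors, matrices and a derivative -- the math toolkit the course assumes.
import numpy as np

v = np.array([1.0, 2.0, 3.0])             # a vector
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])           # a 2x3 matrix
print(M @ v)                              # matrix-vector product, used in every layer

def f(x):
    return x ** 2

x, h = 3.0, 1e-6
print((f(x + h) - f(x - h)) / (2 * h))    # numerical derivative, about 6 = f'(3)
```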

 

3. "Deep learning" can be implemented with TensorFlow, which is open-source software provided by Google

This is the most exciting part of the course if you are a developer or programmer. TensorFlow is a Python-based framework, so many developers and programmers can become familiar with it easily. In the programming assignments, participants build everything from a simple neural net to a sequence-to-sequence model with TensorFlow. It must be good! I have not tried TensorFlow programming yet, but I would like to in the near future. It is worth doing even if you are not a programmer. Let us take up the challenge!

 

 

In my view, deep learning for sequence data is getting more important, as time series data is frequently used in economic analysis, customer management and the Internet of Things. Therefore, not only data scientists but also business people and company executives can benefit from this course. It is free and self-paced if you just watch the videos; if you need a credential, a small fee is required. Why don't you try this awesome course?

 

 

(1) Deep Learning on Udacity

https://www.udacity.com//course/viewer#!/c-ud730/l-6370362152/m-6379811815

 

 

 

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.

 

 

“DEEP LEARNING PROJECT” starts now. I believe it works in digital marketing and economic analysis


As the new year starts, I would like to set up a new project in my company. This is beneficial not only for my company but also for readers of this article, because the project will provide good examples of predictive analytics and of implementing new tools and platforms. The new project is called the "Deep Learning project", because deep learning is used as its core calculation engine. Through the project, I would like to create a "predictive analytics environment". Let me explain the details.

 

1. What is the goal of the project?

There are three goals of the project.

  • Obtain knowledge and expertise of predictive analytics
  • Obtain solutions for data-driven management
  • Obtain basic knowledge of Deep Learning

As big data becomes more and more available, we need to know how to consume it and get insights from it so that we can make better business decisions. Predictive analytics is a key to data-driven management, as it can answer the question "what comes next?" based on data. I hope you will gain expertise in predictive analytics by reading my articles about the project. I believe it is important for all of us, as we live in the digital economy now and will in the future.

 

2. Why is "Deep Learning" used in the project?

Since November last year, I have used deep learning many times for predictive analytics, and I found it to be very accurate. It is sometimes said that it takes too much time to solve problems, but in my case I could solve many problems within three hours, so I consider deep learning able to solve problems within a reasonable time. In the project, I would like to develop the skill of tuning parameters effectively, since deep learning requires several parameter settings such as the number of hidden layers. I will focus on how the number of layers, the number of neurons, activation functions, regularization and dropout can be set according to the dataset. I think these are key to developing predictive models with good accuracy. I have already challenged MNIST handwritten digit classification, and my error rate improved to 1.9%. This was done with H2O, an awesome analytics tool, on a MacBook Air 11, which is just a normal laptop. I would like to set up a cluster on AWS to improve the error rate further; Spark, which is open source, is one of the candidates for the cluster.

DL.002
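For readers who want to try a similar experiment, this is roughly how those parameters are exposed in H2O's Python API. It is a minimal sketch with placeholder file names, not the exact configuration behind my 1.9% error:

```python
# A minimal sketch of tuning an H2O deep learning model (placeholder file names;
# not the exact configuration behind the 1.9% MNIST error mentioned above).
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()
train = h2o.import_file("mnist_train.csv")     # hypothetical path
test = h2o.import_file("mnist_test.csv")       # hypothetical path
train["label"] = train["label"].asfactor()     # classification target
test["label"] = test["label"].asfactor()

model = H2ODeepLearningEstimator(
    hidden=[200, 200],                 # number of layers and neurons
    activation="RectifierWithDropout", # activation function
    input_dropout_ratio=0.2,           # drop-out on the inputs
    l1=1e-5,                           # regularization
    epochs=20,
)
features = [c for c in train.columns if c != "label"]
model.train(x=features, y="label", training_frame=train)
print(model.model_performance(test))
```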

3. What businesses can benefit from introducing "Deep Learning"?

Deep learning is very flexible, so it can be applied to many problems across industries. Healthcare, finance, retail, travel, and food and beverage could benefit from introducing it, and governments could benefit too. In the project, I would like to focus on the following areas:

  • Digital marketing
  • Economic analysis

First, I would like to create a database to store the data to be analyzed. Once it is created, I will perform predictive analytics on digital marketing and economic analysis. Best practices will be shared with you here, to reach our goal of obtaining knowledge and expertise in predictive analytics. Deep learning is relatively new in both of these problem areas, so I expect new insights to be obtained. For digital marketing, I would like to focus on social media and on measuring the effectiveness of digital marketing strategies. Natural language processing has been developing at astonishing speed recently, so I believe there must be a good way to analyze text data. If you have any suggestions on predictive analytics in digital marketing, please let me know; they are always welcome!
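As a tiny illustration of how text can become numbers that a predictive model can consume, here is a sketch with scikit-learn; it is not a full natural language processing pipeline, and the example posts are invented:

```python
# A minimal sketch: turn short social-media style texts into numeric features.
from sklearn.feature_extraction.text import TfidfVectorizer

texts = [
    "love the new campaign, just bought one",       # hypothetical posts
    "delivery was late again, very disappointed",
    "great price, will recommend to friends",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

print(X.shape)                                # documents x vocabulary size
print(vectorizer.get_feature_names_out()[:5]) # a few of the learned vocabulary terms
```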

 

I will use open-source software to create the predictive analytics environment, so it will be very easy for you to create a similar environment on your own system or cloud. I believe open source is key to developing superior predictive models, as everyone can participate in the project. You do not need to pay any fees to introduce the tools used in the project, because they are open source, and ownership of the problems stays with us rather than with software vendors. Why don't you join us and enjoy it? If you want to receive updates on the project, could you sign up here?

 

 

Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the codes and the software.