I have been taking deeplearning.ai’s specialization courses for a while, and I completed the second part of the series this week. In this course, Prof. Andrew Ng shares his experiences and best practices for building efficient machine learning pipelines. Although the syllabus does not include any programming assignments, there are two case studies that are very close to real-world experiments, and you are expected to make the right decisions in these scenarios. I strongly recommend taking this short training.
Recently, I tried several products that extract demographic information from a profile image. My goal was to obtain information about age, gender, and ethnicity. I found that the prominent companies in this sector are Clarifai and Face++. I integrated my trial software with both products, and I found Clarifai’s accuracy better than Face++’s. My reasons are:
- Clarifai provides the probability value of its predictions (e.g., the predicted gender is female with a probability of 52%), so it is possible to eliminate results with a low prediction score. In contrast, Face++ does not provide that value. This is an unwanted situation because, with a binary classifier, the prediction always has a result, even if its score is not very high.
- Clarifai correctly predicted the ethnicity of the image below as “White”, while Face++ wrongly predicted it as “Black”. On the other hand, Clarifai could not predict the gender correctly (female 51%, male 49%), while Face++ correctly marked it as male (we don’t know its probability).
- The disadvantage of Clarifai is its low quota for free usage: it permits only 2,500 API calls per month for free accounts. Face++, on the other hand, does not specify any upper limit for free accounts; its only limitation is a single API call per second.
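The first point above — keeping only predictions whose score is high enough — can be sketched in a few lines of Python. The response structure and the 0.75 threshold below are my own illustrative assumptions, not the actual Clarifai API format:

```python
# Minimal sketch: filtering out low-confidence demographic predictions.
# NOTE: the dict layout and threshold are hypothetical, chosen to mirror
# the example scores shown in this post; the real API returns its own JSON.

CONFIDENCE_THRESHOLD = 0.75

def filter_predictions(predictions, threshold=CONFIDENCE_THRESHOLD):
    """Keep only predictions whose probability score meets the threshold."""
    return {
        attribute: (label, score)
        for attribute, (label, score) in predictions.items()
        if score >= threshold
    }

# Example scores modeled on the results below.
response = {
    "gender": ("female", 0.510),    # near 50/50 -> too uncertain to trust
    "age": ("55", 0.356),
    "ethnicity": ("White", 0.981),  # high confidence -> keep
}

confident = filter_predictions(response)
print(confident)  # only the ethnicity prediction survives
```

Without a probability score in the response, this kind of filtering is simply not possible, which is why I consider the missing score an important drawback.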
I hope my hands-on experience with these services will help you choose the right product.
Gender: female (prob. score: 0.510), male (prob. score: 0.490)
Age: 55 (prob. score: 0.356)
Ethnicity (multicultural appearance): White (prob. score: 0.981)
Ethnicity (multicultural appearance): Black
A very inspiring piece of research was published at the end of 2016: with the help of deep learning, it is now possible to generate images from given texts.
Here is the link to the news and here is the link to that research paper.
Could you imagine some use cases based on this technology? I found an interesting one. Imagine you are in a police station after a robbery at a bank. The thief could not be found, and you, as the only eyewitness, describe the thief’s visual profile. At that moment, a computer automatically generates an image of the thief based on the visual details you describe. At the same time, the computer increases the precision of that image by matching it against records of past robberies.
Within the last month, the future of education was one of the main topics in Davos. There were very interesting debates, and in one of them, Jack Ma (the founder of Alibaba) said that it is strongly and urgently necessary to change the current education system due to the rising impact of robots. Since robots are able to acquire knowledge by learning from their past experiences, they will do most of the things people do today. In order to adapt ourselves to the modern world, we need to educate our children in ways that cannot be copied by robots. Rather than teaching mathematics or physics to our children, we should support their more humanistic skills, such as music and art.
I agree with Jack Ma’s ideas, and I think we need to think more about people’s main advantages and disadvantages over robots in the next 20 years. Today, our children start learning to code in primary school in order to communicate better with robots and understand their logic. But when the world is dominated by robot activities, everything will change, and humans should be in a place where robots do not see them as a threat.
These days, my research motivation is to find insights by analyzing Twitter data to understand how people in England are reacting to the Brexit referendum. Various studies have already been done on this topic, most of them by universities in England such as Imperial College London and the University of Bristol. I find it a quite interesting research topic, since social media is an important environment for presenting our ideas to the community, and there is a need for more research to understand people’s opinions. I will give more detailed information about my study in the upcoming weeks. If you have any recommendations for me, please feel free to send me an email.
In our latest project at Vodafone R&D, I worked under the supervision of Istanbul Technical University professor Gulsen Eryigit. Prof. Eryigit is widely regarded in the Turkish academic community as a leading researcher in Natural Language Processing for the Turkish language, so I was fortunate to have the chance to study with her. Within a month, I implemented a predictive model based on Word2Vec using the Python language and the Gensim framework.
The Word2Vec approach has been one of the trending topics in Natural Language Processing since 2014. In this technique, a neural-network-based model is trained on a large vocabulary in order to represent words as multi-dimensional vectors. After implementing the model, we calculated its score using the Mean Average Precision (MAP) technique, a well-known approach for ordered results (where the ranking of the outputs matters). Now we are following the existing research in SemEval 2017 Task 3.
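For readers unfamiliar with the metric, Mean Average Precision can be computed in a few lines of plain Python. The queries and relevance labels below are toy data for illustration, not from our Vodafone experiments:

```python
# A minimal sketch of Mean Average Precision (MAP) for ranked results.
# Each query's results are given as booleans in rank order:
# True means the item at that rank is relevant.

def average_precision(ranked_relevance):
    """Average precision for one query's ranked result list."""
    hits, precisions = 0, []
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)  # precision at this relevant rank
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(queries):
    """Mean of the average precisions over all queries."""
    return sum(average_precision(q) for q in queries) / len(queries)

# Two toy queries: the first ranks both relevant items at the top,
# the second buries its single relevant item at rank 3.
queries = [
    [True, True, False],   # AP = (1/1 + 2/2) / 2 = 1.0
    [False, False, True],  # AP = (1/3) / 1 ≈ 0.333
]
print(mean_average_precision(queries))  # ≈ 0.667
```

The key property, and the reason we chose MAP, is that it rewards placing relevant results higher in the ranking, not merely retrieving them.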
It is a great pleasure to conclude my five-year working experience at Vodafone with such a meaningful project!
Nowadays, chatbots (namely, conversational agents) have become very popular among companies; almost every company has an ongoing chatbot project! In general, companies prefer a cloud-based (SaaS) chatbot in order to go live quickly. However, when the conversation requires domain-specific knowledge, using a generic chatbot becomes inefficient. In that situation, a custom in-house solution seems the better option.
As an example, at Vodafone, the challenge was to provide relevant information about Telecom-specific services in the Turkish language. Our in-house product is now capable of holding a conversation in Vodafone terminology! Vodafone subscribers can get information about their tariff or take actions such as purchasing an add-on or changing their tariff.
Now I am contributing to this product to make it smarter and more self-learning, and it is a great experience for me!
I suggest you watch the TED talk titled “A monkey that controls a robot with its thoughts”. I think this video is inspiring for understanding what the world will look like in the future, especially when we start to directly control robots with our brains from a distance.