Trending – Devthon | Decoding Innovation

‘What Women Want’ — A Saree Buyology
Tue, 24 Sep 2019
https://13.233.195.217/what-women-want-a-saree-buyology/

[Figure: 1928 illustration of different styles of sari, gagra choli and shalwar kameez worn by women in the Indian subcontinent.]

There are over 80 ways of draping a saree. The styles vary mostly by region, and with each region come its own designs and materials, so the permutations and combinations one has to work through to choose "one perfect saree" are numerous. We therefore set out to understand ‘What Women Want’ when choosing a saree: to better understand the social, economic and cultural perceptions of the saree today, and to attempt a solution using new technologies like AI and computer vision that makes the experience of buying that "one perfect saree" memorable, social and hassle-free.

The saree is one of the oldest articles of clothing on the face of the earth, pre-dating most of the clothing cultures we now have; it traces back to around 2000–1800 BCE. Sarees have not just endured for so long but have also modernised in trends and design over time, and even the manufacturing and sale of sarees has grown more sophisticated. So you might think choosing a saree should be as simple as choosing a shirt or trousers. It's just not that simple.

Understanding Who Wears Sarees Today

The New York Times claims saree draping has become a nationalist agenda (https://www.nytimes.com/2017/11/12/fashion/india-nationalism-sari.html), but as stated before, saree draping predates all of this. So we took the subcontinent as a whole into consideration, across multiple factors, and with information from surveys and trends our findings on who wears sarees are as follows:

  • Religion divide
    A survey by the NSSO states that the saree is not just a Hindu attire; Christian and Muslim households also spend a considerable share of their women's clothing budget on sarees.
  • Economic divide
    The saree breaches the class divide: saree buying among the affluent class stands at 77%, only slightly higher than the bottom class's 72%.

Understanding What Women Look For In A Saree

Building on our earlier understanding that the southern states of the subcontinent favour sarees more than the northern ones, we conducted a survey with a small sample of women, mostly from tier-2 and tier-3 towns, to understand what they look for in a saree.

Who Is Interested In Sarees


In these towns, a growing majority of women in their 20s are moving to the cities for work and education, while the women above 50 are the parents of those who are moving. As a result, the majority of sarees are purchased by women over 40 years old.

What Design Are They Looking For


The graph shows that the majority of people looking to buy a saree want thread work on it. This could be because thread work is easier to maintain than stonework, yet not as plain as checks and prints. Prints come second, but lag by a fair margin.

Is Design Everything


In every branding campaign we see for sarees, we see all the bells and whistles: the shiny stones on the saree, the glossy silk, the simple prints. But what are people actually looking for?

Design turns out to be the least important preference; the feel, the material and the quality are what buyers look for. In other words, the longevity and usability of a saree matter more than its design. Notably, 80% of the women who do prioritise design are between 20 and 35 years old.

Apart from the above insights we have also discovered the following:

  • Gifting a saree is very common in India
  • The majority of women buy sarees for everyday use; fancy sarees are bought for special occasions, where all classes tend to spend more than usual on that one saree

So, now that we know what women look for in a saree, let's look at their buying behaviour.

Understanding How They Shop For Sarees

From more surveys and interviews, we understand the general shopping patterns.

The buying process starts in the mind of the consumer, who then weighs alternative products along with their relative advantages and disadvantages. As established earlier, quality ranks first, followed in order by colour and design, comfort and style, and price.


The graph shows how the market has been growing, with larger name brands scaling ethnic wear across all classes in India. It also shows how affordable ethnic brands are in comparison to western wear.


The saree market is one of the largest apparel markets in India. There is a significant shift away from traditional sarees towards ethnic and western wear; though growth is slower for sarees, they are likely to remain the market leader for some time to come.

The Influencers

  • Increasing number of occasions

With expanding social circles, the number of occasions has increased in India. Formal, informal and traditional occasions have all pushed women to grow their wardrobes.

  • Impulsive buying

With offers everywhere and technology in the palm of the hand, the pull of any commodity has fuelled impulse buying for the average Indian.

  • Influence of media

Soaps, movies, ads, social media, personal messages: the visual format of content sharing gives users millions of options and is contributing to this change in behaviour.

  • Increase in fashion sense

With evolving fashion and media, people are looking not just for utility but for aesthetics too. And with larger brands spreading across the country with scale production, aesthetic clothing is affordable to everyone.

  • Aspirational buying

Women today are empowered with higher spending power, and good clothing is aspirational; a memorable occasion needs aspirational clothing to complete it.

Where Do They Buy Sarees From

From local vendors selling on Instagram, Facebook Marketplace, Amazon and Flipkart, to larger chains and stores, women today shop for sarees in every vertical available. The online market is one of the most important drivers of saree growth in India: since saree adoption is largely rural, where internet penetration is increasing by the day, this may open brand-new revenue pockets for stakeholders in the Indian saree business.


While the online market and pop-up stores mostly take care of impulse buys and everyday needs, when it comes to shopping for occasions and events, women still prefer buying sarees from larger stores or reputed brands. They don't mind the extra effort, or the overhead cost that retail stores bear.

The internet, the increasing buying power of women, high brand consciousness and fashion sense have made e-commerce a crucial medium of shopping.

Customer Saree Shopping Journey

From the customer's perspective, buying a saree is a deeply embedded process with numerous points of friction and points of leverage. The customer's interaction with the shopkeeper is only one part of a larger journey.


The above is just an outline of the shopping experience. The nuances and the conditions evaluated change with every customer.

Conclusion: (This Is) What Women Want.

  • Quality and assurance of the commodity play a major role in the buying decision
  • Emotionally, validation and feedback on what they wear play a great role in the choices women make while buying a product
  • Validation and feedback on a product are typically gained through conversations about the look, feel and cost of a saree
  • Though buying a saree requires evaluating quality and feel, women judge a saree first by its design, work and other visual elements
  • Brand names play a major role
  • The point of fashionable clothing is to make someone look beautiful, so the search is always for the saree one looks beautiful in
Pose Estimation Benchmarks on the Intelligent Edge
Wed, 21 Aug 2019
https://13.233.195.217/pose-estimation-benchmarks-on-intelligent-edge/

[Photo by Emile Guillemot on Unsplash]

Benchmarks on Google Coral, Movidius Neural Compute Stick, Raspberry Pi and others

Introduction

In an earlier article, we covered running PoseNet on Movidius, where we achieved 30 FPS with acceptable accuracy. In this article we evaluate PoseNet on the following mix of hardware:

  1. Raspberry Pi 3B
  2. Movidius NCS + RPi 3B
  3. Ryzen 3
  4. GTX1030 + Ryzen 3
  5. Movidius NCS + Ryzen 3
  6. Google Coral + RPi 3B
  7. Google Coral + Ryzen 3
  8. GTX1080 + i7 7th Gen

This comparison of PoseNet's performance across hardware helps decide which platform suits a specific use case and whether optimisations can help. It also gives a glimpse of hardware capabilities in the wild: the lineup ranges from baseline prototyping platforms, to boards tailored for the edge, to production-grade CPUs and GPUs.
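The benchmarking procedure itself is simple: run inference frame by frame and divide frame count by wall-clock time. A minimal FPS-measurement harness is sketched below; the `fake_infer` stand-in and all names are illustrative, since the real runs invoke PoseNet on each device.

```python
import time

def measure_fps(infer, frames, warmup=2):
    """Time `infer` over a list of frames and return frames per second.
    A few warm-up calls are excluded so one-time setup cost is not counted."""
    for frame in frames[:warmup]:
        infer(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        infer(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Stand-in for a PoseNet forward pass (the real call depends on the device).
def fake_infer(frame):
    time.sleep(0.001)  # pretend inference takes ~1 ms

fps = measure_fps(fake_infer, frames=[None] * 52)
```

The same harness runs unchanged on every platform; only `infer` differs per device, which keeps the numbers comparable.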

Hardware Choices

  1. Raspberry Pi 3B: The board of choice for prototyping. Although low-powered, it gives a good initial understanding of what to expect and what to choose for production. It may not be able to run the DNN models, but it sure is fun.
  2. Movidius NCS + RPi 3B: The Movidius Neural Compute Stick is a promising candidate if the model is to run on the edge. The NCS has Vision Processing Units (VPUs) which are optimized to run deep neural networks.
  3. Ryzen 3: AMD’s quad-core CPUs are not a conventional choice for neural networks, but it is worth checking how the networks perform on the platform.
  4. GTX1030 + Ryzen 3: Adding an Nvidia GPU to the rig (granted, it is comparatively old, but it is cheap) lets us benchmark what is possible on older GPUs and cuDNN versions.
  5. Movidius NCS + Ryzen 3: A desktop system allows better and faster interfacing with the NCS. This setup is preferred when prototyping an edge application: a high-performance CPU allows rapid application development while the NCS runs the models on the development machine.
  6. Google Coral + RPi 3B: Google’s answer to on-edge ML is its Coral board, which carries a TPU. Tensor Processing Units power Google’s gigantic AI systems; Coral puts that compute in a small form factor, with native support for Raspberry Pi.
  7. Google Coral + Ryzen 3: As with the Movidius NCS + Ryzen 3 setup, it is insightful to see how Coral performs when paired with a Ryzen 3-based computer.
  8. GTX1080 + i7 7th Gen: A top-of-the-line system with a GTX1080 and an Intel i7 CPU; the highest-performing combination in the list.

Repositories and models used:

  1. PoseNet — tfjs version
    • Based on MobileNetV1_050
    • Based on MobileNetV1_075
    • Based on MobileNetV1_100
  2. PoseNet — Google Coral version
  3. Movidius versions of PoseNet (see our previous blog post)

Comparing Edge Compute Units

Google Coral’s PoseNet repository provides a model based on MobileNet 0.75, optimized specifically for Coral. At the time of writing, the details of the optimizations have not been published, and it is not possible to generate models for MobileNet 0.50 and 1.00.

[Figure: Google Coral vs Intel Movidius]

The optimized Coral model gives an exceptional 77 FPS on the Ryzen 3 system. However, the same model gives ~9 FPS when running on the Raspberry Pi.

Movidius shows differences in performance between the RPi and Ryzen, with the general pattern being faster on the Ryzen 3 system.

Comparing Desktop CPUs and GPUs

The results align with expectations when comparing the CPU with the GTX 1030 and GTX 1080. The high-end GPU outperforms the other candidates by a huge margin, while the contest between Ryzen 3 and GTX 1030 is close.

[Figure: Ryzen vs GTX 1030 vs GTX 1080]

Final Thoughts

The following chart shows frames per second for a standard video input:

[Figure: Frames per second]

Google Coral, when paired with a desktop computer, outperforms every other platform, including the GTX1080.

Other noteworthy results are:

  1. When paired with a Raspberry Pi 3, Coral gives ~9 FPS. The reason behind this result is not yet clear and is being looked into.
  2. The GTX1080 performs almost equally well regardless of model size.
  3. The Movidius NCS performs better than the GTX1030.
  4. The Raspberry Pi alone is not able to run the models at all.

Different hardware gives a different flavor of performance, and there is scope for model optimization (quantization for example). It may not always be necessary to go with a high-end GPU such as GTX 1080 if your use case allows for a good trade-off between accuracy and speed/latency.
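Quantisation, mentioned above as one optimisation, maps weights stored as float32 to 8-bit integers, trading a little accuracy for speed and memory. A simplified affine (scale and zero-offset) scheme can be sketched in plain Python; this is an illustration of the idea, not the exact scheme any of the toolchains above use.

```python
def quantize(weights, num_bits=8):
    """Affine-quantise floats to unsigned ints: w ≈ scale * q + lo."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels or 1.0  # fall back if all weights are equal
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the quantised values."""
    return [scale * v + lo for v in q]

w = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, lo = quantize(w)
w_hat = dequantize(q, scale, lo)  # each entry within one step of the original
```

Each recovered weight differs from the original by at most one quantisation step (`scale`), which is the accuracy cost being traded for 4x smaller weights and integer arithmetic.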

Our analysis shows that coupling the right hardware with a well-optimized neural network is essential and may require in-depth comparative analysis.

Car or Not a Car
Wed, 03 Jul 2019
https://13.233.195.217/car-or-not-a-car/

Lessons from Fine-Tuning a Convolutional Binary Classifier

[Photo taken in a village near Jaipur (Rajasthan, India) by Sanjay Kattimani, http://sanjay-explores.blogspot.com]

Fine-tuning has been shown to be very effective in certain types of neural-net-based tasks such as image classification. Depending on the dataset used to train the original model, a fine-tuned model can achieve a higher degree of accuracy with comparatively little data. We have therefore chosen to fine-tune a ResNet50 pre-trained on the ImageNet dataset.

We are going to explore ways to train a neural network to detect cars, and optimise the model to achieve high accuracy. In technical terms, we are going to train a binary classifier which performs well under real-world conditions.

There are two possible approaches to train such a network:

  1. Train from scratch
  2. Fine-tune an existing network

To train from scratch, we need a lot of data — millions of positive and negative examples. And the process doesn't end at data acquisition: one has to spend a lot of time cleaning the data and making sure it contains enough examples of the real-world situations the model will encounter in practice. The feasibility of training from scratch is therefore determined by the background knowledge and time required to do all of this.

Basic Setup

There are certain requisites that are going to be used throughout the exploration:

  1. Datasets
    a. Stanford Cars for car images
    b. Caltech256 for non-car images
  2. Base Network
    ResNet (arXiv) — pre-trained on ImageNet
  3. Framework and APIs
    a. TensorFlow
    b. TF Keras API
  4. Hardware
    a. Intel i7 6th gen
    b. Nvidia GTX1080 with 8GB VRAM
    c. System RAM 16GB DDR4

Experiment 1

To start with a simple approach, we take ResNet50 without its top layer and add a fully connected (dense) layer on top. The dense layer contains 32 neurons with sigmoid activation. This gives approximately 65,000 trainable parameters, which is plenty for the task at hand.

[Figure: Model architecture for Experiment 1]

We then add the final output layer having a single neuron with sigmoid activation. This layer has a single neuron because we are performing binary classification. The neuron will output real values ranging from 0 to 1.
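The role of that single sigmoid neuron can be illustrated in plain Python (a conceptual stand-in for the tf.keras layer, with illustrative weights): it squashes a weighted sum of the previous layer's activations into (0, 1), and the score is thresholded to get the binary label.

```python
import math

def sigmoid(x: float) -> float:
    """Squash any real value into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_car(features, weights, bias=0.0, threshold=0.5):
    """Single output neuron: weighted sum -> sigmoid -> binary label."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    score = sigmoid(z)
    return ("car" if score >= threshold else "not a car"), score

# Hypothetical activations from the 32-neuron dense layer (truncated to 3).
label, score = predict_car([0.8, 0.1, 0.5], [1.2, -0.4, 0.9])
```

Because the output is a continuous score rather than a hard label, the 0.5 threshold can later be tuned to trade false positives against false negatives.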

Data Preparation

We randomly sample 50% of the images as the training set, 30% as the validation set and 20% as the test set. Although there is a large gap between the number of car and non-car images in the training set, it should not skew our process too much because the datasets are comparatively clean and reliable.
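The 50/30/20 random split can be sketched as follows; this is a simplified stand-in for the actual data pipeline, and the file names are illustrative.

```python
import random

def split_dataset(paths, train=0.5, val=0.3, seed=42):
    """Shuffle image paths and cut them into train/val/test partitions.
    The test share is whatever remains after train and val."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # seeded so the split is reproducible
    n_train = int(len(paths) * train)
    n_val = int(len(paths) * val)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

train_set, val_set, test_set = split_dataset(f"img_{i}.jpg" for i in range(100))
```

Fixing the seed matters for reproducibility, but as Experiment 2 shows, a purely random split can still distribute car models unevenly across the partitions.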


Hyper-parameters


Results

As a trial run, we trained for one epoch. The graphs below illustrate that the model starts at high accuracy, and reaches near-perfect performance within the first epoch. The loss goes down as well.

[Figures: Epoch Accuracy and Epoch Loss for Experiment 1]

However, validation accuracy does not seem very good compared to the training round, and neither does validation loss.

[Figures: Validation Accuracy and Validation Loss for Experiment 1]

So, we ran for 4 epochs and were left with the following results:

[Figures: Accuracy and Loss, and Validation Accuracy and Validation Loss, for four epochs]

The model performs relatively well, except for the high degree of separation between training and validation losses.

Experiment 2

We decided to keep the model architecture the same as in the first experiment: the same ResNet50 without its top layer, with a 32-neuron fully connected (dense) layer with sigmoid activation added on top.

[Figure: Model architecture for Experiment 2]

Data Preparation

This is where the problem lay in the previous experiment: the train/validation/test splits were random. Our hypothesis was that the randomness included more images of some cars and too few of others, biasing the model.

So, we took the splits as given by the Cars dataset and added 3000 more images by scraping the good old Web.


Hyper-parameters


Results

These results signify substantial improvement in the validation accuracy when compared to the previous experiment.

[Figures: Epoch Accuracy and Epoch Loss for Experiment 2]

Even though the accuracy matches fairly well, there is a big difference between the training loss and the validation loss.

[Figures: Validation Accuracy and Validation Loss for Experiment 2]

This network seems more stable than the previous one. The only observable difference is that of new data splits.

Experiment 3

Here we add a dropout layer, which gives each neuron a 30% chance of being dropped during a training pass. Dropout is known to regularize models, preventing biases caused by interdependence (co-adaptation) of neurons.
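The idea behind (inverted) dropout can be sketched in plain Python; the actual model simply uses the framework's built-in dropout layer with a rate of 0.3, so this is purely illustrative.

```python
import random

def dropout(activations, rate=0.3, training=True, seed=None):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and rescale survivors by 1/(1-rate) so the
    expected sum is unchanged; at inference, pass values through."""
    if not training or rate <= 0.0:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

train_out = dropout([1.0] * 10, rate=0.3, seed=1)  # some entries zeroed
infer_out = dropout([1.0] * 10, training=False)    # identity at inference
```

Because a different random subset of neurons is silenced on every pass, no neuron can rely on a specific partner being present, which is exactly the interdependence dropout is meant to break.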

[Figure: Model architecture for Experiment 3]

Since we have a comparatively huge pre-trained network and smaller trainable network, we could add more dense layers to see the effects. We did that and the model ended up achieving saturation in fewer epochs. No other improvements were observed.

Data Preparation


Just like in experiment 2, the default train/validation splits are taken.

Hyper-parameters


Here, we have run the model on a single learning rate but the value can be experimented with. We will talk about the effects of batch size on this network in the results section.

Results

The results here are with the batch size of 32. As seen, in 3 epochs the network seems to saturate (although it might be a bit premature to judge this).
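The batch size controls how many images contribute to each gradient update. Carving a dataset into fixed-size mini-batches can be sketched as follows (illustrative plain Python, not the actual tf.keras input pipeline):

```python
def batches(items, batch_size):
    """Yield successive mini-batches; the last one may be smaller."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# 1000 images in batches of 32: 31 full batches plus a final batch of 8.
sizes_32 = [len(b) for b in batches(list(range(1000)), 32)]
```

Quadrupling the batch size to 128 quarters the number of gradient updates per epoch while averaging each update over more examples, which is why it can steer the network toward a different (hopefully better) minimum.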

[Figures: Epoch Accuracy and Epoch Loss for Experiment 3]

At the same time validation accuracy and loss also seem to be performing well.

[Figures: Validation Accuracy and Validation Loss for Experiment 3]

So, we increased the batch size to 128, hoping it would help the network find a better local minimum and thereby give better overall performance. Here is what happened:

[Figures: Epoch Accuracy and Loss, and Validation Accuracy and Loss, for batch size 128]

The model now performs reasonably well on both training and validation sets. The losses between training and validation runs are not too far apart either.


Model Drawbacks

Obviously, the model is not one hundred percent accurate; it still produces some misclassifications.

Conclusion

When we ran this model on the test dataset, it failed on only 7 images out of the combined car and non-car sets. This is a very high accuracy, close to what production usage demands.

In conclusion, we can safely assert that dataset splits are crucial. Rigorous evaluation and experimentation with various hyper-parameters give us a better picture of the network, and the evidence they provide should inform any modifications to the original architecture.
