Adil Khan 9 months ago
AdiKhanOfficial #FYP Ideas

Virtual Try Room

Project Title

Virtual Try Room

Project Area of Specialization

Artificial Intelligence

Project Summary

Throughout the COVID-19 pandemic, the world has lived through lockdowns. People have been confined to their homes, and outside them strict SOPs must be followed to ensure the safety of others and themselves. Throughout the pandemic, every industry has shared one main goal: keep everything working smoothly. This includes the clothing industry, especially brands with retail outlets where people could visit and try on clothes. Since the pandemic this has been challenging, because multiple people trying on the same clothing article goes against the SOPs observed throughout the world. To fight this problem, our project provides a solution: bring the try-room experience from the shop to your laptop or PC. The proposed solution is a web service that can be integrated with a clothing brand's website. The web service, built on React, Django, and MongoDB, works with computer vision technology such as convolutional neural networks and Python libraries. The "virtual try room" maps a 3D model of a clothing article (limited to the upper-body region, below the neck) onto a 3D model of the user captured by a single camera.

In our study of related work, we have seen a number of solutions, including augmented reality approaches built with tools like ARKit, and DensePose, which maps the pixels of an RGB image of a person onto the 3D surface of a human body. Another study attempted a multi-pose guided virtual try-on network that maps clothes onto human poses. In our proposed solution, the virtual try room uses Unity to generate a 3D model of the clothing article and the Unity plugin for MediaPipe to connect to our Python source code, which estimates the 3D pose. This estimate is sent back to Unity through MediaPipe, which then generates a model of the article based on it. Once a model is generated, a Unity function maps the clothing article's images onto the 3D model and returns it. This model is then fitted to the human body model and displayed to the user. Our solution also proposes a hand-gesture module that helps in navigating the web service when the user is standing far from the device; it is developed using Python's OpenCV and MediaPipe. In short, the proposed solution is a web service that can be integrated with a clothing brand's website, works with Python libraries, and provides a virtual try-room experience to users. With its hand-gesture navigation and easy-to-use UI design, the web service is unique among solutions on the market.

Project Objectives

- To promote the use of computer vision in the e-commerce industry.
- To provide users with a way to try on multiple clothing articles virtually.
- To understand and work on real-time 3D mapping of the human body.
- To map 2D images onto a 3D rendered model of a clothing article.
- To successfully integrate the 3D model of a clothing article with the 3D mapping of the human body.
- To minimize the processing time of real-time 3D mapping using advanced CV and ML concepts.
- To provide users with a user-friendly application interface using HCI concepts and UX design theories.

Project Implementation Method

The implementation uses several libraries for different purposes, organized into the following modules.

Web Platform: This module contains webpages connected to each other through routing; each webpage consists of several components. We used React.js for front-end development. All private pages are protected with an auth guard, and child routing is used to implement the page hierarchy. Django is used on the backend, with Django REST Framework providing an API that fetches data from MongoDB Atlas and saves data back to it. The front end calls this API so that the user can interact with all types of data through the UI. User login and registration are also handled in this part.
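
To illustrate the data flow between the backend and the React front end, here is a minimal sketch of serializing a clothing-article record to JSON. The field names (`_id`, `name`, `image_url`, `price`) are illustrative assumptions, not the actual MongoDB Atlas schema, and in the real system this job is done by a Django REST Framework serializer:

```python
import json

def serialize_article(doc):
    """Convert a MongoDB-style article document into the JSON payload
    the front end would consume.

    The field names here are hypothetical; the real schema lives in
    MongoDB Atlas behind the Django REST Framework API.
    """
    return json.dumps({
        "id": str(doc["_id"]),          # Mongo ObjectId, stringified for JSON
        "name": doc["name"],
        "image_url": doc["image_url"],
        "price": doc.get("price", 0),   # default when the field is absent
    })
```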

Hand Gesture Recognition: This module is responsible for recognizing hand gestures. It is implemented in Python using OpenCV and MediaPipe. In addition, our code can distinguish between an open palm and a closed fist: if the user's hand forms a palm the system performs one set of functions, and in the case of a fist it behaves differently. First we detect the key points of the hand and extract their features using OpenCV; these features are then combined, with the help of MediaPipe, to trigger the corresponding function.

Select Clothing Article: In this module we use TensorFlow-based gesture recognition. A limited set of gestures is recognized by the TensorFlow model; the thumb gesture is used in this part to select the required clothing article from the list.
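
The selection logic driven by those gestures can be sketched as a small state helper. The gesture labels ("thumb_left" / "thumb_right") are hypothetical names standing in for whatever the TensorFlow classifier outputs:

```python
def navigate(articles, index, gesture):
    """Return the new selection index after a thumb gesture.

    In the real system the gesture label comes from the TensorFlow
    gesture classifier; here it is just a string. The index is
    clamped so swiping past either end of the list does nothing.
    """
    if gesture == "thumb_right":
        return min(index + 1, len(articles) - 1)
    if gesture == "thumb_left":
        return max(index - 1, 0)
    return index  # unrecognized gestures leave the selection alone
```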

Create 3D Model of Human: Here we create a 3D human model from a 2D image, implemented with PyTorch, an open-source Python library. PyTorch first detects the user's 2D pose from the image. It then creates a mesh of the image and saves it in .png format. The pose combined with the mesh produces a 3D model, exported as an .obj file that can easily be downloaded for further processing.

Transfer Texture from 2D Cloth to 3D Human: In this section we simply need to map a 2D clothing image onto the 3D human model, but this work is not simple; it has four stages. First, we upload the 3D human model along with its 2D image so that the system can detect the 2D pose and mesh from the image. Second, we create a mesh of the 2D clothing article in black-and-white format. Third, we convert our 3D model to UV coordinates so that we can segment the model before the texture transfer. Finally, we transfer the texture from the cloth mesh to the upper-body part of the human model.
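
The core lookup behind the final stage can be sketched as a UV-to-pixel mapping: each mesh vertex carries a (u, v) coordinate in [0, 1] x [0, 1] that is sampled from the clothing image. This is a generic illustration of UV sampling, not the exact code of our transfer step:

```python
def uv_to_pixel(u, v, width, height):
    """Map a UV coordinate to a (column, row) pixel in a texture image.

    By the usual convention v = 0 is the bottom of the texture, so the
    row index is flipped; results are clamped to the image bounds.
    """
    col = min(int(u * width), width - 1)
    row = min(int((1.0 - v) * height), height - 1)
    return col, row
```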

Connect Python Notebook with Web Platform: In the last part we connect our Python Colab notebook to the web application with the help of Flask. First we run the Colab notebook behind an ngrok tunnel and obtain a public URL; we then use that URL to connect the notebook to the web application.
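
A minimal sketch of the Flask side of this bridge, assuming a hypothetical `/try-on` endpoint and payload (the actual routes in our notebook may differ); in Colab this app would be exposed through ngrok and the public URL handed to the web platform:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/try-on", methods=["POST"])
def try_on():
    # Hypothetical endpoint: the web platform POSTs the id of the
    # selected clothing article; the notebook would run the try-on
    # pipeline and return the result.
    data = request.get_json(force=True)
    article_id = data.get("article_id")
    if article_id is None:
        return jsonify({"error": "article_id is required"}), 400
    # ... run the 3D reconstruction and texture transfer here ...
    return jsonify({"status": "ok", "article_id": article_id})

# In Colab: app.run(port=5000), then tunnel that port with ngrok.
```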

Benefits of the Project

  • Minimizes the processing time of real-time 3D mapping using advanced CV & ML concepts.
  • Provides users with a user-friendly application interface using HCI concepts and UX design theories.
  • Users can select any clothing article from the list using hand gestures.
  • Users can try on clothes virtually on their smart device.
  • Users can save a snapshot of a virtually tried-on outfit to their gallery.
  • Users can view their snaps in the gallery and delete them as desired.

Technical Details of Final Deliverable

The virtual try room takes the 3D model of the user and the 2D images of clothing articles, and maps each image onto the 3D model using the OpenGL library. This experiment has three phases. The first phase takes a 2D image of the user and converts it into a 3D model. The second phase takes the 2D image of a clothing article and creates a mesh with the help of Python libraries. In the last phase we map this 2D cloth mesh onto the 3D user model. The experiments measure accuracy as well as mapping efficiency. Due to large differences between the size of the 3D model and the clothing image texture, the transfer is sometimes not as clean as we want, and the cloth mesh is sometimes not created as we expect. These kinds of errors can cause delays and inefficiency in our work.

The second experiment concerns hand gestures. We developed the system to recognize only horizontal hand gestures and to ignore vertical ones. Several experiments during horizontal-movement testing gave different results. For example, at the end of the list, clothing article images may still move because of hand movement; we must ignore all movements that are unintentional. The system should also not allow more than one hand to navigate the list at a time. With this analysis we hope to improve the model in the future.
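
The horizontal-only filtering described above can be sketched as a small helper. The wrist displacement (dx, dy) between frames and the threshold value are illustrative assumptions, not the module's actual parameters:

```python
def detect_swipe(dx, dy, threshold=0.15):
    """Classify a wrist displacement as a horizontal swipe or noise.

    A movement counts as a swipe only when it is large enough and
    predominantly horizontal; small or mostly vertical movements are
    treated as unintentional and ignored. dx/dy are assumed to be in
    normalized image coordinates.
    """
    if abs(dx) < threshold or abs(dx) <= abs(dy):
        return None  # too small or too vertical: ignore
    return "right" if dx > 0 else "left"
```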

Final Deliverable of the Project

HW/SW integrated system

Core Industry

Finance

Other Industries

Others

Core Technology

Artificial Intelligence (AI)

Other Technologies

Augmented & Virtual Reality, Cloud Infrastructure

Sustainable Development Goals

Industry, Innovation and Infrastructure, Partnerships to achieve the Goal

Required Resources

Item Name            Type            No. of Units   Per Unit Cost (in Rs)   Total (in Rs)
GPU                  Equipment       1              35000                   35000
Kinect Camera        Equipment       1              25000                   25000
Cloud Data Storage   Equipment       1              3000                    3000
Hosting (AWS)        Miscellaneous   1              6000                    6000
Total (in Rs)                                                               69000
If you need this project, please contact me at contact@adikhanofficial.com