Draw To Code
Students find it very hectic to draw a class diagram when one is assigned. They think it is useless because it does not help them, so they move directly to the code. “Sketch2Code” is a website built on a related idea: the user sketches a website design and it is converted into HTML code. Moreover, if someone puts more effort into the UML diagram, the coding becomes much easier and takes less time.
Students usually do not bother about the design part of a program; most skip the UML diagram and jump directly to code, where they then face multiple issues. We are therefore building a web-based application that converts the design part (the class diagram) into code, so that students focus on the class diagram rather than jumping straight to the implementation. In this way, they can put their effort into the class diagram, and the template code will be ready for them.
While exploring as students, we found a Microsoft website, Sketch2Code, in which the user uploads an image and the website converts it into an HTML front end. We decided to build a similar but unique platform that converts UML into Python code. There is a reason for this choice: in today's era everything is done smartly, and Python is well suited to it, since the libraries are readily available, you just have to include them, and you can make your program work in very few lines of code. So we decided to build a web-based application that converts UML into Python code.
“Draw2Code” is a web-based application that converts a class diagram to Python code. The user uploads a hand-drawn image of a class diagram, and the application converts it into Python code. The relations between the classes (e.g. aggregation, composition, association) will also be read from the UML diagram and implemented in the code. On the back end, deep learning will be used for training and testing the model, and the front end will be based on Vue.js/React.js/Bootstrap CSS.
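As a hedged illustration (not the tool's actual output), the three UML relations mentioned above are conventionally rendered in Python roughly as follows; the class names `Car`, `Engine`, `Garage`, and `Driver` are hypothetical examples:

```python
class Engine:
    """Part class used in the composition example below."""
    def __init__(self, horsepower):
        self.horsepower = horsepower

class Car:
    def __init__(self, horsepower):
        # Composition: Car creates and owns its Engine; the Engine's
        # lifetime is tied to the Car's lifetime.
        self.engine = Engine(horsepower)

class Garage:
    def __init__(self):
        # Aggregation: Garage holds Cars created elsewhere; the Cars
        # can outlive the Garage.
        self.cars = []

    def park(self, car):
        self.cars.append(car)

class Driver:
    def drive(self, car):
        # Association: Driver merely uses a Car passed to it and
        # keeps no ownership of it.
        return f"driving a {car.engine.horsepower} hp car"
```

The distinction that matters for code generation is ownership: composition constructs the part inside the whole, aggregation only stores a reference, and association just uses an object as a parameter.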
The functionality of our project “Draw2Code” is:
Step 1: The user will open the website and there will be a button named “Upload Image”. The user will upload the image of the class diagram.
Step 2: Before the image is uploaded, the website will show some guidelines: the image should be clear and uncropped, and the class diagram should fit on one page.
Step 3: If all the previous steps succeed, the system will segment the class box: the first row contains the class name, the second row the data members, and the third row the methods [3].
Step 4: In the second phase, the back-end machine learning algorithm will detect the symbols +, -, and # and convert them into the public, private, and protected access modifiers.
Step 5: After the data and keywords are analyzed, the class diagram will be converted into Python code.
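Steps 4 and 5 can be sketched as follows. This is a minimal illustration, assuming the earlier stages have already parsed a class box into its name, attributes, and methods; the function names and the input format are hypothetical, and the symbol mapping follows the usual Python naming conventions (`+` public, `-` private via `__name`, `#` protected via `_name`):

```python
# Hypothetical mapping from UML visibility symbols to Python prefixes.
PREFIX = {"+": "", "-": "__", "#": "_"}

def pythonize(symbol, name):
    """Apply the Python naming convention for a UML visibility symbol."""
    return PREFIX[symbol] + name

def generate_class(name, attributes, methods):
    """Render a Python class template from one parsed class box.

    attributes and methods are lists of (symbol, identifier) pairs.
    """
    lines = [f"class {name}:"]
    params = ", ".join(n for _, n in attributes)
    lines.append(f"    def __init__(self, {params}):")
    for sym, attr in attributes:
        lines.append(f"        self.{pythonize(sym, attr)} = {attr}")
    for sym, meth in methods:
        lines.append(f"    def {pythonize(sym, meth)}(self):")
        lines.append("        pass")
    return "\n".join(lines)

print(generate_class("Student", [("-", "name"), ("+", "age")], [("+", "enroll")]))
```

The real system would fill method bodies with `pass` placeholders exactly like this, leaving the implementation to the student.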
How Our Project Will Be Completed
The beneficiaries of this system are students who focus only on code and get stuck in the middle. It is an opportunity for all students to focus on the UML diagram first, with the code generated automatically for them. With this tool, students can get a firm grip on UML class diagrams.
Another benefit is that no such application exists yet. It is a genuine innovation and will motivate other students to take part in these kinds of projects.
A CNN will be used for image processing, and its layers will be designed first. Plain OCR alone is not suitable because classical OCR does not use deep learning: it relies on a simple ANN, and feature extraction is manual. CNNs, on the other hand, are widely used and learn features automatically, so a CNN is the best fit for our project; Draw2Code will be trained as a CNN model and then deployed on the website.
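When designing CNN layers, the main bookkeeping is tracking how the spatial size of the feature map shrinks layer by layer. The layer stack below is a hypothetical example, not the project's actual architecture, but the formula itself is the standard one for convolution and pooling output sizes:

```python
def conv_output_size(in_size, kernel, stride=1, padding=0):
    """Standard output side length of a conv/pool layer:
    out = (in - kernel + 2 * padding) // stride + 1
    """
    return (in_size - kernel + 2 * padding) // stride + 1

# Trace the feature-map size through a small hypothetical stack
# (3x3 conv with padding 1, then 2x2 pooling, twice) for a
# 100x100 input image.
size = 100
for kernel, stride, pad in [(3, 1, 1), (2, 2, 0), (3, 1, 1), (2, 2, 0)]:
    size = conv_output_size(size, kernel, stride, pad)
    print(size)  # prints 100, 50, 50, 25 across the four layers
```

Designing the layers "first", as described above, amounts to checking with this formula that the final feature map is a sensible size before training begins.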
YOLOv3:
YOLO is a deep-learning architecture proposed in the paper “You Only Look Once: Unified, Real-Time Object Detection” by Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi, which takes a completely new approach. It is a single convolutional neural network (CNN) for real-time object detection.
Furthermore, it is well known for its high accuracy while still being able to operate in real time. The YOLO method “only looks once” at the input image: it needs just one forward-propagation pass through the network to generate its predictions.
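One building block worth noting: YOLO-style detectors score and filter their overlapping box predictions with intersection-over-union (IoU). A minimal sketch of that metric, with boxes given as hypothetical `(x1, y1, x2, y2)` corner tuples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes,
    the overlap measure used when filtering duplicate detections."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero so non-overlapping boxes get no intersection area.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For Draw2Code, a high IoU between two detected class boxes would indicate duplicate detections of the same box in the diagram.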
Dataset:
No suitable dataset is currently available, so we will create our own with the help of students and teachers. The dataset will contain about 500-1000 handwritten images on which the model will be trained.
Features:
A built-in OCR library will be used for text detection and recognition.
Uniform aspect ratio:
One of the first tasks is to make sure the images have the same size and aspect ratio. Most neural-network models assume a square input image, which means each image must be checked and, if necessary, cropped to a square. Cropping can be used to select a square portion of an image; when cropping, we normally focus on the center of the image.
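The center-focused square crop described above reduces to a small coordinate computation. The function name is our own; a library such as Pillow would then apply the resulting box with `Image.crop`:

```python
def center_crop_box(width, height):
    """Return the (left, top, right, bottom) box of the largest
    centered square inside a width x height image."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)
```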
Image Scaling:
After ensuring that all images are square (or have a predefined aspect ratio), we can scale each image appropriately. We have opted to use images 100 pixels in width and height, so for 250-pixel originals we need to multiply the width and height of each image by 0.4 (100/250). There are several approaches to up-scaling and down-scaling, and we normally use a library function to do this for us.
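To make the scaling step concrete, here is a sketch of the simplest rescaling scheme, nearest-neighbour sampling, on a plain 2-D list of pixel values. In practice a library routine such as Pillow's `Image.resize` would be used instead, with better interpolation:

```python
def scale_nearest(pixels, factor):
    """Nearest-neighbour rescale of a 2-D list of pixel values.
    Each output pixel copies the nearest source pixel."""
    src_h, src_w = len(pixels), len(pixels[0])
    dst_h, dst_w = int(src_h * factor), int(src_w * factor)
    return [[pixels[int(y / factor)][int(x / factor)]
             for x in range(dst_w)]
            for y in range(dst_h)]
```

With `factor = 0.4`, a 250x250 source image comes out as the 100x100 image chosen above.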
Mean, Standard Deviation of input data:
It is sometimes useful to examine the ‘mean image’, generated by taking the mean value of each pixel across all training samples. Observing it can reveal the underlying structure of the images. For example, the mean image of the first 100 photographs in a face dataset clearly gives the impression of a human face, suggesting that the faces are roughly centered and of equal size. If we do not want our raw data to have such intrinsic structure, we can augment it with perturbed images. The per-pixel standard deviation of all images can be examined in the same way.
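Computing these per-pixel statistics is a one-liner over the sample axis. The sketch below uses random stand-in data (our real dataset does not exist yet), with the 100x100 image size chosen earlier:

```python
import numpy as np

# Hypothetical stand-in for the training set: 100 random grayscale
# images of size 100x100 (the real dataset is still to be collected).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 100, 100)).astype(float)

# Per-pixel statistics across the sample axis (axis 0): each result
# has the shape of a single image and can itself be displayed.
mean_image = images.mean(axis=0)
std_image = images.std(axis=0)
```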
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Graphic Card | Equipment | 1 | 30000 | 30000 |
| Deep Learning Specialization | Equipment | 5 | 7800 | 39000 |
| Total (in Rs) | | | | 69000 |