Explicrypto
Explicrypto is a mobile application for the Android platform, built using Android Studio. It can encrypt all the explicit images in your gallery, thus protecting your privacy. Explicrypto can be one of the best applications to trust your privacy with, as it does not need an internet connection to work. All processing of the image, which includes nudity detection, encryption, and decryption, happens within your device with no contact with the outside world, which makes it very safe to use. Explicrypto works in two modes: a manual mode and an automatic mode.
In manual mode, the user first needs to grant all the required permissions to the application (on first launch) and then take a picture using the application. Once the picture is taken, it is displayed on the next screen. The user can then tap the “Classify” button to check whether the image is safe or not. As soon as the user taps it, the image is sent to a MobileNet model (converted into the TFLite format to run on Android). The MobileNet model then classifies the image into one of three categories, Safe, Unsafe, or Other, along with the confidence of its classification. If the model places an image in the Unsafe category, it shows a message that the image is unsafe and should be encrypted. Based on that reading, it is up to the user to decide whether to encrypt the image or not. If the user decides to encrypt it, the image vanishes from the application and is safely encrypted. The encryption technique used is basic and is based on a randomly generated hash key. For decryption, the user selects the path of the encrypted file and taps the “Decrypt” button; the image is decrypted using the same hash key that was used while encrypting it, and is then displayed again in the application window. A sketch of how such an on-device classification call can be wired up follows below.
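For illustration, here is a minimal Kotlin sketch of invoking a TFLite-converted MobileNet on a captured bitmap. The 224x224 input size, the [0, 1] normalisation, and the label order are assumptions based on the usual MobileNet setup, not details taken from the Explicrypto repository.

    import android.content.Context
    import android.graphics.Bitmap
    import org.tensorflow.lite.Interpreter
    import java.io.FileInputStream
    import java.nio.ByteBuffer
    import java.nio.ByteOrder
    import java.nio.channels.FileChannel

    // Label order the retrained model is assumed to emit.
    val LABELS = listOf("Safe", "Unsafe", "Other")

    // Memory-map a .tflite model bundled in the APK's assets folder
    // (the asset must be stored uncompressed).
    fun loadModel(context: Context, assetName: String): Interpreter {
        val fd = context.assets.openFd(assetName)
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            val buf = channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
            return Interpreter(buf)
        }
    }

    // Resize and normalise one bitmap, run inference, and return
    // the best label together with its confidence.
    fun classify(interpreter: Interpreter, bitmap: Bitmap): Pair<String, Float> {
        val size = 224 // standard MobileNet input resolution (assumed)
        val scaled = Bitmap.createScaledBitmap(bitmap, size, size, true)
        val input = ByteBuffer.allocateDirect(4 * size * size * 3).order(ByteOrder.nativeOrder())
        val pixels = IntArray(size * size)
        scaled.getPixels(pixels, 0, size, 0, 0, size, size)
        for (p in pixels) {
            // Scale each RGB channel to [0, 1]; the exact normalisation
            // depends on how the model was trained.
            input.putFloat(((p shr 16) and 0xFF) / 255f)
            input.putFloat(((p shr 8) and 0xFF) / 255f)
            input.putFloat((p and 0xFF) / 255f)
        }
        input.rewind()
        val output = Array(1) { FloatArray(LABELS.size) }
        interpreter.run(input, output)
        val best = output[0].indices.maxByOrNull { output[0][it] } ?: 0
        return LABELS[best] to output[0][best]
    }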
The automatic mode uses the same architecture as the manual mode and likewise requires permission to read and write files. In this mode, the application works in the background and scans through the entire gallery. Each image from the gallery is sent as input to the MobileNet model, which outputs whether the image is safe or not along with a confidence level. Based on that confidence level, the application decides whether to encrypt the image: if it falls under the Unsafe category, it is encrypted automatically. To decrypt an image, the user again opens the application and selects the path of the encrypted image. A sketch of such a background scan is shown below.
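Building on the classify() helper sketched above, a background gallery scan could look like the following. The confidence threshold and the file-type filter are hypothetical; the post does not state the exact values Explicrypto uses.

    import android.graphics.BitmapFactory
    import java.io.File

    // Hypothetical confidence cut-off for automatic encryption.
    const val UNSAFE_THRESHOLD = 0.8f

    // Walk the gallery folder, classify every image, and hand confident
    // "Unsafe" hits to the supplied encrypt callback.
    fun scanGallery(interpreter: Interpreter, galleryDir: File, encrypt: (File) -> Unit) {
        galleryDir.walkTopDown()
            .filter { it.isFile && it.extension.lowercase() in setOf("jpg", "jpeg", "png") }
            .forEach { file ->
                val bitmap = BitmapFactory.decodeFile(file.absolutePath) ?: return@forEach
                val (label, confidence) = classify(interpreter, bitmap)
                if (label == "Unsafe" && confidence >= UNSAFE_THRESHOLD) encrypt(file)
            }
    }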
I have decided that my project life cycle will follow the Agile approach, because I am developing my project in different phases, so an iterative and incremental approach suits it best. Since each phase of my project is independent of the others, it can be delivered individually and frequently, which helps me get feedback from users so that I can enhance the application while delivering the next phase. The design and execution of the project are very simple, and it does not involve any kind of complex architecture, which makes it well suited to the Agile approach.
Explicrypto requires a very basic setup. The application only runs on Android devices with Android version 5.0 or above. It is currently not supported on Apple devices, but will be released for iOS in future updates. The application also requires permission from the user to access the device's storage and to read and write files on the device, which is necessary for encryption and decryption. To use the application in manual mode, it is assumed that the camera is fully functional for taking pictures. The processing (including detection of nudity inside a picture, encryption, and decryption) depends solely on the hardware your phone is using. A sketch of the first-launch permission check is shown below.
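A minimal Kotlin sketch of that first-launch permission check might look like this; the request code and the extension-function packaging are my own choices, not taken from the project.

    import android.Manifest
    import android.content.pm.PackageManager
    import androidx.appcompat.app.AppCompatActivity
    import androidx.core.app.ActivityCompat
    import androidx.core.content.ContextCompat

    private const val PERMISSION_REQUEST_CODE = 100

    // Ask for storage (encryption/decryption) and camera (manual mode)
    // access if any of them has not been granted yet.
    fun AppCompatActivity.ensurePermissions() {
        val missing = arrayOf(
            Manifest.permission.READ_EXTERNAL_STORAGE,
            Manifest.permission.WRITE_EXTERNAL_STORAGE,
            Manifest.permission.CAMERA
        ).filter {
            ContextCompat.checkSelfPermission(this, it) != PackageManager.PERMISSION_GRANTED
        }
        if (missing.isNotEmpty()) {
            // The user is prompted once; the result arrives in onRequestPermissionsResult().
            ActivityCompat.requestPermissions(this, missing.toTypedArray(), PERMISSION_REQUEST_CODE)
        }
    }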
All the necessary information about Explicrypto, including its code, is kept in a public repository on GitHub. All the changes and updates made to the code, starting from its initial phase, were committed to the repository so that a proper track of the project could be maintained. Each commit on GitHub is well described, noting the changes made or the new modules added to the code.
Explicrypto has a very user-friendly interface. It provides just four buttons for quick and easy interaction in manual mode, and with only four buttons it is straightforward to guess their functionality with minimal interaction. One button takes the picture while the application is open, and another scans the images in the background. Once an image is classified, the user can choose whether or not to encrypt it by pressing (or not pressing) the encrypt button. To decrypt a file, the user just needs to select the path of the file and press the decrypt button, and the file is decrypted back to its original form. The application is very easy to use because, as soon as the user opens it, it brings them directly to a page with the camera open and ready to take a picture, and three buttons at the bottom of the screen. As mentioned previously, those buttons let the user decide whether to classify, encrypt, or decrypt the image. These four buttons make the application usable without any knowledge of the underlying technology.
The remaining work on my project concerns only the automatic mode. The dataset I used to train my model is not huge, and the user can take any kind of picture, so the model might not be 100% accurate all the time, which might lead to faulty encryption of image files. And since my gallery has thousands of images, it would be very hectic to manually decrypt all the images that were encrypted accidentally by the application. So my remaining work is to expand my dataset and retrain my model repeatedly so that it covers all the possible cases for accurate classification.
TIMELINE
Week 1
In the first week of the project, I spent most of the time discussing the domain in which I should do my project. I was not sure whether it had to be a project related only to machine learning or whether it could be something else. As per the instructions given to us in the first week by our mentor and HOD Dr. Deepak sir, it was not mandatory to make a machine-learning project. He guided us that our capstone project need not be just about developing applications or doing research, which sparked a wide range of ideas in my mind, and I spent the week deciding among them.
Week 2
During the second week of the project, I again spent time talking to others about the domain, but this time I sought the suggestions and advice of my friends. They gave feasible suggestions on how I should focus more on getting a better job while dedicating equal time to the project. As most of them prioritized the job over the project, this left me confused about the domain of the project. During one-to-one interactions with faculty mentors in our project lab, they suggested the domain of my project: they said it should be Machine Learning, as I am doing my specialization in Machine Learning and Data Analytics (MLDA).
Week 3
During week three of my project, I was busy deciding which sub-field of machine learning my project should be in and whom I should approach for mentorship. In the previous semester, Dr. Shridhar sir used to take our Deep Learning classes, so I thought it would be best to approach him, as he could guide me most efficiently on how to move ahead with my project. Initially it was on me to bring an idea to him and get his opinion on it. After a few days of research, I decided that I should work on developing educational content on genetic algorithms in machine learning, as I was fascinated by the working of that algorithm.
Week 4
In week four, I presented my idea of developing educational content to Shridhar sir, but he wasn't much impressed, as the idea was not feasible in the market. Then I came up with the idea of an application that tells you what to eat in your area depending on people's reviews. This idea was again unsuccessful, as there was no novelty in it. At last, Dr. Shridhar sir suggested that I make a project to protect people's private pictures from being leaked on the internet, for which he told me to make an application that detects explicit images and encrypts them automatically on the phone. So I took that suggestion, finalized my project idea and domain, and wrote my milestone one report on that.
Week 5
In the fifth week of the project, after I submitted the report for milestone one, companies started coming for placements. I now had to manage the research work on the project and, in parallel, manage the placement exams and interviews. As I applied to most of the companies, I was not getting enough time to devote to my project. But during the week I read some online papers and searched through GitHub for references from related projects. I also downloaded the setups necessary for developing the application, like Android Studio, Git Bash, etc.
Week 6
In the sixth week of the project, I bought some online courses on Android development and on applying machine learning on Android, since Android Studio does not support machine learning by default. First I started with the basics of Android development and learnt how to design a basic application by adding elements like buttons, labels, text fields, background colours, etc. Once I had gained the required knowledge, I started building the application, which at this point had just a basic layout. Initially, while preparing Android Studio to support the machine-learning model, I encountered many errors and had to look things up on the internet several times to tackle them and set up the application.
Week 7
In the seventh week of the project, as in the sixth, I had very little time to devote to the project: during the day I was busy with interviews at different companies, and in the evenings I had to attend extra classes arranged as training for the placement process. Apart from the placement processes, I spent time deciding which model I should use for my application. Google's TensorFlow offers many models that you can train to perform tasks like image classification and detection, so I decided it would be better to use a model from TensorFlow itself, as only those models can be deployed to a mobile device using the TensorFlow Lite library.
Week 8
In the eighth week of the project, I cloned a few models from GitHub and tried to understand how they work by running them in Android Studio, figuring out how they provide input to the model. I also had to write the milestone two documentation alongside the research work, which is still in progress. Apart from that, I downloaded my dataset and trained TensorFlow's MobileNet model on my laptop. To test the model, I gave it an image as input and it predicted the correct output. Then I downloaded a demo application from GitHub, imported my model into it, and tested it on live data; it predicted correctly most of the time.
Week 9
During the next week, i.e. week nine, I continued with more research work, learning more about the TensorFlow Lite library and its uses and functionality. As this was my first time working on a mobile application, I went through online courses and brushed up my skills in developing and enhancing the user interface of my Android application. As I still had not figured out how to give input to the model inside an Android application, I looked through some more projects to find the answer, and also referred to YouTube videos for the same.
Week 10
In week ten, I implemented the model in my application. While implementing it, I checked for errors or bugs that might occur while feeding input to the model. Once the model was implemented and working fine on all input images, I retrained it on a bigger and better dataset, because it had previously been trained on a smaller dataset for the purposes of testing and compatibility. So my primary focus in week ten was to implement the model in the application and make sure it works fine and performs the required task in real time.
Week 11
In week eleven, I focused on implementing the encryption technique in my project. For that I again did research work and looked for a proper encryption technique that would be feasible for my project. A proper encryption technique is essential because the application must run in real time: it should not take a lot of computation time while encrypting the images. I also referred to YouTube and GitHub wherever help was required. Along with the encryption technique, I had to secure the file needed for decryption on the device itself so that hackers would not get access to it easily. A sketch of one possible symmetric scheme is shown below.
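The post describes the actual scheme only as basic and based on a randomly generated hash key. Purely as an illustration of a fast symmetric approach, here is a Kotlin sketch using AES-GCM; this is my substitution, not the project's real implementation, and the key handling is deliberately simplified.

    import java.io.File
    import javax.crypto.Cipher
    import javax.crypto.KeyGenerator
    import javax.crypto.SecretKey
    import javax.crypto.spec.GCMParameterSpec

    // Generate a random 128-bit AES key. A real app must persist this key
    // safely (e.g. in the Android Keystore), or decryption becomes impossible.
    fun generateKey(): SecretKey =
        KeyGenerator.getInstance("AES").apply { init(128) }.generateKey()

    // Write IV + ciphertext, then delete the plaintext so the image
    // "vanishes" from the gallery.
    fun encryptFile(key: SecretKey, plain: File, encrypted: File) {
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, key) // provider picks a fresh random 12-byte IV
        encrypted.writeBytes(cipher.iv + cipher.doFinal(plain.readBytes()))
        plain.delete()
    }

    // Reverse the process with the same key and the IV stored at the front of the file.
    fun decryptFile(key: SecretKey, encrypted: File, restored: File) {
        val bytes = encrypted.readBytes()
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, bytes.copyOfRange(0, 12)))
        restored.writeBytes(cipher.doFinal(bytes.copyOfRange(12, bytes.size)))
    }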
Week 12
During the last week of the project, I focused on final testing and debugging. I performed white-box testing, inspecting the code for errors and bugs. Once the application was free of systematic errors, I performed black-box testing, which was like beta testing: I allowed my friends and family members to use the application in whatever way they liked and recorded their feedback. After getting the feedback, I made the required changes. Once the process of debugging and testing was done, I finalized my application for the final evaluation.
GitHub repository link: https://github.com/anubhavanand12qw/Explicrypto