Image Classification
We have moved into neural networks for image classification. This is a natural extension of our work on dimensionality reduction techniques, since reducing dimensionality can improve classification accuracy. To begin this transition we have been researching current uses of neural networks in computer vision. Neural networks are loosely modeled on the neurons in the human brain: each unit activates based on its input and produces an output determined by that input and its activation. These networks are trained by presenting labeled inputs and adjusting the network according to whether its output matches the label. In computer vision, common applications include object tracking, image recognition, and image classification; neural networks are also used in other fields such as natural language processing, speech recognition, and pattern analysis.
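The "activate on input, produce output" idea above can be sketched as a single artificial neuron. This is a minimal illustration, not our actual model; the input, weights, and bias values are hypothetical, and sigmoid is just one common choice of activation:

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: a weighted sum of inputs passed
    through a sigmoid activation. All values here are illustrative."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))  # activation squashed into (0, 1)

x = np.array([0.5, -1.2, 3.0])   # input features (hypothetical)
w = np.array([0.8, 0.1, -0.4])   # weights a trained network would learn
b = 0.2                          # bias term
print(neuron(x, w, b))
```

Training adjusts `w` and `b` so that the neuron's output agrees with the labels; stacking many such units in layers gives the networks used for image classification.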
Our goal is to take what we have learned about dimensionality reduction and artificial neural networks to improve the accuracy of digit classification and recognition. We have been compiling large digit datasets for use with neural networks. By training our model on a large corpus, and using unsupervised processes to reduce the data to just the dimensions most significant for handwritten digit recognition, we hope to create a highly performant neural network that can accurately identify digits.
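The unsupervised reduction step described above can be sketched with PCA, one standard technique of the kind we have been studying. This is a toy example, assuming flattened 8×8 images (64 features); the data here is random stand-in data, not a real digit dataset, and `k=10` is an arbitrary choice:

```python
import numpy as np

def pca(X, k):
    """Unsupervised dimensionality reduction: project centered data
    onto its top-k principal components, found via SVD."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:k]                      # top-k principal directions
    return (X - mean) @ components.T, components, mean

# Hypothetical stand-in for 200 flattened 8x8 digit images.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))

Z, components, mean = pca(X, k=10)
print(Z.shape)  # (200, 10): each image now described by 10 features
```

The reduced representation `Z` would then be fed to the classifier in place of the raw pixels, with `k` chosen so the retained components capture the variation that matters for digit recognition.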
We are continuing our work on neural networks for image classification and natural language processing during the upcoming 2017 – 2018 year. We plan to submit a paper to the journal Mathematics in Computer Science in 2018.
01_08_2017
Image Classification and Dimensionality Resources Recap
Aranian, Mohammad Javad, Moein Sarvaghad-Moghaddam, and Monireh Houshmand. “Feature Dimensionality Reduction for Recognition of Persian Hand-written Letters Using a Combination of Quantum Genetic Algorithm and Neural Network.” Majlesi Journal of Electrical Engineering 11.2 (2017): n. pag. mjee.org. Web. 19 July 2017.
Wang, W. et al. “Generalized Autoencoder: A Neural Network Framework for Dimensionality Reduction.” 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops. N.p., 2014. 496–503. IEEE Xplore. Web.
Hinton, G.E., and R.R. Salakhutdinov. “Reducing the Dimensionality of Data with Neural Networks.” Science (New York, N.Y.) 313.5786 (2006): 504–507. Print.
Park, Soo Beom, Jae Won Lee, and Sang Kyoon Kim. “Content-Based Image Classification Using a Neural Network.” Pattern Recognition Letters 25.3 (2004): 287–300. ScienceDirect. Web.
Dąbrowski, Marek, Justyna Gromada, and Tomasz Michalik. “A Practical Study of Neural Network-Based Image Classification Model Trained with Transfer Learning Method.” N.p., 2016. 49–56. CrossRef. Web. 21 July 2017.
Howard, Andrew G. “Some Improvements on Deep Convolutional Neural Network Based Image Classification.” arXiv:1312.5402 [cs] (2013): n. pag. arXiv.org. Web. 2 Aug. 2017.
Farhangi, M. M., M. Soryani, and M. Fathy. “Informative Visual Words Construction to Improve Bag of Words Image Representation.” IET Image Processing 8.5 (2014): 310–318. IEEE Xplore. Web.
Nalisnick, Eric, and Sachin Ravi. “Learning the Dimensionality of Word Embeddings.” arXiv:1511.05392 [cs, stat] (2015): n. pag. arXiv.org. Web. 10 July 2017.
Verma, Yashaswi, and C. V. Jawahar. “A Support Vector Approach for Cross-Modal Search of Images and Texts.” Computer Vision and Image Understanding 154 (2017): 48–63. ScienceDirect. Web.
Melamud, Oren et al. “The Role of Context Types and Dimensionality in Learning Word Embeddings.” arXiv:1601.00893 [cs] (2016): n. pag. arXiv.org. Web. 23 July 2017.
“The WordSimilarity-353 Test Collection.” N.p., n.d. Web. 24 July 2017.
“GloVe: Global Vectors for Word Representation.” N.p., n.d. Web. 24 July 2017.
CV and NLP
We continued with Computer Vision and NLP this week. We made final preparations for our talk at MathFest 2017. Next week we will be presenting at MathFest 2017 and we will be working on our final report for the summer extension.
23_07_2017
Continuing with Computer Vision
This week we continued with Computer Vision. We also made final preparations for our conference presentations. This upcoming week we will be aggregating multiple handwritten digit datasets into one larger dataset in preparation for image classification.
16_07_2017
Continued work on Neural Networks for Image Classification and Introduction to NLP
This week we began work on Natural Language Processing. We continued prepping for an upcoming conference. We have found a potential Undergraduate Mathematics Journal to submit an article to next month.
We are looking into hyperparameter optimization this upcoming week. We will be working with a large handwritten dataset, which will extend our work in neural networks and computer vision. We will also be making final changes to our slides for the upcoming conference.
09_07_2017
Deep Learning, CV, and Conference Prep
This week we continued making progress in Deep Learning and CV. We also worked on preparing materials for an upcoming conference. We will continue our research, and we will begin working with a large handwritten dataset during the next two weeks.
02_07_2017
Continuing Neural Networks and CV
We are continuing our research into neural networks and computer vision. We attended the ACM A. M. Turing Award celebration, which was very inspirational and included many relevant and enlightening panels. We will continue our work in neural networks and computer vision for image classification during the next several weeks.
25_06_2017
Neural Networks for Image Classification
We are continuing with the Udacity courses: Deep Learning and Intro to Computer Vision. This is providing a good foundation for how to bridge Computer Vision with Machine Learning. This week we will be using Python and OpenCV to work on Computer Vision.
18_06_2017
Deep Learning and Intro to Computer Vision
This week we had final exams.
We began the Udacity course Deep Learning by Google (https://www.udacity.com/course/deep-learning--ud730). We also began the Udacity course Introduction to Computer Vision by Georgia Tech (https://www.udacity.com/course/introduction-to-computer-vision--ud810). We continued with Deep Learning by Goodfellow, Bengio, and Courville.
We will continue to research neural networks for image classification.
11_06_2017
Final Exam Prep
There is no progress to report for this week due to final exam prep.
We will continue with neural networks for image classification. We will be working with TensorFlow and Spark.
04_06_2017
Neural Networks for Image Classification
This week we began working on neural networks for image classification.
http://cs231n.github.io/classification/#reading
We are gathering research and we will post more details this upcoming week.
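The cs231n notes linked above open with a nearest-neighbor baseline for image classification, which can be sketched as follows. This is a toy illustration under stated assumptions: the data is random stand-in data for flattened images, not a real digit dataset, and we use L1 (sum of absolute pixel differences) as in those notes:

```python
import numpy as np

class NearestNeighbor:
    """Nearest-neighbor image classification baseline: predict each test
    image's label from its closest training image under L1 distance."""

    def train(self, X, y):
        # Nearest neighbor simply memorizes all of the training data.
        self.Xtr, self.ytr = X, y

    def predict(self, X):
        preds = np.empty(len(X), dtype=self.ytr.dtype)
        for i, x in enumerate(X):
            # L1 distance from this image to every training image.
            dists = np.abs(self.Xtr - x).sum(axis=1)
            preds[i] = self.ytr[dists.argmin()]
        return preds

# Tiny synthetic stand-in for 20 flattened images with binary labels.
rng = np.random.default_rng(1)
Xtr = rng.integers(0, 256, size=(20, 16)).astype(float)
ytr = rng.integers(0, 2, size=20)

nn = NearestNeighbor()
nn.train(Xtr, ytr)
print((nn.predict(Xtr) == ytr).mean())  # 1.0: each image is its own nearest neighbor
```

The notes use this baseline to motivate neural networks: it requires no training beyond memorization, but prediction is slow and raw pixel distances generalize poorly, which is the gap the learned classifiers we are studying aim to close.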
28_05_2017
Deep Learning for Image Recognition
This week we read “Deep Learning” by LeCun, Bengio, and Hinton. We also read “Very Deep Convolutional Networks for Large-Scale Image Recognition” by Simonyan and Zisserman. We are continuing our research into deep learning for image recognition. In the coming weeks, we will be running image data through a neural network.
We submitted a CREU application for 2017 – 2018 titled “Dimensionality Reduction Techniques in Convolutional Neural Networks and Deep Learning for Image Recognition and NLP.”
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature News. May 27, 2015. Accessed May 10, 2017. http://www.nature.com/nature/journal/v521/n7553/abs/nature14539.html.
Simonyan, Karen, and Andrew Zisserman. "Very Deep Convolutional Networks for Large-Scale Image Recognition." arXiv:1409.1556. April 10, 2015. Accessed May 10, 2017.
21_05_2017
No progress to report this week
Due to midterms, no progress was made this week. This upcoming week, we will continue with researching deep learning.
14_05_2017
End of Year Final Report for CRA-W CREU 2016 – 2017
We have been granted a summer extension to continue our work, so there will continue to be updates on our progress here.
This past week we completed our final report. This report outlines our work on dimensionality reduction. You may find a copy here:
https://www.overleaf.com/read/mpqhkxttgrds
Thank you to everyone that has been involved in this project.
08_05_2017
Poster Presentation and Midterms
This week we presented at the CSU East Bay Student Research Symposium. The research was well received. We continued reading Deep Learning by Goodfellow but had to spend most of the week preparing for midterms.
This upcoming week we will be working on our final report for CRA-W CREU.
We will be continuing our work over the summer, so once again thank you to the CRA-W CREU for supporting our research.
Acknowledgments:
CREU is a project of CRA-W and supported by the National Science Foundation.
CSU East Bay Center for Student Research and LSAMP for supporting this project.
23_04_2017
Continued Deep Learning and Neural Networks, Updated Poster
We are still working on the book Deep Learning. We have begun the Coursera Course: Neural Networks for Machine Learning (https://www.coursera.org/learn/neural-networks/home/welcome).
Our updated poster may be found here: (https://www.overleaf.com/read/gyfyscgmgwyz#/31800304/). We will be presenting our work at the CSU East Bay Student Research Symposium.
We are working on our proposal for CREU for the 2017 – 2018 cohort. We will be working with TensorFlow during the next two weeks.
Deep Learning and Presentation Preparation
This week we continued with “Deep Learning” by Goodfellow, Bengio, and Courville. We completed work on the abstract to submit for a math conference later this year. We began work on the paper that will be presented at the conference. We also worked on our verbal presentation for the upcoming research symposium at our university.
We will be submitting the abstract this coming week. We will be continuing with “Deep Learning” this week.
09_04_2017