risher-randall-blog
Risher Randall
10 posts
Following my passions and trying to make the world a better place!
risher-randall-blog · 6 years ago
Technical Interviews
When embarking upon a career in software engineering, there is one required step that all developers must take: the technical interview. Whether you have received a master's degree from Stanford University or are a self-taught coder, every company will require prospective employees to take part in an interview process that tests technical skills. Even after establishing yourself as a credible and productive engineer, interviews are practically always required when transitioning to another company. The ability to interview successfully is simply required. As interviews vary substantially between occupations, it is important to understand how software engineering interviews are different and unique. Interviewing for a developer position is not your standard behavioral interview: specific problem-solving abilities and technical knowledge are tested and emphasized. While it may initially sound daunting and difficult, know that with time and practice it is attainable and rewarding. Once the technical interview is mastered, many exciting opportunities will follow.
What does a typical technical interview look like?
A typical technical interview will be conducted on the phone, in person or on a computer. The phone interview is similar to the in-person interview, but it is not as in-depth or personal. The computer test typically includes 2-3 coding questions solved through a service such as HackerRank or MyVue. A coding challenge will typically include an algorithm question which may range in difficulty, and depending on the position you are applying for, specific software and skills may be tested. The phone and in-person interviews typically include an introduction, technical testing and questions. During the introduction phase, which may last 5-10 minutes, the interviewer asks some general behavioral questions to gain a clearer understanding of your background and personality. Technical questions then follow, generally including problem-solving and knowledge-based questions. Problem-solving questions are considered the most important part of the interview and require an algorithm to be designed to complete a specific task. The data structure, algorithm and conceptual knowledge below is extremely useful when solving these problems.
[Image: useful data structures, algorithms and concepts]
These questions may range from easy to extremely difficult; almost always, you will be required to answer at least 1-3 difficult ones. The design of these algorithms is expected to be driven primarily by the interviewee, but collaboration is common and often expected. Once a design is found, interviewees will more than likely be asked to analyze the pros and cons of their algorithm. Aside from problem-solving questions, general knowledge questions will also be asked. These may range from company-specific technologies (specific knowledge) to general computer science concepts (proxy knowledge). The goal of these questions is to assess the candidate's current knowledge base in software engineering. Once the technical portion of the interview is complete, the interviewer opens the floor to the interviewee, allowing him or her to ask questions. Depending on the duration of the first two sections, the time allotted to questions may vary from 5-15 minutes. When interviewing on site, 2-5 technical assessments following this general structure will be conducted. Phone interviews take place prior to on-site interviews and typically are not as technically in-depth.
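To make the problem-solving questions concrete, here is a sketch of a classic interview exercise, "two sum": find the indices of two numbers that add to a target. The hash-map approach below is one standard solution (in Python, a common interview language); discussing its O(n) time / O(n) space trade-off against the O(n²) nested-loop brute force is exactly the pros-and-cons analysis interviewers ask for.

```python
def two_sum(nums, target):
    """Return indices (i, j) of two entries of nums that sum to target.

    One pass with a hash map gives O(n) time and O(n) space, versus
    O(n^2) time for the brute-force double loop.
    """
    seen = {}  # value -> index where we first saw it
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:
            return seen[complement], i
        seen[n] = i
    return None  # no pair sums to target


two_sum([2, 7, 11, 15], 9)  # (0, 1), since 2 + 7 == 9
```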
What do the interviewers want to see from a candidate?
The requirements for a successful software engineer can really be distilled into two necessities: general cognitive aptitude and coding ability. These are what the interviewer is primarily looking for, especially in the technical interview. Cognitive aptitude is tested primarily through algorithm questions, while coding ability is assessed through technical knowledge questions, algorithm design and past experience. Cognitive abilities are judged by analyzing a candidate's ability to jump from one realization to the next quickly, synthesize new information rapidly and develop effective, successful solutions. Coding ability is assessed through clear communication of key computer science concepts; succinct, flexible, efficient and accurate algorithm design; and the specific knowledge required for a given position. If the interviewers believe a candidate is smart and can code, a job offer is often presented.
In addition to intelligence and coding skills, interviewers frequently expect candidates to work well in teams and fit into the company culturally. These issues are not as often addressed in the technical interview and are instead assessed in a behavioral interview conducted by a managing director or recruiter. For the purposes of this blog post I will not dive into these topics.
Below are a few questions which interviewers innately, often unconsciously, consider.
Can I feel secure that this candidate will not fail me? Even better, will he or she be successful at our company?
Is this candidate smart and can he or she learn quickly and sufficiently?
Can this candidate write useful, quality code for our software?
Will this candidate work well in a team and will he or she conduct him/herself professionally for an extended period of time?
Will this candidate be a cultural fit at our company? Will I enjoy spending time around him or her?
How do I prepare for a technical interview?
Preparing for a technical interview requires hard work and practice. There is really no way around it, especially if you have not taken Data Structures and Algorithms in college. To begin, I would recommend purchasing Cracking the Coding Interview by Gayle McDowell and reading all the content prior to attacking the practice questions. Cracking the Coding Interview is the most reliable and widely used book on the interview process. It may also be useful to read another book on the interview process in order to gain a second perspective. The goal while reading should be to understand the ins and outs of the technical interviewing process and find out what information may need to be learned and clarified.
Once you gain an understanding of the overall landscape of the technical interview process, it is important to fill any knowledge gaps that may prevent you from succeeding in an interview. Choose wisely which concepts you decide to learn, as there is an infinite amount of material out there. Cracking the Coding Interview should provide reliable guidance on what to study and understand.
After reading the interview books and learning the necessary material, the practice begins. I have heard software engineers recommend up to 100-200 practice problems prior to interviewing. This portion of preparation is the most important: you are refining the problem-solving and coding abilities which will probably land you a job. It is important that these practice problems are done thoroughly and properly. It is easy to let yourself casually go through a problem without simulating a live environment, so my recommendation is to take practice seriously. A quote that I enjoy and agree with is "Both good and bad practice makes permanent." Do you want to solidify good or bad habits, which may naturally surface during an interview? Also, establishing a process for problem solving is useful and almost necessary. This process will certainly be iterative and improved upon as you continue to execute practice problems. Guidance on process can be found in Cracking the Coding Interview, online blogs and articles, and other interviewing books. Practice problems can be found on HackerRank and in Cracking the Coding Interview, to name a few sources. After executing a practice problem, take a few notes and chronicle the process. Keep all of these notes in one place and learn from them as you continue practicing.
Regarding general technical knowledge questions, a few steps may be taken. First, it is important to solidify all the concepts you know to the best of your ability. You do NOT have to know every intricacy of the technologies you have learned, but having a firm enough grasp on the core concepts to communicate them clearly and concisely is quite important. Second, knowing your technical projects well can go a long way. Candidates should be prepared to answer questions on topics such as the technologies used and why, problems faced, teamwork and design structure. Lastly, you should learn any other technologies which you think will significantly help you in the recruitment process. There is an endless amount of information out there, so deciding on the right language, framework, concept or tool is an important choice. Also, if you are not careful, learning on your own can be quite time consuming. Be wary!
Lastly, I recommend scheduling a practice interview prior to your real interview. Software developers you know who are willing to help are a great resource for practice interviews. Also, Skilled is a new company which provides practice interviews as a service; they are quite pricey, but if done correctly, they could be well worth it.
Finally, preparing with some friends and/or mentors can go a long way! The preparation process can get lonely and frustrating. Having a friend or mentor pushing you and holding you accountable will enable you to stay disciplined and on track. The process will also be more enjoyable, as you will be able to connect, express frustrations and share laughs together. Lastly, you will be able to learn from other prospective developers' techniques, approaches and struggles.
Conclusion
As you can tell, the preparation process is quite demanding. I have heard people recommend anywhere from 1-3 months to properly prepare for technical interviews. This is a good deal of work, but remember that landing your first job is the hardest. Once completed, it may lead to life-changing opportunities. Furthermore, you will have developed a skill that you can carry with you moving forward: if you would like to switch jobs, you will more than likely have to complete a similar interview process.
risher-randall-blog · 7 years ago
GitHub Refresher
It has been nearly four months since I graduated from the Flatiron School, and I have just finished college. At school I was taking two work-intensive engineering courses, a philosophy course and a neuroscience course. My workload was quite large, and as a result I was unable to focus on my software engineering skills as much as I wanted to. Now that I have finished my engineering degree, I am back to focusing on code. To start, it feels proper to refresh myself on the ever-so-important GitHub commands and workflow.
I will begin with the basic GitHub workflow. All GitHub repositories have a master branch, which serves as the headquarters and final draft of the project. Any changes made to master should be finalized and tested. To work on the repository, patching bugs or adding features, a branch is created (a copy of master) and edits are made on the copy. Once edits are completed, they are merged: any conflicts (overlaps in changes) must be resolved, and then the changes are included in the final product. This process can continue across multiple levels. For example, you may have a team that is working on a specific section of the software, and that team is broken down into multiple sub-teams. The project is split up into a tree, and updates are made through multiple branches and merges.
In order to complete merges, pushes and pulls must be made to and from the repository hosted online. To update your local repository with the latest changes from the remote you are working on, a simple - $ git pull - can be typed into the command line. To add your changes to the online repository, a few commands are required. First, type - $ git add . - into the command line; this step transfers any changes into the staging area. Then commit those changes to the local repository with - $ git commit -m "message" -. Now your changes have been saved, and to transfer them to the remote repository a simple - $ git push - command is needed. These steps allow you to update your local repository from the online platform and update your online repository from your local computer. Once this skill is perfected, you can then merge your branch with other branches using the - $ git merge <branch> - command. Below is a useful cheat sheet which contains a fair number of Git commands.
[Image: Git command cheat sheet]
risher-randall-blog · 7 years ago
Front End Frameworks
Over the past five weeks I have been taking a deep dive into front-end web development. I started with learning the fundamentals of JavaScript, and then I moved into DOM interaction and AJAX requests. Once I gained a firm understanding of the basics, I moved on to the powerful front-end framework React. To understand all that is available and to continue my front-end development, I have decided to review and compare the most widely used frameworks today.
React was built internally by Facebook and was first offered to the public in 2013. Since then, three offshoots have been developed to go along with its central library: React-devtools, ReactJS.NET and React Native. Together these tools allow developers to enhance the organization, performance and compatibility of the front end of websites. Below are the central concepts used in React which enable it to serve as a useful alternative to plain JavaScript.
Components - React uses interconnected components to provide exceptional organization and styling compatibility. To bring these components together data is stored and sent through props and state. These components combine behavior and templates rather than separating the two.
JSX - JSX allows developers to blend HTML with JavaScript. With JSX syntax, developers are able to seamlessly integrate HTML within a JavaScript function or class.
Virtual DOM - React creates a virtual DOM and alters the actual DOM only when there is a difference between the two. The process of applying these changes to the actual DOM is known as DOM reconciliation. By creating this virtual template, inefficient DOM-specific actions may be avoided, thus boosting performance.
Lifecycle - Each component experiences a lifecycle beginning when the page refreshes. Depending on whether the component is functional or class-based, specific built-in methods are provided to facilitate adding specific behavior.
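React's actual reconciliation algorithm is considerably more sophisticated, but the core idea of diffing a virtual tree and applying only the changes can be sketched in a few lines. The toy below is in Python (the function, patch format and sample trees are my own inventions for illustration, not React's implementation): dicts stand in for DOM trees, and only the subtrees that changed produce patch operations.

```python
def diff(old, new, path="root"):
    """Toy reconciliation: compare two nested dict 'trees' and return
    a minimal list of patch operations instead of rebuilding everything."""
    if old == new:
        return []  # identical subtrees: touch nothing
    if not isinstance(old, dict) or not isinstance(new, dict):
        return [("replace", path, new)]  # a leaf value changed
    patches = []
    for key in old.keys() | new.keys():
        if key not in new:
            patches.append(("remove", f"{path}.{key}"))
        elif key not in old:
            patches.append(("add", f"{path}.{key}", new[key]))
        else:
            patches.extend(diff(old[key], new[key], f"{path}.{key}"))
    return patches


old = {"div": {"h1": "Hello", "p": "world"}}
new = {"div": {"h1": "Hello", "p": "everyone"}}
# Only the changed <p> text produces a patch; the <h1> subtree is untouched.
patches = diff(old, new)
```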
Compared to other front-end frameworks, the learning curve for React is considered moderate, although difficulty does tend to escalate through the beginning phases of the learning process. Looking into specific front-end applications, React is especially useful when working on dynamic applications, single-page apps and native mobile apps. Leading technology companies using React include Facebook, Airbnb, Uber, Netflix, Twitter, PayPal, Stripe, Tumblr and Walmart, to name a few.
The picture below shows the general structure of a React component.
[Image: example React component]
Angular is a TypeScript-based framework developed by Google; its predecessor, AngularJS, was first released in October 2010. Angular, also known as Angular 2+ or Angular 4, serves as the backward-incompatible successor to AngularJS. Angular aims to provide similar performance and organizational benefits to React, but takes a different approach, following the Model-View-Controller architecture. Below are some of the distinguishing features of Angular.
Two-Way Data Binding - In two-way data binding, any change made within the view will trigger a change in the model, and any change made in the model will also change the view. Data therefore flows in both directions rather than in one direction. Used correctly, this feature can provide increased efficiency.
Model-View-Controller - Angular follows the powerful and well-known MVC pattern. The components and directives serve as controllers, the templates serve as views, and the model placement depends on the project. This structure allows users to organize and scale front-end applications well.
TypeScript - Components in Angular are written in TypeScript, a typed superset of JavaScript, paired with HTML templates, rather than the JavaScript/HTML blend found in JSX.
The learning curve for Angular is considered the steepest amongst front-end frameworks. Learning TypeScript's syntax and understanding how the MVC pattern is applied to create renderable components typically takes some time and is hard to do on one's own. Angular's structure is most useful when building cross-platform mobile apps and enterprise software. Some of the major technology companies currently using Angular are Google, Wix, weather.com and Forbes.
The code snippet below shows the syntax seen in Angular and how a component is built through directives and views rather than encapsulating everything within a component function or class.
[Image: example Angular component]
Vue was first released by an ex-Google employee and is considered a spin-off of Angular. Rather than being built by a team of developers, Vue was built by Evan You alone and is now maintained by a lean team of twelve developers who update and refine the framework. Vue is considered an "intuitive, fast and composable" tool that uses a Model-View-ViewModel architecture, and it is currently the fastest-growing front-end framework, outpacing both React and Angular. Below are some of the key concepts which make Vue such an effective tool.
Rendering - In React, when a parent component renders, all of its child components must render as well. Vue takes an alternative approach: by tracking a component's dependencies during a render, Vue is able to re-render components only when their data changes, thus removing unnecessary renders and improving performance.
Templating / DSL - Vue by default uses templates to render views rather than JSX. The syntax for creating templates in Vue is built on HTML rather than JavaScript. By focusing on HTML, Vue is able to integrate effectively with existing applications.
Virtual DOM - Similarly to React, Vue also uses a virtual DOM to enhance performance. The reconciliation process certainly has some differences, but the overall objective of increasing processing speed by removing the DOM’s inefficiencies is the same.
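Vue's real reactivity system is far more involved, but the dependency-tracking idea behind its rendering can be sketched in a few lines of Python (everything here, class name and API included, is my own toy invention for illustration): a render function is re-run only when a piece of data it actually read is later changed.

```python
class Reactive:
    """Toy sketch of dependency-tracked rendering: record which keys a
    render function reads, and re-run it only when one of them changes."""

    def __init__(self, data):
        self._data = dict(data)
        self._deps = {}      # key -> set of render fns that read it
        self._active = None  # render fn currently executing

    def get(self, key):
        if self._active is not None:
            self._deps.setdefault(key, set()).add(self._active)
        return self._data[key]

    def set(self, key, value):
        self._data[key] = value
        for render in self._deps.get(key, ()):
            render()         # only renders that depend on `key` re-run

    def watch(self, render):
        self._active = render
        render()             # first run records the dependencies
        self._active = None


state = Reactive({"title": "Hello", "count": 0})
renders = []
state.watch(lambda: renders.append(state.get("title")))
state.set("count", 1)     # no render read "count": nothing re-runs
state.set("title", "Hi")  # the watcher read "title": it re-renders
```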
The learning curve for Vue is considered to be lower than Angular's and even React's. The paradigm shift in Vue is not as pronounced as in React, and the syntax is easier to pick up than Angular's TypeScript. Vue is considered a lightweight framework with very clean code.
The code snippet below shows the basic structure of a component within Vue.
[Image: example Vue component]
To gain a sense of which framework is the most widely used, I included a plot of the number of downloads over the past two years.
[Image: plot of framework downloads over the past two years]
risher-randall-blog · 7 years ago
Raspberry Pi
In 2012 the Raspberry Pi Foundation offered its first cheap and portable single-board computer, with the hope of promoting education in schools and developing countries. Since then, the foundation has sold nearly 19 million computers, making the Raspberry Pi the third best-selling computer in history. This initially small-scale project has evolved into a tool used in practically all electronics classrooms today, enabling students to grow and learn in a way that was financially impossible before. Even experienced developers use these devices to test their knowledge and prototype products. As a budding engineer, I wanted to become involved in this incredible project and learn how such a tool has helped so many others grow.
[Image: Raspberry Pi board]
To start, I researched the specifications and requirements of each model. Throughout my reading, I came across many new words and phrases which described a computer in a way that was initially foreign to me. As I continued digging, I was able to parse out the new vocabulary and use my previous knowledge to gain a general understanding of the inner workings of a computer, and more specifically a Raspberry Pi. Below is a list of the primary specifications I came across.
Raspberry Pi 3+
SoC - Broadcom BCM2837B0
CPU - 1.4 GHz 64-bit quad-core ARM Cortex-A53 processor
GPU - Broadcom VideoCore IV @ 250 MHz
Memory - 1 GB
WiFi - 2.4 GHz, 802.11n (150 Mbit/s)
Bluetooth - 4.1 (24 Mbit/s)
Ethernet - 10/100 port
The above specs enable you to gain a basic understanding of what the device is capable of. It is important to familiarize yourself with these specs, as they may dictate the direction of your project. For example, the Raspberry Pi 3+ contains only 1 GB of memory; if you are hoping to store data on the device, you must be wary of this limitation or include supplemental storage.
After gaining a general understanding of the capabilities and stipulations of the Raspberry Pi, I decided to learn how to configure the environment for the board and understand how the device may be used. To gain this knowledge, I read through multiple articles and watched some basic YouTube videos. Below is a basic step-by-step list to configure your environment and prepare yourself to integrate software and hardware with your device. To complete this process you will need a Raspberry Pi, an HDMI monitor, a USB keyboard and mouse, an 8 GB MicroSD card and a power supply.
Install the Raspbian operating system onto your SD card
Properly connect your Raspberry Pi to the monitor and power supply
Set up Raspbian on your device
Configure your Raspberry Pi in any way you please
Explore the following link for more detailed instructions. 
https://lifehacker.com/the-always-up-to-date-guide-to-setting-up-your-raspberr-1781419054
Now that the environment is set up, you should be able to access the operating system. The interface should look similar to the image below. If you have completed these steps, you have set up your own computer. Thankfully, due to Raspberry Pi's compatible environment, it is extremely easy to upload programs and configure your device to complete desired behavior.
[Image: Raspbian desktop interface]
To inspire budding Raspberry Pi developers such as myself, I have included a few projects which I thought were especially interesting and useful to pursue.
A popular endeavor is to build your own robot. Using the Raspberry Pi, sensors may be integrated to create an autonomous and interactive robot.
[Image: Raspberry Pi robot]
For the Harry Potter fans, a real-life Daily Prophet is within your reach.
[Image: Daily Prophet project]
For those interested in video games, the Raspberry Pi can be used to create your own gaming console.
[Image: Raspberry Pi gaming console]
Hopefully this provides some inspiration! As I continue to learn more about the Raspberry Pi, I will share more information.
risher-randall-blog · 7 years ago
The Fourier Transform Theorem
This weekend I was bold enough to dive into the Fourier transform and try to understand why it is considered one of the most useful and powerful tools in the world today. As I researched this area, I in a way opened up a Pandora's box, going down countless rabbit holes without gaining a complete understanding of the topic. Still, I came away with a valuable big-picture understanding of data transmission by waves and of the Fourier transform itself. I will provide a brief overview of some of the key concepts I have learned so far.
First, I want to dive into the Fourier transform itself. The idea is fairly abstract and takes some time to wrap your head around, so please bear with me. Before we get into the details, it is important to understand the key principle: the Fourier transform takes a complex wave which oscillates over time and transforms it into a representation in the frequency domain. By completing this transformation, we are able to find the frequencies of all of the waves which are combined together to create the original, complex wave. Below is a great video which explains the basics of the Fourier transform.
[Embedded video: introduction to the Fourier transform]
As shown in the video above, the original complex wave, which is measured in the time domain, can be wrapped around a central point in the x-y plane. The amplitude of the wave is measured through the radial distance from the point of origin, and time is represented in the angular direction of the polar coordinates. When the wave is wrapped around a point of origin in this form, the tightness or looseness of the wrapping may be altered in order to provide useful information (to understand how this alteration occurs, I recommend watching the video above). As the wave is represented in this new form, its center of mass hovers around the origin. When the winding rate (the number of cycles per second at which the wave is wrapped around the origin) matches the frequency of one of the fundamental waves used to create the complex wave, the center of mass takes a serious deviation from the origin. By recording these deviations of the center of mass, we are able to find the frequencies of ALL of the waves which are combined together to create the complex, original wave. To complete this analysis accurately, the formulas below must be applied. Fourier was able to write these formulas using Euler's formula for complex numbers. They allow not only the frequency to be recovered from the original wave, but also the amplitude.
[Image: the Fourier transform formulas]
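The continuous transform has a discrete counterpart, the discrete Fourier transform (DFT), which is what computers actually evaluate. Below is a minimal sketch in Python of the direct O(N²) sum (real implementations use the Fast Fourier Transform instead): each output bin k is a winding frequency, and its magnitude measures how far the "center of mass" of the wrapped signal drifts from the origin.

```python
import cmath
import math

def dft(samples):
    """Discrete Fourier transform: X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N).

    The complex exponential is the "wrapping around the origin" described
    above, with k as the winding frequency; |X[k]| measures the drift of
    the wrapped signal's center of mass away from the origin.
    """
    N = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]


# A pure sine wave completing 5 cycles over the window: the magnitude
# spectrum should spike at k = 5 (mirrored at bin N - 5 for real input).
N = 64
wave = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
spectrum = [abs(x) for x in dft(wave)]
peak = spectrum.index(max(spectrum[:N // 2]))  # look at the first half only
```

The peak lands at bin 5 with magnitude N/2, recovering both the frequency and (after scaling) the amplitude of the component wave.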
This transform is extremely useful when transferring information through waves. For example, when audio is recorded, the Fourier transform can be used to break down the complex audio wave into multiple sub-waves of different frequencies and amplitudes. As these sub-waves are far simpler, sound can be produced to represent each of them, and by combining them all together the original audio may be replicated, thus creating a synthesizer. Furthermore, certain waves may be excluded when bringing the sub-waves back together, thereby filtering out any sound one prefers to leave out. This may be used to enhance sound or save space.
Audio serves as only one example of where the Fourier transform may be applied. Others include MRIs, X-rays, differential equations and signal processing.
risher-randall-blog · 7 years ago
Wellness and Rails
Last week, Jarrett and I utilized our newfound understanding of Rails to build our first full-stack web application. We decided to build a wellness application that tracks sleep, nutrition and exercise and then provides users with a wellness score based on the data provided in each category. Our application certainly still has some improvements to be made before it reaches the point of a viable consumer product, but we were able to establish a basic framework that we can build on in the future. Furthermore, we learned how to use the ever-so-useful Rails framework and gained a solid understanding of RESTful routing and the web in general.
Looking back on our week-long journey, there were multiple learning moments. First, I personally learned how to balance dreams with reality. On Monday, I vividly remember talking about the possibilities of connecting our application to credit card bills and to sleep and exercise applications in order to auto-populate data for our users, making the user experience much more convenient. I also explored the possibility of adding an initial questionnaire to the sign-up section of the application. Lastly, I was hoping to utilize an extensive food API to enhance the nutrition section of the wellness plan. All of these were exciting ideas that I am still not completely abandoning, but unfortunately none of them made it into my first iteration. Reflecting on the experience, I have learned two key lessons. First, there is an art to knowing how far you can push the boundaries when building a product under a time constraint (which will pretty much always be the case). One must know how to make a strong plan of attack while knowing where to draw the line when hopes and dreams are taking a step or two too far. Second, I learned that data and compatibility with other applications are not easy to come by. When working strictly with open-source tools and information, one must be wary of these potential roadblocks and know how to navigate around them when they come up. If one is not careful, one may spend days looking for data and tools that simply are nonexistent. These two lessons stuck out to me during my second week-long project, but they were certainly accompanied by many syntactical, code-related and collaborative lessons as well.
That aside, below is a link to the repository for my wellness application. If the time comes, I will be able to come back to the work and build out some of the dreams which time prevented me from addressing during my first iteration.
Note - Look into the Day_5 branch for the finalized code.
https://github.com/randallr18/final_project_2/tree/day_5
risher-randall-blog · 7 years ago
Search Engines
Search engines are what transform the web from a scattered, confusing communication tool into a seamless, interconnected platform which comfortably hosts billions of users each day. Alphabet Inc., currently the third-largest company in the world, hosts the most used search engine, Google, which receives around 3.5 billion search queries each day. Other popular search engines include Bing, Yahoo and Baidu. Each of these useful tools leverages similar algorithms and structures in order to make the internet more navigable. As I have recently been focusing on learning web development, I thought it would be worthwhile to understand how these powerful search engines operate.
After doing some research, I found that search engine tasks can be split up into three separate categories: 1. crawling, 2. indexing and 3. querying. At a very high level, crawling is using tools to retrieve data from URLs on the web, indexing is taking the data received from the crawl and storing it in a database, and querying is taking a user's search input and returning a list of the most relevant URLs. Nearly all search engines use this common structure. Now let's take a deeper dive into each of these categories to see how they all come together.
Crawling
As mentioned above, crawling is responsible for retrieving the necessary data from the websites on the web. Before we get into how crawling is implemented, we must establish that a crawl is performed whenever the developer deems necessary, rather than for each individual search. Crawling takes a significant amount of time and bandwidth to complete, and if the process were repeated for each search, the wait time would be unreasonable and the costs would become steep. Now that we understand when a crawl occurs and why, let's get into how exactly the process is implemented.
In order to set up a proper crawler, multiple libraries should be leveraged to simplify the process. The code below provides a list of gems that were downloaded for a simple Ruby-based search engine. As we can see, we have classes called Spider, Location, Word and Page. The most important and most extensive class for the crawl is the Spider class, which is responsible for taking a set of default seed URLs and inspecting each page. When a page is inspected, the methods within the Word, Location and Page classes are used to retrieve the necessary data. As the URLs from the default list are inspected, each time an additional URL or link is found, the process method within the Spider class stores the link in an additional list. Once the initial list is completely parsed, the secondary list is analyzed, and the process keeps repeating itself until there are no more URLs to analyze. By continuously analyzing any links found, the Spider's process method should be able to provide data which serves as an appropriate representation of the web, as long as the default list is selected properly. Our crawl is complete, so the search engine must now index the data.
[Code screenshots: the project Gemfile and the Spider, Location, Word and Page classes]
Indexing
The primary role of indexing is taking the data gathered through the crawl and storing it in an efficient and manageable manner. There are many ways to structure your database, but a frequently used technique is the inverted index. To understand the inverted index, let's assume all the data was stored in one enormous table such as the one below. Say a user wanted to search for documents containing the word cow; the program would have to go through each document. As a search engine typically holds millions of documents, this would take precious time that the user would surely be unwilling to spare.
[Table: documents stored row by row, each listing the words it contains]
To get around this issue, developers have harnessed the power of relational databases and utilized the inverted index. Instead of looking at each document, how about setting up the database by word and finding which documents are associated with that word, as seen below. Now the program can go straight to a specific word's entry rather than go through the entire table, saving valuable time.
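The idea can be sketched with a minimal in-memory inverted index. This is an illustration only, with class and method names of my own choosing; a real engine would keep this mapping in a relational database rather than a Ruby hash.

```ruby
require "set"

class InvertedIndex
  def initialize
    # Each word maps to the set of document ids that contain it.
    @index = Hash.new { |h, k| h[k] = Set.new }
  end

  # Tokenize a document and record which doc id each word appears in.
  def add_document(doc_id, text)
    text.downcase.scan(/[a-z]+/).each { |word| @index[word] << doc_id }
  end

  # Lookup is a single hash access instead of a scan over every document.
  def documents_for(word)
    @index[word.downcase].to_a
  end
end
```

With documents 1 ("The cow jumped over the moon") and 2 ("No animals here") added, documents_for("cow") immediately returns just document 1, no matter how many other documents exist.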
[Table: an inverted index mapping each word to the documents that contain it]
Databases will range in complexity and size for each search engine. The main point to understand here is that the database enables the search engine to save valuable time and bandwidth by storing the data in a tailored manner. Once the data is properly stored, it is time for the user to query it based on his or her specific request.
Querying 
Querying takes a user's input and uses it to retrieve the most relevant URLs. To find the right URLs, a ranking algorithm must be implemented by the developers. These algorithms can range significantly in scope and complexity, but for this example let's assume the search engine ranks by word frequency, location and distance. The code below is all that is needed to implement a ranking system based on these criteria.
In the Digger class, the search method is the only method that needs to be called; this is where the user inputs their search request. Once the search method is invoked, it parses the words into a usable form and then calls the rank method. The rank method in turn calls the frequency_ranking, location_ranking and distance_ranking methods. Each of these methods queries the database and provides a ranking based on its specific criterion, returning the URLs with a score. These rankings and scores are then merged in the rank method, and finally a list of the most relevant URLs is provided!
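The flow above can be sketched as follows. The method names (search, rank, frequency_ranking, location_ranking) follow the post, but the scoring formulas are illustrative stand-ins, the in-memory index structure is an assumption, and distance_ranking is omitted for brevity; the real project queries the database instead.

```ruby
class Digger
  # index is assumed to map word => { url => [positions of the word on the page] }.
  def initialize(index)
    @index = index
  end

  # Entry point: parse the user's query into words, then rank the matches.
  def search(query)
    words = query.downcase.scan(/[a-z]+/)
    rank(words)
  end

  private

  # Merge the per-criterion scores and sort URLs from best to worst.
  def rank(words)
    scores = Hash.new(0.0)
    [frequency_ranking(words), location_ranking(words)].each do |ranking|
      ranking.each { |url, score| scores[url] += score }
    end
    scores.sort_by { |_url, score| -score }.map(&:first)
  end

  # More occurrences of the query words => higher score.
  def frequency_ranking(words)
    scores = Hash.new(0.0)
    words.each do |word|
      (@index[word] || {}).each { |url, positions| scores[url] += positions.size }
    end
    scores
  end

  # Words appearing earlier on the page => higher score.
  def location_ranking(words)
    scores = Hash.new(0.0)
    words.each do |word|
      (@index[word] || {}).each { |url, positions| scores[url] += 1.0 / (1 + positions.min) }
    end
    scores
  end
end
```

A page that mentions the query words often and early thus rises to the top of the merged list.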
[Code screenshots: the Digger class and its ranking methods]
This is the basic structure of how search engines operate. As this blog has already been fairly extensive, I am going to end it here. A follow-up blog on understanding word meanings, machine learning and PageRank may come soon!
The two sites below significantly helped me understand these concepts, and code snippets were included from their projects.
https://www.youtube.com/watch?v=LVV_93mBfSU
https://blog.saush.com/2009/03/17/write-an-internet-search-engine-with-200-lines-of-ruby-code/
risher-randall-blog · 7 years ago
FIFA World Cup
Last week I created my first application. In honor of the FIFA World Cup, Mary Kate and I decided to build an application that can update users on different teams, stadiums and tournament statistics. This project was a big step for Mary Kate and me, as we had to apply our knowledge in such an open-ended manner. We were able to take multiple different concepts and bring them together to create a full-fledged application.
To make this happen, we utilized multiple Ruby gems such as Active Record, Sinatra, SQLite, tty-prompt and Pry. We linked our database to our models using Active Record, and tty-prompt allowed us to provide a user-friendly interface. Thanks to the incredible open-source nature of programming, we were able to provide up-to-date information through multiple World Cup APIs. It was certainly not easy to bring all of these parts together, but through determination and collaboration we were able to create a great final product! Below is a GitHub link to the repository with all of the files necessary to run the application.
As the tournament is far from over, there is still time to build off what we have done. I recently added my brother as a collaborator. If any additions are made, I will make sure to provide updates!
https://github.com/randallr18/World_Cup
risher-randall-blog · 7 years ago
Understanding the Greatness of GitHub
Over the past few weeks, I have been working on GitHub practically every day, and to be quite honest I was not completely sure how the platform truly operates. I understood how to complete the commands required for my labs but knew no more. I also knew that GitHub was just purchased for over 7 billion dollars, has over 27 million users, contains over 50 million repositories and serves as the largest host of source code in the world! I felt that as a budding programmer I must know how to properly navigate this powerful platform, so I decided to write my first technical blog on GitHub.
To start off, I want to separate Git and Hub and explain the difference. Git serves as an open-source version control system which allows programmers to update a project seamlessly and permits teams to effectively collaborate on projects. We will get into more of the details of these two services soon. The Hub (in GitHub) serves as the social network established on the GitHub website. On the Hub, users can create a repository that is either private or public. Users can also fork repositories provided by other users. When a user forks a repository, they are essentially creating a copy of the code and files, so that they can build off that code or use it in any way they please. Then users can make a pull request. When a pull request is made, a user is requesting that the owner of the repository look at the updated code (which began with a fork) and consider updating his or her repository with the changes provided. If a pull request is accepted and used, the user who made the changes receives credit. Lastly, the Hub serves as a way for programmers to connect and look at each other's work. Not bad!
Now that we understand the difference between Git and the Hub and the benefits that come with both, let's dive into the specifics of Git. Once we understand Git, we will see how Git and the Hub are inextricably connected. As mentioned above, Git is used to update a project seamlessly. To accomplish this, Git creates three separate areas of a repository: the working tree, the staging area and the history. The working tree serves as the location where we complete any necessary work, the staging area (also known as the index) serves as the place where commits are prepared, and the history serves as a record of all the commits that have been made. The main benefit this system provides is that the team receives a snapshot of the project as each commit is made; if any changes need to be made, any one of the past commits can be used. To fully benefit from this structure, it is essential to know how to seamlessly navigate through each area. At the bottom of this post you can find some useful commands for working with files in each area.
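A short session can show a change passing through all three areas. This is a hypothetical walk-through that assumes git is installed; it builds a throwaway repository in a temp directory, and the email and name used for git config are placeholders.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "draft" > notes.txt    # the edit happens in the working tree
git add notes.txt           # the change moves into the staging area (index)
git commit -qm "add notes"  # the staged snapshot is recorded in the history
git log --oneline           # the history now lists that commit
```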
[Image: table of useful Git commands for the working tree, staging area and history]
The second area I wanted to explore is Git's ability to let programmers collaborate effectively through branching and merging. As projects tend to require extensive work, we frequently see multiple programmers collaborating. To create an effective workflow, a team should use the branching and merging services provided through Git. To understand how these tools work, we must first know that when a repository is created it starts on the master branch by default. This can be verified by typing git branch in the command line. For the sake of my explanation, let's assume that another programmer is working on the same file. Once that programmer makes changes to his initial fork, he will send his changes to the owner of the repository, hoping the owner will accept them. If the owner, who is working on the master branch, accepts the changes, he can create another branch of the repository using git branch (branch name). Once this branch is created, he can add the new files and then merge the branches. When he merges the branches, updates from both branches are saved in a new commit. Now the master branch has the changes he made and the changes made by the eager, helpful programmer! Merges can be made in many different ways; this example just describes how the general process works. For example, the owner of a repository can merge changes through a pull request on GitHub.
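The branch-and-merge cycle above can be sketched as a single session. Again this is a hypothetical demo that assumes git is installed; it works in a throwaway temp directory and avoids naming the default branch, since newer git versions may call it main instead of master.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "hello" > readme.txt
git add readme.txt
git commit -qm "initial commit"     # first commit on the default branch

git checkout -q -b feature          # create and switch to a new branch
echo "feature work" >> readme.txt
git commit -qam "add feature work"  # commit the change on the feature branch

git checkout -q -                   # switch back to the default branch
git merge -q feature                # merge the feature changes in
git log --oneline                   # the history now shows both commits
```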
[Diagram: branching and merging]
So with these functionalities, programmers are able to collaborate and work in a far more effective way! The network of 27 million programmers on GitHub is entirely built upon this process. This overview only scratches the surface of the power and capabilities Git provides, but hopefully it offers some guidance on how the process works and why it is so powerful. If this all still seems confusing and nebulous, don't feel ashamed; this took me quite some time to understand. Below I have provided some useful links and articles to clear up any confusion.
Core Concepts  - https://www.youtube.com/watch?v=uR6G2v_WsRA
Branching and Merging - https://www.youtube.com/watch?v=FyAAIHHClqI&t=73s
General Information - https://www.howtogeek.com/180167/htg-explains-what-is-github-and-what-do-geeks-use-it-for/
risher-randall-blog · 7 years ago
Hello World!
My name is Risher Randall, and this will serve as my first blog ever. To start off, I wanted to provide some background information on who I am and how I came to programming. I was born and raised in Houston, Texas and attended St. John’s School for all four years of high school. After four years of actively participating in three sports and attempting to keep up in the classroom, I decided to pursue a degree in engineering at Washington and Lee University.
While completing my college degree, I was constantly thinking about where my future lay. I held internships in multiple different fields but could never really find my true passion. Finally, throughout my senior year, and after talking to multiple graduates in different fields, I concluded that a career centered around technology was for me. I was drawn to the innovative and creative thinking required and the impactful products and services provided by technology companies. As I have always been strong in math and science, I decided the best way to enter the industry was as a software engineer. Over the past 3-4 weeks I have been introduced to object-oriented programming, HTML, CSS, SQL and database management. I have been fascinated by and passionate about learning all of these skills, and the progress I have made so far has only made me want to learn more! I am starting this blog to chronicle my journey to becoming a software developer. I look forward to updating whoever may be interested!