yaritzarodriguez-nyc
Text
Stripping sensitive info from GitHub.
Ever had auth keys or other sensitive info accidentally pushed to a public project's repo? It happens, and when it does you should move swiftly to correct it, as the keys and their related accounts may be compromised. Left uncorrected, this can mean a big fat bill at the key owner's expense, or a disruption in your app's functionality due to unauthorized excessive requests. Here is a step-by-step rundown on how to erase all commits associated with files containing sensitive data.
Note: Before continuing through the steps below, it is advisable to save a copy of the compromised repo on your local machine in case of issues with stripping. Make sure to give it a discreet name like Old_app_name to distinguish this version from the future stripped app.
Now we can proceed:
1. Identify which files contain auth keys and the sensitive data to be corrected.
2. In the app's root directory run this multiline command:
git filter-branch --force --index-filter \
'git rm --cached --ignore-unmatch path/to/your_file' \
--prune-empty --tag-name-filter cat -- --all
Note: Change "path/to/your_file" to the actual path of the file to be stripped. Avoid apostrophes in the path, since the inner command is wrapped in single quotes. The trailing backslashes continue the command across multiple lines; enter it exactly as shown, or type it on a single line without the backslashes.
3. Add the corresponding file to the .gitignore file to protect it from being accidentally pushed again.
echo "path/to/your_file" >> .gitignore
4. Then, add and commit the .gitignore file.
git add .gitignore
git commit -m "Add file_you_are_stripping to .gitignore"
5. Repeat steps 2 through 4 for each file containing sensitive data.
6. Once that is done and you are certain all corresponding data has been removed, overwrite the repo with:
git push origin --force --all
7. Finalize the process with
git push origin --force --tags
At this point, the repo on GitHub should be free of sensitive info and ready to share. Your keys and sensitive data now exist only locally and can be referenced via ENV variables by the rest of your app.
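For example, once a key lives only in your local environment, the app can read it like this. The variable name MY_SERVICE_API_KEY is made up for illustration:

```ruby
# Illustrative only: MY_SERVICE_API_KEY is a hypothetical variable name.
# In real use it would be exported by your shell or a local, git-ignored file;
# the fallback below just lets this snippet run on its own.
ENV["MY_SERVICE_API_KEY"] ||= "demo-key-for-this-example"

# Anywhere in the app, read the key from the environment instead of a
# committed source file:
api_key = ENV.fetch("MY_SERVICE_API_KEY")
```

Because the key never appears in a tracked file, there is nothing sensitive to strip from the repo in the first place.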
Since stripping is specific to the repo these commands have been applied to, it does not affect any pre-existing forks of the project. Hypothetically speaking, this means that if Mary or Joe forked the project before the sensitive data was stripped, Mary's and Joe's forks still have the keys in them. Also, even if the project has never been forked, you should consider the keys compromised just for having been made publicly available. For this reason, and to be safe, it is advised that you request new keys from the corresponding issuer to protect your accounts.
In my case, I opted to create a new, stripped repo for our project. I asked collaborators to delete their old forks/repos from GitHub once they have confirmed that my new stripped repo works. Everyone will also get their own keys to work with.
Demystifying "the cloud."
The following is an adaptation of “How does it GET there?”, a presentation given by Rebecca Poulson and me at the Flatiron Presents meetup this summer.
We all use the Internet every day; we even work on the Internet. But it is definitely not a magic black box. It's something more like this:
The Internet is a network that facilitates the transmission of information amongst billions of devices worldwide. It's a network of networks, but what exactly is it made of, and how does it transmit that information?
In order to understand these things, we decided to break down a GET request, the simplest of HTTP requests, and follow it step by step: from the moment an individual client opens the browser on his/her computer, to the browser requesting a page from a server, to the server responding with the requested page.
YOUR COMPUTER:
We’ll start by taking a look at how your request travels through the client side protocols on your home computer:
The stack above represents a few of the different protocols at work inside your client computer when you click on a link to make an HTTP request.
We start out in the application layer. This is the first layer of abstraction, and it includes the browser and DNS. This is where the application code runs. Data from here is passed on to the next layer.
The TCP layer breaks the data down into packets: smaller, more manageable pieces of data. It attaches a header to each packet, assigning it a destination port. This is how we organize all the messages being sent out over the Internet. Ports are a way of multiplexing, or specifying different recipients at the same location; they allow your computer to use more than one network service at the same time.
Similarly to TCP, the Internet Protocol layer packages data and tells it where to go, this time by means of an IP address. Packets from the IP layer are called datagrams, and they contain two pieces:
the payload which is your information,
and the header which determines the IP address your information will go to.
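The layering above can be sketched in a toy Ruby example. The packet size, IP address, and hash fields here are invented for illustration; real TCP/IP headers are binary and far richer:

```ruby
# Application layer: the HTTP GET message itself is just text.
http_message = "GET /cats.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

# "TCP": break the message into smaller, more manageable packets
# (16 bytes per packet is an arbitrary toy size).
segments = http_message.chars.each_slice(16).map(&:join)

# "IP": wrap each payload with a header naming its destination.
datagrams = segments.map do |payload|
  { header:  { dest_ip: "203.0.113.7", dest_port: 80 }, # made-up address
    payload: payload }                                  # your information
end

# On arrival, the payloads are reassembled into the original message.
reassembled = datagrams.map { |d| d[:payload] }.join
```

The point of the sketch is the division of labor: the application layer only ever sees the whole text message, while the lower layers see headers and payloads.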
The main role of the physical layer is to translate binary packet data into network signals. Those signals may be electrical, radio waves, or light, as in the case of fiber-optic networks. That is mind-blowing awesomeness right there!
And, all this is happening right in your very own computer :)
YOUR HOME NETWORK:
As we leave your personal computer, we connect to the home network through the following set of hardware: the switch, the firewall, the router, and the modem. All four can exist as a single box.
(That Linksys might look familiar to some of you.)
A switch creates a LAN, or local area network. It acts as a controller, allowing its networked computers to talk to each other.
The firewall acts as a decision engine, deciding what data traffic to allow in and out of the LAN.
A router (“a hop”) serves as a dispatcher. It chooses the best path for the information to travel using a dynamic routing protocol, and it makes sure that information doesn't go where it isn't needed.
The modem (MODulator-DEModulator) is responsible for creating the signal that actually gets transmitted and then decoded to recreate the information at its destination. It takes your digital data and turns it into a modulated signal.
THE INTERNET SERVICE PROVIDER:
The modulated signal goes to an ISP (Internet service provider). You contract with an ISP to access the Internet: while your home network isn't directly connected to the Internet, your ISP owns a mesh network of routers that are. This redundancy of routers provides multiple paths for the information to travel. If there's a problem with one router, the packet can navigate around it.
FINALLY, THE INTERNET!
Now, at this point we are on the Internet and we're able to answer our initial question: the Internet is just a series of routers owned by different entities, connecting different LANs to each other.
However, we still don't have our information. Here, data packets containing our request go from router to router, each time getting closer to their destination.
Finally, we hop into the network of the company whose information we're requesting. We then go through the company's firewall and get routed onto the network that holds the server containing the information we requested.
Then our data packets go through a switch and finally arrive at the individual server containing the information we want.
Within that server computer, we navigate the very same protocols that we talked about on the client side, this time proceeding from the bottom up.
We are routed by hardware up into the IP layer, which guides us via IP address. Then our data is directed to the correct port and into the server's application layer. Hooray! We're here, we made it!
Of course, we still don't have this information on our home computer; we've merely located where it is on the server. To get back from here, the process is pretty much the same but in reverse.
We're guided out of the server computer and through the same series of switches and routers back onto the Internet, and eventually to the network of our home computer.
We then navigate our protocol stack in reverse until we finally end up at the application layer of our home machine, our browser, where we can see the information we requested.
And what exactly was it that we were requesting? Well, it looks like it was this image, which is a much, much simpler explanation of what the Internet is. It came out of Sir Tim Berners-Lee's head, and it's full of cats.
Link
Watching Heat Seek NYC go from Flatiron School project to awesome side project to NYC Big Apps winner has been really exciting from where we’re sitting.

Here’s their deal: New York City law says apartment temperatures must be kept at or above a certain level between October and May….
Pushing frontiers with Angular.js
Wanting to dive deeper into front-end work, I have decided to learn another framework this week: Angular.js. Since there is no better way to learn a new tool than to build something with it, I have challenged myself to make a simple-looking single-page app that will allow me to share my credentials on demand at job fairs, etc.
For the next few days I will be following the Angular.js track on Codecademy and sharing snippets of my project code to explain the fundamental concepts of the framework.
Stay tuned ...
Post-mortem: What I learned from getting hacked.
A little preface:
The Open Web Application Security Project (OWASP) is a non-profit international organization that provides guidelines and educational resources that are open to the public and meant to lead developers and companies into producing secure code.
The OWASP Top 10 is a list of the most common and important vulnerabilities found in web apps. After hearing our instructors make the case for the imperative use of safe params when building forms in our Rails apps, it is no surprise that the infamous SQL injection (A1-Injection) tops the list! The remainder of the list, as featured below, was taken from the OWASP site. This article will focus on two of the 10 vulnerabilities.
Fig.1 OWASP TOP 10
A1-Injection: Injection flaws, such as SQL, OS, and LDAP injection occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization.
A2-Broken Authentication and Session Management: Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities.
A3-Cross-Site Scripting (XSS): XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation or escaping. XSS allows attackers to execute scripts in the victim’s browser which can hijack user sessions, deface web sites, or redirect the user to malicious sites.
A4-Insecure Direct Object References: A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data.
A5-Security Misconfiguration: Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. Secure settings should be defined, implemented, and maintained, as defaults are often insecure. Additionally, software should be kept up to date.
A6-Sensitive Data Exposure: Many web applications do not properly protect sensitive data, such as credit cards, tax IDs, and authentication credentials. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data deserves extra protection such as encryption at rest or in transit, as well as special precautions when exchanged with the browser.
A7-Missing Function Level Access Control: Most web applications verify function level access rights before making that functionality visible in the UI. However, applications need to perform the same access control checks on the server when each function is accessed. If requests are not verified, attackers will be able to forge requests in order to access functionality without proper authorization.
A8-Cross-Site Request Forgery (CSRF): A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim’s browser to generate requests the vulnerable application thinks are legitimate requests from the victim.
A9-Using Components with Known Vulnerabilities: Components, such as libraries, frameworks, and other software modules, almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defenses and enable a range of possible attacks and impacts.
A10-Unvalidated Redirects and Forwards: Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages.
Speaking beyond SQL injections now: the MTA alerts web app that my teammates and I built was hacked a couple of days ago, within 3 minutes, by an attacker. Two vulnerabilities were exploited: "A2-Broken Authentication and Session Management" and "A7-Missing Function Level Access Control."
First, our A2 problem: by simply modifying the recipient id in the URL, the unauthenticated attacker was able to view and modify other users' settings. That's a tad embarrassing, alright, but rather than crawling under a rock in shame, let's pick at the code and check out some patches implemented by my teammate, Yonatan Miller.
In the GitHub code snippets below, the problem code is highlighted in red and its corresponding patch in green. For now we will be referring to the actual commit logs on GitHub, since for some reason Tumblr is not allowing me to post any more screenshots to this blog. Bear with me, and my apologies in advance.
This link shows a modification to the application and pages controllers. In the application controller, a before_filter callback is added to require the user to be authenticated. The recipient controller (second-to-last file) has been modified to redirect the user to the home page if current_user is not authorized to view a particular recipient. (We will worry later about adding an alert message letting them know they are not authorized.)
https://github.com/Sammykhaleel/mta/commit/316580009c429b80c13d3647499e45365d38c3ce
The link below shows the parts of the recipient and alert controllers that have been modified so that alerts and recipient info can only be accessed if they are associated with current_user. The current_user method is built into our authentication gem, Devise, and it refers to the user currently in session. The modification helps prevent a brute-force attack on the URL.
https://github.com/Sammykhaleel/mta/commit/c2037f8b53882170c27633b65ae1da3c037a3c3b
Our A7 exploit was that, as a logged-in user, the attacker was able to modify other users' alerts in the database by entering an alert id number in the URL. That was another big no-no, which is now in part resolved by the code previously described. One more thing was added to the patch: a rescue of ActiveRecord::RecordNotFound that redirects to root_path when an illegal attempt to gain access is encountered. Here is the code. Focus on the green.
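Outside of Rails, the shape of that patch can be sketched in plain Ruby. Everything here (the error class, the data, and the method names) is a stand-in for the real controller code linked above, not the actual commit:

```ruby
# Plain-Ruby stand-ins for the Rails pieces (hypothetical names).
class RecordNotFound < StandardError; end

ALERTS = { 1 => { user_id: 10, text: "L train delayed" } }

def find_alert(alert_id, current_user_id)
  alert = ALERTS.fetch(alert_id) { raise RecordNotFound }
  # Function-level access control: the record must belong to the current user.
  raise RecordNotFound unless alert[:user_id] == current_user_id
  alert
rescue RecordNotFound
  :redirect_to_root  # in the real app: redirect_to root_path
end
```

Here find_alert(1, 10) returns the alert, while find_alert(1, 99) (someone else's alert) and find_alert(2, 10) (a nonexistent id) both fall into the rescue and "redirect" instead of leaking or raising.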
https://github.com/Sammykhaleel/mta/commit/6a7c02d1241c8ed7085fb4a30b4412dab23e7539
Despite the security shortcomings (on what is thankfully a barely used app), this experience has been good in directing efforts to explore general app security. In the process, I got to learn how to use ActiveRecord callbacks and rescues of ActiveRecord errors as tools for implementing patches. As a team, we also know to keep the security problems we discussed in mind for future projects. Lastly, it is great to be introduced to OWASP, since it is a valuable resource for the developer community. I expect to become even more familiar with the guidelines they provide as I work towards becoming a more cognizant developer. I suggest you all check them out too!
I had a problem… How to showcase my code on GitHub without giving away my authentication parameters, like my email account credentials or API keys? Our instructor Steven Nunez told us to use Figaro. And then my investigation began…
Figaro is….
No wait…Figaro is a Ruby gem that parses a…
Testing your password's vulnerability to a brute force attack
A brute-force attack, also known as an exhaustive key search, is a method of cracking encoded data and passwords by testing every possible configuration of characters until the code is cracked. The number of possible configurations for a password of known length can be calculated using Shannon entropy, which, in layman's terms, measures unpredictability and chaos, as defined by mathematician Claude Elwood Shannon.
The theory (whose mathematical complexity will not be fully explained here) was used in the script below to calculate the time it would take to crack a password (passed as an argument, indicated by "ARGV" in the code) through a brute-force attack alone.
[script screenshot]
Above, password is your password string, and word is an array of the characters in your password; letters is an empty hash.
The word array gets iterated through to calculate the frequency of each character in your password. This info is contained in the letters hash which now has keys that refer to characters, and values which indicate the frequency of each character.
What follows is the applied Shannon entropy algorithm.
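Since the script itself was shared as a screenshot, here is a hedged reconstruction of the logic described above. The variable names follow the post; the fallback password and the guesses-per-second rate are my own assumptions, not figures from the original:

```ruby
# Reconstruction sketch. The password comes in as a command-line argument
# (ARGV), with a fallback so the example runs on its own.
password = ARGV[0] || "hunter2"
word     = password.chars   # array of the password's characters
letters  = Hash.new(0)      # empty hash for character frequencies

word.each { |ch| letters[ch] += 1 }

# Shannon entropy in bits per character: -sum(p * log2(p)).
bits_per_char = letters.values.sum do |count|
  p = count.to_f / word.length
  -p * Math.log2(p)
end

total_bits = bits_per_char * word.length
guesses    = 2**total_bits
seconds    = guesses / 1_000_000_000.0  # assume 10^9 guesses per second

puts format("%.1f bits of entropy; ~%.2e seconds to exhaust", total_bits, seconds)
```

For the fallback "hunter2", every one of the 7 characters is unique, so the entropy works out to log2(7) bits per character.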
This script could be implemented as an evaluator in an app's signup and password-reset features, as a way of alerting users to their chosen password's strength. Of course, the script only addresses password hacking via brute force, and does not account for other hacking methods like dictionary attacks, which make passwords that include commonly used English words more susceptible to cracking.
WebSockets!
Yesterday, while reviewing a particular concept to develop during project month, Blake (our teacher) suggested using WebSockets as a way of getting a sleeping laptop to wake up at a programmed time. Well, that sounded like a really neat trick! Though my team's approach to the project no longer requires this feature, WebSockets surely sparked my interest.
So what's a WebSocket, anyway?
WebSockets allow for bi-directional communication between a browser and a server. A WebSocket is a persistent connection between the browser and the server: once the connection between the two is made, either side can initiate the transmission of data to the other. Rather than replying to browser requests with empty responses when there is no new info for the client, the server waits until it actually has a response to push to the browser. The exchange appears instantaneous. Very fittingly, WebSockets are used for apps like instant messengers and stock tickers.
It is also worth noting that because WebSockets use a single TCP (Transmission Control Protocol) connection as described above, communication between clients and servers is faster, as bandwidth requirements for the interaction are minimized.
WebSockets are currently supported by the latest Chrome, Firefox, IE, and Opera browsers.
For more information on actual code implementation (Javascript), see: https://developer.mozilla.org/en-US/docs/WebSockets/Writing_WebSocket_client_applications
Photo

Thunderous golden sunset over the Flatiron School's Brooklyn Campus.
GET and POST requests.
GET is an HTTP request used to retrieve information already stored in our application or database. POST requests, on the contrary, are used to modify the database (add to it) via the submission of a form or URL. Best practices for using GET and POST requests should consider the following, as summarized in http://blog.teamtreehouse.com/the-definitive-guide-to-get-vs-post :
"Simply put, because GET requests are more useable:
GET requests can be cached
GET requests can remain in the browser history
GET requests can be bookmarked
GET requests can be distributed & shared
GET requests can be hacked (ask Jakob!)
Note: If you need the best of both worlds, an unsafe action can be made safe by making it idempotent, so that it makes no difference how many times it’s requested. You do this by giving the request a unique ID and using server-side validation to ensure that a request with that ID hasn’t already been processed. In fact, if you’re in search of excellence, all unsafe actions should be made idempotent as nothing can stop users from ignoring warnings.
GET VS POST EXTENDED
Rule #2: Use POST when dealing with sensitive data.
Because query strings are transferred openly in GET requests, we have to consider our security and that of our users when dealing with sensitive data like passwords or credit card numbers:
Our users… because they may not realise that they are sharing sensitive data when they share a URL or that it can be viewed in the browser history by other people using the same computer.*
Ourselves… because we may be breaking laws by unexpectedly storing data that we’re not allowed to (like credit card CV2s) in log files.
* This doesn’t apply when working within an AJAX environment.
Rule #3: Use POST when dealing with long requests.
Although the RFC doesn’t lay down any length-related guidelines, Internet Explorer – with its insistence on finding ways to make things difficult for us – enforces a maximum URL length of 2,048 characters.
Rule #4: Use GET in AJAX environments.
When using XMLHttpRequest, browsers implement POST as a two-step process (sending the headers first and then the data). This means that GET requests are more responsive – something you need in AJAX environments.”
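The idempotency trick from the note above (tagging each unsafe request with a unique ID and refusing replays) can be sketched as a toy in-memory Ruby handler. All the names and the order payloads here are invented:

```ruby
require "set"

# Toy server state (in a real app this would live in a database).
PROCESSED_IDS = Set.new
ORDERS        = []

# A POST handler that is safe to retry: replays of an already-seen
# request ID change nothing, so it makes no difference how many
# times the request is sent.
def handle_post(request_id, payload)
  return :duplicate if PROCESSED_IDS.include?(request_id)
  PROCESSED_IDS << request_id
  ORDERS << payload
  :created
end

handle_post("req-001", "1x coffee")  # first attempt: order recorded
handle_post("req-001", "1x coffee")  # browser retry: safely ignored
```

After both calls, ORDERS still contains a single entry, which is exactly the property that makes the unsafe action safe to repeat.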
MVC in Sinatra
MVC, or Models-Views-Controllers, is a prevalent architectural design pattern in web apps and a dominant organizational model which is not to be resisted when it comes to Rails. The core of MVC lies in its separation of concerns between the business logic of a site and the information that is actually displayed to the user. The following explanation pertains to MVC as conventionally implemented in Sinatra.
M :
The models are .rb files that live in the application/models directory of the app.
A model is linked to its controller's information via params. Models communicate with the database (db), typically by inheriting from an ORM class such as ActiveRecord::Base. A model functions as a Ruby class that prepares information to be received by the db, and it is the place to execute validations prior to adding anything to a table.
V:
Views are .erb files in the application/views directory. The view is a user interface that can be compared to a filter: it displays selected db attribute information based on the logic imparted by the controller and the model. The view receives info from the controller via instance variables. It also communicates info captured via user interaction, in the form of params, which are passed to the controller for handling. Params can be captured from the outside world (the user) via a URL or a form. Views contain standard HTML with embedded Ruby, mainly using <%= %> tags to display the result of evaluating Ruby code, or <% %> (no equals sign) to evaluate without displaying to the screen.
C:
Controllers live in the application/controllers directory of the app.
A controller receives information via params from the views and translates it for the model. It also receives HTTP requests (most commonly POST and GET requests). Another main function of the controller is to route to a corresponding view: a controller renders a view using the erb :file_name syntax, or redirects to a path. One thing I learned the hard way is that instance variables in the controller get lost if the controller redirects to a view instead of rendering it with the erb method. That's because, unlike erb, a redirect initiates a new GET request.
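The controller-to-view handoff via instance variables can be seen with Ruby's standard ERB library, which is a bare-bones version of what Sinatra's erb helper does under the hood (the variable and template here are made up):

```ruby
require "erb"

# "Controller": set up an instance variable for the view.
@greeting = "Hello from the controller"

# "View": embedded Ruby; <%= %> displays the evaluated result.
template = ERB.new("<h1><%= @greeting %></h1>")

# Passing the current binding is how the view gets to see @greeting.
html = template.result(binding)
# html is now "<h1>Hello from the controller</h1>"
```

A redirect skips this rendering step entirely and issues a fresh GET request, which is why the instance variables are gone by the time the next view renders.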
Resources:
(retired content): http://msdn.microsoft.com/en-us/library/ff649643.aspx
The gist of Breadth First Search
Our first project week is here (and it feels like a 2.5-day hackathon with the option to go home at the end of the school day)! Our class was given a list of projects to choose from and develop. I chose “Maze-Solver”: the creation of a bot that solves a given maze and generates a path for the solution.
Enter Breadth-first Search:
Working on the maze bot might be one of the first times I have brought a logic-centric focus to Breadth-First Search (BFS) algorithms themselves. In the course of writing this blog, I realized that I have been implementing this approach in the context of reading pedigrees (genetic family trees) and calculating the probabilities of phenotypic inheritance. After the following explanation, you might recognize how BFS has applied to your life as well.
OK, so what is the Breadth-First Search (BFS) algorithm, anyway? BFS is an approach for traversing a graph between points, referenced here as nodes. The goal is to navigate from a starting node to a node or nodes of interest. In my case, the graph is the maze itself; my starting node and node of interest are the maze's starting point and the finish line, respectively.
The following is a succinct, step-wise 4-minute video demonstration of the concept, shared as background for the maze project.
This excerpt from Wikipedia gives an overview of it:
"In graph theory, breadth-first search (BFS) is a strategy for searching in a graph when search is limited to essentially two operations: (a) visit and inspect a node of a graph; (b) gain access to visit the nodes that neighbor the currently visited node. The BFS begins at a root node and inspects all the neighboring nodes. Then for each of those neighbor nodes in turn, it inspects their neighbor nodes which were unvisited, and so on. Compare BFS with the equivalent, but more memory-efficient Iterative deepening depth-first search and contrast with depth-first search.”
Breaking the framework down into pseudocode:
The basic set-up:
1. We start with a starting or “root” node. (That's “a” in the gif above.)
2. We create an empty queue to save adjacent nodes of interest.
3. We create another queue for visited nodes. (This will be a place to mark off visited nodes.)
The workflow:
4. The root node (initial node) is used as a reference to scan for and identify contingent (children) nodes. Those children nodes are added to the node queue. (Adding children to the queue is represented by the greying of nodes in the gif.)
5. The root node (which is also the first current node) is then added to the visited-nodes queue. (A blackened node in the gif indicates that the node has been visited.)
6. If the element sought is found in this node, stop scanning and yield a result. Chances are that you will have to search further than the root node for your answer.
7. If nothing of interest was found at the current node, go to the first child in the node queue that is yet to be visited.
8. Repeat (loop through) steps 4 to 7 until the node of interest is reached.
9. If the queue is empty and all nodes have been visited (meaning they are all contained in the visited-node queue at this point), no solution has been found.
10. Otherwise, backtrack to the parent of the current node and keep repeating that until an ancestral node with contingent unvisited node(s) is found; restart at step 4, and so on.
Adding conditions before adding adjacent nodes to the queue (in step 4) is one way of making BFS customizable for a project. In the case of the maze, the game board was drawn from a string of pound signs (“#”) and spaces. The #'s represent the maze's boundaries and inner walls, and the spaces indicate visitable paths, or nodes. The board looks something like this, with the arrow indicating the starting node.
As part of my conditions for identifying a visitable adjacent node, I checked that the node is NOT equal to a #. I also built in a condition for my bot so that an adjacent node with the value “@” indicates that the maze solution is complete, at which point the scanning is stopped and the winning path is drawn.
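Those steps and conditions can be folded into a compact Ruby sketch. The grid and the starting coordinates here are invented for the demo ("#" for walls, a space for open paths, and "@" for the goal), and this is not my final project code:

```ruby
require "set"

# "#" = wall, " " = open path, "@" = goal; grid and start are made up.
MAZE = [
  "#####",
  "#  @#",
  "# ###",
  "#   #",
  "#####"
].map(&:chars)

def solve(start)
  queue   = [[start, [start]]]        # node queue; each entry carries its path
  visited = Set.new([start])          # the visited-nodes queue from step 3
  until queue.empty?
    (r, c), path = queue.shift        # take the oldest queued node first
    return path if MAZE[r][c] == "@"  # goal condition: stop and yield the path
    [[r - 1, c], [r + 1, c], [r, c - 1], [r, c + 1]].each do |nr, nc|
      next if nr.negative? || nc.negative?        # off the board
      next if MAZE[nr].nil? || MAZE[nr][nc].nil?  # off the board
      next if MAZE[nr][nc] == "#"                 # condition: not a wall
      next if visited.include?([nr, nc])          # already marked visited
      visited << [nr, nc]
      queue << [[nr, nc], path + [[nr, nc]]]
    end
  end
  nil                                 # step 9: queue empty, no solution
end

winning_path = solve([3, 1])
```

Because the queue is first-in, first-out, nodes are explored level by level, and the first path that reaches “@” is guaranteed to be a shortest one.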
When completed, I will share my finalized code for the maze bot in this post, to exemplify the application of BFS in action.
It is also noteworthy that BFS and its more efficient variants have interesting real-world applications in GPS navigation (mapping routes from point A to point B), web crawlers (which search the Internet from one link to another), and in building social networks like Facebook (where each person in a friends network is algorithmically treated as a node).
"It's a feature. It's not a bug" - Flatiron School Instructor Steve Nunez on the subject of case sensitivity in regular expressions.
/Regular Expressions/
On a few occasions I have witnessed the power of regular expressions (a.k.a. regex or regexp) and their usefulness in pattern recognition. However, as much as I admire their capability, I am just as confused as to how to implement them in my code. Coming across regex again while looking at a couple of my classmates' solutions to a code challenge last night, I began to pick away at a small chunk of my regular coding confusion.
I played a little with the solution in order to produce the following slightly adapted version of their code. The assignment it was written for was to create an email parser that would take a single string containing multiple email addresses and parse each email into its own individual string.
Code overview:
Here, there is an EmailParser class that accepts one argument: a long single string containing the addresses. The parse method does a scan, or search, of the string looking for the pattern /\w+@\w+\.\w{2,3}/. In this case the given regular expression corresponds to the generic format of an email address: someone@example.com. The final return of String#scan is an array of all the matching strings.
After a closer look, here is a dissection of the regex components, walking through the sample address someone@example.com piece by piece.
Regex anatomy:
1. Like all regular expressions, this one is designated as such by including forward slashes "/" at the beginning and end of the expression.
/\w+@\w+\.\w{2,3}/
2. Reading from left to right, the first \w indicates a word character, meaning a character other than punctuation marks or spaces (letters, digits, and underscore).
/\w+@\w+\.\w{2,3}/
3. The \w is followed by a plus sign, +, which is a quantifier indicating a search for one or more sequential elements of what precedes it -- in this case that means one or more word characters.
/\w+@\w+\.\w{2,3}/ #[email protected]
4. That is followed by a literal "@" sign.
/\w+@\w+\.\w{2,3}/ #[email protected]
5. The following \w+, as seen in parts 2 and 3, is yet another run of one or more word characters, this time following the @ character in the email address.
/\w+@\w+\.\w{2,3}/ #[email protected]
6. The next landmark is \., an escaped period. We escape characters using a backslash, \. When not escaped, the period is regex speak for any character except a newline, "\n".
/\w+@\w+\.\w{2,3}/ #[email protected]
There are a few "meta-characters" in regexes that have special meaning and must be escaped when their literal meaning is intended. These include: ., ^, $, [, ], {, }, (, ), *, +, ?, |, and \. The special meaning of these characters will be the subject of an upcoming blog.
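As a quick illustration of why escaping matters, here is the difference between a bare period and an escaped one (a sketch in Ruby; the sample strings are my own):

```ruby
# An unescaped period matches any character, so /a.b/ also matches "axb"
"axb" =~ /a.b/   # => 0   (match found at index 0)
"axb" =~ /a\.b/  # => nil (no match; the period must now be literal)
"a.b" =~ /a\.b/  # => 0

# Regexp.escape builds a safely escaped pattern from arbitrary text
Regexp.escape("a.b") # => "a\\.b"
```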
7. Next, there is a last \w. This now familiar character class is accompanied by {2,3}. Like the + sign earlier, the {2,3} is also a quantifier, and again it quantifies the \w. It indicates a run of a minimum of 2 and up to a maximum of 3 sequential characters. I chose this range to include the .med, .gov, .ly, and .edu endings out there.
/\w+@\w+\.\w{2,3}/ #[email protected]
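Putting all seven pieces together, the quantifier's effect can be checked against a few endings (the sample addresses are my own invention):

```ruby
pattern = /\w+@\w+\.\w{2,3}/

"bit@short.ly".match?(pattern)   # => true  ("ly" fits the 2-3 range)
"city@local.gov".match?(pattern) # => true  ("gov" fits too)
"a@b.x".match?(pattern)          # => false (a 1-letter ending is too short)
```

One thing worth noting about this design: because the pattern is unanchored, a longer ending such as .info would still match on its first three letters, so the {2,3} range limits what is captured rather than rejecting longer domains outright.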
I can only say that pulling apart this regex into its elements is only scratching the surface of my understanding. The process of explaining its workings in this blog has been a helpful exercise in making the concepts more digestible and memorable for myself -- and hopefully for you too! I look forward to practicing with regexes enough to become more comfortable implementing them as the awesome tools they are.
Synopsis of Optimism, a blog by Mr. R. S. Braythwayt (Un-edited)
The following relates to a reading assigned by our instructor Blake Johnson, who primed us with some mindset perspectives on dealing with the challenges we'd be enduring during our rigorous training at Flatiron. The ideas conveyed during our class dialogue that very afternoon are definitely worth revisiting. Considering the stress and workload as we progress in our Fellowship, it is not uncommon to lose perspective every so often.
Another excellent read was Malcolm Gladwell's Chapter 3 of "David and Goliath" - It does the mind good ;)
In the blog, Optimism, Reginald Braythwayt points out how, depending on the way we reason about the world, we can either let life happen to us or make life happen.
The blog explains Dr. Martin Seligman's research and the pattern he identified for the way in which people explain things to themselves. He describes such psychological processes along the lines of the following three "axes":
Personal interpretations of happenings are perceived on a scale of personalizing vs. impersonalizing; assignments of blame or credit are weighed as general vs. specific; and situations can be construed as either permanent or temporary.
A person who explains how they see the world (including their set-backs or failures) in a personalized, general, and permanent way sets themselves up to feel like they have no control over the outcome of situations. This is the filter through which a pessimist sees the world. If something went wrong, it is their fault, they are "bad," and things will always be like so, permanently. In this frame of mind there is a sense that life happens to the individual. Outcomes are thus finite. The person perceives a lack of control and an inability to change or impact future outcomes.
On the contrary, an individual who explains experiences to themselves in an impersonalized, specific, and temporary manner is more likely to see that, should a set-back come up, the struggles they might be going through at the moment do not dictate their future outcomes. This general interpretation (or understanding) of events is thus considered part of the optimistic outlook. Since situations in this case are perceived as impersonal, separate from who they are as a person, and specific to a point in time, there is space for the individual to shoot for change, as well as invest time in becoming better with hopes of positively impacting future situations.
According to these views on optimistic and pessimistic perception, embodying the definition of optimism along the three axes appears to be the most constructive way of dealing with life's hurdles and moving along the way to personal growth. I do not particularly think people fit either of the pure molds for optimism or pessimism entirely as defined. We are more likely to oscillate somewhere in between, in the grey areas where there is a mesh of qualities from both ends of the spectrum. However, I absolutely agree that staying on the brighter side of things is where the most good comes from.