Card Sorting, Information Architecture, and the MBTA
In the spring of my freshman year, I decided to take a design studio class as an elective to try something new. The class I took, ARTG 1250: Design Process Context and Systems, challenged students to apply design thinking in various ways and touched upon topics such as user interface design and graphic design. One project we were given was to improve the MBTA. It was a topic relevant to all of us, since we had all had both positive and negative experiences in MBTA stations and rail cars.
During the beginning stages of the project, our whole class participated in a brainstorming session to identify problems with the T. Our professor had us run a card sorting session: every student was given a handful of sticky notes and told to write down the problems they had encountered. Afterward, the class sorted the problems into categories such as noise, cleanliness, reliability, and interaction.
At the time, I didn’t realize that what we were doing was card sorting, a key first step for designing an information architecture.
However, card sorting doesn’t magically make your information architecture perfect. According to the article “Card Sorting: A Quick And Dirty Guide For Beginners” published on the blog Usability Geek, there are four main components of information architecture: navigation, search, classification (the way you organize items into categories, subpages into pages, etc.), and labeling (the way you name your pages, categories, and levels of navigation). Card sorting only improves classification and labeling, not navigation and search.
(https://usabilitygeek.com/card-sorting-quick-guide-for-beginners/)
As the article explains, “the navigation system itself has not that much to do with the way your content is organized. It is usually a matter of buttons that make it easy to navigate through the website, go back to previous pages or return to homepage.”
Good navigation and search are key to a proper information architecture. The article provides the following example:
“Users can find one thing they are looking for thanks to proper categorization but [with an IA developed through card sorting alone they] won’t be able to smoothly go to another page due to non-intuitive navigation.”
There are two main types of card sorting that information architects can use to develop strong classification and labeling. In an open card sorting session, participants are told to group the given cards and then create the categories themselves afterward. This version is more flexible but also more complex, because results vary more from participant to participant when you test many people. However, if you don’t currently have an architecture in place, or simply want to see whether your current categories are sufficient, open card sorting is the better method for testing that.
In a closed card sorting session, participants are given both categories and cards and told to place the cards into the categories that fit best. Closed card sorting is less flexible but lets a company test its existing information architecture. If participants group the items in ways that differ substantially from the current structure, the company likely has some major changes to consider.
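If I wanted to make sense of the variation in an open sort across many participants, one simple approach would be to count how often each pair of cards lands in the same group. Here is a minimal sketch in R with made-up results (the cards, groups, and participants are hypothetical, not something we did in class):

# Hypothetical open-sort results: one row per (participant, card, group)
sorts <- data.frame(
  participant = c(1, 1, 1, 2, 2, 2, 3, 3, 3),
  card        = c("Loud announcements", "Dirty seats", "Late trains",
                  "Loud announcements", "Dirty seats", "Late trains",
                  "Loud announcements", "Dirty seats", "Late trains"),
  group       = c("Noise", "Cleanliness", "Reliability",
                  "Noise", "Cleanliness", "Reliability",
                  "Annoyances", "Annoyances", "Reliability")
)

# Co-occurrence matrix: how many participants placed each pair of cards together
together <- crossprod(table(paste(sorts$participant, sorts$group), sorts$card))
together

Card pairs with high counts are strong candidates to live in the same category.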
Three Common Data Visualization Mistakes
As a student in DS 4200: Information Presentation and Visualization, I’ve seen my fair share of visualizations, both good and bad. Although I wouldn’t exactly call myself an expert on making amazing visualizations, I’d say that I’m pretty good at recognizing when one isn’t so great. In this blog, I’m going to cover some of the more common data visualization mistakes that people make.
1. Using a pie chart when you don’t have to:
It’s no secret that a significant portion of the data science community hates pie charts, and for good reason. Not only are pie charts constantly misused, but they are also very limited in the information they communicate. The only comparison a pie chart can show is proportion, i.e., each slice’s share of the whole. Common mistakes people make with pie charts include having percentages that total more than 100 or drawing slices that aren’t proportional to their percentages.
However, pie charts are difficult to read and understand even when they’re used correctly, because the human brain has difficulty processing angles. The primary positive aspect of pie charts is that they are widely used, and therefore familiar and self-explanatory to most people; they don’t require any axes, either. In my opinion, however, bar charts are better because comparisons can be made more efficiently: bar height is easier to interpret than areas and angles, and bar charts are just as familiar to the general population as pie charts. If you must use a pie chart, make sure it’s two-dimensional. Never use a 3D pie chart, which brings me to my next mistake.
(https://www.businessinsider.com/pie-charts-are-the-worst-2013-6)
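To illustrate the bar-over-pie argument, here is the same made-up data drawn both ways in ggplot2 (the categories and shares are hypothetical):

# Hypothetical survey data: share of riders naming each problem as their biggest complaint
library(ggplot2)

complaints <- data.frame(
  problem = c("Reliability", "Cleanliness", "Noise", "Crowding"),
  share   = c(0.40, 0.25, 0.20, 0.15)
)

# Bar chart: heights are easy to compare directly
ggplot(complaints, aes(x = problem, y = share)) +
  geom_col()

# Pie chart: the same data as a single stacked bar wrapped around a circle,
# which forces the reader to compare angles instead of heights
ggplot(complaints, aes(x = "", y = share, fill = problem)) +
  geom_col(width = 1) +
  coord_polar(theta = "y")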
2. Having an unnecessary 3D encoding:
If you’ve ever opened an old textbook or business report, chances are you’ll see a 3D bar chart or pie chart that really doesn’t need that third dimension. People make this mistake because they believe it will give the chart more visual appeal and draw people in. The issue with unnecessary 3D, however, is that data gets distorted and/or covered. Rendering a bar chart at an angle on a 3D plane results in bars in the back being covered by bars in the front. Furthermore, a 3D chart skews perceived heights and, by extension, accidentally misrepresents the data. The problem is especially bad for pie charts, because area is one of their key encodings: if you make a pie chart 3D, the bottom slices appear bigger and the top slices smaller.
(https://www.livestories.com/blog/five-ways-to-fail-data-visualization)
3. Misrepresenting the y-axis scale:
This mistake is specific to bar charts, line charts, and scatter plots. When making these types of charts, it's important not to misrepresent the scale of the y-axis by having it start at a value other than 0. Many people mistakenly believe that if the differences between values are minuscule, the y-axis should cover only a narrow range to make those differences clearer. The issue with this thought process is that it misleads the reader into believing the differences are more significant than they really are. In general, it's good practice to start the y-axis at 0.
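Here is a small sketch of the difference in ggplot2, using made-up numbers; geom_col() starts bars at zero by default, so the misleading version has to force a truncated axis:

# Hypothetical on-time percentages for two transit lines
library(ggplot2)

on_time <- data.frame(
  line    = c("Red", "Orange"),
  percent = c(91, 93)
)

# Misleading: zooming the y-axis to 90-94 makes a 2-point gap look enormous
ggplot(on_time, aes(x = line, y = percent)) +
  geom_col() +
  coord_cartesian(ylim = c(90, 94))

# Honest: let the bars start at 0 so the difference is shown in proportion
ggplot(on_time, aes(x = line, y = percent)) +
  geom_col()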
The Rise in Demand for Information Security Analysts
Every day, massive amounts of data are generated by the ever-growing presence of technology products that people and businesses consume and build. As a blog post on CIO Dive explains, “the amount of data created and copied annually more than doubles every two years, according to an EMC report by analysis firm IDC. In 2014, IDC anticipated that the digital world would grow from 4.4 trillion gigabytes in 2013 to 44 trillion gigabytes in 2020, growth by a factor of 10 within seven years.” (https://www.ciodive.com/news/the-rise-of-big-data-why-companies-are-amassing-information-to-drive-insig/445548/)
The age of information is upon us, and the economy is shifting because of it. One example of this shift is seen in the sudden increase in demand for information security analysts.
The massive increase in data at companies’ disposal has led to a rapid rise in demand for information security analysts who can help them protect their digital assets. According to the Bureau of Labor Statistics (BLS) website, there were roughly 100,000 information security analyst jobs in the United States in 2016. According to CyberSeek, 40,000 jobs for information security analysts go unfilled in the US every year. Employment is projected to grow 28 percent from 2016 to 2026, with roughly 28,500 new jobs added to the economy.
(https://www.forbes.com/sites/jeffkauflin/2017/03/16/the-fast-growing-job-with-a-huge-skills-gap-cyber-security/#acacd165163a)
https://www.bls.gov/ooh/computer-and-information-technology/information-security-analysts.htm#tab-6
This is roughly four times the national average growth rate for all jobs, which is just 7 percent, and the high pay reflects this demand. The May 2018 median pay for an information security analyst was $98,350 per year, or $47.28 per hour, according to the BLS, with the highest ten percent earning $156,580 per year or more. The BLS attributes the rise in job growth to companies’ need for analysts “to create innovative solutions to prevent hackers from stealing critical information or causing problems for computer networks.” The increasing frequency of cyber attacks may also be a root cause.
Most information security analysts work for tech, consulting, and finance companies, and the typical entry-level education is a bachelor’s degree. According to the BLS, most companies prefer candidates with a bachelor of science in computer science, information assurance, or a related field, and some want candidates who have a Master of Business Administration (MBA) with a concentration in information systems.
Many analysts have previously worked in IT in roles such as a network, database, or computer systems administrator. The BLS lists several responsibilities for information security analysts, including:
* Monitor their organization’s networks for security breaches and investigate a violation when one occurs
* Install and use software, such as firewalls and data encryption programs, to protect sensitive information
* Prepare reports that document security breaches and the extent of the damage caused by the breaches
* Conduct penetration testing, which is when analysts simulate attacks to look for vulnerabilities in their systems before they can be exploited
Learning to Derive Bayes’ Theorem through Venn Diagrams
As a data science student, having a strong foundation in probability and statistics isn’t just important to my success in this major, it’s a requirement. One of the most vital concepts for me to understand is Bayesian statistics. Despite not having taken MATH 3081: Probability and Statistics, I’m somewhat familiar with Bayes’ theorem. The topic was briefly covered my freshman year in CS 1800: Discrete Structures, and touched upon again in DS 4100: Data Collection, Integration, and Analysis, which I’m currently taking.
However, despite having memorized the Bayes’ theorem formula and having a basic understanding of what the components are, my actual knowledge of the intuition behind it is very weak. Furthermore, whenever I google the theorem I always see two different versions, with one having a different denominator than the other.
Version 1: P(B|A) = P(A|B)P(B) / [P(A|B)P(B) + P(A|!B)P(!B)]
Version 2: P(B|A) = P(A|B)P(B) / P(A)
My mission to understand the two different versions of Bayes’ theorem led me to this Stack Exchange thread:
https://math.stackexchange.com/questions/1404029/which-one-of-the-following-versions-of-bayes-theorem-is-correct
The answer to my confusion about the two different versions is that they are equivalent. This is due to the law of total probability, which states that:
P(A) = P(A and B) + P(A and !B)
Given that:
P(A and B) = P(A|B)P(B)
P(A and !B) = P(A|!B)P(!B)
we can conclude that:
P(A) = P(A|B)P(B) + P(A|!B)P(!B). Therefore, Version 1 = Version 2.
Although I had finally cleared up a big source of confusion about Bayes’ theorem, I still needed to understand the intuition behind it. After reading several articles and still feeling lost, I found a link to this article in a Stack Exchange thread: https://oscarbonilla.com/2009/05/visualizing-bayes-theorem/
The reason I like this article so much is that it explains the theorem using Venn diagrams, which appealed to me as a visual learner.
The way the author derives Bayes’ theorem is as follows:
Say you have a universe U which contains all possible outcomes of an event. In this example, U is all people who participate in a cancer study.
Let’s say C is a specific subset of U. In the example, it’s all the people who have cancer. P(C) = |C| / |U|, the number of people in C divided by the number of people in U.
Let’s say Pos is also a specific subset of U. In the example, it’s all the people who test positive in the study. P(Pos) = |Pos| / |U|.
In the Venn diagram, U is a large circle, while C and Pos are smaller, partially overlapping circles inside of it.
Let’s call the overlapping part of circle C and circle Pos TruePos. This is the subset of people in the study who have cancer and tested positive for it. P(TruePos) = |TruePos| / |U|.
This is where Bayes’ theorem suddenly clicked for me:
P(Pos) is the probability of a value in circle U also being inside circle Pos, which is the probability of a person in the study testing positive.
P(Pos|C) assumes C is true, so it’s the probability of a value in circle C also being inside circle Pos. In the example, this is the probability that a person who has cancer also tests positive for it. Since we only care about the part of Pos that overlaps with C, P(Pos|C) is really just the probability of a value inside C also being in TruePos.
So P(Pos|C) = |TruePos| / |C|, and dividing the top and bottom by |U| gives P(TruePos) / P(C).
As stated earlier,
P(A and B) = P(A|B)P(B), and since TruePos is just “C and Pos,” P(TruePos) = P(C|Pos)P(Pos)
Therefore, P(Pos|C) = P(C|Pos)P(Pos) / P(C)
We have officially derived Bayes’ theorem!
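To convince myself the algebra holds, here is a quick sanity check in R using made-up study counts (none of these numbers come from a real study). It applies the theorem in the other direction, P(C|Pos) = P(Pos|C)P(C) / P(Pos), which is the question a patient actually cares about:

# Hypothetical cancer-study counts, just to check the formula numerically
U         <- 10000   # total participants
C         <- 100     # participants who have cancer
true_pos  <- 90      # have cancer AND test positive
false_pos <- 495     # don't have cancer but test positive anyway

p_c     <- C / U
p_pos   <- (true_pos + false_pos) / U
p_pos_c <- true_pos / C                 # P(Pos|C), read straight off the counts

p_c_pos <- (p_pos_c * p_c) / p_pos      # Bayes' theorem: P(C|Pos)
p_c_pos                                 # about 0.154
true_pos / (true_pos + false_pos)       # same answer, directly from the circles

Both lines print the same value (about 0.154), which is exactly what the Venn diagram promises: the share of the Pos circle that overlaps with C.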
IS2000, DS4100, Base R, and Hadley Wickham’s Tidyverse
When I pulled up the lecture notes from the Information Processing in R unit, I wasn’t surprised to see syntax I was well acquainted with. In my DS4100: Data Collection, Integration and Analysis class, R is the primary language we use, and manipulating XML and SQLite code in an R environment was also covered extensively. What did surprise me, however, was how much of the syntax I was unfamiliar with, such as the which() function. While the code in the lecture notes is straightforward enough to understand without having seen it before, the differences between it and my DS4100 class highlight one of the key characteristics of R: the seemingly endless number of libraries that allow users to carry out the same analysis in different ways. In this case, the difference is between base R and the tidyverse, a set of libraries created by Hadley Wickham to simplify the process of writing code in R. This caused me to ask myself: in which situations is one better to use than the other?
For me personally, I prefer the tidyverse, because the syntax makes sense and it’s easy to read. The pipe allows me to write code cleanly and libraries such as ggplot2 allow me to easily visualize data frames. I am a bit biased, however, because I haven’t been formally taught base R to the same extent that I have the tidyverse.
In his article titled “Teaching the tidyverse to R Novices”, digital librarian Jason Heppler lays out his reasoning behind teaching the tidyverse to R beginners instead of base R. (https://medium.com/@jaheppler/teaching-the-tidyverse-to-r-novices-7747e8ce14e)
His primary reason, which I agree with completely, is that tidyverse code is more readable and therefore simpler for novices to understand and pick up. Having been that beginner recently (I arguably still am one), I find dplyr code much easier to wrap my head around than base R when it comes to selecting columns and filtering rows. I personally find it comparable to SQL in certain ways.
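To show what I mean, here is the same row-filter-and-column-select written both ways, on a small made-up data frame:

# Hypothetical data frame for illustration
library(dplyr)

trips <- data.frame(
  line    = c("Green", "Red", "Orange", "Red"),
  delayed = c(TRUE, FALSE, TRUE, TRUE),
  minutes = c(12, 3, 8, 15)
)

# Base R: which() plus bracket indexing
delayed_base <- trips[which(trips$delayed), c("line", "minutes")]

# Tidyverse: the pipe reads top to bottom, almost like a SQL query
delayed_tidy <- trips %>%
  filter(delayed) %>%
  select(line, minutes)

The base R version works fine, but the piped version reads almost like a sentence.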
As Heppler explains, “[His R workshop involves] introducing students to messy government data, tidying that data, working with data to produce new data, and drawing conclusions. I’m able to teach these concepts relatively quickly thanks to the power behind dplyr and tidyr.” As a result of the simplicity of the tidyverse libraries, Heppler’s students were able to pick up key data science concepts, such as tidying and analyzing data, extremely quickly.
That isn’t to say that base R should be ignored completely. In my opinion, having a solid understanding of certain base R grammar is vital to success as an analyst, even if you primarily use Hadley Wickham’s libraries. Understanding how to create functions, select columns with the $ operator, and use the various apply functions is extremely helpful. I haven’t even scratched the surface of base R myself, and I plan on exploring it in more depth when I’m not busy with classes.
Why SQL is Still Used After 45 Years
SQL, short for Structured Query Language, is a programming language that I’m no stranger to. It was covered extensively in CS 3200: Database Design, where we used PostgreSQL; in DS 4100: Data Collection, Integration, and Analysis, where we used SQLite in an R environment; and now in IS 2000. I worked with MySQL in a Python environment during my last summer internship, and I will be using Microsoft SQL Server this fall for the data architecture co-op that I recently accepted.
It’s very apparent why SQL became so popular: it’s straightforward and fairly easy to learn as far as programming languages go. Furthermore, it’s the standard for relational database management, and relational databases are the most commonly used type of database management system. There are also many educational resources available for SQL.
But as I recently learned, SQL first appeared in 1974, making it 45 years old. That’s an impressive age for a programming language, especially considering how rapidly trends in technology change. This made me ask myself, how did SQL become the standard for database management, and why is it still used today?
As Oracle’s website explains, SQL was developed in an IBM research laboratory in the early 1970s by Donald D. Chamberlin and Raymond F. Boyce. It was based on a research paper published in 1970 by Dr. Edgar F. Codd titled “A Relational Model of Data for Large Shared Data Banks”. Oracle released the first commercially available version of SQL in 1979. SQL was first standardized in 1986 by ANSI in the United States and subsequently by the ISO internationally. Since then, many new versions have been produced and the standard has been updated. (https://docs.oracle.com/cd/B13789_01/server.101/b10759/intro001.htm)
SQLizer, an online web app for converting files into SQL databases, published a blog post in 2017 that explains well why SQL is still so commonly used today. (https://blog.sqlizer.io/posts/sql-43/)
First, SQL excels as a data management language because it is based on relational algebra and tuple relational calculus.
SQL has been widely used by major companies for many years for various purposes and continues to be a reliable choice for database management. Its age and popularity have caused a massive community to develop around it, resulting in an enormous amount of documentation and a deep knowledge base that keeps the language alive.
SQL is also notoriously easy to learn at a basic level, enabling non-technical people in all areas of business and research to use it in their roles. Because many companies use SQL, talent is always available and skill sets transfer easily, which further reinforces its popularity.
Most importantly, SQL is popular because it is extremely powerful. With a few lines of SQL code, a programmer can select, filter, and aggregate data with ease, and gain new insights about the data.
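As a small example of that power, here is a quick sketch using SQLite from R, the same kind of setup we use in DS 4100; the table and its contents are made up for illustration:

# In-memory SQLite database with a small made-up table
library(DBI)
library(RSQLite)

con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "rides", data.frame(
  line  = c("Green", "Red", "Orange", "Red", "Green"),
  fare  = c(2.40, 2.40, 2.40, 2.40, 2.40),
  delay = c(12, 3, 8, 15, 0)
))

# One short query: filter rows, group them, and aggregate
dbGetQuery(con, "
  SELECT line, COUNT(*) AS trips, AVG(delay) AS avg_delay
  FROM rides
  WHERE delay > 0
  GROUP BY line
")

dbDisconnect(con)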
While SQL doesn’t scale as well as NoSQL and may not be the best solution for big data problems, most developers won’t need to worry about this issue. As the article puts it, “if you’re going to get riled up about scale, realistically only a tiny percentage will ever need to worry about scaling a RDBMS - you’re not Facebook or Google. You can still have millions of users with a SQL database and have no issues.”
Data Warehouses vs Databases
During the spring semester of my freshman year, I was enrolled in CS 3200: Database Design, where I became well acquainted with relational databases, ER diagrams, and SQL. I knew that having a comprehensive understanding of databases would be vital as a data science major, both in my future classes and my future co-ops.
This spring, while exploring NUCareers, I noticed that a lot of co-ops in the areas of business intelligence, data analytics and occasionally data science, had a common term located somewhere in their descriptions: “data warehousing”. Despite having taken Database Design, I didn’t have a strong understanding of what data warehouses are and how they differ from databases. That inspired me to write this blog post.
As explained in the lecture, a database is an organized collection of data. Relational databases are the most widely used type, with information organized into connected two-dimensional tables that are typically indexed for faster retrieval.
Data warehouses are systems used for data collection and analysis, in which data is pulled from multiple sources. As AWS explains on their website, “data flows into a data warehouse from transactional systems, relational databases, and other sources, typically on a regular cadence. Business analysts, data scientists, and decision makers access the data through business intelligence (BI) tools, SQL clients, and other analytics applications.” (https://aws.amazon.com/data-warehouse/)
The primary difference between databases and data warehouses is that databases are used for Online Transaction Processing (OLTP), while data warehouses are used for Online Analytical Processing (OLAP). OLTP encompasses day-to-day transactions and involves small, frequent queries such as INSERT, UPDATE, and DELETE statements. OLAP queries are more complex: managers and analysts use them to select, extract, and aggregate data to identify trends and gain insights about their businesses.
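To make the contrast concrete, here are two hypothetical queries (the table and column names are invented): the kind of small, surgical statement an OLTP system handles constantly, versus the kind of wide, aggregating question an analyst would run against a warehouse.

# OLTP: a tiny transaction that touches one row in an operational table
oltp_query <- "
  UPDATE accounts
  SET balance = balance - 25.00
  WHERE account_id = 1042;
"

# OLAP: an analytical query that scans years of history and aggregates it
olap_query <- "
  SELECT region, product, SUM(amount) AS total_sales
  FROM sales_history
  GROUP BY region, product
  ORDER BY total_sales DESC;
"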
Another difference between databases and data warehouses is that databases are typically normalized, while data warehouses are not. Relational databases are often structured in third normal form to remove redundancies and group related data together in tables. Data warehouse designs accept some redundancy in exchange for fewer tables, because this allows for faster query processing.
A third difference between data warehouses and databases is that data warehouses contain historical data, while normalized databases usually only contain current data. As the data warehousing platform Panoply elaborates, “data warehouses typically store historical data by integrating copies of transaction data from disparate sources. Data warehouses can also use real-time data feeds for reports that use the most current, integrated information.” (https://panoply.io/data-warehouse-guide/the-difference-between-a-database-and-a-data-warehouse/)
A data lake is a third type of data store, which differs greatly from both databases and data warehouses. According to the software and data solutions company Talend, there are four primary differences between a data warehouse and a data lake. Data lakes can store structured, semi-structured, and, most commonly, raw unstructured data. Typically the data stored in a data lake does not yet have a defined purpose, while data warehouses store data intended for analytics. Data lakes are used more commonly by data scientists because of the raw nature of the data, while data warehouses can be used by any business professional. Data stored in lakes is also easier to access and change because it hasn’t been processed, but the trade-off is that some processing will be required to make the data usable. (https://www.talend.com/resources/data-lake-vs-data-warehouse/)
The Importance and Context of XML
Throughout my time as a student in CCIS I have heard the terms ‘XML’ and ‘XML parsing’ thrown around frequently, though I never knew what they meant. I always assumed XML was some kind of complicated programming term. Imagine my surprise when I learned that XML, or Extensible Markup Language, is not only a widely used information store, but one that is incredibly easy to understand!
Although the term itself sounds intimidating, the similarities between XML and HTML became immediately clear as soon as I saw examples. XML really is just HTML with extensible tags. Or, if you flip that statement around, HTML is just XML with predefined tags specific to building websites.
The paragraph above isn’t exactly correct. While closely related, XML and HTML aren’t “basically the same.” As the Lifewire article “The Relationship Between SGML, HTML, and XML” explains, HTML is a child, or application, of another language called SGML, which stands for Standard Generalized Markup Language. XML is a subset of SGML, which gives it more rights than an application like HTML, such as the ability to create your own tags. SGML is more a set of rules than an actual markup language. Its purpose is to state “what some [markup] languages can or cannot do, what elements must be included, such as tags, and the basic structure of the language. As a parent passes on genetic traits to a child, SGML passes structure and format rules to markup languages.”
(https://www.lifewire.com/relationship-between-sgml-html-xml-3469454)
Despite understanding the syntax, it wasn’t clear to me after class why XML is so widely used. This led me to do my own research, where I found another Lifewire article called “5 Basic Reasons You Should Use XML”.
(https://www.lifewire.com/reasons-to-use-xml-3471386)
The first reason it gives, which made a lot of sense to me, is that XML is simple and easy to understand. The syntax is very intuitive, especially if you have any familiarity at all with HTML.
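To show just how little there is to learn, here is a tiny hypothetical XML snippet and a quick parse of it in R with the xml2 package (one of several libraries that can do this):

# A made-up XML document and a couple of XPath queries against it
library(xml2)

doc <- read_xml(
  '<stations>
     <station line="Green"><name>Northeastern</name></station>
     <station line="Orange"><name>Ruggles</name></station>
   </stations>'
)

xml_text(xml_find_all(doc, "//station/name"))      # "Northeastern" "Ruggles"
xml_attr(xml_find_all(doc, "//station"), "line")   # "Green" "Orange"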
The second reason to use XML, according to the article, is that it keeps the design process organized. As the author explains, “Data sits on one page, and formatting rules stay on another.“ Keeping data stored completely independently of formatting does seem like a useful feature for staying organized: because all the data lives in one place, free of any formatting or other instructions, the user can easily find and update an entry when they need to.
XML is also a standard, used throughout the world. Anyone can access an XML document when they need to, from anywhere, with no special software required.
UML Diagrams
When Professor Schedlbauer first introduced UML (Unified Modeling Language) diagrams, I knew they looked familiar, but I wasn’t exactly sure why. My first thought was that I had learned them in CS3200, my Database Design class from two semesters prior, but something was off. Instead of the crow’s-foot notation I had used in Database Design, the professor was using arrows and diamonds, which simply described the type of relationship between classes instead of quantifying it. Furthermore, we hadn’t discussed relational databases, SQL, or primary and foreign keys in class yet. Clearly these diagrams served a different purpose, which confused me even more. Why was UML suddenly so different?
As it turns out, my initial thought was incorrect. I had, in fact, not learned UML in CS3200. What I had actually learned was ER (Entity Relationship) diagrams, which are similar to UML, but clearly have significant differences and serve a different purpose. Despite unraveling this mix-up, I still had many more questions, and the familiarity of UML was still nagging at the back of my mind.
I focused my attention on the diamonds and arrows. Despite having never used the notation before, I was well acquainted with the meanings behind it. The is-a and has-a relationship concepts were covered extensively in CS3500, my Object-Oriented Design class. More specifically, we discussed the use of inheritance and composition and when each is appropriate. Inheritance, where a class extends another class and inherits all of its methods, is used when a class is by definition a type of the other class: an is-a relationship. Composition is the concept of making one class a field of another; to expose the contained class’s methods, you write methods in the containing class that delegate to it. Composition is used for has-a relationships.
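Although CS3500 used Java, the same two relationships can be sketched in R’s S4 class system; the class names below are just hypothetical examples, not anything from the course:

# is-a: inheritance via `contains` -- a GreenLineCar is a TrainCar
setClass("TrainCar", slots = c(capacity = "numeric"))
setClass("GreenLineCar", contains = "TrainCar")

# has-a: composition -- a PoweredCar has an Engine stored as a slot (field),
# and its behavior would be exposed by methods that delegate to that slot
setClass("Engine", slots = c(horsepower = "numeric"))
setClass("PoweredCar", slots = c(engine = "Engine"))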
At that moment, I realized that UML diagrams were familiar to me because I had learned a different flavor of them in OOD. The key differences between the CS3500 version of UML and the IS2000 version are that in OOD the class diagrams displayed the methods of each class, and used open and closed arrows instead of arrows and diamonds.
Although I managed to determine why UML looked so familiar, I couldn’t help but wonder about the difference between UML and ER diagrams. Which one is better?
Finding an answer to this on Google proved extremely difficult, and for good reason: neither is “better”. Both are used for different purposes. As Lucidchart explains,
“UML is a combination of several object-oriented notations: Object-Oriented Design, Object Modeling Technique, and Object-Oriented Software Engineering... UML represents best practices for building and documenting different aspects of software and business system modeling.”
In short, UML is most commonly used in the domains of object-oriented programming and business analytics.
Lucidchart describes ERD as a type of flowchart used “to model and design relational databases, in terms of logic and business rules (in a logical data model) and in terms of the specific technology to be implemented (in a physical data model.)”
Both UML and ER diagrams are important information modeling tools that are vital for me to understand as a data science major. The former is more commonly used in business systems and software design, while the latter is used in database design and troubleshooting.
The SECI Model and OOD
According to the SECI model, there are four main methods of knowledge transfer: socialization (tacit to tacit), externalization (tacit to explicit), combination (explicit to explicit), and internalization (explicit to tacit). The class discussion about this model caused me to consider how it applies to classes I have previously taken, most notably CS3500: Object-Oriented Design.
https://course.ccs.neu.edu/cs3500/
Whether or not the code I wrote in OOD can be considered externalized information from my brain is an interesting question. The computer understands the code, but oftentimes I could barely understand it myself, and if someone else read it, they certainly wouldn’t understand any of it. For this reason, programmers write Javadoc comments to clarify what a specific class or method does. However, calling this act of summarizing a chunk of code “combination” doesn’t seem right. While code is explicit information, as are the comments on that code, I never actually “combined” any documents into a single document. Instead, I simply “translated” the explicit information (the code) into more explicit information (the Javadoc) within the same document, using my temporary tacit knowledge of how the code works.
(https://en.wikipedia.org/wiki/Javadoc)
Thinking about this caused me to think about translation in general and where it falls within the SECI model. If I translate a Spanish novel into English using my (hypothetical) tacit knowledge of Spanish, is that combination or externalization? It doesn’t really seem like either. To me, it seems like a flaw in the model.
I also considered the way that I internalized the knowledge taught to me in OOD. Attending lectures and reading the lecture notes were the two most common ways. However, the lecture notes were often complicated, unclear, and missing key information, as were (with all due respect) the professor’s lectures. In order to actually understand what each Java design pattern was and when it should be used, I had to look up tutorials on YouTube and study them myself. But that’s simply the explicit information taught in OOD. The reason the class is important, according to the professors and students who have taken it, is that it teaches students how to work with large code bases. I would consider this socialization, as this knowledge is mostly tacit, and we are supposed to learn it through practice and guidance (the assignments and TAs). There are no explicit rules for writing large programs; instead, you simply have to write them. I’m very bad at putting the tacit information I’ve learned into complete thoughts (externalizing), which may be why I struggle to tell people what I learned in OOD.
Welcome to my blog!
This is a blog created for my Information Science class. I’ll be relating what I learn in class to ideas and sources from outside of the classroom.