Photo
Photoset test
Video
Conversation
me: testing testing
also me: yep, testing!
Link
This is a link test, also
Quote
This is a test
me
Text
Readmore test
Key Questions
Do beginners write comments? Or answer quiz questions that show they understand the code? If the latter, we get around the automated comment evaluation. When we get to real comment evaluation, it could involve real people, i.e. pull requests.
Perhaps we’re having the users act as the compiler?

Technology Stack
Start with React
Probably a Node backend
Add React Native apps later

Hypotheses
People can learn to understand code by reading snippets.
You do not need any prior knowledge of languages to learn in this fashion.
There is a way to automate checking comments.
People want to learn this way. AND/OR People want this skill.
Text
CodeScribe: Science Hack Day Survey
These findings were gathered for CodeScribe (LINK) in response to a hypothesis formulated during Science Hack Day. It posits that “[people] do not need any prior knowledge of coding languages to develop code literacy through reading code.” This initial, cursory analysis helps to identify some of the similarities and differences in how different skill level groups respond to the different types of questions. From this, we can gain insights into how easy and intuitive it is for [English-literate] learners to read and understand code without prior experience as well as the types of foundational knowledge required.
Background on CodeScribe at Science Hack Day here: (LINK) You can find a copy of the survey here: goo.gl/forms/VUaKc91pYBupJ7QI3 See the data for yourself here: goo.gl/CPvsaN
Overall, the “choose the best comment” format questions were easiest for folks to answer correctly. I’m guessing that this is because they read more naturally and (thanks to the limited answer options) could be easier to “guess and check.” Given that one of the fundamental goals of CodeScribe is to help folks develop reading comprehension for code, this is very promising. The comments represent comprehension of the “why” of code above the “how.”
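To make the format concrete, here’s an invented example of a “choose the best comment” item. This is my own sketch for illustration, not a snippet from the actual survey:

```javascript
// Hypothetical "choose the best comment" question (invented example).
function applyDiscount(cart) {
  if (cart.total > 100) {
    cart.total = cart.total * 0.9;
  }
  return cart;
}

// Which comment best describes applyDiscount?
//   a) Multiplies cart.total by 0.9                 <- "how": restates one line
//   b) Checks whether the total is greater than 100 <- "how": partial mechanics
//   c) Applies a 10% discount to orders over $100   <- "why": the purpose
```

Option (c) is the kind of answer CodeScribe is after: it captures the “why” rather than re-narrating the “how.”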
Although there were two questions of this “choose the best comment” type, the error rates varied. I’m understanding this to be due to the somewhat subjective nature of comments. This is to say, the options provided may not have been equally ambiguous. The code excerpts being commented upon may have included different levels of complexity, as well. What is interesting is that for question 1, (both across experience level and across learning method) the incorrect answers were consistent. For question 2, those with less experience and more informal learning tended toward different types of errors than their more experienced, formally-instructed counterparts. This provides helpful insight into the types of disambiguation that should be the subject of future lessons.
Interestingly, question 4 presented a source of confusion that may have come primarily from the question wording. The majority of answers (over 85%!) were distributed between two very similar versions of the answer. This is likely more than confusion between two choices, but in fact may be a semantic distinction in the interpretation of the question. The question asks: “In which of the following orders does the code run when the endTurn (defined under COMMENT 2) function is called?” Here, the ambiguity is in the word “run.” Some answerers treated “running” as being called (which was considered the correct answer here), while others interpreted it as completing.
Over 70% of the incorrect answers came in the form of the answer variant which corresponds to the completion interpretation of the word “run.” If we amended the question 4 wording to say “In which of the following orders is the code called when...?” there would almost certainly be a vast improvement in performance. If even half of the respondents who had chosen that variant gained clarity from the improved wording, this question would have gone from the least- to the second-most-accurately answered in the group. In the future, when presenting similar ordering challenges, I will use less ambiguous language or clarify that “run” means “called.”
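The two readings of “run” are easy to see with a toy example (invented here, not the survey’s actual endTurn code): when one function calls another, the order in which functions are called differs from the order in which they finish.

```javascript
// Invented sketch of the "called" vs. "completed" ambiguity.
const trace = [];

function updateScore() {
  trace.push("updateScore called");
  trace.push("updateScore completed");
}

function endTurn() {
  trace.push("endTurn called");
  updateScore(); // endTurn pauses here until updateScore finishes
  trace.push("endTurn completed");
}

endTurn();
// Call order:       endTurn, then updateScore
// Completion order: updateScore, then endTurn
```

Both orderings are defensible answers to “in which order does the code run,” which is exactly the trap the question wording set.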
Given the wording confusion in question 4, I think it’s fair to say that—content-wise—question 3 was the most difficult. This was anticipated because the formatting in the provided code was actually quite advanced. Even folks with more advanced skills were more likely to miss it. Across the board, this question type was considered the hardest and most frustrating of the questions, and it may have been a poor choice to include it in this iteration because of the stress it caused. One pretty darned experienced respondent even remarked, “I feel stupid and I’m an expert.”
Because the goal of CodeScribe is to promote reading comprehension for code, questions like this would ideally only be used as teaching moments; they are opportunities to highlight syntax equivalency where it may not be apparent. The lack of feedback in this particular form factor (the Google Form) means that, although helpful for gathering initial insights, it is a far-from-ideal mechanism for teaching. CodeScribe must be interactive and responsive to be effective for learning.
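As a sketch of what such a teaching moment could look like (my own invented pair, not the survey’s question 3 code), consider two snippets that behave identically despite looking quite different:

```javascript
// Two equivalent implementations -- a syntax-equivalency teaching moment.
// Verbose form, the shape most beginner lessons teach:
function isEvenVerbose(n) {
  if (n % 2 === 0) {
    return true;
  } else {
    return false;
  }
}

// Compact form, common in real-world code and easy to misread:
const isEvenCompact = n => n % 2 === 0;
```

A learner who recognizes these as the same function has picked up exactly the kind of equivalency that question 3 was probing.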
One promising takeaway is that—at least on average—people can sense when they’re getting something wrong. In almost every case, average confidence levels were lower for those who answered incorrectly compared to those who answered correctly. (The main exception to this was for the “Pretty darned experienced” folks on question 3, for which some participants were so confident in their incorrect answers that they outpaced the correct answers in average confidence.)
In the “Accuracy vs Confidence” rows in grey, we see that, although respondents were usually less confident in their wrong answers, their perceived accuracy (confidence) was still greater than their actual accuracy. The one notable exception to this pattern was the group who learned in a formal teaching setting: on average, these respondents rated their confidence lower than their measured accuracy. I wonder if this is a result of the formal learning environment. As a trend, the gap between confidence and accuracy narrowed as skill level rose. Both the most experienced coders and the formal learners had the smallest gap (~5%), but the formal learners’ accuracy was nearly 20% lower as a group than that of the self-identified experienced coders.
Ultimately, I found these results encouraging. Although those with little to no code education did perform worse, as would be expected, they did better than random guessing (which would have been an average error of 68.75%). I believe with some word changes to question 4, they would actually have performed relatively well. At the end of the day, CodeScribe is intended as a learning tool, not a quizzing tool. As such, the next iteration will be much more interactive and attempt to compare before/after performance.
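For the curious, here’s one way the 68.75% guessing baseline works out. The option counts below are my reconstruction (three four-option questions and one two-option question), not something stated in the survey itself:

```javascript
// Average error rate for uniform random guessing across four
// multiple-choice questions (per-question option counts are my assumption).
const optionCounts = [4, 4, 4, 2];
const errorRates = optionCounts.map(n => 1 - 1 / n); // P(wrong guess) per question
const avgError = errorRates.reduce((sum, e) => sum + e, 0) / errorRates.length;
// avgError is 0.6875, i.e. 68.75%
```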
Find more background on this study and learn about CodeScribe at Science Hack Day here: (LINK)
Text
CodeScribe: 2017 Science Hack Day Project
On the weekend of October 14th & 15th, 2017, I joined 250 creative and scientific folks at Github HQ for the 8th annual Science Hack Day. It was an extraordinary experience. Not only was this volunteer-organized and -run event exceptionally enjoyable, it was also a masterclass in inclusive event design. Massive thank you to Ariel Waldman and everyone on the Science Hack Day team!
On day one of the event, I addressed the group to propose an idea: a reading comprehension approach to learning code. The idea had come to me only days earlier while I was expressing my frustration with coding education tools. Although there are many freely available online, I’ve never managed to follow any of their lessons through to completion. “I don’t want to be a coder,” I lamented. “I want to be code literate! Why can’t I read other people’s code and learn that way?” Exploring this thought further, I realized that I had been learning this way. This (along with a bunch of googling and helpful CS-savvy friends) is how I learned to build my website, perform statistical analyses, compete in CTFs, and more.
I’m not a coder in my day-to-day activities, but I have developed a level of code literacy that enables me to have meaningful conversations about coding problems and translate between different stakeholders on projects. There are many people like me. People who do not need or want to develop a deep syntactic knowledge of code, but do want literacy: the ability to look at code and understand what it means.
The idea was to combine two needs, code literacy and documentation, into one solution. “Duolingo for code,” I pitched it, in the pithy this-for-that fashion that has come to define Silicon Valley. Learners would review real code snippets and respond with code comments, collectively crowdsourcing documentation for open source projects. My Science Hack Day cohort responded with enthusiasm.
The first day of the hackathon was primarily used for information gathering, relationship development, and to fail fast on a variety of attempts at bringing this idea to fruition. I connected with educators, students, parents, and professional programmers. My goal was to both better understand the ecosystem of existing resources and to identify patterns in learning styles or barriers to learning. I also talked with the Github team to better understand the options for using their API to make pull requests for comment contributions and to source the code snippets.
Toward the end of the first day, a team had come together. I was joined by Jordan Hart, Erik Danford, and Sanford Barr as core members of our team. We dubbed our project CodeScribe for users’ role as narrators of code’s meaning, with a pleasant double entendre on “co-describe” for the crowdsourced documentation process. Together, we homed in on some fundamental hypotheses to test to inform our development:
People can learn to understand code by reading snippets.
You do not need any prior knowledge of coding languages to learn in this fashion.
There is a way to automate checking comments.
People want to learn this way. AND/OR People want this skill.
Through a series of thought experiments as well as a few quick-and-dirty prototypes, we arrived at some early discoveries. We challenged ourselves to perform the tasks we would be asking of our learners: to read and comment on unfamiliar code. Our first learning was about hypothesis #3. We found that, before creating an automated way of checking the quality of comments (or “translations”), we first had to define what a good comment was. We arrived at the conclusion that a good comment communicates the “why” of the code, rather than the “how.” While I stand by this conclusion, it did present challenges for other aspects of our plan.
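The distinction is easiest to show with code. In this invented example, both comments are accurate, but only the second would make a good CodeScribe “translation”:

```javascript
function notifiableUserNames(users) {
  // "How" comment: loops over users and collects names where active is true.
  // "Why" comment: finds the users who can currently receive notifications.
  const names = [];
  for (const user of users) {
    if (user.active) {
      names.push(user.name);
    }
  }
  return names;
}
```

The “how” comment could be produced by reading the snippet line by line; the “why” comment requires understanding what the function is for, which usually means reading the surrounding code too.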
My initial vision for CodeScribe had been very much like the language-learning app Duolingo: short snippets to be translated into natural language. We were enthusiastic, believing that having users write comments would make the mobile app interface easy because we wouldn’t run into the spellcheck issues that come with typing code. Our revelation about comments, however, meant that the code snippets presented to the user needed to be much longer: determining the purpose of code relies on context.
With continuous feedback from other Science Hack Day participants, we felt relatively confident in our fourth hypothesis. At the very least, the concept appealed to people. So, with that, it came time to test my boldest, most controversial hypothesis: that people do not need any prior knowledge of coding languages to develop code literacy through reading code. My compatriots (all of whom had studied, practiced, or even taught computer programming) were unsettled by this idea. My theory was that code was, after all, created to be useful to humans and therefore is arranged and named in a somewhat logical fashion. “For English speakers,” I proposed, “many of the terms should be familiar and may even read naturally.”
To actually test this, however, we would need to try it out with actual people! This became the focus of the rest of our time at Science Hack Day. Our final project took the form of a Google Form survey. We used a modified program from Jordan’s own lesson plan as the sample code and wrote up four different questions to evaluate understanding. All were multiple choice, and they represented the different interaction types we envisioned for CodeScribe: comment selection, code comparison, and function ordering.
You can find a copy of the Google Form survey here: https://goo.gl/forms/VUaKc91pYBupJ7QI3
Ultimately, I was terrifically pleased with the results (LINK). After three weeks, the survey had accumulated 55 responses from people distributed across all the defined learning methods and experience levels. The findings helped to confirm that those with absolutely no code education could extract some meaning from the code. CodeScribe certainly is not intended to throw people into plain code without guidance, so to me this was a big assurance that I’m headed in the right direction. From the survey, I also learned a lot about lesson design. I look forward to sharing the next CodeScribe prototype with you soon!
Read the analysis and see the data for yourself: (LINK)
Text
A Reflection on Penelope
Note: This started out as a short personal reflection on a scene from The Royal Romance Book 2 Chapter 8 and quickly evolved into a long, rather rambling essay of sorts. This is just my personal opinion/view on this topic and I would love to hear the opinions of anyone willing to share them. 
Chapter 8 of The Royal Romance Book 2 is packed full of interesting and fun plot points and character development. While I loved so many of the scenes in the chapter (as a Drake fan I personally loved his diamond scene), I find myself thinking most about the one in which Penelope discusses with Hana and MC how her love of poodles originated.
As someone who personally suffers from anxiety, including social anxiety, I instantly related to Penelope during this scene in a way I never have before in the series. So much about how she acts and reacts suddenly makes sense now, particularly the constant mentioning of her poodles. She’s not just someone who’s oddly obsessed with poodles, but a person who relies on them for emotional support. I can relate to this too, as I find animals (especially my two cats) very relaxing and helpful for relieving some of my stress and anxiety.
Photo
October 29 is National Cat Day.
When the Fords moved into the White House in August 1975, their daughter Susan’s cat Shan came with them. Here Shan, a seal point Siamese, peeks out from behind the curtains while exploring her new home.
(White House photograph A0357-09A)
Text
Why did Vikings have 'Allah' embroidered into funeral clothes?
Researchers in Sweden have found Arabic characters woven into burial costumes from Viking boat graves. The discovery raises new questions about the influence of Islam in Scandinavia, writes journalist Tharik Hussain.
They were kept in storage for more than 100 years, dismissed as typical examples of Viking Age funeral clothes.
But a new investigation into the garments - found in 9th and 10th Century graves - has thrown up groundbreaking insights into contact between the Viking and Muslim worlds.
Patterns woven with silk and silver thread have been found to spell the words “Allah” and “Ali”.
The breakthrough was made by textile archaeologist Annika Larsson of Uppsala University while re-examining the remnants of burial costumes from male and female boat and chamber graves originally excavated in Birka and Gamla Uppsala in Sweden in the late 19th and mid-20th centuries. Read more.