Interactions with Open Source Communities
In reflecting on the last year of working on a free and open source project with an HFOSS community, I feel I have come to much better understand the uses of community-based learning, and why it is important for a school such as Dickinson College to help young computer scientists contextualize their education within service to a community.
For my HFOSS project, I worked with OpenMRS, an open source medical record system used in many hospitals throughout developing countries. This was an incredible opportunity to take the skills I had been learning over the past four years and apply them to a specific project that needed volunteers, and to move away from trivial homework assignments and toward a product that serves real people throughout the world.
If I were asked to summarize my experience in one word, I think it would be “challenging”. My teammates and I were new to working on these types of projects and picked OpenMRS because we felt a strong connection to its mission statement. After nine months of work, however, we now understand how difficult this process was, and what a steep learning curve we attempted to scale. I am thankful for everything I learned: the ability to persevere through difficult and hard-to-understand technologies and problems, and most of all how to organize and manage a project with no true structure or steps. Thanks to Professor Brought and my three teammates for their work, advice, mentorship, and support.
Social Media: Privacy and Personal Data
For this topic, I looked at the idea of Homeland Security collecting social media data from immigrants attempting to enter the United States. The idea of Homeland Security collecting and storing data from something as unofficial as Twitter or Facebook for immigrants and naturalized citizens is very interesting. The most interesting part to me is that the DHS will collect and retain information from naturalized citizens, but not from citizens who were born in the US. The distinction between these citizens is interesting and problematic. Additionally, I think it is very important for discussions about how the federal government uses social media data to take place. It would be useful to understand the implications of social media for our place in society, and how those around us use it.
I find it interesting that there is never any guidance provided to people about how to use social media; it is offered as a free tool for them to express themselves with, and the idea of it having consequences is never discussed. This proposal is a very real first instance of social media having true consequences for a person's life, and for their eligibility to enter this country. Additionally, what are the metrics being analyzed? Are there certain criteria the social media of an immigrant must meet in order for them to be identified as a good candidate for citizenship? Until we can be provided with clear answers to these questions, and our society as a whole better understands the implications of social media, I feel it is unjust and unethical to use it as a screening tool.
AI and Warfare, Judicial Systems & Beauty Contests
It is interesting to consider the application of AI technology to something such as a beauty contest. I think this is especially interesting because beauty is a relative idea which varies between people, time periods, and geographic regions. This idea was tested in the AI beauty contest, where contestants submitted photos and were judged by a machine rather than a person. Surprisingly, instead of eliminating bias, the algorithm showed racial prejudice against people with darker skin tones.
This instance of algorithmic racism is interesting to me because I think it highlights the institutionalized injustices many people have become blind to. During our conversation about this, I brought up the way wealth is distributed in a country such as the United States, with White people holding a disproportionately high amount of our country's wealth.
What does this have to do with the AI which preferred light skin over dark in the beauty contest? I think it has to do with the inherent advantage that white people have in our society. The AI is only as good as the data it is being fed, and if the majority of the data that exists was produced by and for white people, who have greater access to technology, then why does it surprise us that the AI would be affected by that trend? This problem becomes more interesting when you think of how a technologist of the future might attempt to adjust this bias. Would we, for example, train the AI to make certain adjustments to the data set? I fear that doing so would force today's views on inclusivity onto the machine, and bias it even further. This is dangerous because human society has always been open to change; take for example the progress made for the LGBTQ community in the last 50 years. If we had hardcoded the perspective on inclusivity of 50 years ago into our society, we would not have many of the protections queer people have fought for. Until we can understand how to create a system which can adjust bias and open itself to adaptation, I feel it is irresponsible to apply AI technologies to relative problems such as beauty contests.
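To make the idea of "adjusting the data set" concrete, here is a minimal sketch of one common mitigation technique, inverse-frequency reweighting, which gives under-represented groups more weight per sample so that each group contributes equally to training. The function name and toy data are hypothetical, and this illustrates only the mechanics, not a solution to the deeper problem above:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each sample inversely to its group's frequency, so an
    under-represented group counts as much in total as a dominant one."""
    counts = Counter(labels)
    total = len(labels)
    # A group seen half as often gets twice the per-sample weight.
    return [total / (len(counts) * counts[label]) for label in labels]

# Hypothetical, deliberately skewed training set: 4 samples from group "A",
# only 1 from group "B".
groups = ["A", "A", "A", "A", "B"]
print(inverse_frequency_weights(groups))
# -> [0.625, 0.625, 0.625, 0.625, 2.5]; each group now totals 2.5.
```

Notice that even this simple adjustment encodes a value judgment about which groups ought to be balanced, which is exactly the kind of hardcoded perspective I worry about above.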
Government Surveillance
An interesting argument to consider within the topic of government surveillance is the idea that "you shouldn't worry about government surveillance unless you have something to hide". This argument aims to pacify law-abiding citizens and create an environment where it is permissible for the government to collect data which we would currently see as private. There are many ways in which this argument could be countered, but one of the most compelling I can think of is to point out the relativity of what it means to have "something to hide". To demonstrate this point I will use the example of a gay man in the United States of America and in Saudi Arabia. In the US, being gay is largely accepted and legally permitted as a socially acceptable lifestyle. While someone might want to keep information related to their sexual orientation private, there is no true fear of repercussion from the government. This same case does not apply, however, to a country such as Saudi Arabia, where information related to someone's sexual orientation can be dangerous, and potentially life-threatening. In a world where personal information can have different implications depending on the environment we are in, it would be irresponsible to take anything but a conservative approach to maintaining users' privacy on the internet.
Blockchain
Blockchain is a complex technology with applications in a wide range of fields, especially cryptocurrency. Cryptocurrency, by its very nature, seems to be an appealing application for the future, and blockchain seems to be an ideal technology to implement it with, primarily because of the idea of a universal ledger which cannot be forged or doctored. I am interested in analyzing additional considerations for the implementation of blockchain for a universal currency.
An advantage is that blockchain offers a decentralized and possibly more secure mechanism for financial transactions to be processed. If the entire economic system were to shift and embrace a wholesale implementation of the technology, its decentralized nature would create an immutable ledger which would be hard to falsify or destroy. It could also shift the dynamic of power which has been unjustly wielded by certain groups throughout human history, and offer a truly democratized, technological system for finance.
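To illustrate why such a ledger is hard to doctor, here is a minimal sketch of the hash-chaining idea at blockchain's core. The transactions are made up, and a real system adds signatures and a consensus mechanism such as proof-of-work on top of this:

```python
import hashlib

def block_hash(index, prev_hash, payload):
    """Hash a block's contents together with its predecessor's hash,
    so changing any earlier block invalidates every later one."""
    data = f"{index}|{prev_hash}|{payload}".encode()
    return hashlib.sha256(data).hexdigest()

# Build a toy three-block ledger.
chain = []
prev = "0" * 64  # genesis predecessor
for i, payload in enumerate(["alice->bob:5", "bob->carol:2", "carol->alice:1"]):
    h = block_hash(i, prev, payload)
    chain.append({"index": i, "prev_hash": prev, "payload": payload, "hash": h})
    prev = h

def verify(chain):
    """Recompute every hash; any doctored payload breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if (block["prev_hash"] != prev or
                block_hash(block["index"], prev, block["payload"]) != block["hash"]):
            return False
        prev = block["hash"]
    return True

print(verify(chain))                     # True
chain[1]["payload"] = "bob->carol:200"   # try to doctor the ledger
print(verify(chain))                     # False: tampering is detectable
```

Because each block's hash depends on its predecessor's, altering any historical entry breaks every hash after it, which is what makes the ledger effectively immutable.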
A disadvantage is that blockchain, as commonly implemented, is very energy intensive, and as we move into a world where sustainability becomes an increasingly important consideration, it is unclear whether this could be supported. While current financial systems are undoubtedly reflective of a system of injustice, altering this system to suit a technology with such high power consumption needs could be catastrophic for the planet, and for our ability to properly steward it.
Internet of Things
The Internet of Things describes the increasingly common practice of adding technical sophistication to objects which are part of everyday life. Examples include smart thermostats, smart refrigerators, smart lights, and many other objects which are now equipped with sensors and network connectivity to make them “smart”. Internet of Things devices are becoming much more commonplace, with most manufacturers adding network connectivity and sensors to even the more affordable models in their product lines, and companies like Google and Apple now offer services to connect products together to provide consumers with a more fluid and enjoyable experience in their “smart” homes.
This is a particularly interesting topic for me, as I can clearly see distinct advantages and disadvantages to adopting this technology in our personal and professional lives. I will discuss the advantages and disadvantages to consumers specifically, as I think they are the most pronounced and warrant the most attention as this technology continues to pervade our homes and affect the quality of human life.
What are some of the advantages of IoT? This is an easy question to answer. IoT offers the chance to reduce the frustration associated with everyday life, and frustration is something felt by nearly every person multiple times a day. Say you commonly drink tap water filtered by your refrigerator. New LG model refrigerators not only offer this service but will also sync with your Amazon account to order the correct filter so that it arrives right when the current filter expires. This service removes the frustration of having to do this manually, and of possibly going without a filter for a time. In addition to reducing frustration, these products offer the chance to increase quality of life for everyone, especially people who are in need of assisted living. Applications of this technology in retirement homes, and for people who are unable to complete all routine tasks themselves, would be life-changing, as it frees someone to live with these tasks completed seamlessly in the background.
What are some of the disadvantages? The disadvantages are just as easy to articulate and are mostly related to the security of data and how it is transmitted. A discussion of the practices and standards used by different companies to secure their data would be lengthy and warrants its own post. I will, however, discuss the fact that different IoT devices can be manufactured by different companies and have different levels of security and sophistication. It might seem harmless for the temperature of your house to be insecure, but this changes in a system where someone is using many different IoT devices. If patterns were detected and used to infer aspects of someone's lifestyle from an accumulation of smaller data such as temperature, lighting patterns, and refrigerator activity, the use of the data becomes dangerous.
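As a hypothetical sketch of how such an inference might work, consider combining three individually "harmless" signals; the device readings and thresholds here are invented purely for illustration:

```python
# Hypothetical readings from three separate, individually harmless devices.
thermostat_setpoints = [62, 62, 62, 62]  # held low all day (energy-save mode)
lights_on_events = 0                     # no lighting activity
fridge_door_opens = 0                    # fridge never opened

def probably_away(setpoints, light_events, door_opens):
    """Each signal alone is ambiguous; together they strongly suggest
    an empty house -- exactly the inference an attacker would want."""
    setback = all(temp < 65 for temp in setpoints)
    return setback and light_events == 0 and door_opens == 0

print(probably_away(thermostat_setpoints, lights_on_events, fridge_door_opens))
# -> True
```

No single device leaked anything sensitive on its own, yet together they effectively advertise an empty house.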
I think that overall, IoT is the way of the future, and it represents the ease of use and improved quality of life many people yearn for. I do, however, think that, like many other technologies we have embraced quickly, there are many dangers associated with IoT, and that we as a society and as individuals would do well to be more thoughtful about the ways we let it into our lives, and about what we view as acceptable boundaries for our privacy.
Social Media and Fake News
I feel that one of the most important aspects to consider when reflecting on social media and fake news is the idea of social distortion. When assessing complex issues such as fake news in social media you must marry two very discrete ideas: internet technology and human interaction. While social media is clearly a point of convergence between these two concepts, I wonder how intentionally they were connected at inception and during development. Further, I wonder about the process the engineers of technologies such as Twitter and Facebook used to develop these complex networks of human interaction. Did they think of the impact they would have on society? About the need for responsible transmission of information?
My thought is that many of the decisions used to construct these networks were driven by marketable features that would enhance the user experience, and not by consideration of the responsibility these sites would bear in an increasingly digital age.
Any analyst can see that the power of social media lies in the way certain stories can have far greater reach than previously imagined: how something can go “viral”. We do not, however, ask the average user to learn about responsible sharing, or about the responsibility they take on by using these services. To post about a controversial topic on Facebook is to accept partial responsibility for the social movement one has weighed in on, and to understand that by sharing a story one has helped to amplify a message. We must be sure that people understand the part they play in the propagation of certain "truths".
I feel that if the dissemination of fake news on social media is something we want to combat, then we must start by educating people on issues related to social distortion. For example, if Susan knew how something she shares could affect those who read her newsfeed, would it alter what she shares? And if I understood the way Susan's post affected my beliefs and the social dialogue around me, would I be more objective in my analysis of these posts, and more diligent in following up? I think the answer to both of these questions is yes. Further, I believe that if we begin conversations about social distortion and its effect on the propagation of fake news, we could see a real initiative by users to make a change.
Autonomous Vehicles
Autonomous vehicles are the technology through which we will have self-driving cars: cars which will ideally be able to drive independently of a human pilot. This is a broad topic in which there are still many uncertainties. Our conversation was framed around the ethics of these cars, particularly the software decisions made by engineers which could, in certain situations, decide between life and death. A driver makes on average 200 decisions per minute. Speaking broadly, these decisions should be easy for a sophisticated enough technology to make, using analysis and measurement to decide where there is open space and how fast the car should move. There is, however, a class of decisions which we as humans are uncomfortable allowing a computer to make. One of the most concrete examples is a decision where a car is set to collide with two bystanders, an elderly woman and a young girl. If the car continues it will kill both; alternatively, the car can decide to move one way or the other and select which of the two will live. How do we create a system that can fairly assess this situation and come to a conclusion? And how do we make these decisions without the danger of creating software capable of discrimination?
Both of these questions are interesting, and I do not have any answers. I do think, however, that it is useful to reflect on the tenet of human trust, and how often we engage it every day on the road when dealing with other drivers. This idea does not naturally transfer to technology such as automated vehicles. Many people in the class were certain that if they were to use a self-driving Uber they would need a report given to them in advance which told them how the car would react to certain situations: answers to questions about whether the car would prioritize their life over others' or its own safety. The most interesting component of this thought to me is that we have no such system for human Uber drivers. When we get in the car we assume that they will make careful and informed decisions to safeguard our collective well-being. Why, when we eliminate the human component (most likely a stranger), do we feel entitled to such a wide array of information? Is human trust really that trustworthy?
Ethical Ethics
It is interesting to consider the ethics of invading someone’s privacy, even with the intention of improving their services. When considering whether it would be ethical to infect someone’s computer with a virus, even with the intent to counteract another, more detrimental virus, it is important to consider Kant’s categorical imperative. Kant states that one should act only in a way that one could will to become a universal law. Under this view, intention cannot excuse an action. Even if one were to justify the act of infecting someone’s computer by stating that it will fix another virus, that action would have to be morally permissible for everyone according to Kant’s categorical imperative.
This means that by infecting someone’s computer, you are asserting that it is ethical for everyone to do the same thing. This obviously becomes an issue, because if everyone operates under the assumption that they are allowed to infect people’s computers and invade their privacy whenever they feel it is warranted, the user’s experience is negatively impacted. According to utilitarian ethics, an action is ethical insofar as it promotes happiness and reduces suffering for the greatest number of people. In this hypothetical situation, the individuals hacking computers are the only ones maximizing their happiness. The users, however, suffer as their privacy is invaded and their control over their systems is lost. It thus becomes clear that the number of users who will suffer greatly outweighs the few who find happiness doing what they want.
Reflection on Civic Learning
Upon the completion of the first semester of my Computer Science Senior Seminar, I was asked to reflect on how my involvement with Open Source communities has affected my perspective on civic learning. I think that in today's rapidly evolving world it is the duty of institutions of higher learning to contextualize fields of study within the liberal arts. It has been argued that this is explicitly addressed through distribution requirements and the encouragement provided for students to pursue classes outside their major. I think that this is an antiquated perspective, which is not sufficient for today's world, and certainly will not address the problems of tomorrow.
It is important for institutions of higher learning to address civic learning directly through the curriculum of their courses, and to produce students who are not only knowledgeable within their major but can also understand the ethical implications of their field. For majors such as computer science, this is paramount. We live in a world in which technology has developed faster than our ability to understand its social implications, and we see examples of this all the time in compromised personal information and confusion around a person's right to privacy and anonymity. We live in a world where a majority of people use tools developed and understood by only a small number of people. It is the responsibility of the institutions that produce these developers to instill an understanding of ethics in these scholars.
Participation in a project which focuses on service to Open Source communities is an innovative technique for instilling this understanding of civic learning. Students learn a type of accountability which is very different from what they have been cultivated to understand: rather than working to the standard of one faculty member, they work to address real issues relevant to a real community working in service of a need. I feel that the development of courses which cater to this style of civic learning is the key to producing computer scientists who are educated holistically in the liberal arts.
H/FOSS Success Stories
When reflecting on the history of H/FOSS you will quickly realize it is nothing new; the opposite is actually true. H/FOSS can be found at the core of computing's modern history. Many aspects of the world wide web and other "early technologies" were largely open source and depended on the contributions of users and developers from many backgrounds. When thinking about H/FOSS success in terms of how lucrative an organization might be, it is a short jump to RedHat. RedHat was able to successfully monetize a packaging of Linux based on a subscription model while keeping all code open source and maintaining a community of volunteer developers.
The question I am hoping to address is: why aren't there more stories like RedHat's? My answer is that as the world of technology changes, and the models of successful corporations adapt, RedHat and attempts like it will be left in the dust. The absorption of RedHat into IBM, if nothing else, evidences this point. The rationale is that, comparatively, RedHat is not nearly as lucrative as Microsoft, Oracle, or Amazon; it subsists due to its age and the advantage it gained as an early entrant. Take for example a new community which hopes to market an open source project which could compete with company X. There are two potential outcomes: (a) the model flops because the service is not on par with company X, or (b) the service is so unique and successful that company X quickly absorbs the young project and identifies a way to incorporate it into their proprietary software. In short, the series of events that has led to today's culture of technology is inhospitable to the idea of lucrative open source business models without the support of a large parent company.
H/FOSS Motivations
Why is it important for higher education to incorporate civic learning? This question is becoming increasingly relevant as small liberal arts colleges struggle to defend the value of their education with tuition prices hovering around $68,000 annually. Colleges need to adapt and define a 21st-century skill set. I think this question is especially relevant to a field such as computer science. Computer science can be seen as a rigid and inflexible pedagogy, which teaches core techniques and algorithms central to understanding today's state of the art, and hopefully to contributing to tomorrow's. The incorporation of projects such as H/FOSS and digital humanities turns this concept on its head. The value of studying computer science at a school such as Dickinson College in Carlisle, PA is to contextualize this abstract and scientific field within the liberal arts: to empower students to emerge from their courses with a deeper understanding of the science, as well as its applications in the world.
In many ways, H/FOSS embodies the values at the core of the liberal arts in a new and innovative way: using your technical training and skills to volunteer toward a greater cause, and for the benefit of others around you. Students engage with communities of volunteers around the world to work on projects with applications such as teaching people to code, acting as a resource for humanitarian aid, and providing hospitals in developing countries with innovative medical record keeping software. H/FOSS goes beyond the shallow definition of community service and volunteering; it enables students to assist others in the development of software, and encourages agency in vast networks of volunteers around the world. To incorporate H/FOSS into a curriculum is to embrace the future, and to understand how liberal arts colleges will need to adapt to survive in today's increasingly accessible world.
Software Development Process
There are many different models for software development such as waterfall, agile, and spiral. These methods have been tested and adapted for use in software design over many years and can be found in practice in many instances within the industry. An interesting question would be to analyze how these methods are implemented in FOSS projects, where the community may not be as structured as a company, and there may not be such a clear-cut hierarchy.
After more research, I have found that, as you would expect, the development model for FOSS projects is highly variable, and depends largely on the needs of the project and the motivation of the community. Some of the models I have seen are rapid prototyping, extreme programming, and agile development. Rapid prototyping seems to be a common practice in the instances I have investigated. This method is very similar to the idea of code-and-fix, where different developers work on branches of the project with less guidance and direction than a strict model such as waterfall would provide.
I think this is caused by the fact that members of a community are volunteers, and there is likely very little stake or motivation to follow strict development models that have been imposed. Take OpenMRS, for example: I have had a very hard time identifying any guidance on a development model or methodology that they recommend for their community members.
An interesting follow-up question would be to ask how the model followed affects the productivity of a project. I suspect that the models followed by projects such as the one mentioned above are not the most efficient, and that more strict and concrete models would lead to faster development, and perhaps more solid structure; so why do communities continue to function the way they do? One answer is that members are volunteers, and if you impose too much structure they may be discouraged from contributing and spend time on other projects where they have more freedom. But the largest factor is likely that the volume of contributors to these projects is so high that it makes up for the inefficiencies in the development model.
Software Testing
Verification and validation are two methods within the field of software testing used to determine whether software meets the technical requirements stipulated, and also the needs of clients and end-users. These two concepts are integral to the idea of software testing, but are often confused. They represent two important but radically different questions: one addresses whether the actual components and software are functionally operational, and the other addresses whether the software and components are functionally useful and meet the needs of the consumer.
Verification can be framed as the question "did we build the software right?" or "does it meet the specifications?". It is the process of evaluating a system or its components to determine whether they meet the conditions set to guarantee successful completion. Validation can be framed as the question "did we build the right software?" or "is this what the customer wants?". It is the process of determining whether software satisfies the requirements imposed by the customer. It is interesting that concepts this important can be presented in ways which make them so difficult to understand, even for veteran programmers and computer scientists. This could perhaps be linked to their relatively theoretical existence in a field where many concepts and practices are grounded in clear real-world applications. A question that I have is about the utility of considering these ideas separately. Does it ever make sense to only verify or only validate a piece of software? I argue that it does not. Software that functions perfectly but has no application to the goal is just as unusable as software that is non-functional. With that in mind, why not leave these confusing concepts behind and treat them as one interconnected means of evaluation?
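As a rough sketch of the difference, consider a hypothetical loan calculator; the reference value and the customer requirement below are invented for illustration:

```python
def monthly_payment(principal, annual_rate, months):
    """Amortized loan payment -- the hypothetical 'software' under test."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Verification -- "did we build the software right?"
# Check the implementation against its technical specification: the
# standard amortization formula, via a known reference value.
def test_verification():
    assert abs(monthly_payment(100_000, 0.06, 360) - 599.55) < 0.01

# Validation -- "did we build the right software?"
# Check the behavior against what the customer actually needs: say the
# client requires that the payments collected always cover the principal.
def test_validation():
    payment = monthly_payment(5_000, 0.04, 24)
    assert payment * 24 >= 5_000

test_verification()
test_validation()
print("verification and validation checks passed")
```

The first test could pass even if the customer wanted a completely different product, and the second could pass even if the formula were subtly wrong, which is why the two questions are usually asked together.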
Software Design Patterns
Software design patterns are similar to a game plan used to determine the layout of a software project. If you refer back to an earlier post, you will remember I discussed architecture patterns and implementation details. To me, a software design pattern is the point between these two ideas that helps to lay the practical foundation for a project. A design pattern could be expressed in the way classes interact, or in the use of an interface or superclasses: the means by which the basic architecture is guided toward the implementation details. It is interesting to look back at the discussion of the importance of implementation details over software architecture, because with this added insight I would say both are less important in practice than the design pattern that is selected, since that lays the groundwork both for the skeleton of the project and for the way in which minute details interact with one another.
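As a sketch of what I mean, here is the classic Strategy pattern in Python. The class names are hypothetical, but the shape shows how an interface sits between the architecture and the implementation details:

```python
import zlib
from abc import ABC, abstractmethod

class CompressionStrategy(ABC):
    """The interface: the rest of the project depends on this,
    never on a concrete implementation."""
    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class ZlibStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

class NullStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data  # pass-through, handy for testing

class Archiver:
    """High-level class wired to the interface, so swapping strategies
    never touches this code -- the pattern bridges the architecture
    and the implementation details."""
    def __init__(self, strategy: CompressionStrategy):
        self.strategy = strategy

    def store(self, data: bytes) -> bytes:
        return self.strategy.compress(data)

print(len(Archiver(ZlibStrategy()).store(b"hello " * 100)))  # compressed, small
print(len(Archiver(NullStrategy()).store(b"hello " * 100)))  # 600, unchanged
```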
All of this raises an interesting question about the importance and intentionality of patterns: when are design patterns decided? The patterns which are popular today exist because they are time-tested standards which have continually proved valuable. What I wonder is whether many developers walk into a project with a set architecture, design patterns, and planned implementation details. I am going to assume that the implementation details and design patterns develop organically in a project's early stages, while most of its structures are still very malleable and impermanent.
Software Flaws
In a series of videos related to software flaws, Jeremy Walker and Max Kanat-Alexander discuss basic rules and methodology to avoid common flaws in code and projects. These flaws can be summed up in three basic concepts: writing code that isn't needed, not making the code easy to change, and failing to design and develop incrementally. These types of flaws reflect common errors seen in a wide variety of code and projects, and are explored in Kanat-Alexander's book Code Simplicity.
These flaws each cover their own type of code or design, but they are closely interconnected. For example, code that isn't needed in the initial part of the project likely won't be easy to change in the future, and will require a larger investment of time later; in a large project, this could mean significant amounts of wasted work-hours. The interconnectedness of these problems makes the methods to avoid them incredibly relevant for FOSS projects such as OpenMRS. The software of these large applications is increasingly complex, with many different contributors working on a variety of areas and functionality. Following these simple rules would help to mitigate the time that would likely be wasted adjusting code to meet new needs and fit the general style of the project.
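As a small illustration of the first two flaws, compare a speculative version of a function with a simpler one that does only what is needed today. Both functions are hypothetical, and this is only a sketch of the principle:

```python
import json

# Flawed: speculative code that isn't needed yet. Every unused parameter
# and branch must be maintained, and makes the code harder to change.
def save_user_speculative(user, path, fmt="json", encrypt=False,
                          compress=False, legacy_mode=False):
    if legacy_mode:
        raise NotImplementedError("no legacy callers actually exist")
    if fmt != "json" or encrypt or compress:
        raise NotImplementedError("no caller has ever asked for these")
    with open(path, "w") as f:
        json.dump(user, f)

# Simpler: write only what today's requirement calls for, and extend it
# incrementally when a real need appears.
def save_user(user: dict, path: str) -> None:
    with open(path, "w") as f:
        json.dump(user, f)

save_user({"name": "ada"}, "user.json")
```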
I wonder how the style, formatting, and contribution guidelines in these large projects might help to mitigate the prevalence of these flaws. By providing clear documentation and listing expectations, could a community control the type of code being contributed? How effective are the methods used to standardize this process? I believe that if these rules for avoiding flaws were clearly presented to contributors at all levels, they could be an incredibly effective means to avoid flaws and streamline development.
FOSS Licensing
Licensing is an important part of many projects, especially online projects such as FOSS. Licensing relates to the distribution of copyrighted material and to ensuring that you have adequate legal protections for the distribution of your work. These legal protections are integral to the maintenance of FOSS projects, and to their ability to exist alongside similar proprietary products with the support of large corporations behind them. For a large software company, legal protection of its patents and copyrights is a routine part of the organization; how else would a company ensure that competitors don't simply imitate its work? FOSS projects shouldn't be considered any different; someone starts a FOSS project because they believe there is a need for software that fulfills a certain purpose to be freely available. If FOSS projects didn't take adequate legal precautions to protect themselves, there would be nothing to stop a company which sells similar proprietary software from acquiring the project and incorporating it into their software. To prevent this, successful FOSS projects use licensing as a source of legal protection.
Two typical forms of licensing used by FOSS projects are the MIT license and the GPL. Both of these options offer simple legal protections for the distribution of the product which protect the author and contributors, but also help the project continue to exist without fear that it will be absorbed into a larger company. An important feature of both the MIT and GPL licenses is that they are relatively easy to use, and thus accessible to non-lawyers, which is what allows communities of volunteers to create licensed products without the cost of legal teams.
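As a sketch of how lightweight this is in practice, many projects mark each source file with a machine-readable SPDX license identifier alongside a LICENSE file at the repository root; the project name and year below are placeholders:

```python
# SPDX-License-Identifier: MIT
# Copyright (c) 2018 Example Project contributors
#
# The SPDX tag above is a machine-readable license marker; the full MIT
# license text would live in the project's LICENSE file.

def hello():
    """Any file carrying the header above is unambiguously MIT-licensed,
    with no lawyers required."""
    return "hello from a liberally licensed file"
```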
An important question, however, is whether licensing is sufficient to protect a FOSS project. FOSS projects by their very nature have all their components out in the open, available for all to look at, edit, and utilize. A license such as the GPL works to protect the project by asserting that all work contributed to the project will continue under the GPL. This idea, however, could be challenged by the narrow scope of copyright. Say a large corporation creates additions to its software which have many similarities to a popular FOSS project: how does the scope of the GPL affect that situation? And given the difference in legal capabilities between a large corporation and a volunteer-based community, does it even matter?