infoScavenger
Senior Seminar Reflection
5/11/2020 Monday
The senior seminar class for which I was writing these blog posts has come to an end. The class was eventful and, I feel, prepared me well for a career in technology. I was able to learn about several industry concepts and procedures, have nuanced discussions about relevant tech issues and begin contributing to the open source community.
In the class I was introduced to concepts which are essential for anyone looking into programming as a career. I learned about version control systems and was able to use operations like merge, pull and fork that are present in the popular VCS git. We also learned about and used bash scripts in Linux and became more familiar with the open source operating system Ubuntu. Agile development was also one of the focuses of the class, and we would have scrum meetings to discuss the progress made in our research project. Having been introduced to these technologies in class should help in my transition to a career in software development.
Every week we would have discussions on topics ranging from the ethics of driverless cars and the societal impacts of automation to the privacy/convenience tradeoff of data collection, among others. While I had taken an ethics class before, I had not thought of applying ethical principles like Kantian ethics and utilitarianism to decide the morality of these actions. Questions such as whether an autonomous vehicle should prioritize saving the life of its passenger over pedestrians were pondered in class. I also got a chance to listen to what other students had to say about these topics. Sometimes these opinions differed from my own, which made me reconsider my position on the issue.
Because of the seminar I have a better understanding of open source projects and I am considering contributing to some of them. Through the presentations of some of my classmates, I came across a few exciting open source projects like Jenkins, Godot and Free Code Camp which I now want to pursue. I, on the other hand, was working on a research project investigating the viability of certain handover algorithms. I definitely learned a lot about telecommunications and about using the software ns3, but I wasn't able to come up with any conclusions that would advance the field. However, I remain optimistic that someone might find my work helpful in their own investigations.
All in all, I had a wonderful time taking the class. As most of the computer science seniors graduating with me also took the class, the seminar will also serve as a memento that helps me recall and cherish my time at Dickinson.
Recognizing Evils of Facial Recognition
Tuesday, 2/11/20 Facial recognition can be used to identify people in a group. It can increase convenience, as with Amazon Go stores -where shoppers can just grab the goods they need and leave without having to wait in lines-, validate identity, as with face unlock in new phones, enforce laws such as those against jaywalking by sending tickets to an offender's residence, as in Shenzhen, and help identify and rescue lost and kidnapped children.
Facial detection has been used by Amazon to improve convenience by allowing customers to just pick up the item they want to purchase and walk away from the store, without having to wait in lines. The customers are then charged after exiting the store. Amazon opened its first cashier-less grocery store in Seattle in 2018 and has plans to open similar stores throughout the country.
After the iPhone X introduced Face ID, which employs facial recognition to unlock the phone, the feature has become ubiquitous. This has made it possible to unlock devices just by looking at them and has enabled phone manufacturers to get rid of the cumbersome fingerprint scanner altogether.
In places like Shenzhen, facial recognition is being used to send tickets to jaywalkers instantly, thereby reducing the number of such offenses. In India, 3,000 missing children were located in 4 days by using facial recognition technology.
At the same time, facial recognition can be used by the state to act against government whistleblowers and invade people's privacy. There are also instances where facial recognition has wrongfully classified people, especially those who are not white, which raises the question of whether it should be used in policing and law enforcement.
During the Hong Kong protests, the government used facial recognition to crack down on protesters, forcing them to wear face masks. Some masks even projected images onto a person's face to fool these facial recognition algorithms.
I think that facial recognition is not morally right or wrong in itself; its morality depends on how it is used. If facial recognition is used to wrongfully convict someone then it is wrong, but if it increases convenience or quality of life then it is right. Having said that, under Act or Rule Utilitarianism facial recognition appears right, but according to Kantianism it is morally questionable.
https://www.nytimes.com/2019/07/26/technology/hong-kong-protests-facial-recognition-surveillance.html
https://www.independent.co.uk/life-style/gadgets-and-tech/news/india-police-missing-children-facial-recognition-tech-trace-find-reunite-a8320406.html
https://www.independent.co.uk/news/world/asia/china-police-facial-recognition-technology-ai-jaywalkers-fines-text-wechat-weibo-cctv-a8279531.html
https://www.theverge.com/2018/9/17/17869294/amazon-go-store-chicago-cashier-less
Autonomous Vehicles
Morality of Self Driving Cars Tuesday, February 4, 2020 Autonomous vehicles will need to constantly make time-sensitive decisions and might even be asked to prioritize someone's life over another person's life. In essence, they might be asked to decide the worth of a life. What ethical model should be used, whose life is to be saved and who is to make these decisions are important questions that need to be answered before autonomous vehicles become ubiquitous. When deciding on the ethical model that should govern AVs, given the complexity of models like Rule Utilitarianism and Kantianism -where the results of actions need to be extrapolated to the entire population- it seems most practical for AVs to use Act Utilitarianism to make their decisions. Even if Act Utilitarianism is employed, dilemmas like whether to save a baby over an old man, a criminal over an upstanding citizen, or a passenger over pedestrians naturally arise.
During class discussion, our group decided that factors like age, criminal history or medical records should not be considered by AVs when making decisions. Additionally, it was pointed out that an AV would waste valuable time if it were to look up data about someone against a registry. One factor that should be considered is survival odds, i.e. which maneuver would ensure that the most lives are saved. However, if the AV still needs to decide whose life to save over another, then I think the decision should be set by legislation rather than by the manufacturer or the user.
If the manufacturer were to make the decision, they would most likely act in their self-interest and program the AV to make the most profitable decision even if it resulted in a greater loss of life. If the user were allowed to change these settings, then these decisions would not be universal, and the unpredictability could result in more accidents. By having these settings dictated by legislation, universality is ensured and more people are likely to agree with the proposed actions.
Ethics in Technology
Examining the ethics of the Nachi worm under a Utilitarian and Kantian lens Wednesday, Jan 29, 2020
Ethical theories can be used to evaluate the morality of actions and decisions made in the tech sector. Depending on the theory employed, an action like the deployment of a neutralizing worm in response to a malicious worm can be viewed as either ethical or unethical. These ethical theories include, but are not limited to, Act Utilitarianism, Rule Utilitarianism, and Kantianism.
The Blaster worm had propagated through the internet and infected numerous devices, forcing them to reboot every few seconds. The Nachi worm, which was used to counteract the Blaster worm, would infect the same computers and disable the Blaster worm. However, an unintended consequence of releasing the Nachi worm was that it slowed down and even disabled some networks, like that of Diebold ATMs at two financial institutions. In most networks, it was able to locate and disable the Blaster worm.
The theory of Act Utilitarianism judges an action based on the net utility it produces; the ethicality of an action has to do with its results and consequences. Utility in Act Utilitarianism is an attempt to quantify happiness and correlates with the number of people the action makes happy and the number of people it brings sadness to. A positive net utility would imply that the action does more good than harm and is therefore ethical. It can be argued that the Blaster worm was causing many people discomfort, and because not many computers were inadvertently harmed by the Nachi worm, from an act utilitarian point of view the deployment of the Nachi worm was ethical.
Rule Utilitarianism is similar to Act Utilitarianism except that it considers the net utility generated by the extrapolated universal adoption of the action, i.e. what if the action were performed by everyone, or every time the triggering situation occurred. If a worm like the Nachi worm were deployed every time a malicious worm infected a network, the network would be clogged with traffic, causing harm to people and leading to negative net utility. Under Rule Utilitarianism the deployment of the Nachi worm would therefore be considered unethical.
Kantian ethics relates not only to the consequences of an action but also to its intentions. To even begin to be considered moral, an action has to be motivated by good intentions. Secondly, the action has to be consistent with the Categorical Imperative, which consists of two formulations. The first formulation states that for an action to be moral it should be possible for that action to be performed universally. The second formulation asks that people be treated as ends and not as means to an end. While the intentions behind the deployment of the Nachi worm were pure, the action violates the two formulations and is thus unethical by Kantian standards. As in the example presented for the Rule Utilitarianism case, if a worm were deployed against every malicious worm, the networks would be congested to the point where no new worms could be deployed; the universal adoption of the action would prevent the action from being performed, violating the first formulation. Secondly, as network users were not notified that their computers were receiving the Nachi worm, they were being treated as means to an end, which violates the second formulation.
Senior Seminar Part 1: A Reflection
Friday, 12 December 2019
Learning about free open source software (FOSS) has definitely cultivated in me a desire to help the community. On the surface, participating in an open-source project might seem like an altruistic activity from which participants don't really have a lot to gain. Through taking this course I learned that involvement in FOSS can be very fulfilling and a great learning opportunity. Honing your coding skills by participating in a project that is useful to you, while receiving feedback and comments from experienced developers, can be more effective than learning on your own or in a classroom setting. In a classroom setting, some of the assignments might be disengaging and the payoff for your work might not be immediate. But FOSS allows contributors to work on the parts of the software that they genuinely find useful. The class has also made me reconsider my position on starting open source projects. Not only did the class clearly outline the steps to create a FOSS project, but it made me see the benefits of OSS from the perspective of businesses. I was under the impression that launching a FOSS project would surely mean that your project would be copied by another company that would make money off your labor. But by making your software FOSS you are reducing the number of bugs in your code, opening your project to new ideas and providing new programmers with a learning opportunity.
The class has also instilled in me intellectual curiosity. In the class, we learned about several practices and skills employed in the software industry, like version control, design paradigms and project management structures. This has made me more excited about pursuing a career in computer science. The research project that I am working on has also made me think about the work I'll be doing after undergrad. I am considering going to graduate school and working in academia, a path the honors project has encouraged. I have always been interested in tech news and discussing the implications of technological advancements. Talking to Tim Hwang, who has followed the issue of deepfakes closely, and listening to his opinions on the issue was the highlight of my semester. I am looking forward to discussing issues like the ethics of self-driving cars and the impacts of automation, among other topics, in the Spring semester.
FOSS Tools, Structures and Communications
September 17, 2019 When contributors decide which open source project to work on, probably the most important motivator is the product itself. A developer is more likely to work on a project that will be of use to them down the line. For this reason, demos, screenshots and video walkthroughs of the software can attract many contributors. Apart from a minimum viable product, good documentation is paramount to attracting new contributors. The documentation can help developers learn about the software and convey the maintainer's expectations. Without good documentation, new developers can feel lost and are discouraged from working on a FOSS project.
It is very important to convey the mission statement and values of the project. A FOSS project can evolve into something very different from what was initially conceived. Good documentation and a clear mission statement make sure the project doesn't stray too far from its intended goal. Additionally, FOSS projects need to have a good bug tracking/ticketing system. These systems can be used not only to highlight defects in the code but also to track unsolicited patches and new feature requests.
Finally, projects need to have good communication channels between contributors and maintainers. These communication channels can be either synchronous or asynchronous. Synchronous forms of communication include real-time communication via platforms such as Internet Relay Chat (IRC), video/audio chats, or Slack. Asynchronous channels of communication include emails, a wiki page, documentation, or blog post. Synchronous communication provides instant, personalized feedback while asynchronous channels can provide more in-depth information that is readily available.
Another Red Hat?
Can the Red Hat model work again? Criticism of FOSS Tuesday, December 10, 2019 Red Hat Enterprise Linux is probably the most successful FOSS project. Although the Red Hat flavor of Linux is free and open-source, Red Hat customers are able to subscribe to a stable release of Red Hat Linux. Red Hat bundles software and adds new features to the OS, so users don't have to worry about the software being buggy and error-prone. While this packaged OS is also available for free, the subscription comes with customer service, installation and repairs. Red Hat works because it is able to remove complexity and provide ease of use and stability to its customers.
Although the Red Hat model worked wonders for Red Hat as a company, pundits have pointed out that it won't work again for other software companies. Because the model does not allow adequate funding for research, ongoing investment and sales, software under this model can have lackluster sales. Companies adopting this model have minimal product differentiation and limited pricing power. Additionally, successful FOSS companies can have their features stolen, making it very difficult for such companies to compete with proprietary software companies.
FOSS philosophy has also faced criticism beyond its inability to create successful businesses. Some people see companies' push for FOSS as exploitation of free labor. In fact, many software companies require new hires to have been involved in FOSS to even be considered for job positions. There is also the argument to be made that contribution to FOSS is limited to only the privileged. As FOSS contributions do not pay developers any money, developers must have another job or be affluent enough to support themselves. Additionally, developers would need ample free time outside of their primary means of income to contribute. This makes FOSS inaccessible to many women who have to spend a significant portion of their time taking care of their children.
Allowing employees to work on FOSS projects during company hours, and thus be compensated for their contributions, would be a workaround to the problems discussed. This would keep the FOSS community thriving, and companies would be able to integrate software components from FOSS projects, worked on by people who aren't employed by the company, into their own software. Time outside of work would not need to be dedicated to participation in FOSS projects.
Motivations for FOSS
Friday, December 6, 2019 Participating in FOSS projects can seem like a good learning opportunity for developers who are looking to gain some industry experience. Legitimate peripheral participation (LPP) describes how newcomers become experienced members and eventually old-timers of a community of practice or collaborative project. Welcoming FOSS projects allow for LPP, which ensures the continuity of the project and also offers new developers a chance to feel like they are working on something substantial, with a real-life impact. The barrier to entry for gaining experience through FOSS projects is also lower compared to other avenues like a job.
Not only do FOSS projects offer clear benefits to contributors, but the FOSS model can also be beneficial to businesses. By adopting FOSS, businesses have access to free labor. FOSS projects can sometimes have millions of collaborators, which businesses wouldn't be able to utilize if they had to pay all of these developers. Making your software FOSS might also present the company in a good light and improve its image. The product that's delivered through the FOSS model is also more likely to be in line with user expectations, as users often contribute towards its development.
Businesses can still remain lucrative under the FOSS model by adopting business models like support sellers, loss leader, widget frosting and accessorizing. In the support seller model, businesses give away the software product but sell distribution, branding, and after-sale service. In the loss-leader model, companies give away open-source software as a loss leader and market positioner for closed software. The widget frosting model allows hardware companies to create better drivers and interface tools inexpensively and still make money from hardware sales. Another model, accessorizing, allows companies to make money off of books, compatible hardware, and complete systems with open-source software pre-installed rather than the core software.
However, if the core developers are unable to perform their roles, then OSS projects can fail. It is essential to promote contributors so that replacements are available. A FOSS project can only be successful if clear channels of communication exist between maintainers and contributors. Turning your software into FOSS also means that businesses lose control of proprietary knowledge and of the direction of the software. Even so, the benefits of making your software FOSS outweigh the cons.
Software Processes
December 3, 2019 In the latter half of the 20th century, software engineers started to notice that most software was not being developed on time, was riddled with bugs and was not meeting customer expectations. This created the need for better processes and methodologies for building software. Software processes are a set of related activities that lead to the production of software.
Some common software processes include Code and Fix, Waterfall, Sashimi, and Spiral. The code-and-fix process is what it sounds like and involves unplanned, uninterrupted coding sprints where errors are fixed as they are discovered. The Waterfall model is a sequential process that involves distinct steps: requirements, architecture, module design, implementation, and operation and maintenance. Each step has a corresponding verification and validation activity: requirements are validated, architectures are verified against requirements, module designs are verified against architectures and requirements, implementation is tested, and the operational system is revalidated through its use. After one stage is completed it is not revisited, and development teams work on only one stage at a time. This process is suitable for very big projects involving many developers, as it requires minimal communication between them, or for projects whose requirements are known to remain unchanged during development. However, software created using this model is rigid and there is no feedback, making the software unlikely to meet customer requirements if those requirements are changing. The Sashimi model is similar to Waterfall except that stages overlap, i.e. requirements can't be completed until the architecture is at least partially explored, the architecture can't be completed until the module design is at least partially explored, and so on. The Spiral process is cyclic and involves identifying the highest-priority problem, evaluating alternatives, identifying risks, developing a solution, and planning the phases of the next cycle.
Agile development is not necessarily categorized as a software process, but agile principles can lead to quicker development times and create software that is in line with customer expectations. It involves prioritizing individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. Scrum is a software development process based on agile principles. In Scrum, work is split into sprints and stories, there is a backlog organized according to priority, developers have flexible roles and meetings are held frequently.
Importance of Testing
Why software testing is important and how to do it Tuesday, November 26, '19 Software testing ensures that software achieves the programmer's intended functionality. It is the process of checking the correctness, completeness, and quality of developed code to ensure that it is defect-free. This is important so that the end-user does not face any expensive or dangerous repercussions due to bugs in the code. In 1999 the Mars Climate Orbiter crashed onto the Martian surface due to a bug in the software that made erroneous conversions between imperial and metric units; this event cost NASA about $200 million. Similarly, the erratic behavior of the software on the Boeing 737 Max -pilots were unaware of the adjustments made by the software, which made the plane automatically pitch down- led to crashes which took the lives of hundreds of people. These events highlight the importance of software testing.
Several principles can be utilized to form effective testing strategies. As it is not possible to test each and every aspect of the code, risk assessment needs to be performed to ensure that the features most critical to the software work. Another useful observation that testers can exploit is that 80% of the bugs are found in 20% of the code; identifying risky modules and testing that part of the code rigorously can save time. However, if similar tests are performed repetitively then no new errors will be discovered. Additionally, even though your software might pass all the tests, bugs could still exist, so more comprehensive tests need to be designed and expectations managed. Another important principle is understanding that software that is 99% bug-free might still not meet business needs or user expectations; it could still be unusable, which calls for early testing. Testing should be started as soon as possible so that the requirements of the software are allowed to change and the software is better able to address client needs. It is also useful to note that testing different kinds of software will differ, as testing is dependent on context.
Testing constitutes both verification and validation. Verification involves checking that the software is consistent with the programmers' intentions, and validation ensures that the software delivered is what the customers want. General testing can be divided into three categories: functional testing, non-functional testing and maintenance testing. Functional testing tests specific actions or functions of the code, while non-functional testing tests aspects like scalability or security. In maintenance testing, tests are performed to check whether the software still functions as intended after changes have been made to it.
Other important subcategories of testing are black-box, white-box, integration, and unit testing. In black-box testing, tests are performed without any knowledge of the internal implementation, while in white-box testing the tester is aware of the implementation of the software. Black-box testing requires more test cases, as some test cases might be redundant, while white-box testing might lead to the pesticide paradox, where similar tests are repetitively performed and no new defects are discovered. Unit testing involves testing individual components or modules, and integration testing examines whether all these modules work correctly together.
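As a minimal sketch of unit testing, the example below uses Python's unittest module to check a hypothetical metric-to-imperial conversion helper (the function name, values and tolerances are illustrative assumptions, not taken from the systems mentioned above):

```python
import unittest

def metres_to_feet(metres):
    # Unit under test: a small, self-contained conversion helper.
    return metres * 3.28084

class TestMetresToFeet(unittest.TestCase):
    def test_zero(self):
        # Boundary case: zero should convert to zero.
        self.assertEqual(metres_to_feet(0), 0)

    def test_known_value(self):
        # Black-box style check against an independently known value.
        self.assertAlmostEqual(metres_to_feet(10), 32.8084, places=4)

    def test_negative(self):
        # Negative inputs should convert symmetrically.
        self.assertAlmostEqual(metres_to_feet(-1), -3.28084, places=5)

if __name__ == "__main__":
    unittest.main()
```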
While writing code for classes, rigorous testing is often overlooked due to the simplicity of the software. However, the readings from the class about software testing highlight its importance.
Functionality by Design
Overview of Software Design Patterns Tuesday, November 19 2019 Software developers have to fulfill several requirements, including creating simple public interfaces, establishing relationships between classes and exposing inheritance hierarchies. They need to make sure that software is reusable, extensible, operational and legible. Software design patterns are reusable general solutions to commonly recurring problems, and developers have realized that by molding their software according to these tried and tested patterns it is easier for them to achieve their goals. Design patterns dictate how classes and other objects interact with each other. They consist of a name, a description of the problem that the pattern aims to solve, the solution to the problem and a breakdown of the consequences of using the pattern.
It is very important to consider the consequences of using a certain design pattern, as the pattern employed could pigeonhole the functionality of the program. For example, using the composite design pattern, which allows programmers to abstract containers as components, makes it difficult to create new containers and to distinguish containers from components without using runtime type checks -which lead to brittle, bug-prone code. In fact, sometimes design patterns are counterproductive; these counterproductive patterns are called anti-patterns.
Software design patterns can be classified into four categories depending on their usage: creational, structural, behavioral, and concurrency. Creational patterns deal with how objects are created, composed and represented. An example is the Singleton pattern, where a class has only one instance and this same object is accessed by all other components. Structural patterns define how classes and objects can be combined to form larger structures. The flyweight pattern is a structural pattern that limits the number of objects by reusing objects that have already been instantiated when objects are similar. This involves categorizing state into intrinsic state, which is shared across objects, and extrinsic state, which is unique to each use: objects with the same intrinsic state reuse a single instantiated flyweight, and the extrinsic state is supplied separately when the object is used. Behavioral patterns have to do with algorithms and assigning responsibilities between objects. The strategy pattern makes it possible for classes to use an algorithm from a group of algorithms interchangeably. Concurrency patterns address aspects of multithreaded programming; an example is the producer-consumer pattern, in which a producer thread pushes data onto a buffer from which a consumer thread pulls data to use.
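As a small sketch of the strategy pattern described above (the sorting strategies are illustrative choices, not taken from the post):

```python
from typing import Callable, List

# Strategy pattern sketch: a family of interchangeable algorithms behind one interface.

def ascending(data: List[int]) -> List[int]:
    return sorted(data)

def descending(data: List[int]) -> List[int]:
    return sorted(data, reverse=True)

class Sorter:
    """Context class: delegates the actual work to whichever strategy it was given."""
    def __init__(self, strategy: Callable[[List[int]], List[int]]):
        self.strategy = strategy

    def sort(self, data: List[int]) -> List[int]:
        return self.strategy(data)

# The client swaps algorithms without changing the Sorter class.
print(Sorter(ascending).sort([3, 1, 2]))   # [1, 2, 3]
print(Sorter(descending).sort([3, 1, 2]))  # [3, 2, 1]
```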
Three Flaws of Software
Thursday, Nov 15 ‘19
Max Kanat-Alexander identifies the three flaws of software engineering as writing unnecessary code, writing code that’s not easy to change and writing code that’s too generic.
Kanat-Alexander states that software has to follow the law of change, which states that "The longer your program exists, the more probable it is that any piece of it will have to change." Because of this law, programmers spend a lot of time future-proofing their code, fall victim to the first flaw and write unnecessary code. Kanat-Alexander mentions that it is not about needing or not needing a feature in the future but about whether you know how the feature will be designed now. Because programmers don't know what the requirements of the future are, they try to cover all possible bases. Often programmers leave dead code in their program to implement a feature in the future. Kanat-Alexander stresses that code that isn't being used in the program should be deleted and that programmers should abide by the philosophy of YAGNI: "You ain't gonna need it." If the feature turns out to be essential, the code can be restored using version control. The author adds that programmers need only keep comments that add to the code, not ones that describe it redundantly. Programmers should also not feel pressure to validate every input.
While working on Bugzilla, Max Kanat-Alexander got halfway into implementing a new feature that he anticipated needing in the future but then forgot to complete it. His code was responsible for several bugs because he had tried to add extra features that weren't needed at the time.
While the author stresses the importance of removing unnecessary code, he also stresses that code should be easy to modify. Some programmers write code without any design or adopt the waterfall model of software design, which consists of designing the software upfront, a big development phase, big quality assurance and then release. By the time the software gets to users, its requirements have changed. Kanat-Alexander emphasizes avoiding spaghetti code, including convoluted if-then statements and 1000-line methods, since spaghetti code is hard to change.
As an example of complications caused by programmers falling victim to this flaw, Kanat-Alexander presents the story of a friend who worked as a contractor. His friend spent 2 years writing a 300-page requirements document before any code was written, and by the time it was delivered it was incomprehensible, internally inconsistent and too complex to implement. The author also talks about the difficulty he had turning raw SQL code into generic code.
According to the author, the third flaw is being too generic when writing code. When programmers try to accommodate every possible future requirement, they put a lot of effort into something that generates little value; users don't get much out of this code because the feature is not required at present. Programmers often think they need to catch every exception, but some exceptions don't need to be caught and can be left to bubble up. Kanat-Alexander is quick to point out that by asking programmers to write generic code sparingly, he is not encouraging bad design but discouraging over-design. He reiterates that it can take much more time to implement a generic solution that solves an anticipated future need when the present requirements could be fulfilled in significantly less time.
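A small illustrative sketch of this flaw (not an example from the book; the configuration-loading scenario and names are hypothetical):

```python
# Over-designed: speculative parameters and format handling that no current caller needs.
def load_settings_generic(path, fmt="keyvalue", encoding="utf-8", validators=None, defaults=None):
    # Extra branches for hypothetical future formats would live here,
    # adding complexity before any requirement actually demands it.
    raise NotImplementedError("speculative generality")

# YAGNI-friendly: does exactly what today's requirement asks for and no more.
def load_settings(path):
    with open(path, encoding="utf-8") as f:
        return dict(line.strip().split("=", 1) for line in f if "=" in line)
```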
A developer at Bugzilla anticipated that others would use different queueing systems and coded plugins to support them, which made the code unreadable. Another developer joined the project and purged the additional code. No Bugzilla user has needed a different queueing system since, illustrating that all the added code was unnecessary.
Not only does Kanat-Alexander discuss these flaws, but he also offers a simple solution to these blunders: incremental design. He argues that rather than designing upfront and making incremental changes towards achieving that design, rethinking the design and refactoring the code at every stage is more beneficial. He also emphasizes producing a useful product that users can use as soon as possible and building the rest of the product incrementally around user feedback. For example, if somebody were coding up YouTube, Kanat-Alexander would suggest building the video player before the system to browse videos.
Software Licensing
Friday, November 1, 2019
A software license communicates to users the intended use of the software and expectations regarding its modification and distribution. Software licenses also ensure fair use and provide legal protections to the software. Adding a license to software can be as simple as stating the license on the project's front page and including a full license text file -usually named COPYING or LICENSE- with the source code. There are different types of software licenses, including the GNU General Public License (GPL) and the MIT License.
Software under the MIT License can be freely used, copied, modified and redistributed as long as the original author is credited and the license text remains intact. The GPL offers the same freedoms as the MIT License; however, unlike the MIT License, software under a GPL license cannot be made proprietary. All derived versions of the software must be released under the same license, a practice of ensuring that all derived works remain open source known as copyleft. The GPL has two widely used versions, GPLv2 and GPLv3. GPLv3 builds on GPLv2 and protects against tivoization, the practice of preventing modified open-source software from being installed and run on certain devices, by requiring that signing keys and installation instructions be released along with the source code so that modified versions can run on those devices.
Software can also be dual-licensed as long as the two licenses are compatible with each other. An example of a dual-licensed project is the programming language Perl, which is licensed under the GPL as well as the Artistic License. When software carries multiple licenses, depending on the developer's specification, the recipient either chooses which license to adhere to or must comply with the terms of all the accompanying licenses at the same time.
Software Architecture Tradeoffs
Oct 25, 2019 Software architecture primarily comprises the design patterns employed, the interactions between data, and the provisions for scalability in an application. Good software architecture addresses the needs of today while leaving room to address the needs of tomorrow. Software architecture is less concerned with how the program is implemented; for example, it has little to do with the programming language used or the data structures and algorithms chosen.
The decision regarding which software architecture to use boils down to functional factors and quality attributes. Functional factors refer to what the software can do, along with the amount of memory consumed and the time taken for the program to run. Quality attributes, on the other hand, are concerned with maintainability: whether a bug can be easily discovered and fixed; scalability: whether new features can be added with ease; cost: the expense of operating the software; and security: how easily unauthorized parties can access data.
When deciding on a software architecture there are several popular styles to choose from, including Pipes and Filters, Data Abstraction and Object-Oriented Organization, Event-based Implicit Invocation and Layered Systems, among others.
In the Pipes and Filters architecture, a set of inputs is run through components called filters, which produce an output. Different arrangements of these filters -feeding the output of one filter into another, running two filters concurrently, or passing the same input to multiple filters- allow the software engineer to create complex applications. Bash pipelines are based on the Pipes and Filters style. This style is simple, supports reuse (a filter can be used multiple times), is scalable (more filters can be added as required) and naturally supports concurrency (two filters can operate at the same time). The disadvantage of this style is that a filter might depend on the output of another filter, which can delay program execution.
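A minimal sketch of the style using Python generators as filters (the specific filters below are illustrative assumptions, not from the post):

```python
# Pipes and Filters sketch: each filter consumes a stream of items and yields a
# transformed stream, so filters can be chained, reused, or rearranged independently.

def read_lines(text):
    for line in text.splitlines():
        yield line

def strip_blank(lines):
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):
    for line in lines:
        yield line.upper()

# Composing the pipeline: the output of one filter feeds the next.
text = "hello\n\nworld\n"
for line in to_upper(strip_blank(read_lines(text))):
    print(line)   # HELLO, then WORLD
```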
Data Abstraction refers to the omission of details regarding data manipulation in order to focus on the overarching interactions between data, and object-oriented programming involves representing all data in terms of classes and objects. This makes programs written in this style easily understandable and maintainable. The program is also scalable, as changes can be made to one class without affecting other classes. The disadvantage of using this style is the memory overhead required to keep track of which class an object belongs to. Several popular programming languages like Java and C++ support object-oriented programming.
In the Event-based Implicit Invocation architecture, an action like the user clicking a mouse triggers an event that propagates to other parts of the program, which handle the event, i.e. perform an operation in response to it. This style strongly supports reuse, as new event handlers and event triggers can be added, and old ones replaced, without affecting the rest of the program. However, since all components receive information regarding an event, this style can introduce security vulnerabilities.
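A small sketch of implicit invocation (the event name and handlers are illustrative assumptions, not from the post):

```python
# Event-based implicit invocation sketch: components register interest in an event;
# the component that raises the event does not know who will respond to it.

handlers = {"mouse_click": []}

def subscribe(event, handler):
    handlers[event].append(handler)

def publish(event, payload):
    for handler in handlers[event]:   # every registered component receives the event
        handler(payload)

subscribe("mouse_click", lambda pos: print("logger saw click at", pos))
subscribe("mouse_click", lambda pos: print("UI redraw triggered by click at", pos))

publish("mouse_click", (120, 45))
```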
The decision regarding which architectural style to use is an important one and is dictated by the requirements of the software. A tradeoff exists between the advantages and disadvantages of each style, which has to be considered when making the decision. Additionally, different compatible styles can be used in the same piece of software to reap the benefits of both.
Version Control Systems and Specifically Git
Tuesday, Oct 1, 2019
Version control enables projects to keep track of their history and manage changes made to the project over time. Version control makes it possible for a large number of developers to collaborate and safeguards projects against software-breaking bugs, as changes can be reverted. Files that can be anticipated to change -like source code, configuration files, readme documents, licenses, and wiki pages- should be version controlled, while files that will remain static throughout the life of the project -including automatically generated code and binary executables- should be archived.
Version control systems -the software that facilitates version control- can be centralized or decentralized. In a centralized VCS only one main repository is maintained, which the developers work on in small segments. CVS and Subversion are examples of centralized version control systems. In a decentralized model, the developers of a project maintain their own copies of the repository while the original is hosted on a server, and pull requests are made by the developers to make sure the changes are consistent across all repositories. Git is an example of a decentralized VCS. The advantage of centralized version control is that there is less overhead when making small incremental changes to the codebase, as the entire repository does not have to be downloaded or copied before working on the project; only the code that is to be changed, along with its dependencies, needs to be copied. Decentralized version control accommodates more developers and also ensures a redundant backup in case the original repository is compromised.
Git, a popular version control system, stores the information regarding changes made in the project as a series of snapshots -called commits- each of which indicates the state of the project at a point in time. A commit object also holds metadata about the changes made, like who made the change. Other version control systems, on the other hand, store this information by listing the changes made. A branch in git is a pointer to a commit object, and a special pointer, HEAD, points to the branch that is currently checked out. Git consolidates changes made between two branches either through merging or by rebasing. Merge simply combines the changes made in one branch with another branch using a single commit, whereas rebase moves the point at which branching occurred to a new starting point. Merge allows for the resolution of all conflicts at the same time, while rebase requires surfaced conflicts to be resolved one at a time. The decision to use merge or rebase is based on the intended tree structure that the developer wants to end up with.
Forks in the Road to Open Source
Friday, Sept 13, 2019 Open-source software projects can be categorized into different types based on their intellectual property control, goals, leadership structures and the activities they conduct. Projects range from single-vendor open source projects to developer communities, user communities, and competence centers. In single-vendor open source projects, the copyright is usually held by a company that develops the software in-house, and contributors assist with the software's bug tracking and development; this software is then used by other users. Companies that operate single-vendor OSS are usually for-profit institutions and earn revenue through practices like dual licensing -where an edition of the software is sold to users while another edition might be free. In developer communities, the developers/contributors hold the copyright to the software, which is made available to other users; these developers usually pursue personal interests or follow a business plan while making their software. Users in a user-community OSS hold the licensing rights and also split the cost of development. Competence centers, on the other hand, exist to promote open source software and do not restrict users or developers from carrying out certain tasks based on copyright.
While I initially thought that single-vendor open source projects pursued corporate interests and were not supportive of the open-source community, through classroom discussions I have developed an admiration for them and realized that single-vendor OSS is more likely to be successful because of its lucrative revenue-generating model -while still supporting OSS. I still think open source software should be developed in developer communities, where the contributors have distribution rights and every competent programmer -who has invested valuable time into the project- can contribute. This also imbues contributors with a sense of accomplishment and increases morale.
Speaking of the ability to contribute, participants in OSS move up a ladder where they start off as users, are promoted to seekers, become collaborators and end up as contributors. Users install and use the software, seekers look for bugs and errors in the code, collaborators report the bugs they find and contributors are involved in bug fixes and rolling out new features. Open source projects must accommodate all these participants by having thorough documentation, multiple channels of communication -in case some participants are uncomfortable using one channel- and by being open, clearly specifying their mission statements and goals.
A key feature of OSS is its ability to be forked. Copying the source code to make two identical versions of the software is known as forking. There are two kinds of forks: development forks, where developers fork the code to make iterative changes and merge them back into the master, and hard forks, where someone who is dissatisfied with the direction the project is heading starts a competing project by forking. Development forks are a necessary part of the development process, while hard forks can divide the community by splitting participants between the two competing projects. While this can undermine all the work that has gone into the development of the original project, hard forks prevent one person, termed 'the benevolent dictator' in the readings, from undemocratically taking over the entire project. Hard forks also make sure that every participant's voice is heard and that there is a level of engagement and unity in the community.
Bazaar: a liberating style of software development?
Tuesday, Sept 10, 2019
In class we discussed Richard Stallman’s four freedoms and the effectiveness of the bazaar model vs the cathedral model of software development.
Stallman categorizes the four freedoms as the freedom to run the program, to study and modify the program's source code, to distribute copies of the original program and to redistribute modified copies of the program. Stallman's ideas form the pillars of the Free and Open Source Software movement and distinguish truly free 'libre' software from free-to-use 'gratis' software. It was pointed out in class that this was an important distinction, as the ability to see and modify the source code meant that the programs were less likely to indulge in nefarious practices like selling user data, spreading viruses or cryptojacking.
The remainder of the class focused on the differences between, and the effectiveness of, the bazaar model and the cathedral model of software development. In the cathedral model, relatively few professional developers conceive, design and produce software under close supervision. This model was praised for its focused vision -the functions of the program undergo little change after conception-, its ability to produce a refined product and its reliability -the project is likely to see completion. However, the cathedral model was also criticized for its longer development times, vulnerability to bugs and relatively high costs.
The bazaar model, on the other hand, has software developers working together to create programs that are useful or interesting to them. Frequent releases and a large number of collaborators are some of the characteristics of this model. The bazaar model was applauded for its ability to detect bugs early, its shorter development times and its low costs. Proponents also said that the bazaar model gives developers a sense of accomplishment and pride in their work. It was pointed out during the discussion that since more people can scrutinize the code in the bazaar model, the program is less likely to be malicious or have bugs. Some of the shortcomings of the bazaar model were that it requires significant overhead initially, as a bare-bones program needs to be up and running before it can attract contributors, and that it requires a large number of contributors to be effective.
Having heard about both the bazaar and the cathedral models of software development, I prefer the bazaar model as it primarily promotes unity and healthy competition among developers. At the same time, some caution needs to be exercised while adopting this model because of its open nature. This vulnerability was exemplified when the maintainer of the popular JavaScript library event-stream was replaced and the new maintainer injected bitcoin-stealing code into the library, affecting thousands of computers [1].
References: [1] https://www.theregister.co.uk/2018/11/26/npm_repo_bitcoin_stealer/