anthony4912
Anthony's Senior Seminar
23 posts
5/7: Reflections on the Senior Seminar and My Engagement with NumPy
By the end of our senior seminar, I have grown from focusing mostly on the technical aspects of software projects to recognizing the big picture, i.e. understanding the practical implications of software innovations and how they can be double-edged swords. Indeed, our discussions have made me mindful of the growing security risks that go hand in hand with state-of-the-art technologies, which makes them particularly vulnerable and potentially harmful in the event of misuse and abuse. In the case of NumPy, a critical security bug was discovered in 2019 in the library's use of the "pickle" module that would allow remote attackers to execute arbitrary code on machines running NumPy (Cyware, 2020). This awareness has prompted me to deepen my understanding of cybersecurity techniques in software development, so that I can build software that is both functional and robust against increasingly sophisticated attack vectors.
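For context on the mitigation: NumPy's `.npy` loader will only perform pickle deserialization, the dangerous step, when explicitly asked to. A minimal sketch of the behavior NumPy has shipped since version 1.16.3 in response to this class of attack (the file names here are hypothetical):

```python
import numpy as np

# Plain numeric arrays never need pickle, so saving and loading them is safe.
np.save("data.npy", np.array([1, 2, 3]))
arr = np.load("data.npy", allow_pickle=False)  # the safe default since 1.16.3

# Loading object arrays requires unpickling, which can run arbitrary code
# embedded in a malicious file. Only opt in for files from trusted sources:
# arr = np.load("untrusted.npy", allow_pickle=True)  # risky!
```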
In addition, I realized the importance of diversity in tech innovations. The current lack of diversity in tech manifests in several aspects of the software development process, from the data used to the humans involved. Flawed datasets can lead to unconscious biases and inequality in access that continue to hurt marginalized communities. A prime example is the documented training of facial recognition software on images of mostly White people, leading to its inability to detect the faces of people of color and imposing unnecessary hurdles on these groups, the opposite of the reduced time and effort such systems were meant to offer. Meanwhile, the lack of diversity and inclusion among developer communities can mean a lack of unique perspectives, which hinders the software's outreach to a broader user base. For NumPy, the Steering Council has acknowledged the lack of diversity in the contributor base, particularly the underrepresentation of women and ethnic minorities (NumPy, 2020). Since 2019, the project has dedicated two thirds of its grant funding to employing female developers, designers, and translators, but there is a lot more to be done to encourage the participation of historically marginalized communities in open source development. Thus, I have become passionate about improving accessibility to software products, ensuring that they are useful and open to everyone's contributions, especially in developing communities. I have been working on a website that allows students from diverse backgrounds on the Dickinson College campus to exchange skills with each other, which I hope can help bridge the knowledge gap among student groups and promote equality in our community.
References
Cyware. (2020, February 22). Critical vulnerability in NumPy could allow attackers to perform remote code execution. Cyware Hacker News. Retrieved May 12, 2021, from https://cyware.com/news/critical-vulnerability-in-numpy-could-allow-attackers-to-perform-remote-code-execution-33117832
NumPy. (2020). NumPy diversity and inclusion statement. Retrieved May 12, 2021, from https://numpy.org/diversity_sep2020/
4/27: Using Vein Matching to Improve Facial Recognition of Twins
One of the potential causes of misidentification in facial recognition technologies is called the "evil twin" phenomenon (Patently Apple, 2020). This issue happens when a biometric system cannot accurately distinguish between similar-looking faces (Patently Apple, 2020), as is the case with twins, siblings, and other closely related faces.
Apple has attempted to solve this problem and improve its Face ID feature. In July 2020, it received a patent for a novel technology that relies on vein matching to achieve a higher success rate in "difficult biometric authentication cases" (Patently Apple, 2020).
Because the pattern of blood vessels underneath the skin of a face is unique to every individual, including twins, this type of imaging leaves hardly any room for error and will likely eliminate the "evil twin" issue (Patently Apple, 2020). The approach involves machine learning models and subepidermal imaging to analyze data captured by sensors built into Apple's devices. In my opinion, this advancement is not only promising news in the fight against inaccuracies in facial recognition, but could also become a major evolutionary leap in biometric identification as a whole.
Reference: Patently Apple. (2020, July). Apple wins an advanced Face ID related patent that is "evil twin proof" using subepidermal imaging and more. https://www.patentlyapple.com/patently-apple/2020/07/apple-wins-an-advanced-face-id-related-patent-that-is-evil-twin-proof-using-subepidermal-imaging-and-more.html
4/16 - Big Data and the Privacy Dilemma
One of the key characteristics of big data is the increased frequency of data collection, which can reveal identifiable or sensitive details about individuals. Mobile health apps and devices use sensors to continuously monitor and record features related to an individual's health and behavior. High-frequency data dramatically increases the possibility of identification: as few as four data points on an individual's location or retail purchases can be sufficient to uniquely identify her records (Altman, Wood, O'Brien, & Gasser, 2018). In many cases, commercial and government big data collection produces much more frequent observations than those collected in a research setting. For example, microphones, cameras, accelerometers, GPS receivers, and other sensors embedded in a mobile device can generate fine-grained data, capture variations microsecond by microsecond, and transmit the data to the cloud for long-term storage and analysis. Since data collection, analysis, and release in commercial and governmental settings are subject to fewer regulations than in a research environment, where they are often carefully monitored and assessed by an IRB, this also creates heightened challenges for privacy protection (Altman et al., 2018).
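To see why so few points suffice, consider a toy version of the experiment Altman et al. cite: each extra spatiotemporal point shrinks the set of people whose traces match, often down to a single person. A small sketch with invented data:

```python
# Toy illustration: invented location traces as (place, hour) points per person.
traces = {
    "alice": {("cafe", 8), ("office", 9), ("gym", 18), ("home", 20)},
    "bob":   {("cafe", 8), ("office", 9), ("bar", 18), ("home", 22)},
    "carol": {("park", 7), ("office", 9), ("gym", 18), ("home", 21)},
}

def matching_people(observed):
    """Return everyone whose trace contains all the observed points."""
    return [name for name, trace in traces.items() if observed <= trace]

# Two points still leave ambiguity between two people...
print(matching_people({("cafe", 8), ("office", 9)}))   # ['alice', 'bob']
# ...but one more point singles out a unique individual.
print(matching_people({("cafe", 8), ("gym", 18)}))     # ['alice']
```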
More frequent analysis of data about the same individuals inherently increases the risk of learning information specific to the individuals in the data. Many approaches to privacy in common use, such as de-identification, are unfortunately insufficient to protect against such risks across multiple releases and analyses. For example, de-identification techniques can provide misleading results when several spatiotemporal data points are grouped together to identify an individual, and tweets, text messages, and videos can contain implicit identifiable information through audio or visual clues that algorithms can easily overlook (Altman et al., 2018).
References: Micah Altman, Alexandra Wood, David R O’Brien, Urs Gasser, Practical approaches to big data privacy over time, International Data Privacy Law, Volume 8, Issue 1, February 2018, Pages 29–51, https://doi.org/10.1093/idpl/ipx027
3/30 - Adapting Quantum Cryptography to Current Security Standards
The European Telecommunications Standards Institute has outlined several challenges with the adoption of post-quantum cryptography. I found it surprising that these challenges stem from universally agreed best practices within the security industry. Security practitioners are taught to avoid new cryptographic algorithms that have not received years of public scrutiny, not to design their own cryptography, and to rely only on well-established industry security standards. Because of this pragmatic mindset, there is significant overhead to making systemic changes to existing security products and infrastructure: the same practices that protect against questionable security schemes also slow the adoption of changes meant to protect against never-before-seen attacks. In particular, rigid security protocols might not play well with post-quantum ciphers because of historical protocol designs that were not cryptographically agile. For example, while the X.509 data format allows for very long post-quantum public keys, some older applications put size limits on X.509 fields, which hinders future cipher changes. Implementing post-quantum cryptography on the S/MIME protocol, on the other hand, raises backward compatibility issues with clients of versions 3.1 or earlier that may use cryptographic primitives that are not quantum-safe. This may present difficulty to one party trying to communicate securely with another party, who might then be forced to communicate using weak encryption. Similarly for the SSH protocol: even with a quantum-safe suite of algorithms in SSH itself, proxy forwarding of other protocols (SMTP, HTTP, etc.) may compromise machines when versions of those external protocols that are not quantum-safe are used. Not only can weak cryptography compromise an otherwise secure network security protocol, but compromised protocols themselves can further jeopardize machines within the network even after quantum-resistant protocols are integrated.
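The underlying design lesson is cryptographic agility: protocol code should select primitives through a negotiable, extensible registry rather than hard-coding them, so a post-quantum scheme can be slotted in without reworking the protocol. A minimal sketch of the idea (the algorithm names and the commented-out post-quantum entry are hypothetical placeholders, not a real post-quantum implementation):

```python
import hashlib
import hmac

# Agile design: primitives live in a registry keyed by a negotiated name,
# so adding a quantum-safe scheme later does not touch protocol logic.
MAC_ALGORITHMS = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-512": lambda key, msg: hmac.new(key, msg, hashlib.sha3_512).digest(),
    # "pq-mac-v1": pq_sign,  # hypothetical post-quantum entry added later
}

def authenticate(algorithm: str, key: bytes, message: bytes) -> bytes:
    try:
        mac = MAC_ALGORITHMS[algorithm]
    except KeyError:
        raise ValueError(f"unsupported algorithm: {algorithm}")
    return mac(key, message)

tag = authenticate("hmac-sha256", b"shared-secret", b"hello, world")
```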
Source: https://www.etsi.org/images/files/ETSIWhitePapers/QuantumSafeWhitepaper.pdf
3/23 - How can the Use of Drones be Ethical?
The legal implications of drones can be analyzed through social contract theory and the utilitarian model. Indeed, if privacy were considered a human right, drones could violate a fundamental right of individuals under social contract theory. Under the Fourth Amendment, individuals are protected against illegal searches within their homes, but the legal line becomes less clear-cut since drones mostly operate in public. In Kyllo v. United States, the Supreme Court ruled that "things one exposes to the public can allow police to do a warrantless search, provided that there is readily available incriminating evidence."
On the other hand, under utilitarianism, drone use is considered ethical if its benefits to society outweigh the potential costs. Drone advocates argue that surveillance drones' benefits to national security can outweigh the privacy concerns. According to Michael Toscano, president of the AUVSI, drones provide a low-cost, fully capable alternative to more expensive security measures like manned aircraft or border agents. However, the utility of drones also depends on their susceptibility to abuse by either internal or external agents. To establish a legal framework for drones in the face of their growing adoption, bills have been introduced to require authorities to acquire a search warrant before using drones to collect data, except in the event of national threats, border patrol, or law enforcement matters. This would allow drones to be operated in accordance with the principles of the Fourth Amendment. In addition, a potential ethical framework for drones can be adapted from that of Omand and Phythian, which determines the ethicality of intelligence activities along the following dimensions: right intention, proportionality, right authority, reasonable prospect of success, discrimination, and necessity.
Sources: 
https://vce.usc.edu/volume-1-issue-1/drones-and-surveillance-an-ethical-approach/
https://www.zocalopublicsquare.org/2018/09/12/can-spies-ethical/ideas/essay/
3/2 - Targeted Surveillance on Marginalized Communities
Recent studies have shown that poor communities are disproportionately the target of government surveillance programs. Indeed, according to Eubanks, marginalized communities have been the testing ground for cutting-edge surveillance technologies. Persistent stereotypes of poor women, especially women of color, as inherently suspicious, fraudulent, and wasteful provide ideological support for invasive welfare verification programs that track their financial and social behavior. Immigrant communities are more likely to be the site of biometric data collection than native-born communities because they have less political power to resist it. More alarmingly, much more aggressive forms of high-tech surveillance have been employed to oppress these underprivileged groups, such as drones built for warfare overseas being repurposed to photograph migrants at the border and then flown overhead at anti-Trump protests.
As a result of these deeply biased and intrusive forms of surveillance, members of marginalized communities have begun to alter their behaviors. According to an ethnography by John Gilliom, welfare recipients have learned to live with surveillance as a totalizing and encompassing force, as if they were "in prison" or "owned by someone." Indeed, far from its intended purpose as a defense against fraud, surveillance has become an effective tool for controlling human behavior and enforcing rules by observing compliance. Acknowledging this reality allows us to challenge mass surveillance based on the 14th Amendment, which provides for equal protection under the law, not just on the 4th Amendment, which protects citizens against unwarranted search and seizure. As Eubanks put it, surveillance should be seen as a collective issue, a civil rights issue, not just an invasion of privacy.
Sources: https://tcf.org/content/report/disparate-impact-surveillance/?agreed=1
https://prospect.org/power/want-predict-future-surveillance-ask-poor-communities./
2/16 - Diversity in Biometric Data
The lack of diversity in biometric datasets has dire consequences in terms of social injustice. According to Dr David Leslie, recognitional injustice comes hand in hand with distributive injustice. Specifically, when the identity claims of a marginalized group are denied and violated by biometric systems, their subordinated position is reaffirmed and augmented as they are refused access to benefits, resources, or opportunities on the basis of their identity. A recent example is the UK Home Office's online passport photo checking service. First launched in 2016, the system uses an algorithm to scan uploaded face images and assess whether or not they are of sufficient quality to be used in a person's passport. However, in several public instances that were shared widely on social media and in the press, the facial detection and analysis component of the technology failed to successfully process the photos of passport applicants with darker skin types. It inaccurately returned error messages like "it looks like your mouth is open" and "it looks like your eyes are closed" when there was evidently nothing wrong with the uploaded images. As such, an automated system built with the intended purpose of making an administrative process more efficient and convenient systematically achieves its goals for some privileged social groups but does quite the opposite for underprivileged ones, who must spend more time and effort than the manual process would have required.
To address the persistent issue of biased recognition algorithms, IBM has launched a new dataset named "Diversity in Faces." This dataset is made up of a million publicly available images, annotated according to 10 of the top coding schemes in the AI literature. It is motivating to see such a major course of action from tech companies, because until now, most of the research into the bias of biometric systems has focused on pointing out the problem rather than addressing it. IBM's provision of an abundant, unbiased dataset for the research community will hopefully help ensure that new facial recognition techniques are fair and accurate for everyone.
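A practical first step toward catching failures like the Home Office's is a disaggregated evaluation: computing error rates per demographic group instead of a single aggregate accuracy figure. A toy sketch with invented data and hypothetical column names:

```python
import pandas as pd

# Invented audit log: one row per photo-check attempt; all photos are valid.
results = pd.DataFrame({
    "skin_type": ["darker", "darker", "darker", "lighter", "lighter", "lighter"],
    "accepted":  [False,    True,     False,    True,      True,      True],
})

# False-rejection rate per group: valid photos the checker wrongly flagged.
frr = 1 - results.groupby("skin_type")["accepted"].mean()
print(frr)
# darker     0.666667
# lighter    0.000000
```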
Sources:
https://www.biometricupdate.com/201901/ibm-launches-public-data-set-to-further-research-into-diversity-and-facial-biometrics
https://arxiv.org/ftp/arxiv/papers/2010/2010.07023.pdf
2/9 - The Ethical Knob for AVs
It appears that the challenge of determining responsibility in the event of accidents is the central bone of contention with regard to autonomous vehicles. One suggestion by Contissa, Lagioia, and Sartor (2017) is to design an ethical knob with three settings: an altruistic mode that prefers third parties over the passengers, an impartial mode that gives equal weight to the passengers and third parties, and an egoistic mode that gives preference to the passengers in the car. Such a knob would only come into play in the event that an accident is unavoidable, and it is up to the passengers to decide which mode to activate. One strong benefit of the ethical knob approach is that responsibility could be easily assigned, similar to how responsibility is assigned in cases of human drivers. Since the prospect of being liable for any accident might face opposition from manufacturers, the ethical knob would allow manufacturers to hand responsibility back to the passengers, provided that the knob functions properly. Moral responsibility would also be easy to assign: given a morally libertarian framework, the person exercising their free will in setting the ethical knob can be praised or blamed, since they had a completely free choice between their duty to others and their strong desire to protect their own life at all costs. Thus, the passengers are no longer stuck in the ethical dilemma of doing vs. allowing harm. On the other hand, numerous flaws with the ethical knob remain. What happens if everyone chooses the same setting? Does the ethical knob favor wealthy passengers over poorer ones, since people in poverty may not want to face the higher insurance premiums of killing other people rather than themselves? And does the knob simply switch the dilemma of doing vs. allowing harm back from the passengers to the manufacturers?
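To make the proposal concrete, here is a minimal sketch (my own illustration, not code from Contissa et al.) of how a knob setting could translate into a weight on the passengers' expected harm when an unavoidable collision forces a choice between maneuvers:

```python
from enum import Enum

class KnobSetting(Enum):
    ALTRUISTIC = 0.2  # passengers' harm weighted below third parties'
    IMPARTIAL = 0.5   # equal weight to both groups
    EGOISTIC = 0.8    # passengers' harm weighted above third parties'

def choose_maneuver(setting, passenger_harm, third_party_harm):
    """Pick the maneuver minimizing total harm, weighted by the knob setting.
    passenger_harm / third_party_harm map maneuver -> expected harm in [0, 1]."""
    w = setting.value
    def cost(maneuver):
        return w * passenger_harm[maneuver] + (1 - w) * third_party_harm[maneuver]
    return min(passenger_harm, key=cost)

passengers = {"stay": 0.9, "swerve": 0.1}  # staying mostly harms occupants
others     = {"stay": 0.1, "swerve": 0.9}  # swerving mostly harms bystanders
print(choose_maneuver(KnobSetting.EGOISTIC, passengers, others))    # swerve
print(choose_maneuver(KnobSetting.ALTRUISTIC, passengers, others))  # stay
```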
Source: https://mountainscholar.org/bitstream/handle/20.500.11919/4409/STUW_HT_PHIL_2019_Tay_Kathryn.pdf?sequence=3
2/2 - Addressing The Power Imbalance to Improve Ethicality of Actions
Data collection is a prime example of where act utilitarianism exposes its loopholes. Imagine a scenario where a person's data is collected without their knowledge and shared across powerful organizations such as Google, Amazon, or Microsoft. If we were to apply the act utilitarian principle, we would hold that since the net benefits to the data collectors involved far outweigh the disadvantages to the affected person, the act of collecting that person's data is morally right. Such an action, however, would certainly fail under theories such as rule utilitarianism, because unsolicited data collection would not be an ethical practice if adopted by everyone.
The power imbalance between the affected person and the data collectors in this scenario also hints at how we can resolve conflicting interests so that the act utilitarian model produces a more favorable outcome for all parties. Requesting the person's consent is the bare minimum. More importantly, the data collectors can offer users reports of what analytical models say about their behavior and personality with regard to society as a whole (Spiekermann, 2017). As a result, the person learns how they fare socially from highly objective information, and such knowledge can be highly beneficial. In turn, they might demand that such insights not be disclosed to other third parties (Spiekermann, 2017), and the collectors have an obligation to respect that request. Such a back-and-forth exchange of knowledge and consent reduces the power and benefit asymmetry and makes data collection a more ethical act that also aligns with rule utilitarianism.
Reference: Spiekermann, S. (2017). It's all about design: An ethical analysis of personal data markets.
11/24 Reflections - Fall Semester Review
Being a part of the NumPy project team this semester helped me understand the paramount importance of having a strongly connected team in software development projects. What fostered the robustness of my team was not only the diversity in our skillsets, working styles, and time zones (certainly both a challenge and an opportunity) but also a concerted focus on working across differences for collective progress and growth. First, we utilized synchronous working sessions, which proved most useful during the developer install activity. Due to our varied experience with Python and our different development environments, some of us ran into errors with the test suite while others did not. We decided to hold a troubleshooting session to go through each problem our members were encountering and work out the solutions together. Arguably, it could have been faster for the team members who had completed the installation error-free to move on to other activities. However, participating in such coworking sessions offered an opportunity for each member to showcase their strengths and help the team progress as a whole while developing our technical communication skills. In addition, we prioritized immediate feedback. By scheduling regular meetings to debrief our 5-15s, we kept on top of individual and team progress and were able to revise our goals and assign tasks according to changing schedules and requirements. Before the important checkpoint presentations, we also scheduled dry-run sessions to go over our parts and give each other quality feedback on time and content management. Thanks to the support from my team, I have now developed a good foundation in the basics of NumPy and Python development along with enhanced presentation skills. I am also content with my healthy working relationships with my teammates, and I have come to appreciate the role of a well-founded team, and of being a team player, in conquering my own ventures in the technical field.
11/20 Reflections
In order to promote equality in the tech industry, schools and companies can take practical steps to incentivize women to pursue a career in tech. First, computer science education should start early in schools and be made a mandatory subject. With coding skills a priority, individuals can get early exposure to the field and develop their expertise regardless of their gender. Second, schools and companies can show their commitment to diversity by publicly disclosing their gender pay gap figures, with a focus on tech functions where women are more likely to be underrepresented. Such open sharing of diversity and inclusion data will encourage healthy competition to recruit more female talent and improve representation in tech-oriented roles. Finally, it should also be noted that the scarcity of women in tech stems in part from the deeply held stereotype of tech being "geeky," boring, and exclusively male, perpetuated by classics like Jurassic Park or The IT Crowd. More needs to be done to challenge this stereotype and to show the many different types of roles and personalities that can be found in the industry. Indeed, not all roles in tech have to be heavily technical: individuals whose strengths lie in planning and cross-functional communication can consider a career in product management, and those who are artistic may find UX/UI design a lucrative field.
11/17 Reflections
The most important aspect of a sustainable open source project is perhaps its community: having a robust network of skilled developers is critical to the project's success. There are several ways project owners can recruit developers. First, prospective project owners should attempt to establish their credibility and leadership in the open source community. This is achieved by being an active contributor to existing open source projects; organizations can also sponsor open source initiatives to distinguish themselves as meaningful players in the open source ecosystem. In addition, developers can strategically use networking events, hackathons, and conferences in their areas of expertise to connect with like-minded developers and pitch their project ideas. Companies can promote their brands by sending key developers to present at such events, setting up information booths, or even hosting a hackathon to draw talent. Moreover, project owners should make it a priority to mentor potential contributors. For smaller projects, letting contributions fall through the cracks can become a big demotivator for further contributions, so there should be a consistent effort to keep discussions going. Current contributors should also demonstrate open-mindedness, for instance by offering constructive criticism when a contribution is not accepted rather than outright rejection or hostile responses.
Sources: https://opensource.com/business/16/9/how-to-get-programmers
https://www.linuxfoundation.org/resources/open-source-guides/recruiting-open-source-developers/#:~:text=Ask%20good%20developers%20in%20the,help%20them%20present%20in%20public.
11/13 Reflections
While agile is the fastest-growing software development methodology right now, several major drawbacks make it far from a panacea for all software projects. First, agile development does not work well for projects with a strict budget and timeline. For some software deliverables, developers cannot quantify the full extent of the required effort; this is especially true at the beginning of the development life cycle on larger products. Teams new to the agile methodology fear these unknowns, which drives frustration, poor practices, and often poor decisions. In those cases, the waterfall method offers a firm estimate of the resources needed to deliver the final project. In addition, the agile model requires much more time and commitment: developers, testers, and customers must constantly interact with each other to ensure the project's satisfactory progress and quality. This involves a hefty amount of face-to-face communication, which might not always be feasible given scheduling conflicts among teams with global stakeholders. Finally, the method requires little up-front planning, under the assumption that project requirements are ever-changing. This leads to the possibility of developers focusing on the wrong development areas if the feedback from customers is unclear. Consequently, the scope of the project may extend far beyond what was expected (scope creep), and the need to fix the mismatch between the customers' demands and the end product leads to a never-ending vicious cycle.
Source: https://www.inc.com/adam-fridman/the-massive-downside-of-agile-software-development.html
11/10 Reflections
There has been fierce debate about whether testers need strong programming skills for their job. According to Gerrard, in reality, most testers would benefit from some programming knowledge. While exploring a feature, a tester with some level of code knowledge can think more deeply about the possible modes of failure (the risks) in software, and there is value in that. Though their programming skills need not be at a professional level, at the very least, testers must be able to read and comprehend the developers' code. In some organizations, testers are now required to learn a programming language as teams move to the "shift left" model of the agile methodology, which emphasizes earlier testing to prevent software defects. In shift-left teams, which prioritize test-driven development, behavior-driven development, or acceptance test-driven development, software developers are encouraged to test their code more thoroughly. To assist these developers in testing, companies try to involve testers with a deeper level of technical knowledge. The goal of companies is to move beyond traditional testers, ones who "can't do anything but script tests, follow scripts and log incidents – the plain old functional testers." Therefore, it is safe to conclude that testers now need a good grasp of programming fundamentals to prove themselves an asset to employers.
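For a sense of the level of coding involved, here is the kind of small automated check a tester on a shift-left team might write with pytest (the function under test is a hypothetical example, not from the cited article):

```python
import pytest

def apply_discount(total: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

def test_discount_happy_path():
    assert apply_discount(200.0, 10) == 180.0

def test_discount_rejects_invalid_percent():
    # A risk-focused tester probes the failure modes, not just the happy path.
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)
```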
Source: https://www.ministryoftesting.com/dojo/lessons/the-testers-and-coding-debate-can-we-move-on-now
11/06 Reflections
It is sometimes argued that the singleton pattern is an anti-pattern. According to Safyan, the singleton pattern rests on the risky assumption that there will only ever be one instance, an assumption that is often broken during a project's lifetime, and the pattern structures the code in a way that requires very significant effort to refactor when the assumption inevitably turns out to be wrong. For instance, we might reasonably assume that only one HTTP request will be processed by our application at a time, on the assumption that a new instance of the process is spawned for every incoming HTTP request. Thus we might initially write the HTTP request object as a singleton. However, if we later modify the architecture to use multiple threads to handle many requests within a single instance of the application, this assumption suddenly becomes false, and it becomes necessary to rework the code significantly to handle the new condition. In addition, the rigidity of this pattern makes the code difficult to test, because the singleton introduces a dependency through a side channel that is not explicitly given as a parameter to the constructors or other functions that use it. Even if this side channel can be replaced, the need to do so is less obvious, and it makes tests harder to understand. An alternative to the singleton pattern is to instantiate a single instance and propagate it as a parameter to the places that use the object, making the dependency explicit. Being a singleton then becomes a property of the caller rather than of the object itself, enabling tests to supply an alternative implementation and making the code capable of dealing with multiple instances in the future. An objection to this choice is the need for a large number of parameters when code relies on several singleton objects. The solution proposed by Safyan is to encapsulate multiple dependencies in a single object (especially ones that are not used directly) created via a factory. The burden of passing multiple explicit dependencies can also be greatly reduced by using a dependency injection framework such as Dagger.
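A minimal sketch of the contrast, in Python rather than Safyan's own examples: the singleton hides its dependency behind a class-level side channel, while the alternative passes the instance explicitly so the caller controls how many exist:

```python
class RequestContextSingleton:
    _instance = None

    @classmethod
    def instance(cls):
        # Side channel: every caller silently shares this one hidden object,
        # which breaks down once multiple concurrent requests exist.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

class RequestContext:
    """Alternative: a plain class with no hidden global instance."""
    def __init__(self, client_ip: str):
        self.client_ip = client_ip

def handle_request(ctx: RequestContext) -> str:
    # The dependency is an explicit parameter, so tests can pass a fake
    # context and a threaded server can simply create one per request.
    return f"served {ctx.client_ip}"

print(handle_request(RequestContext("10.0.0.1")))
print(handle_request(RequestContext("10.0.0.2")))  # a second instance is fine
```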
Source: https://www.michaelsafyan.com/tech/design/patterns/singleton
10/23 Reflections
Several bad practices heavily impact code readability and changeability and should be avoided. First is insisting on one-liner solutions. A lot of developers fall into the habit of contracting 20 lines of code into 2 or 3, which might earn them the badge of clever code writer. However, cleverness is not the ultimate goal of writing software, and it takes more effort on the readers' end to understand the logic of such code. Second, bad variable or function names can also reduce code accessibility. Good naming aims at adding information that the rest of the code does not intuitively convey, so that other developers can look at the names and get at least a general idea of what the code might do. Therefore, names should succinctly describe the purpose of the code and avoid pieces of information that do not assist comprehension. Finally, when errors emerge, some developers have the tendency to catch but ignore exceptions, or to use libraries that do not report errors. Such practices make it hard to identify where things go wrong when the code is revisited later. The easiest way to avert this is to log the errors that you cannot currently address so they can be handled later.
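As a small illustration of the last point, here is a hypothetical parsing helper that logs the exception it cannot handle instead of silently swallowing it (note the descriptive name, per the second point):

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)

def parse_user_age(raw_value: str) -> Optional[int]:
    """Descriptive name: says what is parsed and what it represents."""
    try:
        return int(raw_value)
    except ValueError:
        # Bad practice would be `except ValueError: pass`, hiding the failure.
        # Logging keeps a trail so the bad input can be investigated later.
        logger.warning("could not parse user age from %r", raw_value)
        return None
```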
Source: https://www.codementor.io/@christianmaioli/35-habits-that-make-you-write-bad-code-8ks62091v
10/16 Reflections
According to statistics from Synopsys.com, the most popular open source licenses are, in order of usage, MIT (32%), GPLv2 (18%), Apache 2.0 (14%), GPLv3 (7%), and BSD License 2.0 (3-clause, New or Revised). Thus, there appears to be a clear split between open source projects that lend themselves to any modification or use and ones that strictly follow the "free" principle of open source development. However, the above statistics also show a clear preference for permissive licenses, and in fact, research by WhiteSource indicates that permissive licenses are continuously gaining ground. Between 2012 and 2019, the percentage of open source projects under copyleft licenses decreased significantly, from 59% to 33%.
This trend can be attributed to the expanding use of open source software, to the point that it is now mainstream for developers to use open source code of some kind. In addition, open source communities are now gaining support from big corporations, with Microsoft and Google owning some of the most widely used open source solutions, namely Visual Studio Code and Kubernetes. The more permissive licenses therefore offer a straightforward solution to copyright, minimize legal risks, and help promote the cooperation of developer communities worldwide.