Just a Guy Curious About Tech: Exploring Open-Source & Beyond
Coding with Copilot: Innovation, Ethics, and the Future of Junior Developers
Not long ago, writing code meant facing down a blank screen, building every function from scratch and debugging line by line. It was a craft that demanded a mix of logic, persistence, and creative problem solving. Now? You type a few words, and entire blocks of code appear, written by a machine that’s read more GitHub repos than any human ever will. Tools like GitHub Copilot, trained on billions of lines of public code, can autocomplete functions, suggest algorithms, and even write full files in seconds. For many developers, it feels like magic. Tasks that once took hours can now be done over lunch. But that speed comes with risks. As AI-assisted coding becomes more common in the tech industry, important questions are starting to appear. Who really owns the code that Copilot suggests? Can companies trust the legal safety of using AI-generated code in commercial products? And what happens to the career path of a junior developer when much of their entry-level work can be done by a machine in seconds? In this blog, I’ll explore the ethical and legal challenges of using AI-generated code in the software industry. These tools may boost productivity and change how we work, but they also raise serious challenges that need to be faced head-on.
The Rise of Copilot: Fast Code at What Cost?
GitHub Copilot, launched in 2021 by GitHub and OpenAI, was advertised as a “pair programmer” that could autocomplete entire blocks of code based on just a comment or a function name. It uses the Codex model, trained on public code from GitHub repositories. According to GitHub’s official documentation, Copilot “suggests whole lines or blocks of code as you type,” and its training data includes “publicly available source code and natural language from public repositories on GitHub.” On paper, this is a game-changer. According to GitHub’s 2023 productivity report, developers using Copilot complete tasks up to 55% faster. Some claim it reduces mental fatigue and frees up time for “higher-level thinking.” But while GitHub emphasizes productivity gains, others have been raising red flags about what’s happening behind the scenes. While AI tools like Copilot bring undeniable benefits in speed and convenience, I believe their adoption must be balanced with strong legal safeguards and thoughtful workforce planning. The enthusiasm around automation shouldn’t blind us to its legal and ethical blind spots, or to the impact it may have on the careers of tomorrow’s engineers like me.
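To picture that comment-to-code workflow, here’s a hypothetical Python example (invented for illustration, not actual Copilot output, which varies with context): the developer types only the comment and the function signature, and the assistant proposes a body along these lines.

```python
# Developer types this comment and signature...
# average a list of numbers, ignoring None values
def average_ignoring_none(values):
    # ...and the assistant suggests a completion like this:
    filtered = [v for v in values if v is not None]
    if not filtered:
        return 0.0
    return sum(filtered) / len(filtered)

print(average_ignoring_none([10, None, 20, 30]))  # prints 20.0
```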
Legal & Copyright Issues: Who Owns This Code?
One of the biggest legal questions around Copilot is whether its code suggestions inherit the licenses of the open-source code it was trained on. Many public repositories use licenses like MIT or GPL, which require attribution. If Copilot suggests code without including that attribution, is it violating copyright? That’s the basis of an ongoing lawsuit filed by the Joseph Saveri Law Firm, which argues that GitHub, Microsoft, and OpenAI are failing to respect open-source license terms by stripping credit and legal context from Copilot’s output. This raises serious concerns about how AI tools interact with the rules and values of open-source software. In response, Microsoft introduced its Copilot Copyright Commitment, promising to defend commercial users from legal claims if they follow proper use guidelines. It’s a bold move, but one that doesn’t resolve the deeper uncertainty developers face when using AI-generated code in real-world projects.
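To make the attribution problem concrete, here’s a hypothetical example (the author name and code are invented). MIT-licensed code normally travels with a notice like the one below; if an AI tool reproduces the function but drops the notice, the license’s attribution requirement is arguably violated.

```python
# Copyright (c) 2020 Jane Example
# Licensed under the MIT License: reuse is permitted, provided this
# copyright notice is preserved in all copies of the software.

def clamp(value, low, high):
    """Constrain a value to the range [low, high]."""
    return max(low, min(value, high))
```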
Ethical Concerns: Transparency and Trust
While Copilot boosts productivity, it also raises real ethical concerns, especially around bias, transparency, and accountability. Because it was trained on billions of lines of public code, it can unintentionally reproduce outdated practices or even embed discriminatory logic. A 2024 study in ScienceDirect highlights how AI models often “inherit and reinforce societal biases” when trained on massive, unfiltered datasets. Transparency is another issue. Copilot doesn’t explain why it suggests a certain solution, making it hard to verify the quality or origin of its code. As noted by the team at Human Made, this “black box” behavior limits the developer’s ability to fully trust or audit what the tool produces. Then there’s the matter of responsibility. If AI-generated code introduces a flaw or causes harm, who is accountable: the developer who used it, or the creators of the tool? USC Annenberg’s research points out that as AI becomes more involved in creative and technical work, the boundaries of liability are becoming harder to define. These tools are undeniably powerful, but we shouldn’t treat them as neutral. Ethical risks need to be part of the conversation every time we choose to rely on AI for building the systems people depend on.
What Happens to Junior Developers?
As AI tools like GitHub Copilot become more sophisticated, many developers are left questioning what the future holds for entry-level roles. Microsoft CEO Satya Nadella recently revealed that AI now writes between 20% and 30% of the code in the company’s repositories, depending on the programming language, with better results in Python than in C++. While this certainly boosts efficiency, it also means that tasks once reserved for junior engineers are being automated, shifting the landscape of what it means to start a career in tech. This change, as highlighted in a Business Insider report, could disrupt traditional career paths, making it harder for newcomers to gain the hands-on experience that’s crucial for growth. Instead of writing boilerplate code, entry-level developers might find themselves grappling with debugging AI-generated output, code they barely understand. However, many experts argue that the human touch is irreplaceable. AI researcher Anima Anandkumar offers a hopeful reminder: “Curiosity is irreplaceable.” While AI may transform the nature of work, it’s unlikely to eliminate the need for the human insight, creativity, and problem-solving skills that are still crucial to innovation in tech.
Conclusion
AI tools like Copilot are changing how we write code, but faster isn’t always better. As we embrace these technologies, we need to stay curious, question the impact, and ensure that progress doesn’t come at the cost of ethics, opportunity, or human creativity.
Coding with Copilot: Innovation, Ethics, and the Future of Junior Developers by Elias Berhe is marked with CC0 1.0 Universal
The Post Office Horizon Scandal: A Failure of Ethics and Accountability in Software Development
Imagine being accused of a crime you never committed, dragged through the courts, and thrown in prison, all because of a faulty computer system. This nightmare became a harsh reality for over 900 sub-postmasters in the UK, victims of the Post Office Horizon scandal. Horizon is a computer system originally developed by UK software company ICL, later acquired by Japanese IT giant Fujitsu in 1998. It was designed to automate sales and accounting in Post Office branches. When it was rolled out in 1999/2000, it was the largest non-military IT project in Europe at the time, and no one could have predicted the disaster that followed. Almost immediately after it was deployed, sub-postmasters began noticing unexplained shortfalls in their accounts. They complained about bugs in the system after it falsely reported shortfalls, often in the thousands of pounds, but their concerns were dismissed. Under the old paper-based system, they could track down errors, but with Horizon, there was no way to prove the system was wrong. Instead of investigating, the Post Office assumed theft and fraud, leading to over 900 wrongful prosecutions. It is now recognized as the most widespread miscarriage of justice in UK history.
What Went Wrong?
The Horizon system simply wasn’t fit for purpose. The Post Office knew this as early as 1999, when trials exposed serious issues faced by sub-postmasters. David McDonnell, a member of the development team working on the system’s electronic point-of-sale (EPOS) side, painted a troubling picture of its creation. He described a team where only a couple of developers were highly skilled, a few were mediocre, and several lacked the ability to produce professional code. This poor-quality development team contributed to a deeply flawed system that would go on to ruin lives. By 2001, McDonnell’s team had already found hundreds of bugs, though a full list has never been disclosed. As wrongly accused postmasters fought to clear their names, the public began to learn just how disastrous these flaws were.
Breakdown of Software Standards in the Horizon System
From a software engineering perspective, how do we break down the ethical failure of the Horizon scandal? One way is by applying the Software Engineering Code of Ethics, which sets clear principles for responsible conduct in software development and deployment. Several key principles from this code relate directly to the failures in this case; below, I highlight the serious ethical breaches that led to one of the UK’s worst miscarriages of justice.
1) Product
The Post Office Horizon scandal is a glaring example of what happens when professional standards in software development are ignored. The Horizon system was rushed out the door despite clear signs that it wasn’t ready, and its quality fell far short of what was needed, violating Principle 3.01 (“Strive for high quality”). The engineers involved didn’t set realistic goals or objectives, and they failed to document the system’s limitations, leaving many key issues unaddressed. What followed were countless bugs and errors that could have been avoided with proper testing and review. This ties in with Principle 3.10, which emphasizes adequate testing and debugging of software. To make matters even worse, sub-postmasters were left in the dark about the risks the system posed, and many were wrongfully accused of theft. This reflects a failure to address the ethical and legal issues outlined in Principle 3.03.
2) Management
The Post Office Horizon scandal also highlights serious failures in management, as outlined in Principle 5 (Management) of the Software Engineering Code of Ethics. Those in charge of Horizon failed to enforce good management practices, allowing a faulty system to be rolled out without proper risk assessment or quality control. The engineering team wasn’t properly informed about the standards they were expected to meet, and worse, some members were simply not qualified for the job. Unskilled developers were hired for a project of this scale, leading to a system riddled with flaws. On top of this, managers ignored realistic estimates for cost, scheduling, and performance, pushing forward with a system that was doomed to fail. Perhaps most troubling, employees had no safe way to voice ethical concerns (Principle 5.12), allowing serious problems to go unchallenged until it was too late.
3) Public Interest
One particularly important principle in the Software Engineering Code of Ethics is 1.03, which states that engineers should approve software only if they have a well-founded belief that it is safe, meets specifications, passes proper testing, and does not harm society. This principle applies to all engineers, regardless of their role, because every decision in software development can have real-world consequences. In the case of the Post Office Horizon scandal, this responsibility was neglected. The system was approved and deployed despite having hundreds of bugs, with devastating consequences for innocent workers. Instead of ensuring that the software served the public good, the project prioritized rollout over reliability, violating one of the core ethical duties of software engineering.
Conclusion
“Computers are only as good as the person who created the algorithm, but this is often forgotten,” says Ian Pay, Head of Data Analytics. With all the current talk of AI hallucinations, the Horizon case is a timely reminder that computers have a long history of getting it wrong. We tend to implicitly trust the output of computers, forgetting that whether it’s AI or a very simple computer program, it is ultimately an algorithm written by a person. Software engineers need to understand that our work impacts many people, and we must uphold the highest ethical standards to prevent harm and ensure technology serves society responsibly.
The Post Office Horizon Scandal: A Failure of Ethics and Accountability in Software Development by Elias Berhe is licensed under CC BY-SA 4.0
How Good Is the Quality of OSS?
Understanding Software Quality
What makes great software? It’s not just fancy features or spotless code; it’s about creating tools that people love to use and that treat everyone fairly. In the fast-changing world of software development, the idea of “quality” means much more than just working code.
From a technical perspective, Amy J. Ko, in Cooperative Software Development, highlights key traits like correctness, reliability, robustness, performance, and portability. These are the building blocks of good software, helping it stay free of bugs, handle unexpected problems, and work across different devices. In this article, we’ll explore how these traits play a big role in testing and finding bugs, which are critical for building reliable systems.
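To make two of these traits concrete, here’s a small, hypothetical Python example (the function and tests are invented for illustration): a unit test that checks correctness (the code gives the right answer) and robustness (it handles unexpected input gracefully).

```python
import unittest

def safe_divide(numerator, denominator):
    """Divide two numbers, returning None instead of crashing on zero."""
    if denominator == 0:
        return None  # robustness: handle the unexpected case gracefully
    return numerator / denominator

class TestSafeDivide(unittest.TestCase):
    def test_correctness(self):
        # correctness: the normal case gives the right answer
        self.assertEqual(safe_divide(10, 2), 5)

    def test_robustness(self):
        # robustness: division by zero is handled, not a crash
        self.assertIsNone(safe_divide(1, 0))

if __name__ == "__main__":
    unittest.main()
```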
But software isn’t just about machines; it’s also about the people using it. A major part of software quality is making sure it doesn’t unfairly harm or exclude anyone. As Amy J. Ko explains, biased data or algorithms can lead to discrimination, like reinforcing harmful stereotypes. Fixing these biases is a key part of building software that treats everyone equally.
The Quality of Open-Source Software
Open-source software has sparked a lively debate about its quality. On one side, supporters say OSS creates better products because of the way it’s built: by a global community of developers working together. As Oleh Romanyuk explains in FreeCodeCamp’s article on the pros and cons of OSS, open-source software is shaped by thousands of contributors, each bringing unique skills and experience from different industries and technologies. This mix of perspectives makes it easier to spot and fix problems quickly, which means the software becomes more reliable and functional over time.
Data backs up this claim. The Coverity Scan Open-Source Report shows that OSS often has fewer bugs per thousand lines of code than proprietary software. Why? Because open-source projects are constantly reviewed and improved by many developers, creating a kind of “always-on” quality control system.
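The metric behind that comparison, defect density, is simple to compute. Here’s a quick Python sketch; the numbers are made up for illustration and are not Coverity’s actual figures.

```python
def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

# Hypothetical numbers for illustration only:
print(defect_density(59, 100_000))  # 0.59 defects per KLOC
print(defect_density(72, 100_000))  # 0.72 defects per KLOC
```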
But the story doesn’t end there. A blog post by the World Bank, “The Quality of Open Source Software: How Many Eyes Are Enough?”, points out that bigger proprietary software projects often beat OSS on quality. The reason? Big companies have dedicated teams of paid experts and structured processes to find and fix complex problems. These teams don’t just fix the obvious bugs; they tackle the tough ones that require deep knowledge of the software’s inner workings.
This brings up a challenge for open-source projects. They often lack the consistent funding and organized workflows that proprietary software teams have. As Amy J. Ko explains in Cooperative Software Development, software quality isn’t just about the code but also about how the code is created. While OSS does a great job fixing simpler bugs through teamwork, the harder-to-spot issues may require the resources and expertise that only a well-funded proprietary team can provide.
So, is open-source software better? The answer isn’t so simple. OSS thrives on collaboration, bringing together diverse ideas and talents to create something great. But proprietary software benefits from structure, funding, and expert teams. Maybe the real takeaway is this: quality depends not just on the software itself but also on the people and processes behind it.
The Role of Quality Assurance in OSS
Open source thrives on community collaboration - volunteers from around the world contributing their time, skills and insights to develop, debug and enhance software. However, because open source operates on a voluntary basis, challenges like varying levels of expertise, inconsistent time commitments and the “release early, release often” philosophy can make quality assurance difficult. For open source to maintain high-quality standards, developers, quality engineers and users must work together to ensure the software remains reliable and functional.
Continuous integration tools like Jenkins are essential in this process. Projects like Pulp use Jenkins to automate testing, monitor results, and catch integration failures early. This helps ensure only stable, well-tested code reaches end users. Automated test frameworks such as Pulp Smash further strengthen quality by allowing contributors to write flexible, portable tests that identify regressions swiftly.
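To illustrate what a regression test looks like (a hypothetical pytest-style sketch, not Pulp Smash’s actual API), the test below pins down previously fixed behavior so CI can flag it immediately if it ever breaks again:

```python
import pytest

def parse_version(version_string):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    return tuple(int(part) for part in version_string.split("."))

def test_parse_version_basic():
    assert parse_version("2.10.1") == (2, 10, 1)

def test_parse_version_ordering_regression():
    # Regression guard: under naive string comparison, "2.10.0" sorts
    # below "2.9.0"; comparing integer tuples fixes the ordering.
    assert parse_version("2.10.0") > parse_version("2.9.0")

def test_parse_version_rejects_garbage():
    # Malformed input should fail loudly, not silently succeed.
    with pytest.raises(ValueError):
        parse_version("not-a-version")
```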
Community Test Days also play a key role. Projects like Fedora organize test days, inviting users to propose areas for testing, document their findings, and collaborate in forums like IRC channels. These test days empower users to become active participants in quality assurance, ensuring critical issues are identified and addressed. Additionally, Bug Triage days, like Foreman’s Bug Day, give users the chance to prioritize and resolve existing bugs, reducing the backlog of unresolved issues.
Through tools like continuous integration, automated testing, community test days, and bug triage, open-source projects can ensure high quality and maintain stability even in the face of decentralized development.
Addressing Bias in Open-Source Software: An Overlooked Quality Issue?
Bias in OSS development, especially when it comes to gender, continues to be a significant issue. Women are underrepresented in OSS, and they miss out on valuable development and professional opportunities as a result. As new jobs open up, many women lack the experience needed to apply, which only continues the cycle. According to researchers such as Anita Sarma and Steinmacher, these problems may be built into the very tools used for OSS development, creating barriers that discourage women from participating.
Their research focuses on tools like Eclipse, Git, GitHub, Jira, and Hudson, tools that weren’t always designed with diverse users in mind. As Steinmacher explains, “If someone implements a tool without considering a diverse set of users, they may include such kinds of bugs which affect one gender more than the other.” The team plans to use GenderMag, a method developed by Margaret Burnett, to identify gender-biased barriers in these tools.
To truly improve the quality of OSS, addressing bias requires a deliberate effort from a diverse group of contributors who bring a variety of perspectives to the table. By prioritizing inclusivity in design and testing, OSS can create software that works for everyone, not just a select few. Without this focus, bias undermines the overall quality and fairness of OSS, reducing its effectiveness and limiting its impact. In the end, addressing bias is not just about fairness; it is about making OSS better software for everyone.
References
Ko, A. J. (n.d.). Cooperative software development: Quality. University of Washington. Retrieved from https://faculty.washington.edu/ajko/books/cooperative-software-development/quality
Gibbons, S. (2021, December 29). What is great about developing open source and what is not? FreeCodeCamp. Retrieved from https://www.freecodecamp.org/news/what-is-great-about-developing-open-source-and-what-is-not/
Black Duck Software. (n.d.). Coverity Scan: Open source. Retrieved from https://www.blackduck.com/
M. K. (2019, May 28). The quality of open source software: How many eyes are enough? World Bank Blogs. Retrieved from https://blogs.worldbank.org/en/opendata/quality-open-source-software-how-many-eyes-are-enough
How Good is the Quality of Open Source software compared to proprietary software © 2024 is licensed under Creative Commons Attribution-ShareAlike 4.0 International
Oi Oi
Welcome to my blog! I’m Elias, a passionate learner and advocate for open-source software development. Here, I explore the complex world of open-source communities, their quality assurance practices, and the challenges they face—from biases and inclusivity to improving software quality for all users. My goal is to provide insights, share research, and spark conversations about how we can make open-source software more equitable, efficient, and impactful. Whether you’re a developer, enthusiast, or simply curious about the future of open-source, you’ll find thoughtful discussions and practical solutions that aim to strengthen the open-source ecosystem. Let’s dive in and explore the world of OSS together!