For the senior seminar next year, for all who are interested in FOSS – Some thoughts
Getting started in Open Source contributions is not easy, but it is not terribly hard either. I remember when I first looked at the issue tracker and thought, "Where do I start?" Looking back now, I guess the answer would be, "It does not matter where, just start somewhere." My FOSS experience has been a journey of exploration. I learned how to use version control, how to read unstructured code, how to ask about and explain technical problems, and so on. More importantly, it was a journey of personal growth and civic engagement.
If I were to give advice to future FOSS contributors, I would suggest that they be well prepared before starting. Preparation extends beyond developer installations and studying the issue tracker. One could simply join the chatroom and talk to the contributors. People often assume that chatrooms are only for technical questions, and therefore make the community their last resort for help. For FOSS projects, the community is the best resource for everything. By joining the chatroom early, one can get a good sense of the project's status and outlook. Contributors are more than happy to share their insights and offer all kinds of help. When our group worked on freeCodeCamp, the Gitter chatroom answered our technical questions, suggested outside resources, gave us feedback on our ideas, and sometimes contributors just shared their thoughts on the project in general. Without a formal structure of departments, communication among contributors becomes truly significant in moving the project forward. The positive feedback from other contributors validated our effort and motivated us to work harder. Sometimes, the sense of community is just a simple "good job!"
Working on open source projects is very much like doing community service. It is hard at first to see the impact. But results are never the ultimate goal of FOSS contributions. One does not need to be the best coder to contribute. With an open mind and good effort, everyone can contribute. And through communication and collaboration, we will leave an impact TOGETHER.
Ethics on AI
Society is often wary of AI development. The question people have is: what if AI harms us humans? Harm from AI could fall into two categories, accidental and deliberate. First, accidents happen, and we can only do our best to take preventative measures. As for deliberate harm, would AI cause damage autonomously? No. At least for now, the technology is far from fully autonomous. Meanwhile, should we regulate AI development? Yes, we should regulate AI development as we do any other practice. Such development should follow a code of ethics, so that it serves the best interest of the public. But one must realize that regulations should not be put in place merely out of fear that AI will one day attack humans.
The fact that AI makes machines much smarter and more capable does not make them fully autonomous. We are accustomed to thinking that if a person is granted more autonomy—an inmate released from jail, a teenager left unsupervised—they may do wrong because of a questionable or troubled background. In contrast, machines equipped with AI, however smart they become, have no motivations of their own. It is hard to see why driverless cars would unite to march on Washington without any human instruction. Of course, one may argue that the government should be concerned about autonomous weapons for the sake of society and human beings. It is worth debating the development of autonomous weapons that employ AI to decide when to fire, with how much force, and on what targets. Note that a group of robotics and AI researchers, joined by public intellectuals and activists, signed an open letter presented at the 2015 International Joint Conference on Artificial Intelligence, calling for the United Nations to ban the further development of weaponized AI that could operate "beyond meaningful human control." The letter has over 20,000 signatories, including Stephen Hawking, Elon Musk, and Noam Chomsky, as well as many of the leading researchers in the fields of AI and robotics.
I think AI should be regulated on a case-by-case basis. We should also add the opposite question to our concerns: whether it is ethical to use a person in high-risk situations when a robot can carry out the same mission as well, if not better. This question applies to clearing mines, dragging wounded soldiers out of the line of fire and civilians out of burning buildings, and ultimately, fighting wars.
Ethics for software engineers
Technology has brought about many ethical debates. Questions range from privacy and ethical coding to technology versus humans. As software developers, where does ethics come in? What should we keep in mind when we code?
The medical profession has an ethic: first, do no harm. Silicon Valley has an ethos: build it first and ask for forgiveness later. Such an ethos has indeed encouraged many technological advancements with bold ideas. On the other hand, tech innovations, often aiming to better human lives, can be unethical. Practices such as constant location tracking and the keeping and sharing of personal data are among many that have been brought to the public's attention. It often does not occur to software developers that writing code is very much the same as manufacturing products. The code that goes into production can harm users, just as a defective product would.
Bill Sourour, a software developer, wrote in his blog about "the code I'm still ashamed of." In it, he talked about making a website for a pharmaceutical company that targeted teenage girls. The quiz he designed would always result in a recommendation for one specific medicine. After he learned from a news report that a girl had committed suicide after being prescribed the medicine he helped advertise, he became wary and ashamed of the code he wrote, realizing that he had a direct impact on the well-being of those oblivious patients. He remarked, "as developers, we are often one of the last lines of defense against potentially dangerous and unethical practices." So what should we keep in mind when we code?
The IEEE and ACM joined forces to provide a code of ethics for software engineering professionals. The eight principles are:
1. PUBLIC – Software engineers shall act consistently with the public interest.
2. CLIENT AND EMPLOYER – Software engineers shall act in a manner that is in the best interests of the client and employer consistent with the public interest.
3. PRODUCT – Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
4. JUDGMENT – Software engineers shall maintain integrity and independence in their professional judgment.
5. MANAGEMENT – Software engineer managers and leaders shall subscribe to and promote an ethical approach to management of software development and maintenance.
6. PROFESSION – Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
7. COLLEAGUES – Software engineers shall be fair to and supportive of their colleagues.
8. SELF – Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.
Is a code of ethics enough to remind software engineers to be ethical? I think not. If not for this blog, I would never have known this code of ethics existed. To truly instill ethics in this profession, we need to start with an ethical approach in education.
On self-driving
Computers make errors. Most of the time we trust computers and rely on them to deliver our messages, calculate and analyze data, and even control rockets. We have let computers do a great deal of work for us, we still use them for a great deal of work, and we should continue to use them for even more. Fatal accidents like the self-driving Uber hitting a woman in Arizona should not stop us from using technology and developing self-driving cars. Instead, we should put more resources into development and make better policies to avoid tragic accidents in the future.
Indeed, computers make errors, but so do humans. On the same day the Arizona woman was killed by the self-driving Uber, approximately 90 people were killed on U.S. roads due to human error. The very reason we hope to have self-driving cars is to reduce vehicle accidents and make traffic behave more consistently. We must not let the perfect be the enemy of the good. One fatal accident should not lead to the demise of self-driving cars; instead it should galvanize the tech and auto industries into further development.
I often imagine that road traffic will one day resemble air traffic, mostly guided by traffic control while the vehicles run on autopilot. Modern flights are so automated that some argue pilots are really just a point of liability. Air travel has become the safest means of transportation, and 2017 was the safest year for passengers in the history of air travel. But the air traffic control system we have today was not built overnight.
It took years of investment and years of missteps to create the safe flight ecosystem we now have in this country. In 1956, for example, two planes collided mid-flight near the Grand Canyon. The tragedy led to renewed investment in air-traffic control systems nationwide and spurred the creation of the Federal Aviation Administration. As a result, mid-air collisions are now extremely rare.
Self-driving cars have the same potential. First of all, it is important to note that an established system requires both the technology and comprehensive guidelines. At the moment, we have good-enough technology for cars to operate on their own, but we hardly have any rules or guidelines for self-driving cars. Looking at the Uber incident, the technology was not the only culprit: there were clear lapses in Uber's safety practices, and its cars, albeit among the first on the road, were in fact behind competitors such as Google's Waymo.
Problems with self-driving cars at such an early stage are inevitable. How the tech industry, the government, and society react will have an immediate effect on the future of our roads.
Network Security: A response from the public and the private sectors
Facebook has been under the spotlight for the past couple of weeks. Network security has become a public concern, with privacy threatened not only by the government but also by private companies. People were uncertain whether Facebook sold their data to Cambridge Analytica or to Russian hackers, and suddenly became wary of network security and of how their own data could be used in cyberwar. Under pressure, both the public and the private sectors have responded.
The United States and Britain on Monday issued a first-of-its-kind joint warning about Russian cyberattacks against government and private organizations as well as individual homes and offices in both countries, a milestone in the escalating use of cyberweaponry between major powers. Ciaran Martin, chief executive of Britain's National Cyber Security Centre, said Russia had targeted "millions" of devices in both countries, often seeking to hack into individual homes or small businesses, or to take control of their routers. Having control over a router means having access to all the traffic that passes through it, which can lead to a breach of credentials and passwords. The officials said the Kremlin was often using what are known as man-in-the-middle attacks, in which hackers secretly insert themselves into the exchange of data between a computer and a server in order to eavesdrop, collect information, misdirect payments, or further compromise security. Officials of both governments repeatedly urged individuals and businesses to better protect their own networks.
Meanwhile, major tech firms have agreed not to assist any government, including that of the United States, in mounting cyberattacks against "innocent civilians and enterprises from anywhere," reflecting Silicon Valley's effort to separate itself from government cyberwarfare. More than 30 tech firms, led by Microsoft and Facebook, announced a set of principles. The principles, which have been circulating among senior executives in the tech industry for weeks, also commit the companies to come to the aid of any nation on the receiving end of such attacks, whether the motive is "criminal or geopolitical." It is also interesting to note that Google, Apple, and Amazon declined to sign on, at least for now.
It is hard to say what prompted this sudden effort on network security from both the public and private sectors; perhaps the issue has truly gotten out of control, or perhaps they are under heavy pressure from the public. While it is good to see their joint effort, ultimately the general public has to be aware and tech-literate enough to protect themselves.
Government and Privacy: So, what about foreigners?
One cannot eat one's cake and have it too. It seems that neither can one have both privacy and security these days. Exactly how much privacy we compromise, and exactly how much more security we get in return, seems to be a never-ending debate. Individuals tense up upon realizing that their own privacy has been compromised. In today's society, one of the underlying issues is that people tend to turn a blind eye when they are not directly affected. The mentality of "as long as MY privacy is assured without compromising security, it is okay" is troublesome.
Whereas Facebook and Cambridge Analytica have received an inescapable spotlight from the media, a recent State Department proposal is moving through the approval process without much fanfare. Nearly all applicants for a visa to enter the United States — an estimated 14.7 million people a year — will be asked to submit their social media user names for the past five years, under proposed rules that the State Department issued on Friday. The proposal covers 20 social media platforms such as Facebook, Flickr, Google+, Instagram, LinkedIn, and YouTube, along with some non-US-based sites. It would affect millions of people who intend to visit the United States for business or pleasure, including citizens of countries such as Brazil, China, India, and Mexico, from which many students come to pursue an education in the States. However, citizens of roughly 40 countries to which the United States ordinarily grants visa-free travel will not be affected by the requirement. "This attempt to collect a massive amount of information on the social media activity of millions of visa applicants is yet another ineffective and deeply problematic Trump administration plan," said Hina Shamsi, director of the American Civil Liberties Union's National Security Project. "It will infringe on the rights of immigrants and U.S. citizens by chilling freedom of speech and association, particularly because people will now have to wonder if what they say online will be misconstrued or misunderstood by a government official."
So far I have not heard fellow American students talk about this, but I have heard questions from some international students, bewildered by the new vetting requirement. On Sina Weibo, one of China’s largest social media platforms, several users were critical of the plan. “Does it mean someone’s visa application will likely be rejected if he/she has been critical of the U.S.?” one wrote. “What about your sacred ‘freedom of speech?’” Another user wrote: “We Chinese have learned well enough the lessons to be drawn from isolation. Now it’s America’s turn.”
This requirement is bizarre in that citizens of certain countries are treated differently from American citizens and those of allied countries. Of course, proponents of the proposal may argue that such differences are nothing new, as there have long been extra vetting processes for Middle Eastern countries, and social media accounts of US citizens have long been transparent to the government. Yet the US government should not ask for carte blanche access to every visitor's social media. Aside from China, the US government perhaps has the most powerful surveillance apparatus in the world, so asking for social media handles is unnecessary and ineffective for national security. Moreover, demanding that visitors surrender their social media information is about far more than who America lets into its borders – it's about suppressing criticism of the president. In this case, privacy and security seem to be back on the same side again.
Less Privacy = More Security?
During the FBI's battle with Apple over encryption, then-FBI director James Comey remarked that the authors of the Constitution's provisions for searches "couldn't have imagined any box or storage area or device that could never be entered." Yet the U.S. government is now considering limits on Stingray surveillance, which law enforcement agencies have worked hard to keep secret. In spite of mass data collection and surveillance, it is hard to argue just how effective mass surveillance is for public safety. An advisory panel appointed by then-President Barack Obama concluded in 2013 that the NSA's phone records program "was not essential to preventing attacks," since the information could have been obtained through court orders. The conclusion further agitated the debate on privacy and security. In one survey, a quarter of Americans polled said they changed the way they use their phones, the web, email, and texts after learning about the government's surveillance programs.
I certainly changed how I used the internet when I discovered that Google could predict my age, gender, and interests based on my search records. Of course, I am sure that Google could not care less about who I am or what I do, since I am just a string of data in its records. But the idea that my behavior on the Internet is constantly recorded and analyzed is bizarre. Imagine a person who follows you around all day and, at the end of the day, tells you who you are. Two concerning ideas arise—no one wants to be followed around all day, and no one wants to be judged on who they are based on some one-sided record. The predictions could be way off. So how is technology any different? How does government surveillance arrive at comprehensive intelligence simply from the location, associates, and activities of a suspect? Lately, I have been seeing more and more cases of "the perpetrator/bomber/attacker was on the FBI surveillance list for 4 years."
In class, I hear fellow Chinese classmates claim that the Skynet system in China has indeed helped China's public safety. I disagree. Having lived in China most of my life, I would not attribute my confidence in public safety to constant surveillance. Far more aspects contribute to safety than public surveillance: culture, economic status, education level, and so on. The argument that a compromise in privacy leads to more security is specious. Moreover, systems like Skynet are solely controlled by the authorities. One may argue that such a system would be a threat had it been in the control of malicious individuals. Indeed, but we have no control over how the authorities utilize the system, and no one can guarantee that the authorities are completely benign toward the public. Skynet seems pretty effective at consolidating power against dissidents and mass protests.
Technology has certainly helped law enforcement solve crimes. With technology, it is harder to get away with murder, but arguably no harder to commit one. Surveillance and intrusions into public privacy are merely reactive measures; what we need are proactive measures. In this case, the fault lies not with technology, but with how we use it.
Sources:
https://www.bloomberg.com/quicktake/privacy-vs-security
https://www.washingtonpost.com/world/national-security/nsa-shouldnt-keep-phone-database-review-board-recommends/2013/12/18/f44fe7c0-67fd-11e3-a0b9-249bbb34602c_story.html?utm_term=.1d352b05ed50
Can we ever beat the pirates?
Journalist Nick Bilton once wrote in his New York Times article that "stopping online piracy is like playing the world's largest Whac-A-Mole." Take one site down, and countless others appear within hours. Of the 12 participants in my senior seminar class, 3 have not used the infamous Pirate Bay, not counting the 1 student who was absent from class. Yet we all seem to have pirated things here and there, be it downloading YouTube music or sharing electronic textbooks. As The Pirate Bay struggled with legal accusations, it packaged up the code that runs the entire website and offered it as a free downloadable file for anyone who would like to run it on their own server. People then turned to mirrored versions and other alternatives such as KickassTorrents (and when KickassTorrents went down in 2016, more alternatives came up). There are also websites like TorrentFreak that report on copyright and piracy news—its editors adamantly believe that "piracy won't go away."
Sure, there may always be piracy, just as there will always be theft and crime. But can we at least reduce and control piracy? It is interesting to note that new laws do not bring about fundamental change. As much as copyright holders want to believe in new laws and legislation, many believe that any law will just push people to find creative new ways of getting the content they want. It is also interesting to note that there is an established relationship between the legal availability of material online and copyright infringement—it's an inverse relationship. The most downloaded TV shows on The Pirate Bay are the ones that are not legally available online, think Game of Thrones. To solve the piracy issue, it is perhaps better for the big media companies to come up with services that discourage users from pirating—a proactive approach rather than reactive.
As the tech world expands, the piracy community keeps pace. Piracy is easier with cloud-based sites and harder to trace than ever. Soon, pirated materials could extend far beyond the Internet. People can now share and pirate 3D printing data, so it is entirely possible to make tangible physical objects from pirated blueprints. Currently, copyright law does not apply to standard physical objects, as they are deemed "noncreative," so pirating 3D schematics is not exactly illegal. Will there be new laws to combat this? Once again, any such law will be reactive, and once it is passed, the pirates will likely find new waters to stay afloat on.
Is there a right amount of censorship? Who has the say?
The Charlottesville car attack in Virginia in August 2017 left the country in shock. Many in the States were appalled by such a barbaric incident, but more importantly, by the divide within their own country. The tension did not die down in Virginia. As people across the nation united and condemned the attack, the American neo-Nazi website The Daily Stormer published a long, hate-riddled post mocking the victim.
Criticism of the article led its domain registrar, GoDaddy, to end The Daily Stormer's service. Google soon responded in kind after the site's administrators quickly hopped over and registered with Google. It was not until Cloudflare, a website security and performance service, dropped the site as a client that The Daily Stormer truly lost its ability to stay online. Given the nature of Cloudflare's business and the scarcity of competitors, its power over internet censorship is enormous.
Cloudflare offers its clients protection, much the same way security guards protect controversial public figures as they give public speeches. The company is able to block malicious traffic that would otherwise flood clients with requests and take them offline. Security service providers like Cloudflare, ISPs like Verizon and Comcast, and domain name servers often do not look at the content their clients and customers post online. They provide a pipe for the information to flow through. It is crucial to understand that these companies are different from social media websites, where content is scrutinized. In other words, the pipe is truly the foundation for websites.
There has long been debate over ISPs favoring access to some content over others, but few have thought about these companies having arbitrary control. The reason is that when GoDaddy refuses to host a site, there are Bluehost, Google, and many other competitors. If those companies sit at the very beginning of the pipe, Cloudflare lies further down it. With almost no competition, Cloudflare's decision to drop service can be fatal to a website, hence the demise of The Daily Stormer. Of course, denying a neo-Nazi website seems perfectly reasonable now, but what if Cloudflare suspended service for a political candidate its administration did not like?
While neo-Nazism is much more than just a political agenda, one could well argue that, for the sake of freedom of speech, the site deserves its spot on the Internet despite the public's anger. After all, didn't Cloudflare make its judgment based on its client's content? When private companies start to police the public speech of their clients, it is hard to draw the line as to what is right or wrong, biased or not. Given their economic incentives and the nature of their business, private companies certainly won't act arbitrarily for now. Yet how are they supervised? How do we users ensure that we are getting the service we desire?
How should we introduce technology to the new generation?
Technology has become essential in people's daily lives, as we can accomplish most of our mundane tasks through our phones, tablets, or computers. When I was a kid, you would probably catch me playing my Game Boy Advance or Sony PSP here and there. But now it seems like smartphones are the new PSP, and kids are playing games on iPhones and iPads almost anywhere, anytime. I certainly do not envy them, as I adamantly believe that my childhood without the constant presence of technology was much better.
I cannot help but think that kids nowadays should spend less time on the screen; on the other hand, how do we introduce technology to them? Kids nowadays see technology as entertainment and fun gadgets. But technology can have a much deeper impact on these kids as an educational tool than as a portable gaming console. The lack of proper netiquette leads to cyberbullying, ignorance of copyright, and many other social issues. It is important for kids to realize that the person on the other side of the screen is real—the ethics of right and wrong apply to technology the same way they do to our daily lives. In fact, technology is so deeply embedded in our lives that one should not exempt it from any of these arguments.
Nowadays, with technology integrated into classes, one must think about the ethics of technology in education. Schools often rush to get the newest technology to entice prospective students, but fail to address the ethical issues that come with it. As teachers are encouraged to use technology in the classroom, cyber ethics should also be part of the curriculum. Jerry Crystal, the technology coordinator at Carmen Arace Middle School in Bloomfield, Connecticut, states, "Educators do need to address cyber ethics, but they can address it in the context of their current curriculum and incorporate the lessons into ongoing programs." To foster a healthier, safer, more efficient technological environment, the younger generations should learn about privacy, cyberbullying, and the law regarding the use of technology.
“It will be fine”, says Facebook
Facebook has introduced drastic changes to its news feed to prioritize what users' friends and family share and comment on, while de-emphasizing content from publishers and brands. The aim of this long-awaited overhaul is to bring people together, maximizing the "meaningful interaction" people get from Facebook and, in turn, reducing what Zuckerberg called "passive content" – videos and articles that do not spark social interaction.
Over the years, the role Facebook plays in people's daily lives has changed. One would be surprised by how many people use Facebook nowadays as a major news source. And unsurprisingly, with the explosion of myriad other social media outlets, fewer and fewer people are signing up for and using Facebook. The number of people in the United States and Canada who use Facebook on a daily basis fell for the first time ever in last year's fourth quarter, dropping to 184 million from 185 million a year earlier. The way people react to updates and changes in technology is quite similar to how people react to any new technology—skeptical yet curious. One cannot yet predict how Facebook will fare against other social media in the future, but it is worth commending that Facebook is resetting the tone of social media: technology should bring people closer, not drive them further apart.
Many, including but not limited to Wall Street investors, may be concerned about the future of Facebook given such a drastic overhaul and declining usership. But Zuckerberg argues that less is more; he reassures users that "the time spent on Facebook will be more valuable." As the CEO of Facebook, Zuckerberg highlighted that "we have a responsibility to fully understand how our services are used to amplify the good." The issue with today's social media is exactly that sense of responsibility, or the lack thereof. Entrepreneurs and users care more about the trend, the hype, the social pressure, or whatever one may call it, when talking about social media. In a sense, the change Facebook has brought about this time is not only a technical one, but also one that concerns how we perceive social media.
Personal reflections as a member of FOSS
It must have been half an hour since I created this document. As I stared at the blank page and contemplated what to write about, I could not find a single word to describe my experience in my senior seminar. It was both good and bad, exciting yet frustrating. The past semester was truly a learning experience.
Aside from learning how to use Git, the bash shell, and basic JavaScript, I have learned to communicate better with both group members and strangers. Initiating a conversation with, or posing a question to, a stranger was not the easiest thing. Wording, appropriateness, and expectations were all uncertain. Fortunately, prompt and helpful responses from other contributors instilled a sense of community in me: my participation mattered, and my contributions could make a difference. We talked at length about the FOSS community in our seminar class. It might not have been obvious then, but looking back, the sense of community is truly the best part of FOSS. The fact that there are people out there willing to help me and work with me makes me value my contributions and, more importantly, makes me feel responsible for my work.
Our seminar professor posed an interesting question,
How have the experiences in this course and in working with your Open Source Community in particular helped to cultivate the above civic dispositions and skills that will help to prepare you for a life of civic engagement?
Civic engagement, I guess, is the peacefulness I felt when I found myself bug gardening on a Saturday night before finals; it is the bond established through communication; it is the joy I felt when I heard our first pull request had been accepted; it is learning from others; it is sharing my knowledge with others. But most importantly, civic engagement is a process and a commitment that no single event can define. It is the aggregate of all my work and learning over the past three months.
Civic engagement and learning do not end with this semester. I look forward to continued growth as a student and as a person in the upcoming year with the FOSS community.
Open source is a win!
There are many open source success stories that show the significance of the open source movement in the tech industry. But the influence of open source is omnipresent, slipping into non-tech industries and into everyday life. Large businesses like Walmart and ExxonMobil utilize open source management systems and developer toolkits, while financial giants like JPMorgan and the London Stock Exchange Group are among the companies backing Hyperledger, open source software that could reinvent the stock market. In the summer of 2016, even the US government embraced the open source movement. It announced its first official federal source code policy, detailing a program that requires government agencies to release 20 percent of any new code they commission as open source software. Government agencies would also share more code with each other, adopting open source practices and philosophies.
Nowadays, almost every major piece of technology one interacts with on a day-to-day basis—from the web to phones to cars—is built using at least some form of open source code. The open source movement really seems to have won, in the sense that it is now a core part of how software is created.
As promising as open source projects sound, there is an underlying problem people tend to turn a blind eye to—funding. The lack of steady, continuous funding weakens the impetus for development and potentially creates security issues. For instance, OpenSSL is an encryption software library used by websites and operating systems, including but not limited to Android and iOS, to securely process sensitive data. Prior to 2014, only one person worked on the project full time, and this was a big reason no one noticed Heartbleed, a massive security vulnerability that led to one of the worst digital security emergencies. I remember my Algorithms professor talking about Heartbleed just recently. The reaction from the class was, "How did that happen in the first place? How did no one notice?"
The incident helped raise funds to prevent future problems. But now that the publicity has worn off, donations have dwindled, too. At the same time, open source communities face the same social issues a normal workplace would, be it pay, gender, race, barriers to entry, or the culture of meritocracy. These problems seem to create a bottleneck for open source development. Programmers seem optimistic nonetheless; in a Wired article, a software developer named Audrey Eschright argues that programmers "are here not because of the code, but because of the community". Indeed, I personally think that the biggest success story of open source is never about a particular business or an industry, but about the nurturing of a community. Open source simply brought people together, and that is a win!
Going Open-Source
Going open source seems rather counterintuitive for a business, as it does not seem to directly generate more revenue. So how and when does a business decide to go open source?
As Michael Widenius and Linus Nyman suggest in "The Business of Open Source Software: A Primer," the essence of a business is to figure out how the firm can create value for the customer while simultaneously extracting some of that value for itself. There are, of course, higher goals in going open source, such as making an exclusive technology readily available and thereby galvanizing the development of the whole industry.
Oftentimes, open-source entrepreneurs think about their business model only after their free-software downloads are off the charts. However, according to the CEO of MongoDB, "some of the best open-source companies had a clear idea of how they were going to make money from their early days." Red Hat, for instance, thought about monetization from its kickoff. When the company was shifting from shipping Linux CDs in textbooks toward selling to big enterprises, executives drew a clear delineation between the company's free Fedora user community and the more sophisticated Red Hat Enterprise Linux product. They recognized that their target buyers – CIOs and operations executives – valued stability over features. Red Hat offered exactly that—an enterprise-grade, paid product supporting their core Linux applications.
As Red Hat maintained a vibrant open-source community around Fedora, updating features every once in a while, the company marketed enterprise-grade security and optimization on standard Intel hardware to people using the paid enterprise product. Consequently, Red Hat was able to generate a recurring revenue stream of more than $1 billion from enterprise customers, and simultaneously helped Linux become the most widely adopted server operating system.
People often associate open source with the grand scheme of things, the greater good, and even humanity. While they are not wrong, they often overlook the fact that bringing the greater good to the industry pushes competition and the development of technology, and hence generates more revenue in the long run. Like any other business model, open source is not a golden key to success. To go open source, a business must have a clear and strong motivation behind it.
Agile methodology in school
In previous posts, I have briefly talked about my experience with Agile methodologies during my internship. Since the semester started, I have realized two things about Agile. First, Agile is not exclusive to the company I worked at. Second, all my classmates seem to share the same revelation: Agile is not exclusive to their internships either.
The questions then arise: 1. Do companies assume that interns and young professionals know about software process models? 2. If yes, how come no classes, until my senior seminar, have talked about them? 3. If not, how come companies do not explain them better so there is no misconception about Agile methodologies?
Given that so many of my classmates have used Agile development processes, I am most curious about the second question. If educational institutions were to implement Agile, what would it look like? Baochuan Lu and Tim DeClue of Southwest Baptist University shared my curiosity and published the paper "Teaching Agile Methodology in A Software Engineering Capstone Course." The course in their study is in fact very similar to my senior seminar.
The course under study is a two-semester course sequence that encapsulates a software engineering (SE) course and a capstone project. This course includes two parallel components. The theoretical part consists of weekly lectures/seminars, reading assignments, and quizzes, which cover core SE theories and best practices. The development component involves building a software product for a real customer. Students are required to work at least 10 hours each week on their projects. The goal of this course is to introduce fundamental software engineering concepts and provide an opportunity for students to apply their knowledge in real software development and project management.
And like my senior seminar, "More than 50% of the students in this class have had an internship or significant industrial experience," so Agile is not entirely new to them.
The main challenges students faced were communication and planning. To achieve agility, Agile methodology relies on the tacit interpersonal knowledge embedded in the team rather than on knowledge written down in plans. Close collaboration and interaction are therefore crucial. In this class, students are required to set their own milestones and deadlines. However, the authors found that self-imposed deadlines do not usually work for students, who procrastinate and then scramble to meet the due dates by resorting to college survival skills. While Agile development is not plan-driven, considerable mental planning is required to ensure the success of a project.
Lu and DeClue also mention that teaching Agile methodologies poses challenges for instructors. Since students did not write the code in the teaching projects from scratch, they feel less attached to it and less motivated to refactor the design and complete test cases, which often contradicts the course objectives. Instructors' responsibilities then become tricky, as there is a fine line in how hard to push students on their projects: push too hard and the Agile methodologies get blurred; push too softly and the students do not maximize their learning potential through the projects. Whether the process is Agile or not then hardly matters. Lu and DeClue conclude that while teaching Agile methodology is challenging, "it is worthwhile as it enriches students' software development experience."
As I wrap up this blog post and attempt to reflect a bit more on this paper, I can’t help but think: my senior seminar is really based on Agile methodology, isn’t it? No scrums and no sprints =/= not agile
Work Cited:
https://www.researchgate.net/profile/Carol_Spradling/publication/234828656_Scratch_the_workshop_and_its_implications_on_our_world_of_computing/links/0c9605378da8d733bb000000.pdf#page=306
Testing: A Top Priority
In 2013, the issue of software defects was taken to the national stage with HealthCare.gov, the website intended to provide access to affordable health insurance. Then-President Obama personally addressed the issue: "there is no excuse for the problems." One would think that a government website would have had enough testing beforehand to ensure its functionality. But there are two problems with that thought. First, people often underestimate the complexity of testing, whether they are developers or users. Second, there is never enough testing; testing is as fundamental as the development itself.
Andreas Kuehlmann, senior vice president of the Software Integrity Group at Synopsys, once remarked on his blog, "The times when software bugs were tolerable are long gone, and developers across all industries find themselves held to unprecedented levels of accountability for software quality and security." While development teams often rely on manual testing for its ease and affordability, manual testing no longer keeps up with the pace of software development.
Kim Zetter, a senior staff reporter for Wired magazine, points out that 8 out of 10 software applications fail to meet a security assessment. The claim is based on a State of Software Security report by Veracode that analyzed 9,910 applications submitted to its online security testing platform. Government websites and applications fared worse when it comes to cross-site scripting and SQL injection flaws, while commercial applications were often susceptible to remote-execution flaws. According to the report, 40 percent of government applications contained these flaws. While the prevalence of SQL injection flaws has gone down 6 percent overall in the last two years across the application market as a whole, it has remained flat in government applications, indicating that government apps have made no improvement in this regard.
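To make the kind of flaw the report counts concrete, here is a minimal sketch of my own in Python (the table and function names are invented for illustration and are not from the Veracode report): an SQL injection vulnerability arises when user input is concatenated into a query string, and a parameterized query avoids it.

```python
import sqlite3

# Toy in-memory database for the example (hypothetical schema, illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_vulnerable(name):
    # BAD: user input is pasted directly into the SQL string.
    # An input like "' OR '1'='1" changes the query's meaning and dumps every row.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # GOOD: a parameterized query treats the input strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_vulnerable("' OR '1'='1"))  # returns every user in the table
print(find_user_safe("' OR '1'='1"))        # returns an empty list
```

A test suite that feeds hostile inputs like the one above into every query path is exactly the kind of check that catches this class of flaw before release.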
As a computer science student in college, it is disheartening to see this happening in the industry. The problem lies not only in the industry, but also in the culture of programming. As much as professors have tried to emphasize the necessity of testing, coding always came first. No one I know writes test cases prior to coding the main task, and few write test cases as they progress through a project. One may argue that the culture is shifting—people now practice test-driven development. But test-driven development does not represent the culture as a whole. The solution should be an increasing emphasis on software testing across institutions and industry, starting with students in intro computer science courses testing as much as they program.
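For readers who have not seen test-driven development in practice, here is a minimal sketch in Python using the standard unittest module; the function and its tests are my own invented example, not taken from any of the sources above. In TDD the tests are written first and fail, and only then is just enough code written to make them pass.

```python
import unittest

# Written second: the minimal implementation needed to satisfy the tests below.
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Written first: the tests that define the expected behavior.
class TestMedian(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

if __name__ == "__main__":
    unittest.main()
```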
Don’t think about design patterns too much
When I first learned about design patterns, I thought of their relationship to software as that of outlines and rough drafts to an essay. So do programmers choose a design pattern prior to writing the code? It sounds plausible, but the idea is rather rigid given the flexible nature of software design.
If you google "what design pattern should I use," chances are the results will be disappointing. There are flow charts, but no concrete answers. The reason is that one does not write a program by starting from a design pattern. It does not work that way.
Patterns are not building blocks for software. A pattern is something one reaches for after finding a problem that requires the flexibility the pattern offers, or after recognizing that substantial precedent exists for that kind of problem. In "How to Use Design Patterns," Erich Gamma says:
Trying to use all the patterns is a bad thing, because you will end up with synthetic designs—speculative designs that have flexibility that no one needs. These days software is too complex. We can't afford to speculate what else it should do. We need to really focus on what it needs. That's why I like refactoring to patterns. People should learn that when they have a particular kind of problem or code smell, as people call it these days, they can go to their patterns toolbox to find a solution.
Starting out with a design pattern is like having a solution and looking for a problem, often leading to over-engineering and ultimately inflexibility in the design.
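To make "refactoring to patterns" concrete, here is a small hypothetical Python sketch of my own (not from Gamma's interview): a growing if/elif chain is the code smell, and the Strategy pattern comes out of the toolbox only once that smell appears.

```python
# Before: the code smell. Every new shipping option means editing this chain.
def shipping_cost_before(method, weight_kg):
    if method == "standard":
        return 5.0 + 1.0 * weight_kg
    elif method == "express":
        return 10.0 + 2.5 * weight_kg
    elif method == "overnight":
        return 25.0 + 4.0 * weight_kg
    raise ValueError(f"unknown method: {method}")

# After: refactored to the Strategy pattern. Each pricing rule is its own
# strategy, and new options are added without touching the existing logic.
SHIPPING_STRATEGIES = {
    "standard": lambda w: 5.0 + 1.0 * w,
    "express": lambda w: 10.0 + 2.5 * w,
    "overnight": lambda w: 25.0 + 4.0 * w,
}

def shipping_cost(method, weight_kg):
    try:
        return SHIPPING_STRATEGIES[method](weight_kg)
    except KeyError:
        raise ValueError(f"unknown method: {method}")

print(shipping_cost("express", 2))  # 15.0
```

The point is the order of events: the duplication showed up first, and the pattern was applied in response to it rather than chosen up front.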
That is not to say that one should not do any planning or designing. Designing the structure of an application and planning ahead will likely reveal a design pattern, or perhaps lead to an original framework.
