samhrncir
Sam Hrncir
19 posts
samhrncir · 5 years ago
Reflection: Takeaways from my CS Senior Seminar
Dickinson College's Computer Science Department's senior seminar is unlike any other I've heard of.  Its focus is to slow down instead of speed up.  In the tech world, faster always seems better: faster to get code to production, faster to adopt new technologies, faster to beat the competition.  But our senior seminar brilliantly slows that world down so we can actually take in what is going on around us and think about it independently and critically.  It did this through assigned readings.  I loved this part of the course because it introduced me to a variety of sources of technical news and articles, which will let me stay informed on the industry throughout my future career!  The readings also prepared us for class discussions on the many ethical issues in our industry.  Even though we all had very limited real-world experience, and definitely felt that limitation during each discussion, we were encouraged to think critically outside our comfort zones and make our own sense of the issues.  The civic disposition of ethical discernment was reinforced along with empathy.  We learned how to find important and impactful industry news, consider the hidden concerns around a topic, and come to an informed understanding through open and inclusive discussion.
Stepping back to the beginning of the course, we worked through many well-constructed, well-directed activities that exposed us to everyday software developer tasks: scripting, using git, and navigating a command terminal.  I would prefer some of these to be introduced during years 1 and 2.  That would push underclassmen further, make projects more accessible, and help students prepare for internships.  I am very excited to learn that this is being done in the new computer abstractions course!  Regardless, these activities were very relevant, and I will be referring back to them as I build up my skills and onboard to a software product team.
The course's ethical discussion work was paired with team open source project work throughout the year.  Before the senior seminar I had very fragmented and limited knowledge of open source; I actually had a negative and false perception of it.  This course thankfully taught me the vastness of the opportunities that open source provides to the world.  Reading the College's mission statement, "to prepare young people, by means of a useful education in the liberal arts and sciences, for engaged lives of citizenship and leadership in the service of society," I am proud that my department's senior seminar embodies it.  Most impactful are the words liberal arts, citizenship, and service of society.  Our discussions leaned on our cross-disciplinary liberal arts education to build a broad, multi-perspective understanding of issues.  Citizenship and service of society showed up in our joining an open source community to contribute to software that is simply helping the world.
I now have the civic skills to adapt to a changing world: joining my OSS project required adapting my knowledge and learning new things just to be able to contribute.  Finally, a mentality I picked up was "if you see something wrong, fix it yourself."  As I used my project's tutorials and application, when I saw things that were wrong I had the confidence and know-how to initiate the change myself.
I am very happy with the wide array of skills I have gained through my Senior Seminar.  From ethical discernment to open source software development, I will take this knowledge into my future career.  I hope to continue contributing to open source communities.
samhrncir · 5 years ago
Advertisement: Algorithms can be biased!!! How so?
An algorithm is a general term for a finite sequence of well-defined, computer-implementable instructions.  Importantly, an algorithm is never ambiguous, and it is used to solve a particular task to produce a desired output.
Algorithms come in all shapes and sizes, and people use the word at varying scopes: some refer to a whole system as an algorithm, while others describe a system as comprising or utilizing many algorithms.  Of course, it is always important to analyze the validity of an algorithm, and that determination touches many different areas.  One of those areas is the question of whether the algorithm has biases.
Bias means having an inclination or prejudice for or against something.  In most societies, holding a negative bias toward people is considered wrong, and we generally hold algorithms, or at least the use of them, to the same social rules.  Algorithms are far-reaching and can be seen and felt in almost every niche of daily life, so they have the potential to make a big impact on it.  Some well-known examples of negatively impactful algorithmic bias are Google Translate's gender bias, Microsoft's Twitter chatbot, and big-data AI systems in general.
Here we can bring up the ideas of subjectivity and objectivity.  Subjectivity can let bias in because of its opinion-like nature.  Objectivity, on the other hand, is free of opinion or feeling: a statement based on facts and rules.  So if an algorithm is a set of rules the computer follows, and those rules cannot be vague, one could argue that the algorithm is objective and has no possibility of being biased.  An algorithm is not inherently biased.  In practice, however, almost all algorithms are biased to varying degrees, so how does this bias get introduced?
Algorithms are written by people.  People make all of the original decisions when they create the computer instructions, and in doing so they can impose bias on those instructions, even unintentionally.  Three areas where this bias enters are data, constraints, and algorithmic principles.
We live in the era of big data.  Data is valuable and is collected automatically, in massive quantities, by the systems we use.  Unfortunately, that data may have been produced or interpreted by people in ways that skew it toward a favored side.  This was seen in big-data systems for criminal analysis: the data was created by police officers and staff over many decades, the AI system was trained on that officer-produced data, and it learned the biases itself.  The lesson is that data is not inherently valid; it must be examined to determine whether it is complete and just.
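To make that concrete, here is a toy sketch of my own (not from any real system, with made-up neighborhoods and numbers) showing how a model that only counts historical records ends up reproducing whatever bias is baked into those records:

    import java.util.HashMap;
    import java.util.Map;

    public class BiasedPredictor {
        // Historical arrest records, produced by decades of uneven policing.
        // Neighborhood A was patrolled far more heavily, so it has far more records.
        static String[] historicalArrests = {"A", "A", "A", "A", "A", "A", "B", "B"};

        public static void main(String[] args) {
            Map<String, Integer> counts = new HashMap<>();
            for (String neighborhood : historicalArrests) {
                counts.merge(neighborhood, 1, Integer::sum);
            }
            // The "prediction" simply mirrors the skew in the training data:
            // more past records -> more predicted crime -> more future patrols.
            String predicted = counts.get("A") > counts.get("B") ? "A" : "B";
            System.out.println("Send extra patrols to neighborhood " + predicted);
        }
    }

The code never decides anything unfair on its own; it faithfully amplifies the imbalance it was handed.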
samhrncir · 5 years ago
Suspicion around automated identification
What was once thought of as far-off, futuristic technology is now an everyday tool used by systems for identification, categorization, and verification.  As people living in a society, we are used to taking part in identification activities; we can all recall a time or two when we were required to have our fingerprints recorded for future identification.  And just as every process invites improvement, identification has shifted, much as entertainment shifted from channels to on-demand streaming, from fingerprints and memorized passwords to facial recognition.
Facial recognition is still a young technology, and every young, immature technology has drawbacks that may cause some to deem it broken.  What if the technology is racist, in the sense that it works accurately for white people disproportionately more often than for other races?  The technology also struggles in many common real-world situations, such as when people wear makeup, hats, or accessories.  Then there are cultural concerns about how expectations around its use may clash with certain customs, for example cultures where people wear hijabs or other coverings.  We can see how systems could break if facial recognition is relied upon too heavily with no alternative forms of identification available.
As this technology matures and becomes more accessible, where else could we see it used?  What about a college campus (or campuses in general, such as business or health campuses)?  Given my own subjective social norms, I believe privacy is a right, and that we should not be observed by a "big brother" in everything we do.  I am against "big brother" situations because I believe they produce unintended behavioral outcomes; our instincts are not used to functioning under that kind of unnatural observation.  As humans, we thrive with some degree of privacy and the ability to become "invisible" to others.  I am against facial recognition being used for full surveillance coverage of an area, recording and logging everything a person does.  But I do support facial recognition being used for verification, to grant people access to services and resources.  On a college campus, this could mean employing facial recognition to record everyone who passes through entry points; the technology could detect people banned from entering and notify the proper personnel of the infringement.
As modern technologies become more integrated into the systems we use daily, we as a society are finding we have less control over our personal information.  Would you be OK with Facebook automatically tagging you in photos?  Say those photos are on a friend's account: being tagged in them would cause them to be displayed on your profile and easily seen by your followers and friends.  Choosing what people know and see about you is a simple definition of privacy and control of your own information.  If Facebook did this automatically and never asked whether you wanted to be tagged in each photo, then the site would be taking away that control.  Therefore, I am against Facebook using facial recognition to automatically tag me in photos.
samhrncir · 5 years ago
With new technologies, comes new routes for exploitation
An emerging technology slowly maturing toward practical real-world use is the autonomous vehicle.  As one might expect, making AVs successful in the real world requires an incredible amount of engineering and planning to account for the nearly unlimited scenarios an AV could face.  As the industry gets closer to working AVs, new technologies and practices are not only used by the public but also exposed to it.  Sometimes new technologies are used or interacted with in unexpected ways, and technologies always have some degree of vulnerability that can be exploited by malicious agents or stumbled into accidentally by an innocent user.
Traditionally, cars are driven by people, and the driver makes essentially all of the car's decisions.  If there is a deer in the road, the driver must decide whether to attempt a dangerous swerve around it or drive through the animal.
Suppose you are driving on the interstate at 70 miles per hour.  It is rush hour, and traffic is packed tighter than anyone would prefer; a semi truck is riding your tail.  You see a car pulled over on the shoulder, and as you approach it, a distraught man steps into your path.  You need to make a decision fast.  Do you brake hard, likely causing the semi to collide with the back of your car?  Or do you drive through the man, almost certainly causing him a fatal injury, but keeping control of the car and letting the semi stop relatively safely?
The two scenarios above are fast-moving events that require a decision.  Drivers' decisions, especially those involving an accident, are reviewed by others and judged right or wrong, and calling one decision right and another wrong requires decision guidelines.  The decision guidelines of an AV are referred to as its model, and this model must be defined by human creators.  So what should the creators base the model on?  Put another way, the creators must choose which kind of ethics to treat as true.  This is challenging, since we want the car's correct decisions to match the correct decisions of human drivers.  Model engineers must weigh many variables: business goals, feasibility, and cultural norms, to name a few.  One ethical theory that has been explored for AV decision models is utilitarianism.
Utilitarianism promotes actions that maximize happiness and well-being for the affected individuals, and an AV could be given a model based on utilitarian values.  This can create complicated situations.  Suppose an AV is driving down the road and a large group of pedestrians jaywalks into the car's path, while on the sidewalk a single person waits at the bus stop.  Should the AV drive into the large group of rule-breaking pedestrians, or divert and crash into the single law-abiding person?
If cars chose to minimize lives lost in an effort to follow utilitarianism, people's interactions with AVs would change in unforeseen ways.  One possibility is that people would prefer to walk in larger groups to lower the risk of being "sacrificed" by an AV.  Personally, I would adopt that behavior in high-risk situations, but in normal situations I would act without regard to the AV's decision model, because complicated AV ethical dilemmas are so rare; it would be similar to how one treats the possibility of being struck by lightning.
An AV's decision model would be implemented with some form of programmable technology, likely built on top of many mature, standard technologies.  There is always the possibility of exploitation, but with knowledge of a system's potential attack surface, mitigations can be put in place to prevent malicious acts.  A similar situation was the rise of internet technologies, which introduced many new vulnerabilities that needed mitigation.  Just as unencrypted data on the network can be exploited and is mitigated with encryption, AVs could potentially mitigate tampering with the decision model by verifying it with a hash.
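As a rough illustration of that idea (my own sketch, with a made-up file name and a made-up expected hash value), the vehicle's software could refuse to load a decision-model file whose SHA-256 hash doesn't match the value the manufacturer approved:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class ModelIntegrityCheck {
        // Hypothetical hash published by the manufacturer for the approved model file.
        static final String EXPECTED_SHA256 =
            "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b";

        public static void main(String[] args) throws Exception {
            byte[] modelBytes = Files.readAllBytes(Paths.get("decision_model.bin"));
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(modelBytes);

            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));

            if (hex.toString().equals(EXPECTED_SHA256)) {
                System.out.println("Model verified; safe to load.");
            } else {
                System.out.println("Model hash mismatch; refusing to load.");
            }
        }
    }

A real vehicle would need much more than this (signed hashes, secure boot, and so on), but the sketch shows the basic check.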
Ultimately, the idea is that new technologies bring unknowns: unknowns in how people will interact with the tech, and unknowns in the vulnerabilities introduced.
samhrncir · 5 years ago
An Informal Discussion on Ethics
First, I want to define ethics, to set up a general understanding of the space I will be exploring.  Google defines ethics as "moral principles that govern a person's behavior or the conducting of an activity."  A common theme is that we use the study of ethics to determine and explain what is right and wrong.  That study takes a complicated turn because of the subjective nature of humans and how we view things.  If we took a skeptical approach to everything, the only thing we could prove is that we are conscious.  If we adopt the belief that a fact is only valid if there is no possibility of doubting it, then consciousness is all we can prove, because doubt-casting arguments can be constructed against everything else.  One of the famous ones is that we are all brains in vats.
The argument goes that, in reality, you are just a brain in a vat: all of your experiences may be generated by some higher entity and fed into your brain through wires.  Do you have a hand?  Chances are you "do."  Feel it, move it.  Of course it's a fact that you have a hand.  But what if you were a brain in a vat and those "experienced" sensations were artificially made and fed to your consciousness?  There is room to doubt that your body exists, so we don't know for a fact that it is real.  The brain-in-a-vat argument can be used to cast doubt on many things, showing that there is a potential to doubt almost everything.
If the brain-in-a-vat argument seems too extreme, there is another: the evil deceiver.  Think of an invisible evil figure standing behind you at all times, manipulating how you understand the world.  In that situation, someone asks you what 2+1 equals.  You say 3, of course.  But is it actually 3?  What if your understanding of simple counting has been manipulated by the evil deceiver, and in reality the true answer is 9, because 9 comes after 2: 1, 2, 9...  You don't know, so our skeptical self cannot even know that math is true!
Taking the skeptical side can expose gaps in our explanations of why something counts as true.  Do animals have consciousness?  If they do, does that make them our equals?  Do all conscious beings have the right to live?  Where do we draw the line for when someone's body loses consciousness forever and is considered dead?  Are car crash victims left severely brain-damaged, in a vegetative state, considered to be without consciousness?
Skepticism can be beneficial in moderation, but leaned on too heavily it simply leads to unproductive dead ends.  When determining what is right and wrong, we need to question whether our reasoning is based on truths or falsities.  But if nothing is true, does right and wrong really exist?  Is it a unary relation instead of a binary one...
samhrncir · 5 years ago
Knowing What One Doesn’t Know is a Great Success
During the Fall 2019 semester of my undergraduate study I took a course called Senior Seminar.  It is taken by all Computer Science majors in their senior year, usually year four.  Class discussions cover software engineering topics with a focus on open source software communities, and they are prepared for through well-selected readings and reference material that students are pointed to.  The discussions are supplemented with peer activities that teach technical skills which directly assist in contributing independently to an OSS project.
Outside of class meetings, we were tasked with dividing into teams and contributing to an OSS community.  We started by evaluating communities based on our interests.  After selecting one, we spent the majority of our time tackling the learning curve of onboarding to the project, guided by assignments that structured the process: installing the package as users do, setting up a development environment for the project, bug gardening, and bug fixing.  These activities were completed in sub-teams of two, with the whole team coming together periodically to consolidate what we had learned and share knowledge.
One of the biggest challenges I faced during my summer internships was the level of freedom given to software engineering associates.  It comes from a significant shift: from typical college coursework, where there is already a widely known answer based on what was just taught, to a completely undefined problem and solution space, where the problem to be solved may not even be correct or complete and the solution may never have been attempted!  This shift in problem-solving environment between college and industry shocks many fledgling software engineers.  Those unstructured problems and solutions, and the expectation of defining one's own roadmap and daily objectives, are the challenging level of freedom I am referring to.  By failing quickly and learning fast during internships and challenging, independent-work-focused courses, a novice can rise over this challenge and speed ahead of the vast majority.
I was very happy to see that the Senior Seminar had this challenge of freedom built into the entire curriculum.  To me it was the biggest challenge to overcome in order to be successful and to improve on the civic dispositions and skills Dickinson students strive to embody.  How so?
I believe that developing an independent, critically thinking mindset is crucial to sustained success in an open-world environment.  A life-improving realization I had during that first summer came from learning software engineering practices rooted in agile.  One of the "ceremonies" was the end-of-week retro (retrospective), a meeting where team members list what went great and what went poorly.  At first I didn't understand why anyone would want to hunt for things that had gone poorly.  As I gained experience living agile principles and priorities, I built an appreciation for constantly finding ways to improve, including finding better ways to improve!  Reflecting back on the retros I naively took part in with my senior coworkers, I finally made the connection: the importance of being honest and self-aware enough to state what went wrong and to come up with action items to try in the next iteration.
These lessons, gained through independence over the summers, were reiterated in our senior seminar!  A key objective of the seminar was to expose us to concepts in software engineering and the open source industry.  The goal wasn't to make us experts in each topic, but to give us an initial understanding so we could go out and learn each topic sufficiently on our own.  Exposure to topics such as agile, the qualities of successful communities, and common software flaws reinforced the retrospective mentality that I find so important.  By being aware of the full picture, one can prioritize the specific areas in which to improve.
A second skill that builds on self-criticism is the ability to assess the work of others and identify its good and bad qualities.  This lets one observe everyone and learn from their successes and failures.  I have found that knowledge is very costly: taking a failure in order to learn something can heavily tax the net gain.  That is why it is so important to develop empathy, to understand the big picture of someone else's situation and learn from it.  The cost of failure is then taken less personally, and one can learn in more situations with a greater net gain!  By working in an OSS community, one is well positioned to learn a great deal through observation.  To succeed at this skill, and ultimately in the industry, one must have a continuous-learner disposition, constantly seeking to make connections and understand the big picture.  Thanks to the robust cross-disciplinary courses of Dickinson's liberal arts education, my peers and I are able to form big-picture, interconnected understandings of topics.  Through the discussions, we can confidently think about and form opinions on topics we have had little experience with, a skill needed if one wants to take on higher-level positions.
Lastly, the point I wanted to bring up about knowledge awareness: it is dangerous not to know what one doesn't know.  The more someone is unaware of the connected topics they don't know, the greater the risk to their ability to make good decisions.  By being aware of all the topics in one's space, and of one's level of knowledge in each, one can approach decision making in an efficient, engineering-minded way.  The senior seminar exposed countless gaps in my awareness that, looking back, would have been helpful to know about in previous experiences.  I gained exposure to common software flaws, software design patterns, and software architecture.  The seminar has made me a more comfortable computer scientist; I feel more empowered than ever to go out, listen, assess the situation, and come up with a plan to design and implement a solution!
samhrncir · 6 years ago
I want to start a software company, is it possible?
Some may argue it's a byproduct of being American; some could argue that for centuries people have dreamed of ruling an empire.  The dream of starting and running one's own company crosses nearly everyone's mind: being one's own boss, being the person who reaps the flooded rivers of gold that follow a world-changing product.  But it is well known that the failure rate along the way is high, and that those who succeed have worked night and day for years, enduring accumulated risk, to reach the top of the mountain.
Economic trends do not help entering entrepreneurs.  Established companies undercutting new products on price can force the young company to surrender to the bully.  Large empires like Facebook, Amazon, Walmart, and the like are constantly looking to grab as much of the market as possible, so their goal is to suppress competitors entering the market.
So if it is impossible for a new company to enter the market, what should we do?  Should we concede defeat before even going to war, and join their ranks as employees serving the executive board's agenda?  If one has a revolutionary idea, it would be "easy" to pitch it to a conglomerate, be given a team, and settle for a solid salary while the investors take the great excess of the wealth and recognition produced.
I argue that starting software companies can move faster than larger ones.  They can work with no strings attached to investors and make decisions quickly.  Since they have no public history, they can freely fine-tune their public-facing content to catch up to the level of their mega-bully competitors.
It is clear that starting a company is increasingly difficult in this modern age.  The best companies invest heavily in internal infrastructure to make developing good-quality software faster and faster.  However, the large companies of today were built on a foundation of OSS.  To compete with the beasts, one needs to emulate their business approaches at the appropriate scale.  I believe that if one relies on OSS products as that infrastructure, entering tech startups definitely have a chance...
samhrncir · 6 years ago
Is Open Source even relevant?
Diving into the world of business early in one's career, one is hit with the realization of just how much money is spent on marketing.  Marketing's intended goal is to influence people to think the product is better, that they need it, that it will solve a problem the media tells them they have...  Being bombarded with intentionally influential media changes a viewer's beliefs and opinions.  If product owners use marketing extensively to argue that their products are better, what does that mean for products with no marketing?  Does the absence of marketing flip the naive conclusion, so that a lack of marketing implies the product is bad?  Bad arguments usually come from addressing too few factors, jumping too quickly to a conclusion, and succumbing to tunnel vision.  One could go on and on with this philosophical topic, which affects all of our lives on a personal level, but I argue that this is an incomplete argument and that having great marketing does not mean the product is great.
This idea heavily affects the common perception of open source software.  Put simply, OSS is developed by volunteers and by "donations" of time from community members and companies.  Funds are limited, so OSS does not rely on expensive marketing and advertising to grow its user base; support for OSS comes from a quite different motivation for buying into a project.  While proprietary products rely on investment-fueled marketing campaigns to gain users and ultimately strengthen the company and the product, OSS relies primarily on the backers of the community, the community feel, and the true quality of the product itself to grow.  I will, admittedly with bias, point out that proprietary products lean on manipulative marketing while OSS relies almost solely on the quality of the product to gain users and support.  Something to think about.
On the similar topic of how marketing shapes perception: strong advertising campaigns can be read as a guarantee that the product is reliable and strong in all critical areas.  This has led some people to assume by default that OSS is built on lower-quality source code than its proprietary counterparts.  The fact that "anybody" can contribute to OSS, regardless of experience, has also fed an unjustified negative view of OSS code quality.  Following the trend of most arguments on my page, this argument that OSS has less robust source code is generally false.  An increasingly accepted view is that OSS is more reliable and secure than closed source software.  This is an effect of the bazaar model that OSS communities follow: the communities include people from many different backgrounds, each bringing a fresh pair of eyes to critically review the project's design and implementation decisions.  A more reliable and secure product results, because there is less chance of bad design decisions slipping through.
samhrncir · 6 years ago
Software Building Processes
Buzzwords.  They are the saving grace of sales reps and inexperienced tech interns, and a coded language for experienced devs.  They are the golden idol that business associates chase through the tombs of an ancient jungle temple.  If you work in business, especially one that touches the tech industry, then you have almost certainly heard the word "agile" and the flashy terms that travel with it.  For a developer striving to be a proficient lifelong learner, it is critical to understand this term and others like it, so you can explain industry-specific buzzwords in layman's terms and communicate effectively with people at any level of knowledge.  I cannot overstate the importance of being able to explain technical ideas simply and concisely; mastering this skill helps aspiring developers excel in their career growth.  With that in mind, let's discuss agile and the space of software building processes and management methods.
As with most things, industry veterans define systems for doing work most efficiently given particular priorities and situations.  In software engineering, these systems can be called building processes.  A process defines how a team is managed, which activities are held and when, and what outcomes are expected.  One example of a building process is test-driven development.
Test-driven development is a software development process that cycles, starting with adding a test first!  You make sure the code fails the new test, then run all the tests; as long as at least one fails, the developer adds or changes code in an effort to make them all pass.  Once all the tests pass, the developer refactors the code toward a cleaner design and repeats the cycle.  Because of this cyclical nature it is referred to as an incremental approach.  Test-driven development falls under the overarching idea of Agile.  So what does that mean, and what the hell is agile?
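Here is a minimal sketch of one TDD cycle in Java with JUnit 4 (an illustrative example of mine, not from the course): the test is written first and fails, then just enough code is added to make it pass.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {
        // Step 1: write the test first. It fails because applyDiscount doesn't
        // exist yet (or returns the wrong value once it does).
        @Test
        public void tenPercentDiscountIsApplied() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(90.0, calc.applyDiscount(100.0, 0.10), 0.001);
        }
    }

    // Step 2: add just enough code to make the failing test pass.
    class PriceCalculator {
        double applyDiscount(double price, double rate) {
            return price * (1.0 - rate);
        }
    }

    // Step 3: with the test green, refactor safely and repeat the cycle.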
First, it is important to understand that Agile is not something you do; it is something you believe and think.  It can also be seen as a set of goals to strive for, but agile does NOT define how you implement your way to those goals or hold those beliefs.  Agile's values are: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; responding to change over following a plan.  In addition, there are 12 principles that can be seen as goals for anyone following agile.  The Agile Manifesto, put together by the founders of the mentality, is a perfectly concise reference.  Processes such as test-driven development are implementations of living agile; for example, TDD helps keep working code as the main priority.
From my course's preparation materials, I recommend this link to learn more about software process models.
samhrncir · 6 years ago
Testing, how much?
In the context of software, testing is the process of validating or verifying that the code does what is expected or wanted.  This is done in a variety of ways: the functional testing activities of unit testing, integration testing, regression testing, and acceptance testing, and the non-functional activities of performance, stability, usability, and security testing, to name a few.  A complete list can be seen here.  Functional tests check that the software behaves as expected, while non-functional tests verify qualities of how the system operates, such as how it holds up under load, invalid or unexpected inputs, or unusual operating environments.
Taking a step back, something I like to do is trace a topic from the point where an undergraduate is first introduced to it to how it fits into the daily work of the professional software engineering industry.  Code testing is taught in parallel with learning the basic structure of one's first programming language; in my time, the 2010s, undergraduates learned programming concepts through Java.  The type of testing we formally learned was unit testing.  Unit tests are simply code that verifies that a specific section of code does what is expected, usually at the function level.  It's also worth noting that we all naturally perform ad hoc, reactive tests as we start learning to code: writing code, then running it by hand with manual input to see whether it works as expected.  That falls under the category of manual testing.
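To illustrate the difference (a toy example of my own), here is the same check done first as an ad hoc manual test and then as a JUnit unit test:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    class Temperature {
        static double celsiusToFahrenheit(double c) {
            return c * 9.0 / 5.0 + 32.0;
        }
    }

    // Ad hoc manual testing: run it, read the output, eyeball whether it looks right.
    class ManualCheck {
        public static void main(String[] args) {
            System.out.println(Temperature.celsiusToFahrenheit(100.0)); // expect 212.0
        }
    }

    // Unit testing: the expectation is encoded in code and checked automatically every run.
    public class TemperatureTest {
        @Test
        public void boilingPointConvertsCorrectly() {
            assertEquals(212.0, Temperature.celsiusToFahrenheit(100.0), 0.001);
        }
    }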
For small-scale projects, such as work in undergraduate CS courses, unit tests and manual testing are sufficient.  As students take on larger projects and move into industry, other forms of testing become necessary, because the number of requirements and the degree to which users rely on the project increase immensely.  Companies need to give clients assurance and guarantees that the software adheres to certain security standards, which is validated with security testing and penetration testing.  The software may also need to meet an uptime quota agreed upon by the dev team and clients in the SLA, which would be verified and validated with non-functional testing: load tests, security tests, and stability tests.
It is important to be aware that testing comes in many forms at many levels, and to know when each form is used and why.
I also wanted to quickly touch on the idea of full test coverage.  Our programming assignments had us achieve full unit test coverage of the code, which taught us how to test any line of code and validate that it works as expected.  Yet in industry, when dealing with very complex functions, full test coverage of the code base is hard to achieve efficiently.  What tends to happen is a cost-benefit analysis of where to spend effort writing tests, usually by identifying mission-critical functions and prioritizing testing effort there.  This coincides with one of the seven testing principles: exhaustive testing is not possible.  In summary, the principle states that exhaustive testing is impossible, so we should instead perform a risk assessment to determine the optimal amount of testing for the application.
I am in a Senior Seminar course where I am working on a team of student peers to contribute to the Jenkins community.  Jenkins is a Java-based server that provides an extensible solution for CI/CD.  For developers of the project, the community provides a document giving an overview of testing for the project, and it states that developers should unit test their code as much as possible.
samhrncir · 6 years ago
Design Patterns: a one-stop shop, or are they?
Planning is a skill taught to us from the very beginning.  Think back to the nostalgia of sitting in the chaotic, lively rooms of our elementary school English classes learning how to write papers.  Assuming you could understand and speak rough English, they started by teaching the foundational skills of grammar: how to create certain forms of statements and combine them into more complex ones.  With basic grammar in hand, novice students practice creating sentences, then paragraphs, then essays!  I'm sure all of us can recall the mandatory planning methods taught by our elementary school teachers, like brainstorming trees and skeleton outlines.  As a young student I had trouble seeing the importance of this mandatory essay planning, possibly because of the straightforward, shallow prompts given to elementary school students.  As we all know now, knowing why and when to do something is just as important as knowing how to do it.  Over the course of my studies, the ability to plan has stood out as a skill that separates the success of people and projects by multiple orders of magnitude.  OK, so planning is very important and a necessary thing to do...
I brought up learning to write in grade school because it closely resembles learning to write software applications.  First one learns the fundamentals of a language, then how to create basic structures (loops, conditionals, constructors, etc.), then how to chain them together into algorithms, data structures, and classes.  At that point one can start learning how to create software applications that solve problems and provide value.  So how do we start?  Planning!
Planning a software project consists of many activities: setting requirements, situational analysis, defining a minimum viable product, defining a management system, architecture research and planning, and design pattern research and planning!  This blog post focuses on design patterns.  In brief, a design pattern can be seen as a templated solution to a recurring problem scenario.  Design patterns are relatively low-level compared to their software architecture counterparts, which combine multiple design patterns with descriptions of how components interact to create a solution plan.
An example of a design pattern is double dispatch.  Double dispatch is used in polymorphic method calls to mitigate the lack of multi-method support in programming languages.  Basically, it is used to invoke an overloaded method where the parameters vary across an inheritance hierarchy.  You can learn more about this pattern here.
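A compact sketch of the idea in Java (an illustrative example of mine, using made-up vehicle classes): the call bounces once off the runtime type of the receiver and once off the runtime type of the argument, so the right overload runs even though both variables are declared with the interface type.

    interface Vehicle {
        void collideWith(Vehicle other);   // entry point: dispatches on the runtime type of 'this'
        void collideWith(Car car);         // overloads: chosen in the second, reversed call
        void collideWith(Truck truck);
    }

    class Car implements Vehicle {
        public void collideWith(Vehicle other) { other.collideWith(this); }
        public void collideWith(Car car)       { System.out.println("car hits car"); }
        public void collideWith(Truck truck)   { System.out.println("car hits truck"); }
    }

    class Truck implements Vehicle {
        public void collideWith(Vehicle other) { other.collideWith(this); }
        public void collideWith(Car car)       { System.out.println("truck hits car"); }
        public void collideWith(Truck truck)   { System.out.println("truck hits truck"); }
    }

    public class DoubleDispatchDemo {
        public static void main(String[] args) {
            Vehicle a = new Car();
            Vehicle b = new Truck();
            a.collideWith(b); // prints "truck hits car": both runtime types were considered
        }
    }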
However, slapping a design pattern on a problem doesn't always create an efficient, optimized solution, especially without proper knowledge of the design pattern options, the requirements, and the problem.  When a chosen pattern increases hindrance and grows technical debt, it is considered an anti-pattern.  A classic anti-pattern is the God class: a class that knows too much or does too much.  It contradicts the foundational skill of breaking a big problem into smaller, more manageable problems, something object-oriented programming naturally makes easy, and God objects usually produce technical debt in the form of one monolithic, complicated code file.
In short, planning is a necessary activity for creating a successful product.  It is important to be familiar with most of the design patterns in a particular space so you can weigh the requirements and the problem when choosing a pattern.
samhrncir · 6 years ago
Fail Quickly, Learn Fast
Much of my personal and professional development came from the summers between college years when I was a Software Engineer Intern for The Home Depot in Atlanta, Georgia.  I was thrown into a new culture, state, and environment, all alone, and had to find a way to be comfortable, happy, and successful in that unfamiliar space.  This led me to realize the necessity of decomposing problems.  With so many things to keep track of floating in my head, from home chores and long-term plans to my social life, continuous learning, and tasks at work, I needed to simplify everything.  I tried to put myself on autopilot as much as possible to reserve energy for creating custom solutions while I was on the job.  How can one successfully go on autopilot and still produce robust outcomes?  By planning ahead.
An agile approach to everything in life: I always do routine things the same predetermined way.  Take my laundry.  I know every step by heart and can do it without thinking; I can do my laundry on autopilot, so I encounter few problems with this well-worn task.  Then, with just a little energy, I can retrospectively ask how I could improve the way I did it for the next time, the next iteration.  Over time, I compound improvements in how I handle independent living and working in a new and unknown place.  I used this iterative, retrospective methodology to improve how I adapt to new living and working situations, and I saw the results when comparing the summer of '18 with the summer of '19.  The summer of '18 was emotionally demanding because it was the first time I was fully on my own, and I treated every challenge as an opportunity to learn to do things better and improve myself for the long run.  When the summer of '19 came around, I hit the ground running.  All the bumps felt in '18 were hopped over immediately because I had learned from my failures and challenges, and I was able to build on the progress of '18 and improve in other areas!  Ultimately, the idea of agile living is a very personal one.  It requires being extremely self-aware, critical, and constructive, and it invites an optimistic attitude toward challenge.  By embodying retrospective reflection and an iterative approach to life, one truly has no barriers.
This invites the idea of a growth curve.  Suppose there is a graph where the x-axis is time and the y-axis is skill.  One can enter a new space with a starting skill level significantly lower than one's peers, yet that means little, because what matters is where one stands in the long term.  If one fails fast and learns quickly, the slope of one's skill curve will be steep compared to peers who start with higher skill but grow slowly.  The summary of this realization is that it doesn't matter where one starts; it matters what one is doing to improve.  The differentiating factor is not someone's skill at time T but their ability to adapt and improve quickly over time.
So how does this discussion of improving over time connect to software development?  Agile-like qualities are being integrated into the management of teams everywhere, thanks in part to quick feedback loops and the relative ease of modifying distributed products.  Instead of focusing on team management or the quality of the finished product, let's look at how one personally works as a software engineer.  Unavoidably, one will encounter these flaws: 1. writing code that is not needed; 2. not making the code easy to change; 3. being too generic.  How can one work in a way that avoids these common pitfalls?  The themes of this discussion, self-awareness, humility, and "failing quickly, learning fast," are proactive measures for mitigating all three.  For example, writing code that is not needed is usually the result of incompletely defined requirements: the developer makes assumptions about what is needed, the assumed requirements are usually incorrect, and the engineered solution ends up overcomplicated and full of unneeded code.  This could have been avoided with humility, with the developer speaking up early to say they are unsure about the requirements, so the problem could be addressed at the beginning of development when problems are cheap to solve.  An important takeaway is that you will probably encounter these flaws; what matters is how you recognize the problem and react over the long term.
samhrncir · 6 years ago
Awareness of Licensing
Historically, if someone wanted a service or product, they had to pay for it.  In the past, services were predominantly provided by a human worker, such as doing someone's hair, and a product was a physical item that added some form of value, such as clothes, building materials, or furniture.  But a great shift began in the early 2000s, and its impact has kept growing: the rise of software as a service and as a product.  Software can be thought of as intellectual knowledge, which can be made valuable by keeping it secret.  This is how proprietary software works: the code used to build the software product is kept secret, and through control of its distribution and anti-copying controls, companies can charge a fee for access.  One such control is software licensing, which I will discuss later.  Since one historically almost always had to pay for services, products, or intellectual property, I grew up assuming that all software was proprietary, meaning that one always has to pay to get the software and that companies have full dictatorial control over how the software runs and changes.
The term open source software was known but not understood by my younger self.  It was a flashy buzzword that, to me at the time, meant software developed by its users, and that understanding came with the negative connotation that open source software was unreliable and insecure.  So I avoided it through my childhood.  I knew since senior year of high school that I wanted to study Computer Science, and I dove into the CS domain freshman year.  Frightening at first, I learned that to succeed in this space one must constantly keep up with the technologies emerging in the field, which at a minimum means understanding a technology's trade-offs and main use cases.  Doing so made me aware of the incredible quality and accessibility of open source software.  It was as if I finally opened a chest of gold that I had kept walking past as though it were a box of rubbish.  My childhood years without open source software let me deeply appreciate the qualities that make it so special: free in price and in use, available to all, with transparent source code and control over how it runs.  Where once I was "forced" to use the invasive Adobe Acrobat PDF viewer, I can now install an open source PDF viewer over which I have more control.  In one night, my perception of software went from an expensive product restricted by big, money-hungry companies with their own agendas to truly freedom-filled software!  There is an open source version of almost every basic piece of software on the market.  Need software to turn your computer into a wireless access point?  Need a network-wide ad blocker on your personal network?  No problem!  There is free open source software to do the job, one easy download away.  Usually the only trade-offs are that the software isn't as polished and isn't as foolproof; the accessibility of open source software increases dramatically with technical experience and specialty.  Being a CS major, that was perfect for me!
So how does one enforce these controls on software?  One way is by stating and/or registering a license for it.  As soon as one writes code for a project, one can state a license and the controls one wants placed on the software.  It is important to do this as early as possible, and to start with the most restrictive licensing you might want.  One owns the copyright to the software just by writing it, but controlling its use can be hard if it reaches the public without a clearly stated license; code made accessible to the public without a license risks being treated as if it were free for anyone to use, with no practical way to control it.  There is a multitude of licenses one can apply to a product, and applying a well-known one helps with enforcement, because people usually already understand the restrictions that a well-known license imposes.
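As a small, hypothetical illustration of stating a license, many projects put the full license text in a LICENSE file at the repository root and add a short identifier to each source file, for example an SPDX header like this (the author name and license choice here are made up for the example):

    // SPDX-License-Identifier: GPL-3.0-or-later
    // Copyright (c) 2019 Example Author
    //
    // The LICENSE file at the root of the repository carries the full text of
    // the chosen license; this one-line identifier just states which license applies.
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello, licensed world!");
        }
    }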
samhrncir · 6 years ago
The next step: System Architecture
Being able to understand and create software architecture solutions is a necessary skill for progressing in one's career as a software engineer.  Architecture requires cross-disciplinary thinking, knowledge of a wide range of architecture styles and tech stacks, and the ability to negotiate with and evaluate other teams' systems, so the job of architecting systems is usually left to highly experienced people.  However, it is critical for all software engineers to be aware of software architecture concepts so they can follow explanations and understand why design decisions were made.
Software architecting is challenging because it is the design of how the parts of a software system interact with each other.  That may sound like a straightforward task, but the short definition hides the complexity behind a large abstraction.  Think of an architecture design as deciding how various components work together to provide certain functionality: one must design a way to connect many differing components in the most efficient and acceptable way possible, where each component can itself be seen as a software product comprising its own tech stack and architecture style.  A tech stack can be understood as the supply chain of technologies used to build a product or component; an architecture style is a classification of common design structures for planning software so that it is scalable, maintainable, and efficient.  At every touch point in this large planning process, trade-offs are evaluated to pick the optimal solution.  One can see that doing this successfully takes a wealth of knowledge, teamwork, communication, and alignment.  With this knowledge, engineers can understand and empathize with the team's architects, ultimately increasing alignment.
An architecture plan is called for when a business partner asks for a software solution to a problem or need.  The business partner gives the engineers a list of requirements describing what the desired solution must include, and the engineers use those requirements to guide their planning of the overarching architecture.  Even better, if the requirements are prioritized, evaluating architecture-style trade-offs becomes easier.
Requirements can be categorized in two ways: functional and non-functional.  A functional requirement specifies what the system should do; examples include business rules, authentication, audit tracking, and legal and regulatory requirements.  Non-functional requirements specify how the system performs a function, that is, how the system should behave and what limits there are on its functionality; examples include performance (response time), capacity, availability, reliability, security, and usability.  Definitions and examples were taken from the following link, where one can read more on requirement types.
https://reqtest.com/requirements-blog/understanding-the-difference-between-functional-and-non-functional-requirements/
As previously stated, architecture design patterns can be grouped into architecture styles.  One example of a style is the process control system: a system that dynamically responds to the physical environment by using a feedback loop, where input is taken from sensors in the physical environment and used to determine a set of outputs that produce a new state of the environment.  This style can be seen in a home air conditioning system.
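A toy sketch of that feedback loop (my own illustrative example, with a simulated sensor reading instead of real hardware) might look like this:

    public class ThermostatLoop {
        static final double TARGET_F = 72.0;

        public static void main(String[] args) throws InterruptedException {
            double roomTemp = 78.0; // simulated sensor reading

            // Feedback loop: read the environment, decide on an output, apply it,
            // and let that output change the next reading.
            for (int tick = 0; tick < 5; tick++) {
                boolean coolingOn = roomTemp > TARGET_F;
                System.out.printf("temp=%.1fF cooling=%s%n", roomTemp, coolingOn ? "ON" : "OFF");
                roomTemp += coolingOn ? -2.0 : 0.5; // the actuator's effect on the environment
                Thread.sleep(100); // stand-in for waiting until the next sensor sample
            }
        }
    }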
Another style is the state transition system.  These systems are defined by a set of states and a set of named transitions that move the system from one state to another.  This is seen in theoretical computer science when analyzing and describing the behavior of discrete systems.
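A minimal sketch (again my own example, the classic turnstile) shows the idea of states plus named transitions:

    public class Turnstile {
        enum State { LOCKED, UNLOCKED }

        private State state = State.LOCKED;

        // Named transitions move the system from one state to another.
        void insertCoin() { state = State.UNLOCKED; }
        void push()       { state = State.LOCKED; }

        public static void main(String[] args) {
            Turnstile t = new Turnstile();
            t.insertCoin();
            System.out.println(t.state); // UNLOCKED
            t.push();
            System.out.println(t.state); // LOCKED
        }
    }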
samhrncir · 6 years ago
Discussion on Version Control
So why is version control a thing?  With the rise of constant connectivity through Wi-Fi and the internet, people can share information almost completely freely and without constraint.  The internet allows virtual meeting places where people come together to collaborate, plan, communicate, and share, which means people can work together on a shared project from different physical locations.  Commonly, these projects are the software engineering development of an application, where the majority of the work consists of writing and modifying code in files.
So how do they modify files of the same project while in separate physical locations?  Version control!  Version control establishes a process that keeps developers from getting in each other's way, administers the merging of work, and provides features such as edit history, attribution (tracking who did what), and documentation hosting, to name a few.  The industry standard is the decentralized (distributed) system: each contributor copies the desired version of the code to their local computer, and additions and modifications are at first reflected only in that local copy.  To share their contributions, they send their changes to the repository in the cloud, and a set of processes dictates how each developer's work is merged together into a single version.
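With git, the most common distributed system, that loop looks roughly like this (an illustrative sketch with a made-up repository URL and file name):

    # Get a local copy of the repository (hypothetical URL)
    git clone https://example.com/team/project.git
    cd project

    # Make changes; they exist only in the local copy at first
    git add src/Feature.java
    git commit -m "Add feature"

    # Share the work by sending it to the shared repository
    git push origin master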
What does this look like in industry?  Suppose you are a software engineer on a team with six other devs.  Today, the seven of you will be contributing to files from the repository, with the direction of each person's work planned out previously by the project manager and team leads.  At the morning stand-up meeting, each person is assigned a structured task: a bug fix, a new feature, refactoring, and so on.  Usually these tasks are assigned so that people won't need to touch overlapping files, which minimizes merge conflicts and redone work.  With your assigned task, you update your local copy of the project on your laptop, make your contributions freely to that copy, and once finished, send your change to the version control system in the cloud.  Importantly, while you were completing your work, the other six developers were working unblocked and freely on the "same files"!  All seven contributions are saved and merged into the project to create an updated version that includes the whole team's work for the day.  The ability for many developers to work freely, without obstructing each other, on the same files while potentially scattered around the world is revolutionary and enabling!  With the help of version control and the widely accessible internet, developers can work remotely, a trend that has been on the rise but has recently been re-examined as not always the best option for every dev.  I'd like to revisit the topic of remote work later in my blog.
To fetch and pull, or to just pull? I argue that one should run fetch and merge separately on their local system.  Running them separately reinforces your sense of control over the version control system.  Doing this also helps in understanding what goes on behind the scenes, since it separates the act of copying new change history to your local machine from the act of merging that new history into your local branches.
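A minimal illustration of the difference, assuming the conventional remote name origin and branch name main:

```
git fetch origin            # copy new commits from the remote; your branches are untouched
git log main..origin/main   # optionally inspect what arrived before integrating it
git merge origin/main       # now deliberately merge the new history into your branch

# versus the single command that does both steps at once:
git pull origin main
```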
An exploration of a different, less mainstream version control system is SVN.  SVN stands for Subversion; Apache Subversion is an open source, centralized version control tool.  In a centralized system there is a single server that holds every version of every file, and each user keeps only a working copy.  When someone wants to make a change, they check out files from the SVN server, make the change, and then commit the files back, advancing the repository to a new revision.  SVN also lets a user explicitly lock a file so others cannot commit to it while they work, which is useful for files that cannot be merged.
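Here is a minimal sketch of that centralized cycle with Subversion's command line client.  The URL and file names are placeholders, and the explicit lock is optional, since SVN's default model lets several people edit and then merge.

```
svn checkout https://example.com/svn/project/trunk project   # get a working copy from the server
cd project
svn lock design.psd -m "Editing the mockup"                  # optionally lock a hard-to-merge file
# ...edit files locally...
svn commit -m "Update the mockup"                            # send changes back; the revision number advances
svn update                                                   # pull down everyone else's new revisions
```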
samhrncir · 6 years ago
Text
To feel comfortable: A reflection on Project communication and structure
In this world of automated jobs and tasks, it is sometimes easy to forget that everything is made possible and set in motion by human creators.  In software development this is very apparent in practice.  Computers can do an incredible amount of “brain work,” yet they are painfully dumb.  They don’t question their tasks; they say yes blindly and run until they report “success” or crash and burn.  So human creators must tell computers what to do, down to the smallest desired detail.  This could be good or evil, shady or just.  Software development creates automated jobs and tasks to allow users to do what computers can’t.  Where some job areas are being taken over by the uncomplaining and unquestioning computers, software development is booming for human creators.
So how can human creators come together and work on a common objective to produce a successful and desired outcome?  Teamwork is at the root: people interacting with each other.  Some may feel they must conform to a certain persona, or to the trends of their particular workplace.  I argue that if someone can break past this and be themselves while working with others, they will be an invaluable asset to the team.  One may only be able to be themselves if they are comfortable in the community.  The mission statement of a project can be one’s first insight into the values of the project community.  I feel a well-written mission statement shows the project’s goals and the community’s values, and I strongly feel the mission statement is very important in attracting new contributors and users.
Feeling accepted is a necessary part of being comfortable in a community.  As an outsider hoping to join a community, I look to see whether there are invitations and open doors for new people to have a friendly place in the project.  To me this can be seen in visible documentation targeted at new members to help acclimate them to the community.  I've been in situations where onboarding was very rough and where team members acted in a way that was exclusive towards new members.  That caused me to second-guess my place on the team and not feel welcomed.  In short, I emphasize that special attention should be paid to a project’s mission statement and onboarding documentation so that the community environment is welcoming to those whose values and goals align.  Coming full circle, having happy, aligned human creators allows for the creation of successful projects!
samhrncir · 6 years ago
Text
H/FOSS Project Community
The purpose of this post is to reflect on a class discussion about H/FOSS project communities.
I believe companies usually initiate single-vendor open source projects for the optimal business outcome.  This means it’s a numbers game of value added vs. amount invested.  A company’s goal is to stay in business, take business from competitors, and, if public, make the most money for shareholders.  This can conflict with the core values of OSS, such as the four freedoms: 
The freedom to run the program as you wish, for any purpose (freedom 0).
The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
The freedom to redistribute copies so you can help your neighbor (freedom 2).
The freedom to distribute copies of your modified versions to others (freedom 3).
https://en.wikipedia.org/wiki/The_Free_Software_Definition
Single-vendor open source projects usually stem from a proprietary project that is turned open source.  This decision is not an easy one for a business, as the once-confidential code could lose value on the market.  In addition, the company may give up some of its control over how the software is distributed and which versions its users run.  Initially this could result in a net loss from shifting the project to open source; however, the company should expect a long-term net gain through sources such as free contributions.
I’m curious whether publicity and PR play a part when a company strategically makes a project open source.  What do they have to gain from this?  Are they trying to make their competitors look like the evil proprietary software monopolies?  Are they trying to be viewed as generous and “for the people”?  As some companies grow to the size of small national governments, they have the infrastructure to support many open source projects for strategic gain.  An example of this is Google’s Chromium web browser.  Chromium is the open source browser project that Google Chrome is built on; compared to Chrome it ships with less of Google’s proprietary integration and data tracking.  Why did Google create the Chromium project?  Every company decision is steered by some form of business gain.  In Google’s case, this could be positive PR to combat their reputation as data-tracking giants.  Something to think about...
Version control can be the bane of an entry-level software engineer’s early days.  With mastery of version control tools such as Git, branches are the sword that coding knights use to take a copy of the source for themselves to modify.  The concept of “hard forks” plays an interesting and unique role in OSS projects, as hard forks are nonexistent in proprietary software projects within companies.  The possibility of a hard fork is inherited from the OSS freedoms, and project leaders know this from day one.  Thus the possibility of hard forking acts as a countermeasure against dictators and unhappy communities.  Hard forks are usually the result of a torn, unhappy community disagreeing on the direction the project is taking, so the way to prevent them is to put extra effort into ensuring the community is aligned and in agreement on the project’s direction.
So what is the secret formula for a successful open source project? Reflecting on my experiences onboarding to two different teams throughout my internships, I argue that the onboarding experience is imperative to having a successful OSS project.  This of course needs to be paired with a compelling mission statement and relevant utility to a strong user group.  I’ve experienced rough and rocky onboarding processes, and the pain points usually came from a lack of documentation, a lack of suggested starting points, and a failure to relate to new members of a project.  Even though I have limited experience producing documentation myself, I am aware of its importance in providing a scalable onboarding solution.  In short, a project with an “automated” onboarding solution is scalable and allows entry-level contributors to be semi-autonomous, so the time of the experienced contributors is not drained when new contributors come knocking on their door.
In common media, “dictator” has a very negative connotation.  Yet in small, early OSS projects, benevolent dictators are a perfect solution to project management and decision making.  For the project to be successful, these benevolent dictators must uphold just and mature qualities to earn the respectful following of their contributors.  One quality I would like to discuss is being an excellent listener paired with asking the right questions.  This means the leader knows there is immense value in listening to the contributors, as they have the more intimate relationship with the day-to-day development.  This is where asking the right questions comes into play, so the leader can productively understand the status of the project and then direct the project’s high-level mission and direction.