nedyaj-blog
JAYDEN SUNG
50 posts
// journey to the center of the craft
nedyaj-blog · 10 years ago
A Short Introduction on Asynchronous I/O
Consider the process of how you’d typically place an order at your local In-N-Out restaurant:
You go up to the cashier to place an order of a Double-Double and fries - both Animal Style, of course.
The cashier inputs your order and sends it to the cook, and gives you a receipt with a specific number on it.
You take a seat while you wait for your food.
Cashier takes orders from other customers.
Cook prepares orders as they come and sends them off when they’re complete.
Cashier calls out the number on your receipt.
You pick up your order and enjoy your food.
Asynchronous I/O - also known as non-blocking I/O - is a similar process.
A thread tells the kernel what kind of I/O work it wants completed (e.g. read from a socket, write to a file).
The kernel gives the thread a handle to monitor the progress of the work, and adds the request to its queue of pending I/O requests.
The thread goes on with life, and may periodically check the given handle for updates.
The kernel posts an update to the handle whenever something pertaining to the request happens.
The thread takes each update, processes it, and continues to check for other updates until its request is complete.
Here’s a simple diagram to illustrate what’s going on.
The thread issues its first I/O request (green line) to the kernel; however, since it’s asynchronous, the thread goes on with life while waiting for its response. Down the line, it decides to send another request (purple line). Eventually, the kernel sends updates to the thread once the requests are complete, and the thread picks up on these updates in the order of completion. This is asynchronous I/O - a method of input/output processing which allows processes to continue before the transmission has finished.
Now, let’s look at another approach - synchronous, or blocking, I/O. If all requests to the kernel were made synchronously, a thread would have to start its request and then wait for it to complete. Such a process blocks the progression of a program while the transmission of data is in progress, leaving system resources idle. If many I/O operations are requested, a program can spend much of its time sitting idle, waiting for those operations to complete.
So let's go back to our fast food restaurant example and look at how it would work if it operated synchronously. You'd still go up to the cashier to put in your order, and the cashier would still send it to the cook - but then he'd wait until your order is complete. If this restaurant operated synchronously, the cashier would not be able to take orders from the other customers waiting in line behind you, because he'd have to wait until the cook finished your order and handed it off to you.
This is what asynchronous I/O solves. Because it’s possible to send an I/O request and continue to perform other tasks that do not require the completion of that request, system resources aren’t sitting around idly - other tasks can be completed in the meantime. So if you ever encounter a restaurant that operates synchronously, you'd better be prepared to wait for quite some time!
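The handle-polling flow described above can be sketched in Java using AsynchronousFileChannel, where the returned Future plays the role of the cashier's receipt number. This is a minimal sketch of mine, not something from a specific library tutorial:

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncRead {
    public static void main(String[] args) throws Exception {
        // Set up a small file to read back asynchronously.
        Path path = Files.createTempFile("async-demo", ".txt");
        Files.write(path, "hello".getBytes(StandardCharsets.UTF_8));

        try (AsynchronousFileChannel channel =
                 AsynchronousFileChannel.open(path, StandardOpenOption.READ)) {
            ByteBuffer buffer = ByteBuffer.allocate(64);

            // Issue the I/O request; the Future is our "receipt number".
            Future<Integer> handle = channel.read(buffer, 0);

            // The thread goes on with life, periodically checking the handle.
            while (!handle.isDone()) {
                Thread.sleep(1); // stand-in for doing other useful work
            }

            System.out.println("read " + handle.get() + " bytes");
        } finally {
            Files.deleteIfExists(path);
        }
    }
}
```

The polling loop is only one option; NIO also lets you pass a CompletionHandler callback instead of checking a Future, which is closer to the cashier calling out your number.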
nedyaj-blog · 10 years ago
Avoid Stagnation! Become a Lifelong Learner
According to a 2002 International Labour Organization report, in the 1980s the half-life of a software engineer’s technical skills - the amount of time it takes for half of his/her knowledge of the field to become obsolete - was estimated at a mere 2.5 years.[1] Considering that this number was measured roughly 35 years ago, the half-life is sure to be even shorter today. It is a scary thought that everything we know, including our professional skills, is rapidly becoming obsolete. So how do we stay on top of our game and remain relevant? How do we ensure that we continue to be a valuable asset and stay competitive in our field? It’s no longer enough to just go to work and complete our tasks - we must constantly update our knowledge portfolio with new tools, skills, and methodologies to stay afloat on the waves of change by embracing lifelong learning.
What is Lifelong Learning?
Lifelong learning is the "ongoing, voluntary, self-motivated" pursuit of knowledge [2]. It is often framed as the learning that continues beyond childhood - in which learning is formal and instructor-driven - into adulthood, in which learning is self-driven. Given this, it is important to have a thirst for knowledge and the ability to perpetually motivate yourself to learn new skills. As adults, we no longer have the luxury of a formal education setting with instructors who teach us things. We need to take the wheel into our own hands and drive ourselves along the journey of continuous learning for both personal and professional growth.
So what are some things you can do to continuously learn new things and keep stagnation at bay? Below are some strategies to employ as a lifelong learner.
Take Online Courses
Because the Internet has become so accessible over the years, there are tons of great resources for learning all kinds of things right at our fingertips. Today, there is a plethora of online courses from sites such as Coursera, edX, and Udacity that are completely free. Each course typically runs over several weeks and consists of videos with accompanying lecture slides, assignments, and exams. Learning material from instructors at well-known universities around the world for free - how cool is that?
Go to User Group Meetings
Surrounding yourself and interacting with like-minded individuals who have similar interests is a powerful way of learning as you can meet to discuss new tools and technologies and share your experiences. Besides holding discussions, user group meetings are also a good way to network with others and learn about projects being done at other companies.
Participate in Technical Question-and-Answer Websites and Forums
If you have run into an issue or are stuck on a particular problem, chances are that others before you have had the same questions. Sites like StackOverflow, Google Groups, and the "Learn Programming" community on Reddit are incredibly helpful websites that encourage questions and discussions about programming.
Read Books and Technical Blogs
Reading books and technical blogs is another powerful learning technique. There is an overwhelmingly large number of books and blog posts out there that cover a vast range of technical topics. In the lifelong pursuit of knowledge, it is a no-brainer that reading is a vital skill to practice. Not only do these resources teach you how to do something, but also the why, which you may not discover yourself unless you take in the knowledge and experiences of those who are further along their journey.
Maintain a Blog
One of the most important professional skills to hone is writing. Maintaining a blog and consistently updating it with material that interests you is a great way to improve your writing skills, as it requires you to collect your thoughts and present them in a cohesive and comprehensible manner for others to understand. Writing blog posts about things that you learn also helps solidify your own understanding of these new concepts.
Find a Mentor
Find an experienced practitioner who will help you learn your craft. Because there is an overwhelming number of new tools and technologies out there, sometimes it's hard to know where to even begin. It is helpful to have a mentor guide you by providing a path and structure to your learning process. A successful relationship with a mentor will help you learn and reinforce the foundations of good practices, disciplines, and values.
Be a Mentor
Becoming a mentor is also an incredibly valuable learning experience. Similar to maintaining a blog, mentoring forces you to solidify the concepts you have learned and to become comfortable enough with these ideas to teach them to someone else.
Learn at Work
The final suggestion on this list is to ensure that you are learning at work. Given that a typical work schedule consists of 40 hours a week, it is crucial that you are continuously being challenged and learning new tools and skills for your trade. Even the most dynamic workplaces can cause you to become too focused on one particular toolset while ignoring new technical developments. If you aren't being challenged or you don't have opportunities to learn new tools and skills, then it may be time to look for another job. You should spend as much time as you can learning at work, and use work itself as your learning platform.
Conclusion
"Success is a journey, not a destination. The doing is often more important than the outcome." - Arthur Ashe
The road to craftsmanship is a lifelong journey - one that has no end. To truly invest in your craft, you need the hunger, drive, and desire to continuously improve yourself on both a personal and professional level. Investing time outside of work to develop and expand our skills not only prevents stagnation, it sets professionals apart from non-professionals. Not only is it foolish to be content with proficiency in one single thing - if you are, you will quickly find that the world is changing without you. It is crucial to continuously expand and diversify your skill set, and to truly enjoy the journey of lifelong learning, if you want to be successful in this ever-changing industry of technology.
nedyaj-blog · 11 years ago
Pairing Tour: The Importance of Communication
These past two weeks, I got the opportunity to pair with several craftsmen for my pairing tour - a two-week period in which I pair with a different 8th Light craftsman each day. Resident apprenticeships at 8th Light conclude with a four-week process broken into two parts: the pairing tour and final challenges. In these two weeks, I worked on several different codebases, and while I did do some coding, I learned more about the disciplines and expectations of a professional developer than about the programming practices and patterns I had originally anticipated when I first paired with a craftsman a few weeks ago. This isn't to be taken as a complaint at all - these past two weeks solidified the teachings of my apprenticeship in ways I couldn't have experienced without the opportunity to pair with several craftsmen. While I learned something different from each individual craftsman, the common lesson from every session was communication.
Communication
One of the important things I took away from this pairing tour is that, as software consultants, craftsmen need to do more than just know the programming languages, patterns and practices, and latest technologies to do their jobs successfully. We have the responsibility as professionals to understand the business domain of the projects we take on. This doesn't necessarily mean we need to know the ins and outs of the business, but we should at least have a deep understanding of the business goals. This requires ample communication with people like managers, business analysts, testers, and other team members to truly understand why we're writing code and how the business will benefit from it. During these two weeks, I participated in all of the stand-ups, email exchanges, talks, and meetings that the craftsmen were involved in, and came to understand that communication is an essential skill to hone as part of our journey to become software craftsmen. You can be the greatest programmer, but if your communication skills are lacking, it becomes very difficult to work in a team and to market your product (you).
Because of the nature of consultancy, craftsmen are placed on new projects and need to get up to speed quickly in order to add value to the systems immediately. During my time with the craftsmen, if there was something we weren't sure of, we'd communicate with other team members to make sure we understood exactly what the business needed from a feature we were implementing. Other times, we'd ask Google and do some exploration to find solutions to problems we ran into that didn't involve the business domain. The important lesson I learned from these situations is that it's okay to not know everything. There isn't always time to sit down and take a nosedive into the entire system to get a deep understanding of how each component works - and there may never be time for that. That's okay. It's much more important to understand the business needs and expectations of our clients, and the information necessary to perform our immediate tasks, than to dig into the nitty-gritty and potentially spiral down a rabbit hole we don't need to enter.
I have learned a lot in these past two weeks during my pairing tour and it has truly been an invaluable experience to be able to work so closely with the craftsmen at 8th Light.
nedyaj-blog · 11 years ago
What is Maven?
While transitioning back into my apprenticeship assignments from pairing with a craftsman for several weeks, I've been tasked with completing my Tic-Tac-Toe server and integrating Maven into the project. What is Maven, you might ask? Maven is a build automation tool that, when integrated with Java projects, makes life easier by describing your project's build process and dependencies. I hadn't initially integrated Maven into my server because I didn't feel like I needed it, but after spending time getting my feet wet in a very large codebase, I began to understand the benefits that Maven brings to the table.
As a whole, Maven appears to do many things, but according to the Apache website, Maven is an attempt to apply patterns to a project's build infrastructure in order to promote comprehension and productivity by providing a clear path in the use of best practices. In a nutshell, Maven is a project management tool that helps with these tasks:
Builds
Documentation
Reporting
Dependencies
SCMs
Releases
Distribution
So what are some benefits of using Maven? For starters, it handles dependencies automatically. Maven uses Project Object Model (POM) files - XML files that contain information about the project as well as the configuration details Maven uses to build it. Within these POM files, you list the dependencies your project requires, and Maven downloads them and adds them to the classpath automatically. Maven also promotes and enforces modular design. By making it simple to manage multiple projects, it allows a design to be laid out in logical modules that are woven together through the dependency declarations in POM files. Modular design is enforced because when code is separated across projects or modules, references cannot cross between them unless the dependency has been explicitly declared.
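To make the POM idea concrete, here is a minimal, hypothetical pom.xml sketch (the coordinates com.example/tictactoe-server are made up for illustration; the JUnit dependency shows the declaration shape Maven resolves automatically):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- Coordinates identifying this project (hypothetical names) -->
  <groupId>com.example</groupId>
  <artifactId>tictactoe-server</artifactId>
  <version>1.0-SNAPSHOT</version>

  <!-- Maven downloads these and puts them on the classpath -->
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

With this file in place, `mvn package` resolves the dependencies, compiles, runs the tests, and builds a jar, with no hand-managed classpath.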
When dealing with small scale projects like the web server that I am currently working on, it's hard to see the benefits of integrating Maven within your project; however once you begin working on large scale projects in which there are multiple modules that make up said project, you begin to appreciate what Maven has to offer.
nedyaj-blog · 11 years ago
The Life of a Software Craftsman
The life of a software apprentice is very different from that of a software craftsman, and I had an opportunity to experience that firsthand. Three weeks ago, I was asked to put aside the assignment I had been working on to pair with a craftsman on a client project. I had no idea what to expect, as this would be my first exposure to client work, so it was both nerve-wracking and exciting at the same time. Although I wasn’t sure what I was getting myself into, I knew I was expected to contribute to the project by writing a lot of code and wrapping my head around complex problems. Boy, was I in for a surprise.
Throughout my apprenticeship here at 8th Light, I’ve been given several from-scratch programming assignments that really pushed my boundaries. I had to learn unfamiliar tools and disciplines to complete these projects while thinking about the actual problem at hand. On the client project, while I did work on stories that involved solving some problems, we only added code to the existing codebase once throughout the three weeks I was on it. Most of the stories we worked on dealt with manipulating configuration files and testing/deploying builds.
That’s the reality of the life of a software developer, though. You won’t always get to start projects from scratch or add exciting new features to a program. You are typically thrown into a large existing codebase with limited documentation and few tests to verify and explain the intent of the code. Oftentimes, the original creators of the code are long gone, and no subsequent developer really knows the system end to end. There was a lot of frustration on our part over the bugs and issues we came across when we did add code, because there was little to no test coverage. One of the most important lessons I took away from this experience is that tests are incredibly important for documenting the intent of code. To fully understand things, we found ourselves printing variables and objects to the console to understand parts of the code - which could have been avoided if there were tests documenting those parts.
Until now, I’ve only worked on projects built from scratch - course assignments and projects in school, and assignments I’ve been given during my apprenticeship. This experience has shown me what to expect once the apprenticeship is over. Though the reality of the life of a software craftsman differs from the exciting life I had originally envisioned, I’m still eager to become a craftsman and move one step further along my path of software craftsmanship.
nedyaj-blog · 11 years ago
Software Stability: An Introduction
The life of a piece of software always starts full of optimism, hope, and vigor. Like a naively optimistic fresh college graduate, new software suddenly faces the harsh realities of the world outside controlled environments and situations. Things just don’t happen in the real world the way they do in planned tests and experiments, because those tests are created by people who already know what to expect. In the real world, many people will use the software in many different ways that no test anticipated, and that is often what sets software up to fail. In Michael Nygard’s Release It!, Nygard introduces the concept of stability and its importance in the software world.
What is software stability? Oftentimes when we use the term stability, we refer to a system that is consistently up and running. Ideally, we’d all like our systems to always be running; however, in the real world, where anything can happen, that’s never the case. Knowing this, how do we define stability? According to Nygard, software stability refers not to a system’s uptime, but to its ability to withstand sudden spikes in activity, stresses on the system, or component failures while continuing its normal processing.
Sudden impulses and persistent stress are expected throughout the life of a system, but sometimes these events can lead to disastrous failures. In either case, some component of the system will fail before everything else does. Nygard refers to these component failures as cracks, because they tend to spread through a system in various ways. The original trigger of a crack, the way it spreads through the system, and the resulting damage are together what Nygard calls a failure mode.
No matter how hard you try to failure-proof your system, it will fail one way or another. Fooling yourself into thinking otherwise will only impede your ability to identify a failure when it occurs and contain it as soon as possible. As developers, we must expect the unexpected and accept that failures will happen. Only when we accept this harsh reality are we able to design our systems’ reactions to specific failures - safe failure modes that contain the damage and prevent a crack from propagating throughout the system. The implementation of these safe failure modes determines the resilience and stability of the system. Without these self-protecting failure modes, there is nothing keeping a crack from spreading into indispensable features of a system - features that are crucial and, at times, even life-saving. It is our responsibility to do everything we can to identify these vital components of our systems and protect them, ensuring that cracks don't reach them and cause potentially life-threatening situations. The upcoming chapters discuss the stability antipatterns that create and spread cracks, along with the patterns that contain damage and preserve partial functionality rather than taking the entire system down.
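One classic safe failure mode is a timeout around a slow integration point: instead of letting a hung call block its caller (and spread the crack upward), the caller degrades gracefully. This is a sketch of mine illustrating the idea, not an example from Nygard's book:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class SafeFailure {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // A slow integration point that would otherwise block its caller.
        Future<String> reply = pool.submit(() -> {
            Thread.sleep(5_000); // simulated hung remote call
            return "late reply";
        });

        try {
            // Bound how long we are willing to wait.
            System.out.println(reply.get(100, TimeUnit.MILLISECONDS));
        } catch (TimeoutException e) {
            // Safe failure mode: degrade instead of letting the crack spread.
            System.out.println("fallback response");
        } finally {
            reply.cancel(true);
            pool.shutdown();
        }
    }
}
```

The caller stays responsive no matter how the integration point misbehaves; the damage is contained to one degraded response.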
nedyaj-blog · 11 years ago
Implementation Patterns: Guard Clause and Exceptions
In most programming languages, a sequence of instructions is executed one by one. As such, it is important to clearly express the behavior of a program in order to make the code more understandable. In this next chapter of Implementation Patterns, Beck discusses several patterns for achieving this; we'll just talk about the ones I found most interesting: guard clauses and exceptions.
Programmers generally think of a main flow of control for their programs, in which processing starts here and ends there. The computation has a linear path to follow; however, there can be conditions and exceptions along the way. These little bumps on the "main flow" path should be expressed clearly, but they should not take the spotlight away from the main flow of your program. That's not to say that exceptions and conditions are unimportant - it's just that clearly expressing the main flow of the computation is much more valuable. To do so, it's best to use guard clauses and exceptions to express unexpected or error conditions.
Guard Clause
According to Beck, a guard clause is a way to express simple and local exceptional situations with purely local consequences by utilizing early returns. Let's take a look at this small snippet of code:
void initialize() {
    if (!isInitialized()) {
        ...
    }
}
and compare it with:
void initialize() {
    if (isInitialized()) return;
    ...
}
When looking at the first version, you immediately start hunting for an else clause even as you begin reading the body of the if statement - a distraction from the body itself. In the second version, it is clear within the first two lines that the rest of the method runs only when the receiver hasn't been initialized.
A guard clause is appropriate for expressing a situation in which one of the control flows is more important than the other. In the example above, it is clear that the important flow is what happens when the object is initialized. It keeps the focus of the program on the important flows and eliminates unnecessary distractions.
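Filling in the elided bodies, a runnable variant of the guard-clause version might look like this (the names and println are my own, hypothetical additions):

```java
public class GuardDemo {
    private static boolean initialized = false;

    static void initialize() {
        if (initialized) return; // guard clause: bail out early
        // Main flow: everything below assumes we still need initializing.
        System.out.println("initializing...");
        initialized = true;
    }

    public static void main(String[] args) {
        initialize(); // first call does the work
        initialize(); // second call hits the guard and returns immediately
    }
}
```

The second call produces no output - the guard keeps the exceptional case (already initialized) out of the way of the main flow.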
Exceptions
Beck states that exceptions are useful for expressing shifts in control flow that span levels of function invocation. When a problem discovered deep in the call stack can only reasonably be dealt with much higher up, it is best to throw an exception at the point of discovery and catch it where it can be handled. This way, the code is not cluttered with explicit checks for exceptional conditions at levels that cannot handle them appropriately.
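A hypothetical sketch of the idea (the port-parsing scenario and all names are mine, not Beck's): the problem is discovered two levels down, thrown there, and caught at the level that can actually respond to it.

```java
public class ExceptionFlow {
    // Deep in the call stack: throw at the point of discovery.
    static int parsePort(String raw) {
        int port = Integer.parseInt(raw);
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        return port;
    }

    // The middle level stays clean: no explicit error checks needed here.
    static int startServer(String config) {
        return parsePort(config);
    }

    public static void main(String[] args) {
        try {
            startServer("99999");
        } catch (IllegalArgumentException e) {
            // Catch where the problem can actually be handled.
            System.out.println("bad config: " + e.getMessage());
        }
    }
}
```

Note how startServer carries no error-handling clutter at all; the exception jumps straight from the point of discovery to the handler.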
nedyaj-blog · 11 years ago
Implementation Patterns: Eager vs Lazy Instantiation
Initialization is the process of putting variables into a known state before they are used. Issues can arise, however, depending on when the initialization occurs within the program. For example, if a particular variable is costly to initialize, initializing it at declaration can degrade the performance of the system. In that case, it is more beneficial to initialize the variable some time after it has come into existence. In Kent Beck's Implementation Patterns, Beck discusses two initialization patterns: eager and lazy instantiation.
Eager Instantiation
One flavor of initialization is to initialize a variable as soon as it is declared or as soon as the object in which it lives is created (declaration or constructor). An advantage of eager instantiation is that you can be assured the variables are initialized before they are used. Another advantage of initializing a variable at its declaration is that it makes it easier for readers to see the variable's actual type. Take a look at the simple example below.
class Library {
    List members = new ArrayList();
    ...
}
Had we initialized members somewhere else, we wouldn't have easily known that the actual type of the variable is ArrayList rather than just its declared type of List. This may sound trivial since both are Lists, but it is good to be able to find the actual type easily when the initialization sits right next to the declaration.
Lazy Instantiation
Eager instantiation works when we aren't concerned about the cost of computing a variable's state as soon as it comes into existence. When that cost hurts performance, or you'd like to defer it because the variable may never be used, it's better to use lazy instantiation. In such cases, create a getter method and initialize the field when the getter is first called. Let's look at the example provided by Beck below.
Collection getMembers() {
    if (members == null)
        members = new ArrayList();
    return members;
}
Lazy initialization used to be a more common technique; as raw computing power has grown, though, it is needed less often than it once was. The pattern is still very important when computational power is a limited resource. A drawback of this kind of initialization is that the reader has to look in at least two places to understand the implementation type of the field. This may not be a huge deal in smaller programs, but it can be difficult to track down in a large system.
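Putting the lazy getter into a small runnable context (a hedged sketch of mine - the generics and main method are additions for illustration, not part of Beck's example):

```java
import java.util.ArrayList;
import java.util.List;

class Library {
    private List<String> members; // deliberately NOT initialized at declaration

    // Lazy instantiation: the list is created on first access only.
    List<String> getMembers() {
        if (members == null) {
            members = new ArrayList<>();
        }
        return members;
    }
}

public class LazyDemo {
    public static void main(String[] args) {
        Library library = new Library(); // no list allocated yet
        library.getMembers().add("Jayden"); // first call creates the list
        System.out.println(library.getMembers().size());
    }
}
```

Note that all access must go through the getter - touching the field directly before the first getter call would hand back null, which is exactly the kind of bug eager instantiation rules out.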
Conclusion
There are use cases for both initialization patterns. If you are concerned about the performance cost incurred by initializing objects at declaration, stick with lazy instantiation. However, because limited computing power is less common nowadays than it was in the past, lazy instantiation is not as widely used as it once was. In most cases, you'll find yourself using eager instantiation, as it is easier to read and makes the implementation type of the variable or object easier to understand.
nedyaj-blog · 11 years ago
How to Deploy Your Project to Clojars with Leiningen
So you've finished your Clojure project that you've worked so hard on and you're ready to show it off to the world. What's the next step? Pushing it to Clojars!
What is Clojars?
Clojars is a community-maintained repository for open source Clojure libraries. It aims to make it dead simple to share and use Clojure libraries, and to encourage developers to make their projects available to build-automation tools like Leiningen.
Clojars repositories come in two flavors: Classic and Release. The Classic repository is a snapshot space in which anyone can deploy anything to it. The Release repository on the other hand, is a bit more strict and has a few requirements:
Projects cannot be snapshot versions.
Projects must have a :description, :license, and :url in their pom.xml or project.clj file.
Projects must be signed with the author's PGP key.
NOTE: Clojars also supports the use of secure copy (SCP) and SSH keys for deployment; however, with the recent vulnerability in Bash, Clojars has disabled the SCP-based deploy services until further notice.
Let's Get Started
So now that we know what Clojars is, let's get started. There are a few basic dependencies that are required before we can deploy our projects.
Clojure
Leiningen (>= v2.1)
git (brew install git in Terminal)
gnupg (brew install gnupg && brew install gpg-agent)
After installing these dependencies, there are several steps we need to take to get things up and running.
GPG Agent and Creating Your GPG Key
GPG Agent is similar to ssh-agent in that it manages access to GPG keys that have been unlocked. The GPG agent must be running and can be started by typing gpg-agent --daemon in the Terminal.
Create your GPG key with the command gpg --gen-key. You will be prompted with a few options like key type, key size, and key expiration. You will also be asked for your real name, email address, a comment, and a passphrase that you will be asked for each time you attempt to unlock your GPG key. Once everything is set up, you will receive something like this:
pub   2048R/8F4A321D 2014-10-28
uid                  Jayden Sung (clojarz) <[email protected]>
sub   2048R/9A4REBA0 2014-10-28
The important bit is the value after the forward slash on the first line, 8F4A321D. This is the unique ID of the key that has just been generated. You will need this value when retrieving your GPG public key.
Set up SSH Key
Assuming you have GitHub set up on your machine, you should already have an SSH key. If you haven't generated an SSH key already, GitHub's documentation has an excellent tutorial on how to do so.
Create a Clojars Account
Create an account on the Clojars website and fill out the basic information. Clojars requires that you provide your public SSH key and PGP key. To retrieve these keys and copy them onto your clipboard, type in
cat ~/.ssh/id_rsa.pub | pbcopy
and
gpg --export -a <YOUR-KEY-ID> | pbcopy
respectively. The keys will be placed on your clipboard, so make sure you paste the values into the registration form after executing each command.
How to Deploy
Now that the setup is done, it's time to deploy! Assuming you've used Leiningen to manage your project, you may need to configure your project.clj to include the required fields outlined earlier.
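A project.clj carrying those required fields might look roughly like this (a hypothetical sketch - the project name, URL, and version are placeholders, not the post's actual values):

```clojure
(defproject clojure_tictac "0.1.0"
  ;; :description, :license, and :url are required for the Release repository
  :description "A Tic-Tac-Toe game written in Clojure"
  :url "https://example.com/clojure_tictac"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]])
```

Note the version string has no -SNAPSHOT suffix, satisfying the "no snapshot versions" rule for the Release repository.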
After that's squared away, all that's left to do is to run the command
lein deploy clojars
Here's an example of what to expect after running this:
» lein deploy clojars
No credentials found for clojars
See `lein help deploying` for how to configure credentials to avoid prompts.
Username: jaydensung
Password:
Wrote /Users/jayden/Code/clojure_tictac/pom.xml
Compiling clojure_tictac.core
Created /Users/jayden/Code/clojure_tictac/target/clojure_tictac-0.1.0.jar

You need a passphrase to unlock the secret key for
user: "Jayden Sung (clojarz) <[email protected]>"
2048-bit RSA key, ID 8F4A261C, created 2014-10-28

Sending clojure_tictac/clojure_tictac/0.1.0/clojure_tictac-0.1.0.pom (3k)
    to https://clojars.org/repo/
Sending clojure_tictac/clojure_tictac/0.1.0/clojure_tictac-0.1.0.jar (101k)
    to https://clojars.org/repo/
Sending clojure_tictac/clojure_tictac/0.1.0/clojure_tictac-0.1.0.jar.asc (1k)
    to https://clojars.org/repo/
Sending clojure_tictac/clojure_tictac/0.1.0/clojure_tictac-0.1.0.pom.asc (1k)
    to https://clojars.org/repo/
Could not find metadata clojure_tictac:clojure_tictac/maven-metadata.xml in clojars (https://clojars.org/repo/)
Sending clojure_tictac/clojure_tictac/maven-metadata.xml (1k)
    to https://clojars.org/repo/
If everything went well and you were presented with an output similar to the one above, then you're all set! Your project has been deployed to the Clojars Classic repository. To view your project, log into Clojars and you should see it on your Dashboard. Additionally, you can submit it into the Releases repository by going to your newly deployed project and clicking on the Promote button.
nedyaj-blog · 11 years ago
Implementation Patterns: Versioned Interfaces
In object-oriented programming, a class is like a blueprint for a house. It is important for communication because a class describes many specifics about how a particular object behaves and what it does. Because communication is one of the core values in programming, it's essential that we design classes that correctly express their relationships with one another. In Kent Beck's Implementation Patterns, Beck provides a list of class-level patterns to aid in the design of clean classes. In this post, we'll discuss a pattern that I ended up using in my most recent project.
Versioned Interface
An interface is like a fully abstract class: it declares the signatures of related methods without providing implementations. It is a technique to group similar classes together and ensure that every implementing class provides the same operations. So what happens if we have hundreds of classes that implement this interface and we want to add another feature or operation? It would be incredibly time-consuming to open up each individual class and modify it to add the feature. What we can do, however, is declare a new interface that extends the original interface and add the new feature there. Any class that needs the new method can implement this new interface, while all classes that implement the parent interface remain oblivious to its existence. Let's take a look at the example below.
public interface Response {
    public byte[] getResponse(HashMap<String, String> serverRequest);
    public String getContentType();
    public int getStatus();
}
This example is taken from a Java server that I had to write. Response is an interface that all server response classes implement. I use this interface to serve up responses to requests that are received by my server. As I was adding more responses, I noticed that I needed another operation to return an HTTP response header and value along with the content. None of the responses that I had previously written needed this new feature, so instead of re-opening those classes and forcing them to implement a method that they do not need, I extended the interface and created a new interface like the one below.
public interface ResponseWithHeader extends Response {
    public String getHeaderName();
    public String getHeaderValue();
}
With this new interface, I was able to retain the logic that I use to serve up responses and simply add a few lines of code to get the new type of responses up and running within my ResponseBuilder.
// ... stuff ...
String requestURI = request.get("URI");
Response response = serverRoutes.get(requestURI);
setContent(response.getResponse(request));
setStatus(response.getStatus());
setContentType(response.getContentType());
if (response instanceof ResponseWithHeader) {
    String headerName = ((ResponseWithHeader) response).getHeaderName();
    String headerValue = ((ResponseWithHeader) response).getHeaderValue();
    setHeader(headerName, headerValue);
}
// ... more stuff ...
Existing instances of Response still work as before, and instances of ResponseWithHeader work anywhere a Response is expected. To use the new operation, I check whether the response is an instance of the new interface and then downcast the object. While the use of instanceof does reduce the flexibility of the code by tying it to specific classes, it may be justified here because it enables the evolution of interfaces.
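To make the pattern concrete, here's a compact, self-contained sketch. The RedirectResponse class and its header values are invented for illustration - they are not taken from my server - but the two interfaces mirror the ones above.

```java
import java.util.HashMap;

interface Response {
    byte[] getResponse(HashMap<String, String> serverRequest);
    String getContentType();
    int getStatus();
}

interface ResponseWithHeader extends Response {
    String getHeaderName();
    String getHeaderValue();
}

// Hypothetical class: a redirect response that needs a Location header.
class RedirectResponse implements ResponseWithHeader {
    public byte[] getResponse(HashMap<String, String> serverRequest) { return new byte[0]; }
    public String getContentType() { return "text/html"; }
    public int getStatus() { return 302; }
    public String getHeaderName() { return "Location"; }
    public String getHeaderValue() { return "/new-location"; }
}

public class VersionedInterfaceDemo {
    // Builds a header line only when the response carries one;
    // plain Response instances fall through untouched.
    static String headerLine(Response response) {
        if (response instanceof ResponseWithHeader) {
            ResponseWithHeader withHeader = (ResponseWithHeader) response;
            return withHeader.getHeaderName() + ": " + withHeader.getHeaderValue();
        }
        return "";
    }

    public static void main(String[] args) {
        Response response = new RedirectResponse();
        System.out.println(headerLine(response)); // prints "Location: /new-location"
    }
}
```

Code written against Response keeps working unchanged; only the one spot that cares about headers needs to know the extended interface exists.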
0 notes
nedyaj-blog · 11 years ago
Text
Programming Values
In Kent Beck's book, Implementation Patterns, Beck states that there is no list of patterns that can cover every situation in programming. He claims that each pattern contains a bit of theory and that there are more pervasive forces at work in programming than are covered in individual patterns. These concerns have been divided into two types: values and principles.
Values
Values are the overarching theme and should dictate every decision ever made in programming. There exist three values: communication, simplicity, and flexibility.
Communication
Communication is possibly the most important value in programming. As mentioned in a previous post, we have a responsibility to communicate well with potential readers. If good code is clean code, then clean code should read well and be easy to understand. Because developers work in teams, it is crucial to think of others as we code. Ask yourself, "How would someone else see this?" when writing code. This gives you a fresh perspective, and you'll find yourself thinking more clearly because you are considering others as you program.
There is also an economical aspect to good communication. Because the majority of software costs are incurred after the software has been deployed (maintenance phase), you can cut costs by taking some time to write code that is easy to read so that you don't have to spend more time reading existing code to add value to it.
Simplicity
Eliminating excess complexity allows those who are reading, using, and modifying programs to understand them much more easily. While some complexity is acceptable given the complex nature of the specific problem at hand, it is best to remove excess complexity that does not add much value to the software. Communication and simplicity go hand in hand. The simpler the design of the system, the easier it is to read and understand. The more you focus on communication, the easier it is to see the excess complexity in parts of the system. In other words, "Keep it simple, stupid."
Flexibility
As addressed in the discussion regarding communication, the bulk of the cost of software is incurred after it is first released. Not only should code be easy to read, it should also be easy to change. Oftentimes, though, flexibility comes at the cost of increased complexity. Beck includes an example of this case in his discussion of user-configurable options. The options provide flexibility, but they add the complexity of a configuration file. This extra layer of complexity needs to be weighed when deciding whether the flexibility it buys outweighs simplicity.
Principles
Understanding principles provides general motivation and explanation behind a pattern, and serves as a guide when you come across novel situations. Below is a list of principles behind implementation patterns.
Local Consequences
When designing a system, it is best to structure code so that any changes to a particular module will only have local consequences. Code that only affects its neighbors communicates well as it can be understood without having to understand the system as a whole.
Minimize Repetition
Minimizing repetition contributes to the principle of keeping consequences local: when code is duplicated, a change to one copy forces changes in more than one place. Duplicate code is not the only form of repetition, though. According to Beck, parallel class hierarchies are also repetitive and break the principle of local consequences. A good method for eliminating duplication is to break the program into many small pieces, since large pieces of logic tend to be made up of parts of other large pieces.
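As a tiny sketch of this idea (the class and method names are invented for illustration): two callers share one small piece of logic instead of each carrying its own copy, so a change to that logic has only one, local consequence.

```java
public class RepetitionDemo {
    // The shared piece: trimming and lowercasing a user name. Before
    // extraction, this logic would be duplicated in both callers below.
    static String normalizedUser(String raw) {
        return raw.trim().toLowerCase();
    }

    // Both callers reuse the single copy, so a change to the
    // normalization rule only has to happen in one place.
    static String greeting(String rawName) {
        return "hello, " + normalizedUser(rawName);
    }

    static String mailboxFor(String rawName) {
        return normalizedUser(rawName) + "@example.com";
    }

    public static void main(String[] args) {
        System.out.println(greeting("  Jayden "));   // hello, jayden
        System.out.println(mailboxFor("  Jayden ")); // jayden@example.com
    }
}
```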
Logic and Data Together
Another principle that ties into local consequences is to keep logic and data together. By keeping these two entities near one another, changes will be kept local because changes in logic are often parallel to the changes in data.
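A minimal sketch of the idea, with an invented Subscription class: the discount logic lives on the same class that owns the price data, so when the data representation and its logic change in parallel, both changes land in one place.

```java
public class Subscription {
    // The data...
    private final int priceInCents;
    private final boolean isAnnual;

    public Subscription(int priceInCents, boolean isAnnual) {
        this.priceInCents = priceInCents;
        this.isAnnual = isAnnual;
    }

    // ...and the logic that reads it, kept together. If prices moved
    // from cents to a Money object, this method would change alongside
    // the fields - a local consequence.
    public int discountedPriceInCents() {
        return isAnnual ? priceInCents * 90 / 100 : priceInCents;
    }

    public static void main(String[] args) {
        System.out.println(new Subscription(1000, true).discountedPriceInCents()); // 900
    }
}
```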
Symmetry
Symmetry in code is where the same idea is expressed the same way everywhere it appears.
Symmetry is also an important principle to uphold when programming. Identifying and expressing symmetry clearly makes code easier to read: once readers understand one half, the symmetrical other half comes easily.
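A small sketch of what symmetry can look like in practice (the class and method names are invented): the write path and the read path are expressed the same way - same shape, same level of detail - so a reader who understands one immediately understands the other.

```java
import java.util.HashMap;
import java.util.Map;

public class SymmetryDemo {
    private final Map<String, String> cache = new HashMap<>();

    // The two halves mirror each other: one verb-named method per
    // direction, each a single operation on the same field.
    public void store(String key, String value) {
        cache.put(key, value);
    }

    public String fetch(String key) {
        return cache.getOrDefault(key, "");
    }

    public static void main(String[] args) {
        SymmetryDemo demo = new SymmetryDemo();
        demo.store("greeting", "hello");
        System.out.println(demo.fetch("greeting")); // hello
    }
}
```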
Declarative Expression
As another principle, declarative expression states that you should be able to read what the code is doing without having to understand the entire execution context. Here's an example below from the book using an old version of JUnit.
public static junit.framework.Test suite() {
    Test result = new TestSuite();
    // ... complicated stuff ...
    return result;
}
What tests is this test suite running? At a quick glance, we wouldn't be able to tell unless we expand the "complicated stuff" section and dive deep into the code to fully understand what's going on. Beck states that JUnit 4 uses the principle of declarative expression to solve this issue, and includes the solution below.
@RunWith(Suite.class)
@TestClasses({
    SimpleTest.class,
    ComplicatedTest.class
})
class AllTests {
}
If it's known that tests are being aggregated this way, then we just need to look at @TestClasses to understand which tests are expected to run in the suite. Not only does the TestClasses annotation provide more flexibility for running tests than the older JUnit code, it also makes the code much easier to read.
Rate of Change
The final principle is to put logic or data that changes at the same rate together and separate those that change at different rates. Let's look at this code snippet below.
setAmount(int value, String currency) {
    this.value = value;
    this.currency = currency;
}
Hypothetically, if a financial instrument's value and currency always change together, it would be better to join the two fields and express them as a helper object.
setAmount(int value, String currency) {
    this.value = new Money(value, currency);
}
Which can then be further reduced as:
setAmount(Money value) {
    this.value = value;
}
The rate of change principle is an application of the symmetry principle; however it is a form of temporal symmetry as these two fields change at the same time. Expressing this symmetry by making a helper Money object communicates the relationship of these fields to readers, and it can help prevent duplication and aid in consequence localization later down the road.
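For completeness, here's a minimal sketch of the Money helper object implied above. The book excerpt doesn't show the class itself, so its exact shape here is an assumption; the key point is that the two fields that change at the same rate now live together.

```java
public class Money {
    // Value and currency change at the same rate, so they share one home.
    private final int value;
    private final String currency;

    public Money(int value, String currency) {
        this.value = value;
        this.currency = currency;
    }

    public int getValue() { return value; }
    public String getCurrency() { return currency; }

    public static void main(String[] args) {
        Money amount = new Money(100, "USD");
        System.out.println(amount.getValue() + " " + amount.getCurrency()); // 100 USD
    }
}
```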
Conclusion
The values provide motivation for patterns while the principles aid in the translation of these values. This post has introduced the theoretical foundations of the implementation patterns that will be discussed in future posts. Stay tuned!
0 notes
nedyaj-blog · 11 years ago
Text
Examples of Git Workflows
According to Wikipedia, Git is a distributed revision control and source code management system that supports distributed, non-linear workflows. Each workplace has its favorite git workflow, and in this post we'll be covering the most common workflows for development teams.
Centralized Workflow
The Centralized Workflow uses a central repository as the single point of entry for all changes to the project - the master branch. Developers clone this repository to their machines, where they can commit local changes without affecting the central repository. To share changes, developers push their commits to the central master branch. Before a developer can push, however, they need to fetch the updated central commits and rebase their changes on top of them. Git handles conflicts between local and central commits by pausing the rebase and giving the developer a chance to resolve the conflicts manually before continuing. While this is a perfectly capable workflow, the Centralized Workflow does not take advantage of some of Git's most powerful features. For a more streamlined process, let's look at another workflow.
Feature Branch Workflow
The Feature Branch Workflow adds feature branches into the mix, which encourages collaboration and communication between developers. In this workflow, developers work on feature branches instead of the master branch, so multiple developers can work on a single feature without altering the main codebase. Because Git makes no technical distinction between master and feature branches, developers stage and commit changes to feature branches just as they do in the Centralized Workflow. Once a feature is complete, a developer pushes the branch to the central server and submits a pull request to have it merged into master. Pull requests encourage developers to discuss and review changes before they become part of the main codebase. Once the pull request has been accepted, publishing the feature works much as it does in the Centralized Workflow: make sure the local master is synchronized with the upstream master, merge the feature branch into master, and push the updated master to the central repository. The Feature Branch Workflow is an incredibly flexible way to develop a project; however, it sometimes offers too much flexibility for larger teams. For a more organized way to manage feature development, release preparation, and maintenance, let's talk about the Gitflow Workflow.
Gitflow Workflow
The Gitflow Workflow defines a strict branching model designed around project releases, providing a powerful framework for managing larger projects. It introduces no new concepts or commands beyond what we've already covered, but it does assign specific roles to branches and defines how they interact. Like the previous workflows, the Gitflow Workflow uses a central repository, and developers work locally and push branches to it. The difference lies in the project's specific branch structure.
Historical Branches
Instead of one master branch, there are now two branches to record the history of the project - master and develop branches. The master still serves as the official release history while the develop branch serves as an integration branch for features. The rest of the workflow revolves around these two branches.
Feature Branches
Exactly like Feature Branch Workflow, each feature resides in its own branch which is pushed to the central repository once complete. The only difference is that instead of pushing changes to master, they are pushed into the develop branch. Feature branches never directly interact with master.
Release Branches
Once the develop branch has enough features for a release, a release branch is forked from develop. The creation of this branch indicates that a release is being prepared, so no new features can be added after this point. Once the release is ready, it is merged into master and tagged with a version number. The release branch should also be merged back into develop, which may have moved ahead since the release was initiated. Using release branches allows one part of the team to polish the current release while another works on features for the next one. Tagging each release with a version number also promotes well-defined phases of development.
Maintenance Branches
Maintenance or "hotfix" branches are used to quickly patch bugs in releases. Maintenance branches are the only type of branch allowed to fork directly off master. These branches should be merged into both master and develop, and master should be tagged with an updated version number. The benefit of these branches is that they let the team focus on fixing issues without interrupting the rest of the workflow.
Conclusion
In this post, we've covered three common workflows: Centralized Workflow, Feature Branch Workflow, and Gitflow Workflow. These workflows are just examples of what is possible, and are not strict rules for using Git with projects. The goal is to adopt or develop a Git workflow that works for you and your team so it is perfectly acceptable to adopt some parts of a workflow and disregard others. For a more in-depth discussion of Git workflows, as well as awesome examples and illustrations for better understanding, visit this article here.
0 notes
nedyaj-blog · 11 years ago
Text
Extreme Programming: Extreme Team Roles
Like in any team project, there are several roles in XP, each with its own set of responsibilities and tasks. Because communication is a crucial value in XP, the authorities and responsibilities of the developer and the customer are sharply divided to force communication between both parties. Because each person has his or her own area of expertise, a developer is given the authority to handle technical decisions while a customer is given the authority to handle business decisions. The areas of influence complement one another, so it is important to draw a clear distinction between roles to ensure a greater chance of success.
The Customer
As mentioned in a previous post, the customer drives the project. He represents the end user and business interests that are funding the project. Listed below are some of the customer's responsibilities.
He identifies the features that users need from his perspective, but he is also trying to maximize the business's investment. Therefore, it is important that he chooses the stories that allow the software to contain the most valuable features at any given time.
He changes the scope of the project to deal with schedule changes by adding and removing stories from iterations if estimations are off.
He measures the progress of the project by running acceptance tests at any time.
He stops the project at any time without losing his investment by keeping the software releasable and scheduling the most important features.
He provides precise stories to allow the developers to provide accurate estimates.
He works with the team by continuously providing guidance and feedback.
He trusts the developers to handle all technical decisions.
The Developer
As the navigator of the driving analogy, it is the developer's responsibility to steer the customer on the correct path. It is the job of the developer to turn the customer's stories into code. Here is a list of the developer's responsibilities.
He provides estimates to the best of his knowledge by understanding how to implement the stories, how long it may take, and the technical risks and issues that may arise.
He works at a sustainable pace by scheduling only the amount of work that can reasonably be done.
He produces code that meets the customer's needs by focusing on testing, refactoring, and communication.
He follows the team's guidelines so that the system is well-tested and its design simple.
He implements only what is necessary to keep the project as simple and valuable as possible for the customer.
He constantly communicates with the customer to understand concerns and provide feedback, and help the customer make accurate scheduling decisions.
He trusts the customers to handle all business decisions.
Supplemental Roles
There exist two supplemental roles in XP that may or may not be present, depending on the team.
The Tracker
The tracker keeps track of the schedule and tracks team velocity, which is the ratio of ideal time estimated for tasks to actual time spent on implementing them. Other data that may be worth tracking include changes in velocity, amount of overtime worked, and the ratio of passing to failing tests. These numbers measure the progress of the project and help determine if the project is on time for its next iteration. Regularly tracking progress also helps the team adjust to the flow of work.
The Coach
Some XP projects may have a coach who guides and mentors the team. The coach's role is to lead by example and his weapon is his many years of experience. XP can be difficult to apply consistently, and there are occasional obstacles and subtleties that require the knowledge of someone who has been using XP for years. The coach serves as a guide to help the team understand XP and software development.
Conclusion
Notice that the last responsibility of the two main roles is to trust one another in handling their own areas of expertise. Trust is the foundation of any relationship, personal or business, and it is extremely important that both parties trust one another. There is no chance for success without trust.
0 notes
nedyaj-blog · 11 years ago
Text
Extreme Programming: Development Strategy
In the previous entry, we discussed what Extreme Programming is and what its values are. According to Beck, there are four basic activities of software development - coding, testing, listening, and designing. In this next section of Extreme Programming Explained: Embrace Change, Beck discusses several strategies for each of these basic activities. In this post, we will be discussing the Development Strategy.
Development Strategy
"We will carefully craft a solution for today's problem today, and trust that we will be able to solve tomorrow's problem tomorrow."
There are six rules that dictate the development strategy:
The customer is always available.
Code must be written to agreed standards.
Code the unit test first.
All production code is pair programmed.
Integrate continuously.
Use collective ownership.
Available Customer
The development strategy begins with iteration planning, a meeting called at the beginning of each iteration to produce that iteration's plan of programming tasks. Because user stories are written by the customer, it is beneficial to have the customer available at all times as part of the development team, since all phases of an XP project require extensive communication. The customer also helps with functional testing to ensure that the product is ready for production.
Code Standards
Code must be formatted to coding standards that are agreed upon from all members of the team. These standards keep the code consistent and easy for the entire team to understand and refactor. Code that is consistent and looks the same also encourages collective ownership.
Test First
Typically when you write unit tests first, you will find that it's faster and easier to come up with the production code. Creating unit tests forces the developer to really consider what needs to be done and understand the problem at hand. Unit tests provide immediate feedback and are really the only way to know that your system is fully functional when all your tests pass. Another benefit of testing first is that it forces developers to consider good system design as poorly thought out systems are much more difficult to test. The rhythm to developing unit tests first is this:
Create one test to define some small aspect of the problem at hand.
Create the simplest code that will make the test pass.
Create a second test.
Create more code to make both tests pass, but no more than that.
Continue writing more tests for the problem until there is nothing left to test.
The code that you create will be simple and concise, and the unit tests will serve as documentation for your code.
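The rhythm above can be sketched with plain assertions standing in for a unit-testing framework (the pricing rules here are invented for illustration): each check was written before the code that makes it pass, and the implementation does no more than the tests demand.

```java
public class TestFirstDemo {
    // The simplest code that satisfies the tests written so far:
    // items cost 5 each, with a 10% discount at ten or more.
    static int price(int quantity) {
        int total = quantity * 5;
        return quantity >= 10 ? total * 90 / 100 : total;
    }

    public static void main(String[] args) {
        // Test 1 (written first): a single item costs 5.
        if (price(1) != 5) throw new AssertionError("single item");
        // Test 2 (written next): ten items earn a 10% discount.
        if (price(10) != 45) throw new AssertionError("bulk discount");
        System.out.println("all tests pass");
    }
}
```

The discount branch only exists because the second test demanded it; nothing was built ahead of a failing test.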
Pair Programming
Pair programming is a technique in which two programmers work together at one workstation. One programmer serves as the driver, who writes the code, and the other serves as the navigator or observer, who reviews each line of code as it is typed. The two programmers switch roles frequently. This technique increases software quality without impacting delivery time. It is a social skill that takes some time to learn, and a cooperative way to work that includes input from both programmers regardless of experience. Pair programming is not a mentoring session. If one programmer is significantly more experienced than the other, the first few sessions may feel like tutoring; over time, however, the junior developer will start picking up on mistakes and deviations from patterns, helping the senior programmer.
Continuous Integration
Continuous integration is the practice of integrating and committing code into the code repository every few hours, or whenever possible. It avoids divergence and fragmentation of the codebase, where developers stop communicating about what code can be reused or what should be shared. Continuously integrating code prevents integration headaches down the road, since no one is working with obsolete code. Integration is a "pay me now or pay me more later" activity; integrating continuously also detects compatibility issues early, so you won't find yourself spending weeks at the end trying to integrate the system as the deadline draws near.
Collective Ownership
Collective ownership is the idea that anyone on the team can change any piece of code in the system at any time. Sounds like a crazy idea, right? So many things can go wrong if anyone is allowed to change anything at will. However, with properly written unit tests and continuous integration, you can get away with doing this. One of the effects of collective ownership is that complex code does not live within the system for very long. Because collective ownership encourages everyone to contribute, any developer can change the code to simplify, refactor, fix bugs, and add functionality. If the newly written code breaks the system, it will be evident by the suite of unit tests and the code will be thrown away. Another effect of collective ownership is that it spreads the knowledge of the system around the entire team. There will never be a part of the system where only two people know how this segment of the code works because everyone on the team should have visited the code before. Not only does this increase a developer's personal power on a project, but it also reduces project risk.
0 notes
nedyaj-blog · 11 years ago
Text
Extreme Programming: An Introduction
In software engineering, there are several methodologies that attempt to streamline the many processes of development. The intent of these methodologies is to separate these processes into distinct phases to provide better planning and management. There are several of them out there, the most common being waterfall, prototype model, iterative and incremental development, spiral development, rapid application development, and extreme programming. In this post, we'll be exploring extreme programming.
What is Extreme Programming?
Extreme Programming (XP) is a software development methodology created by Kent Beck during his work on Chrysler's C3 project. Its focus is to improve software quality and responsiveness to the one constant in our lives - change. It advocates frequent releases in short development cycles to improve productivity and flexibility to adopt changes in customer requirements.
In Kent Beck's Extreme Programming Explained: Embrace Change, Beck likens software development to driving a car.
"Driving is not about getting the car going in the right direction. Driving is about constantly paying attention, making a little correction this way, a little correction that way."
According to Beck, this is the paradigm for Extreme Programming. Because change is the only constant, we always need to be ready to adapt by shifting direction. There will even be times when we need to make a complete 180-degree change in direction. As programmers, we need to embrace change because just about everything changes - requirements, design, business, technology, even team members. Because change is inevitable, we need to learn to cope with it when it comes.
If driving a car is similar to software development, then we can think of the customer as the "driver" of a software project and development can be likened to steering or navigation. It is our responsibility as programmers to give the customer a steering wheel and provide feedback to let them know where we are and to assure them that we are heading in the right direction. How do we know if we're going down the right road? There are four values that tell us how the development process should feel: communication, simplicity, feedback, and courage.
Communication
Communication is key in every relationship and interaction with other humans - personal or business. If the right questions don't get asked or important information is not relayed, a project can fail. In order to keep the right communications flowing, XP employs practices like unit testing, pair programming, and task estimations to force programmers, customers, and managers to communicate.
Simplicity
In terms of simplicity, Beck states that it is better to do a simple thing today and pay a little more tomorrow to change it if needed, than to do a more complicated thing today that may never be used anyway. This ties into the driving metaphor - it is better to make incremental corrections here and there to reach the end result. Spending too much time building a complex system from the get-go makes change a lot more difficult to deal with and only hinders the development process. Simplicity also works together with communication in the sense that the more you communicate, the better you understand the problem and what needs to be done. The better you understand the task at hand, the more confidence you have in creating a simple system. TL;DR - "Keep It Simple, Stupid."
Feedback
Feedback is essential in software development because it lets all the parties involved know the state of their system. When customers provide new "stories," or descriptions of features, the programmers should immediately provide well-thought-out estimates so that the customers have concrete feedback about the quality of those stories. Unit tests are also a form of feedback for programmers. Writing tests to ensure that a piece of code behaves as intended provides a feedback loop that lets programmers know when a change in the system breaks an area of the code. Concrete feedback works together with communication and simplicity: the more feedback you have, the easier it is to communicate; if the team communicates clearly, you will know what to test within the system; and simple systems are always easier to test.
Courage
There will be times when it is discovered that the system has a huge design flaw that impedes further development. As programmers, we need to work up the courage to fix that flaw, break all the tests, and then spend a few days putting in concentrated effort to make all the tests pass again. There will also be times when you work all day on something that crashes at the end of the day. Instead of working on fixing that failing system the next day, just toss it. It is better to throw code away when it has gotten out of control than to try to tame it and get the system to work. You'll often find that you end up designing a simpler system the next time around. When combined with the previous three values - communication, simplicity, and feedback - courage becomes extremely valuable. The courage to communicate opens up the possibility for more high-risk, high-reward experiments. Simple systems give you the courage to try new things, and concrete feedback supports courage because you are not afraid of making changes in the code when you have tests to ensure proper system functionality.
0 notes
nedyaj-blog · 11 years ago
Text
Expectations of a Professional
In Uncle Bob's talk at the Software Craftsmanship: North America conference titled, The Reasonable Expectations of Your CTO, Uncle Bob discusses the expected responsibilities and behaviors of a professional software developer. He starts the talk by pretending to be a CTO and lists the expectations that he would have for an employee within his company. Here are a few that I thought were interesting.
QA Will Find Nothing
Quality Assurance (QA) should find nothing! It is not QA's job to find our bugs. Professionals have the responsibility of shipping the highest-quality code possible and of not releasing code to QA until it is completely ready. QA isn't our debugger - they should have trouble finding issues in the code. Oftentimes, developers use QA as a way of delaying deadlines, releasing code to QA knowing full well that it isn't ready in order to buy time. This is unprofessional behavior and should be avoided at all times.
Fragile Code
No code should be fragile. Software is exactly as it sounds - soft. By definition, it should be malleable and flexible and we shouldn't be afraid of touching and changing parts of the code. Professionals write code that easily adapts to change because professionals know that requirements continuously change.
Honest Estimates
An honest estimate is not a single date - it is too precise. Honest estimates are provided by giving a range of dates and what the odds are that you can deliver the completed task by a given date. Professionals know that the future is uncertain, so they provide a range of dates to account for the best possible case and the worst possible case. When being pressured to deliver by a certain date and you know that you cannot possibly do it by that time, it is your responsibility as a professional to say "No."
Saying No
As a professional, you typically know more about the specifics of your task than your employers do. Employers hire you to say "No" when you know that it is the right answer. You should never say "Yes" to something when you know that the answer should be "No." Saying "No" is one of the hardest and most important things a professional does. Oftentimes, you will be pressured and asked to at least try to get it done by a certain date. The answer is still the same. If you say that you'll try, you admit that you have been holding back and have extra effort in reserve. You are essentially lying, and that is completely unprofessional.
Continuous Learning
As software developers, we are in an industry that is rapidly advancing, and it is our responsibility as professionals to continuously learn new technologies and tools. The job of a developer is to ride the waves of change, and those who fall behind may not recover. To avoid falling behind, professionals are constantly looking ahead, anticipating the next wave. Professionals learn on their own time; it is not the responsibility of your employer to facilitate learning. Doctors and lawyers are expected to stay on top of the latest medical journals and current laws; we as software developers should be expected to do the same with the newest technologies and tools.
Conclusion
Professionals are expected to behave, well, professionally. This list of expected responsibilities and behaviors shouldn't be anything surprising, and yet not every career software developer follows all of these expectations. Our lives are managed by software; we are constantly surrounded by it. So given that our industry affects the lives of so many, it is our responsibility to behave in a professional manner.
0 notes
nedyaj-blog · 11 years ago
Text
Code Smells and Heuristics: Feature Envy
A code smell is any symptom in the source code of a program that possibly indicates a deeper problem. Code smells are usually not bugs - they are not technically incorrect and do not currently prevent the program from functioning. Instead, they indicate weakness in design that may be slowing down development or increasing the risk of bugs or failures of the future. - Wikipedia
In the final chapter of Clean Code, Uncle Bob presents an extensive list of code smells and heuristics that are used to identify areas of code that could use improvement. He also references Martin Fowler's book, Refactoring, in which Fowler identifies many different code smells. Everything in this chapter has been covered in the previous chapters in more detail, so the list is meant to be used as an all-in-one reference that ties the book together. While I won't be covering all of the code smells (there are over 50!), I will go over the one that I found most interesting.
Feature Envy
As one of Martin Fowler's code smells, Feature Envy states the following:
The methods of a class should be interested in the variables and functions of the class they belong to, and not the variables and functions of other classes.
Essentially, when a method uses the accessors of another object to manipulate that object's data, the method envies the scope of that object's class and wishes it belonged there so it could directly access the data it's manipulating. Consider this extremely trivial example:
class Rectangle
  attr_accessor :height, :width

  def initialize(height, width)
    @height = height
    @width = width
  end
end

module AreasOfShapes
  def self.area_of_rectangle(rectangle)
    rectangle.height * rectangle.width
  end
end
Here, a method in a separate module computes the area of a rectangle by reaching into the rectangle's data through its accessors. This method wishes it were inside the Rectangle class itself, where it would have direct access - so let us grant its wish.
class Rectangle
  def initialize(height, width)
    @height = height
    @width = width
  end

  def area
    @height * @width
  end
end
Now that the method is situated inside the Rectangle class, it has direct access to the height and width fields and no longer has to envy the scope of the class. Rectangle also no longer has to expose its internals to other classes just to have its area calculated.
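The same smell, and the same fix, shows up with less geometric code too. Here's a sketch of my own, with hypothetical class names not taken from Fowler's book, showing the after-state of a similar refactoring - the discount logic lives on Customer, next to the data it uses, so Invoice never touches Customer's internals:

```ruby
class Customer
  attr_reader :years_active, :plan

  def initialize(years_active, plan)
    @years_active = years_active
    @plan = plan
  end

  # The discount calculation belongs here, with the fields it reads.
  # Before the refactoring, Invoice would have computed this itself
  # by calling years_active and plan through accessors (Feature Envy).
  def discount
    base = plan == :premium ? 0.10 : 0.0
    base + (years_active >= 5 ? 0.05 : 0.0)
  end
end

class Invoice
  def initialize(customer, subtotal)
    @customer = customer
    @subtotal = subtotal
  end

  # Invoice no longer envies Customer's data - it just asks for the result.
  def total
    @subtotal * (1 - @customer.discount)
  end
end

invoice = Invoice.new(Customer.new(6, :premium), 100.0)
invoice.total  # => 85.0
```

The test for the smell is the same as with Rectangle: ask which class's data a method spends most of its time reading, and consider moving the method there.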
Conclusion
As previously stated, code smells aren't bugs. There was nothing technically wrong with the first snippet of code - everything still ran fine. The problem is that it breaks encapsulation by exposing the object's internals, which raises the risk of bugs later down the road. All the code smells and heuristics listed in this final chapter of Clean Code are refactoring guidelines that can help reduce this risk of failure and help clean up our code. However, while it's encouraged to use the list as a reference, you should not expect to become a clean coder by simply following a set of rules. Software craftsmanship comes from values that drive disciplines, not from memorizing a list of heuristics and code smells. Finishing this book has become another stepping stone on the path to becoming a software craftsman, and while I know that I still have a lot to learn before truly becoming a clean coder, I'm excited to see the journey that lies ahead of me.
0 notes