In the Clouds With Heroku and More
Over the last few years, we’ve heard more and more about the move to cloud-based services. Cloud computing provides a set of benefits that are nearly impossible for any company to pass up. One of the first cloud platforms to emerge, a little over a decade ago, was Heroku. With it came a number of features that help make software developers' lives easier, and when that happens, nine times out of ten a better product can be delivered faster than before. As we’ve noted in past blogs, the mission DevOps strives to achieve is continuous delivery of high-quality software. Measured against that goal, it’s clear that Heroku supports DevOps practices at a very high level. It also integrates with many of the top technologies in use today, like Node.js, Python, and Java. It is out of this focus on enhancing the development experience that Heroku DX was born.
Heroku is a platform as a service (PaaS) that enables developers to build, run, and operate applications entirely in the cloud. It is used by LaunchPad Lab, Macy’s, Smile.io, Toyota, and many other companies you might be familiar with. But why do these companies use Heroku? Well, Heroku boasts many benefits: it lets developers focus on code instead of infrastructure, offers single billing for all projects that can be broken down by team, provides rich application monitoring, supports many open-source languages, keeps downtime during system updates minimal or nonexistent, ships with a powerful dashboard and CLI, and lets you create a new server in just 10 seconds from that CLI.
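To give a feel for how little infrastructure code that takes, here is a minimal sketch of a Python web app of the sort Heroku can run. The use of Flask and gunicorn is our assumption for this example, not a Heroku requirement; the only Heroku-specific piece is a one-line Procfile, and CLI commands like `heroku create` followed by a `git push` handle provisioning and deployment.

```python
# app.py -- a minimal sketch, assuming a Flask app served by gunicorn.
# A Procfile in the project root would declare the web process, e.g.:
#   web: gunicorn app:app
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the cloud!"

if __name__ == "__main__":
    # Local development only; on Heroku, gunicorn binds to the port the
    # platform assigns through the PORT environment variable.
    app.run(port=5000)
```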
Heroku is far from alone in the cloud landscape (or should we say cloudscape). One comparison that frequently crops up when discussing Heroku, and that misses the mark, is AWS Lambda. As the name might suggest, Lambda is part of Amazon Web Services. It allows code to be executed without a server or underlying infrastructure to manage. Code placed in an AWS Lambda can be executed based on a whole host of triggers, many of which involve other AWS services. For example, a Lambda can run every time a new document is uploaded to an Amazon S3 bucket, or whenever a specified HTTP endpoint is hit. Lambda fundamentally differs from Heroku. While both let you execute code without worrying about a local server, their approaches are not the same. Lambda fires the code only when triggered; there is no server lying in wait, so Lambdas are generally insufficient for running sophisticated web apps and should typically be limited to smaller bits of code with specific criteria for being run. Heroku, on the other hand, provides a runtime environment that runs continuously, much like a traditional server, with the primary difference being that it lives in the cloud. Within AWS’s smattering of tools it is possible to do anything that can be done with Heroku. Simplicity, however, is where Heroku shines: you do not have to trawl through an ocean of differently named AWS services to configure, and both the documentation and the billing are straightforward. If you are trying to make deployment of a straightforward web app easier, Heroku is the obvious choice.
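To make the contrast concrete, here is a minimal sketch of the S3-triggered Lambda described above, written in Python. The handler signature and the shape of the S3 event record are standard for Python Lambdas; what the function does with each uploaded document (here, just logging its location) is purely illustrative.

```python
import urllib.parse

def lambda_handler(event, context):
    """Runs only when triggered -- here, by an S3 'object created' event.

    There is no server lying in wait: AWS spins up the runtime on demand
    and tears it down afterward.
    """
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Illustrative work: log which document just arrived.
        print(f"New document uploaded: s3://{bucket}/{key}")
    return {"status": "processed"}
```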
The Benefits of A/B Testing
In our last post, we took the opportunity to address the COVID-19 pandemic and how DevOps teams are able to adjust and continue being productive. This week, we again focus on making adjustments, but this time to customer-facing applications. It’s no secret that the tech world is continuously evolving, and with that comes the need for updates to software applications and products. When updates are made to an application, there are often numerous ways to accomplish the same goal, and some will perform better than others. What if there were a way to test the performance of each possible update? Enter A/B testing. A/B testing is the practice of comparing two different versions of an application to measure their performance and other analytics surrounding their usage. The two versions are shown to two entirely separate audiences at the same time, which enables a side-by-side comparison of which version was received more positively by users or yielded better results for the business, website, and so on. The reception is measured using a specific set of analytics, so that decisions rest on quantitative data illustrating concrete results rather than on anecdotal, qualitative impressions.

One example of this, illustrated here, is a website’s conversion funnel, which relates to new visitors taking action on the site (known as a conversion). The conversion rate simply describes how well the website drives its users to take those actions. To find out which version of the website converts more new users, A/B testing can be applied, and the number of actions can be measured by tracking clicks, form submissions, and more.

President Obama’s 2008 campaign website used A/B testing to determine which combination of visuals would generate the most donations. The team ended up choosing visuals of Obama and his family and a button that said “Learn More” instead of “Sign Up.” This reduced bouncing, which is when people visit the website and then leave without donating or doing whatever else the site’s purpose is. The chosen visuals also allowed the website to convert more visitors into donors, which is the ultimate goal of A/B testing.

One commonly used enterprise offering that makes A/B testing easier is Google’s Firebase. Firebase is an app development platform with a multitude of features. As one might expect from the name, it includes a real-time database, but that is not all it has to offer. One of the more recent additions to the platter of Firebase features is A/B testing. Currently in beta, Firebase’s A/B testing is layered on top of three existing Firebase features: Remote Config, Cloud Messaging, and Analytics. By hooking directly into an application’s existing analytics and infrastructure, it greatly simplifies the process of developing and disseminating these tests. Perhaps the biggest boon brought by this feature is the ease with which one can conduct targeted A/B tests. Since there is an existing Analytics suite, developers whose applications have already embraced the Firebase ecosystem should have access to a host of distinguishing characteristics about their users. This information can make A/B testing even more effective, as it is possible to pinpoint specific user types and see which changes encourage the behavior that the app developer is trying to provoke.
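To make the quantitative side of this concrete, below is a small, hypothetical sketch of how two variants' conversion rates might be compared. The visitor and conversion counts are invented, and the two-proportion z-test is just one common way to judge whether an observed difference is more than noise; it is not tied to any particular A/B testing product.

```python
from math import sqrt

def conversion_rate(conversions, visitors):
    return conversions / visitors

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: variant A ("Sign Up") vs. variant B ("Learn More").
z = two_proportion_z(conv_a=120, n_a=5000, conv_b=165, n_b=5000)
print(f"A: {conversion_rate(120, 5000):.2%}  "
      f"B: {conversion_rate(165, 5000):.2%}  z = {z:.2f}")
# |z| above roughly 1.96 suggests the difference is significant at the 5% level.
```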
How DevOps Personnel Work from Home
While we’d love to continue diving right into more tools and technology within the DevOps community, there’s an elephant in the room we’d like to address instead: COVID-19. This novel coronavirus has had an unprecedented impact on the global workforce, as many find themselves out of work due to shelter-in-place orders and the closure of non-essential businesses. In this new age of social distancing, a key factor in keeping as much moving as possible has been the ability of individuals to work from home. Those who have likely had the easiest transition are people in the technology industry. With their laptops in hand and a VPN connection to boot, business continues as usual for software engineers around the globe. But what about IT professionals, system admins, DevOps aficionados… can they work from home effectively? Absolutely, and here’s how.
First of all, remote work has been part of DevOps culture for years prior to this pandemic; it’s nothing new. In fact, you’ll find many statements online from those who have worked remotely for years without issue. One person writes: “I've been doing remote DevOps for nearly five years. I met my co-workers once, for a few minutes. Slack and video chat are all the synchronous interaction a team needs if you assemble the right team.” Quotes like this make it pretty clear that DevOps tooling is readily available in a remote environment: the tools are entirely web and software driven, with no reliance on special hardware to complete tasks. Let’s take a look at how some of the most popular tools in the DevOps world today keep things running smoothly, even when the team is working from home. Staying in touch with your team is the most important aspect of working from home; how else will you know what needs to be done if you can’t talk to your team and supervisors? There are many options nowadays for video, voice, and text communication. Let’s look at some of the most popular ones.
Slack is one of the most popular business communication tools. It focuses primarily on text communication, but it also has some voice-calling and screen-sharing capabilities. In Slack, you can create channels for different topics, create webhooks for easy updates on repositories, give members roles depending on their responsibilities, and more. It’s a free service up to a certain point: large businesses would need to pay in order to include all their employees on Slack, but small groups can easily use the free version without issue.
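As a small illustration of the webhook integrations mentioned above, the sketch below posts a notification to a Slack channel through an incoming webhook. The webhook URL and the message text are placeholders; incoming webhooks accept a simple JSON payload with a `text` field.

```python
import json
import urllib.request

# Placeholder: Slack generates a unique incoming-webhook URL per channel.
WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def notify_slack(text):
    """Send a simple text message to the channel behind the webhook."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Slack answers a successful post with 200

notify_slack("Build #42 passed and was deployed to staging.")
```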
Zoom is a video communication service that has become even more popular since COVID-19 took hold of the world; businesses and schools everywhere are taking advantage of it. It allows users to video chat as well as send text messages during the call. Zoom also makes it possible for a captionist to join the call and write live captions for deaf and hard-of-hearing participants.
The most popular communication service we’ll look at is Discord, an all-in-one service for text, video, and voice calling. It’s widely used by friends who want to play video games together, but it serves plenty of other purposes as well. Discord lets users create servers for different groups of people, and those servers can have channels, like Slack, for different purposes within the group. You can also easily jump into a live voice channel to talk to any online member of the server at any time, and Discord supports video chatting and screen sharing as well. Discord Go Live has increased its limit from ten to fifty people because of COVID-19, in an effort to support social distancing.
Staying in touch can only accomplish so much if the work itself is constrained by physical location. Fortunately, modern deployments, whether they be continuous or otherwise, are automated to the extreme. Even a rather barebones deployment pipeline with relatively little automation would likely make use of something like Jenkins or GitLab. These automatically run through the steps needed to make new code live, and take away a ton of previously manual work from human developers. One of the great things about these tools is that they make deploying remotely much easier. In fact, it is common to integrate the chat environment of choice with the deployment pipeline, making a manual deployment as simple as a command in a Slack channel. More sophisticated pipelines can go as far as to immediately begin building and deploying once a new commit has been detected. In any case, these modern solutions completely remove the necessity of any kind of physical access.
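To sketch the "deploy on new commit" idea, here is a toy webhook receiver in Python. This is an illustration of the concept rather than how Jenkins or GitLab implement it; the endpoint path, the branch check, and the `deploy.sh` script are all hypothetical stand-ins for a real pipeline.

```python
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/hooks/push", methods=["POST"])  # hypothetical endpoint path
def on_push():
    event = request.get_json(silent=True) or {}
    # Hypothetical convention: only deploy when the default branch changes.
    if event.get("ref") == "refs/heads/main":
        # deploy.sh stands in for whatever build-and-release steps are automated.
        subprocess.run(["./deploy.sh"], check=True)
        return "deploy started", 202
    return "ignored", 200

if __name__ == "__main__":
    app.run(port=8080)
```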
An uptick in reliance on cloud providers offers another big boon to the work-from-home hopeful. In days past, many software businesses had to rent or own a physical hosting solution, such as a mainframe, to run their software. Having that hardware in-house meant a greater likelihood that employees would need to leave their own houses in order to interface with those machines. Using cloud computing resources provided by companies like Amazon, Google, and IBM completely removes that concern for a multitude of software companies, which no longer have to worry about any sort of physical hardware, making work-from-home for the entire development team a real possibility.
Analysis Paralysis: Keeping Quality Code
After having discussed what DevOps is and how it began and evolved, let’s discuss some useful tools to use in tandem with the DevOps strategies we’ve learned about. Here we compare and contrast two tools for keeping your codebase clean and consistent while several different developers, or even several different teams of developers, contribute to it.
About SonarQube:
SonarQube is a tool that statically analyzes the quality of your code, and it is extremely helpful for maintaining and improving projects. With robust plugin support, SonarQube can pinpoint common coding mistakes and pitfalls in more than 20 programming languages. It is capable of parsing through millions of lines of code, and not only does it point to the problems it finds, it labels those issues by severity and suggests a solution. SonarQube shows you line by line how your code will look once you resolve the issues, which is a great help to new developers or developers working in an unfamiliar language.
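As a hypothetical illustration of the kind of line-level feedback this produces, consider a small Python function with two issues a static analyzer such as SonarQube commonly flags, followed by a cleaned-up version.

```python
# Before: two typical findings -- an unused variable (a code smell) and a
# file handle that never gets closed if read() raises (a resource leak).
def load_config(path):
    debug = True  # unused variable
    f = open(path)
    data = f.read()
    f.close()
    return data

# After: the dead code is gone, and a context manager guarantees the file
# is closed on every path, including when an exception is raised.
def load_config_fixed(path):
    with open(path) as f:
        return f.read()
```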
SonarQube’s web interface is extremely easy to navigate. It is intuitive to find and analyze the mistakes that you have made, and it is simple to filter for specific issues. For example, it is possible to look only at high severity bugs and to ignore code smells, so that even if your project is not perfect, you can ensure that the truly dangerous issues are dealt with.
SonarQube lends itself naturally to being a part of a deployment pipeline. It is easy to have SonarQube analysis run as part of a Jenkins deployment, and it is possible to set quality gates so that the code will not deploy if there are any critical issues. SonarQube even has support for test coverage, so you can halt releases if your code has not been sufficiently tested.
Another extremely useful aspect of SonarQube is technical debt assessment. SonarQube estimates how long it will take to refactor away the issues it finds. Aside from letting you know how long fixing everything will take overall, this means you can efficiently target the problems that take the least time to fix, stopping poor code from amassing beyond maintainable levels.
With each issue and solution offered by SonarQube, there is also an explanation as to why their solution is correct. This means that aside from just improving your code quality in a project, you can improve the quality of the code that you write, as you learn the rationale and reasons for why bad code is bad, and you learn how to avoid it.
SonarQube is extremely powerful, and there is a free community edition. It is highly recommended that anyone working on a large-scale project put this fantastic tool to use.
About ESLint:
Code linting is a way to analyze code to look for problematic patterns and code that doesn’t adhere to certain style guidelines. ESLint is an open source JavaScript linter that allows developers to set their own linting rules and analyze their code without running it. Many companies use ESLint. Among them are Netflix, MongoDB, Microsoft, Squarespace, Airbnb, PayPal, and Facebook.
ESLint boasts several key features that make it stand out:
Every feature is pluggable. Rules and formatters don’t have to be bundled to be used, and additional rules can be specified at runtime. Every linting rule is standalone and can be turned on or off at any time. Each rule may also be set to a warning or error individually. ESLint doesn’t promote any particular coding style, so each user can decide what rules and formatting they want on their own. ESLint is incredibly well documented and the developers value clear communication.
With these tools in mind, how do developers decide which to use and when? The two share some common ground and mainly differ in the output they produce. One key similarity is that both can be obtained for free, making them very accessible to developers looking for an immediate solution to their code-analysis needs. Both also focus on code quality and on giving developers the information they need to keep it high. Their usage, however, differs dramatically.
ESLint is exactly what its name says: a linter. With a linter comes the expectation of ‘real-time’ feedback, such as problems highlighted in the editor as you type. This rapid feedback, delivered without actually running the code, helps developers correct small errors early in the development process, before they spiral into larger issues down the road. SonarQube, on the other hand, provides feedback only once an analysis is triggered. The data gathered from the analysis can then be viewed in a dashboard, where users can find everything they need in one centralized location.
In general, SonarQube should be used in performing more of a deep dive into your code, while ESLint generally covers the high level surface analysis. So, should you use one or the other? There is no definite answer to this, but in weighing the options and features each tool provides, one can draw the conclusion that coupling both tools in development and review efforts may prove to be a powerful combo.
DevOps: The Method Behind the Madness
A History of DevOps
DevOps as a term dates all the way back to 2008, when Patrick Debois and Andrew Clay Shafer exchanged their ideas of how agile principles could be applied not just to the software development itself, but to the architecture that supported that software. In the following year, Debois founded the DevOpsDays conference in Belgium, after which the concept began spreading through the Information Technology world like wildfire.
One incredibly important aspect of DevOps is continuous integration/delivery. Continuous integration, which relegates the painful process of combining work accomplished by multiple developers and then building the application to machines, is absolutely essential. Continuous delivery then ensures the software is built in a manner that allows it to be pushed to production at any time. The rapid pace of changes and deployments that DevOps expects would simply not be possible if it was all done manually.
The concept sounds great, but how is it actually done? Apache Ant, itself preceded by the make command, was one of the earliest automated build tools. Today, tools like Jenkins and Travis CI dominate the landscape. They make it possible to automate not just the software build itself, but the various validation steps an organization might require before a release: compilation, regression testing, configuration, and more.
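As a toy illustration of what one of those automated validation steps boils down to, the sketch below runs a couple of checks and aborts on the first failure. The specific commands and step names are hypothetical; real pipelines declare their steps in a Jenkins or Travis configuration rather than a hand-rolled script.

```python
import subprocess
import sys

def run_step(name, command):
    """Run one pipeline step and stop the build if it fails."""
    print(f"--- {name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"{name} failed; aborting the build.")
        sys.exit(result.returncode)

# Hypothetical steps; the commands assume pytest and flake8 are installed.
run_step("unit tests", ["python", "-m", "pytest", "-q"])
run_step("lint", ["python", "-m", "flake8", "."])
print("All validation steps passed; ready to release.")
```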
It’s no secret that technology is one of the most dynamic industries across the globe. The tools and practices in wide use have continued to change over time, with many fading out as new ones are introduced. Because of this, choosing which technologies to adopt becomes a critical decision, as businesses strive for sustainable solutions rather than ones that are constantly churning. Technology aside, even the core idea of DevOps has changed over the short time it has been practiced. As Gene Kim put it in an article on TechBeacon, “The original idea was simply to de-silo dev and ops to overcome the bottlenecks in the software development and deployment process, mostly on the ops side. Today, with the evolution of DevOps, the goal is supporting a continuous delivery pipeline.”
The principles that companies use to stay relevant and efficient are truly what drives the focus of DevOps. Agile methodologies have become some of the most widely used techniques and guidelines and have consistently been recognized as a complement to DevOps efforts. This is highlighted in another statement from Kim, who notes that “we are in the midst of a ‘massive transformation’ as companies shift from optimizing for cost to optimizing for speed.” All things considered, as the industry undergoes continuous transformation, you can expect to see DevOps along for the ride.
A Case Study: Transition to DevOps
According to C. Aaron Cois, a professor at Carnegie Mellon University's Heinz College, in his blog post case study of AWS, “DevOps is fundamentally about reinforcing desired quality attributes through carefully constructed organizational process, communication, and workflow.” Amazon is one company that transitioned to DevOps very well, largely because of how the company pushed its teams to communicate with each other and document their work.
In 2006, when Jeff Bezos authorized the release of Amazon’s new service, AWS, he also gave his employees a list of guidelines detailing the introduction of DevOps. The list includes five steps that mainly have to do with how employees communicate internally. The most important part of DevOps is open communication between teams, and in this case even across the whole company. The last step in the list explains that all employees must design their systems as if they were going to share them with the outside world through APIs, so that those systems are easy to read and documented well enough that anyone can understand them. At least, anyone with a software background.
This open communication between development and operations, and between those teams and the rest of the company, allowed the teams to make their own lives easier by making their code as maintainable as possible. Because the code was so well documented, new team members found it faster and easier to understand, so they could solve problems more quickly without searching through a maze of code. This allowed both the development and operations teams to work as efficiently as possible, first to create and maintain AWS and later every other aspect of the company. Amazon’s success with DevOps may be one of the best guides for any company or organization looking to introduce it to their workplace.
In Conclusion
Over the years DevOps has evolved and changed, yet it has kept the same core goal: to make life easier for everyone in the organization, starting with the dev and ops teams. Not every organization will have as smooth a transition as Amazon, but once the transition is made, everyone will reap the benefits. Even the customers! With developers spending less time puzzling out someone else’s code and operations spending less time fighting flawed systems, everyone has more time to be creative and concentrate on the customer experience, which is the most important thing.
The Impact of DevOps
To put it simply (or as simply as one can put it), “DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity”.
In the modern software development industry, agile methodologies (a set of development principles focused on creating customer value) are sweeping the field. A core value of the agile method is to deliver value to stakeholders faster than was previously possible. DevOps (the marriage of development and operations work) pairs perfectly with this idea, in that its purpose is to automate processes to enable faster delivery within software teams. To quantify the benefits of implementing these processes and technologies, CA Technologies conducted a study. According to the study, “about 75% of respondents reported improved employee recruitment and retention when using agile with DevOps, compared to 30% who only used agile. In addition, businesses saw a 45% increase in employee productivity, a 29% increase in customer satisfaction, and a 78% increase in customer experience when using the two.” These numbers are astounding and illustrate quite clearly how beneficial the combination of agile and DevOps can be. It is possible to implement DevOps practices without subscribing to agile ideas, but they work better in tandem than apart.
Perhaps what makes this such a powerful pairing is that by adhering to both agile and DevOps principles and practices, organizations can close the gaps between customers, developers, and operations found in the communication chain of a typical IT process. As highlighted here, agile helps address the gap between customer and developer communication, while DevOps fills the void between developers and operations. Before the advent of early DevOps technologies like Puppet and Chef, software teams faced a severe bottleneck in their release schedules. The rise in adoption of these tools and their modern counterparts, and thus the rise of DevOps, has enabled software teams to create deployment pipelines capable of matching the speed of agile development.
A key criticism offered by agile detractors is the scenario in which a team pushes for maximum development speed to deliver as much value to the customer as possible, but in doing so neglects aspects of development that would have produced a higher-quality, more stable product in the end. Thankfully, DevOps mitigates this issue by automating and simplifying much of the software lifecycle where a loss of quality would previously have been a large risk. With DevOps and agile working in tandem, it is easier than ever to create well-balanced software teams around the world.
This blog is maintained by Benjamin Crossgrove, Jonah Fritz, and Katie Roy. We are 3 software engineering students attending the Rochester Institute of Technology. This semester, we are enrolled in a seminar focused on DevOps and the advantages it brings to the table. As we grow our awareness of the various technologies, processes, and principles that are helping cement this burgeoning field as a software engineering mainstay, we will continue to make new posts to improve our own understanding and to share our knowledge with others.
Hello World!
Tumblr is our chosen medium for posting our research findings.