#coveralls github
seo-expert0012 · 2 months
Text
Coveralls: Everything You Need to Know
Coveralls are a type of protective clothing worn by workers in various industries to safeguard themselves from workplace hazards. They are designed to cover the entire body, providing protection from dirt, chemicals, heat, and other potential risks. In this comprehensive guide, we'll delve into the world of coveralls, discussing their uses, differences from overalls, and popular types available in the market.
What are Coveralls?
Coveralls, also known as boiler suits or overalls in some regions, are one-piece garments that cover the torso, arms, and legs. They are typically made from durable materials such as cotton, polyester, or a blend of both, providing comfort and protection in demanding work environments. Coveralls come in various styles, including insulated, waterproof, flame-resistant, and high-visibility options, catering to the specific needs of different industries and job roles.
Difference Between Overalls and Coveralls
While the terms "overalls" and "coveralls" are often used interchangeably, there is a subtle difference between the two. Overalls traditionally refer to garments that cover the torso and have straps passing over the shoulders, attaching to the trousers. Coveralls, on the other hand, are one-piece garments that cover the entire body from the neck down, including the arms and legs. Both serve the purpose of protecting clothing and providing additional safety features, but coveralls offer more comprehensive coverage.
Why are Coveralls Used?
Coveralls are used across a wide range of industries for several reasons:
1. Protection: They provide protection against dirt, chemicals, abrasions, and other workplace hazards, reducing the risk of injuries and contamination.
2. Comfort: Designed for durability and comfort, coveralls allow workers to move freely without restriction, enhancing productivity and overall well-being.
3. Safety: Certain types of coveralls, such as flame-resistant and high-visibility options, are specifically designed to meet safety standards and regulations, ensuring workers remain visible and protected in hazardous environments.
4. Uniformity: Coveralls contribute to a sense of unity and professionalism within a workforce by providing a standardized appearance for employees.
Popular Types of Coveralls
- Insulated Coveralls: Ideal for cold weather conditions, insulated coveralls feature added insulation to keep workers warm and comfortable during outdoor activities or in cold environments.
- Waterproof Coveralls: Waterproof coveralls are designed to repel water and other liquids, keeping workers dry and protected in wet or rainy conditions.
- Flame-Resistant Coveralls: Made from flame-resistant materials, these coveralls are essential for workers in industries where exposure to fire or sparks is a risk, such as welding or oil refining.
- High-Visibility Coveralls: Featuring reflective strips or bright colors, high-visibility coveralls enhance worker visibility in low-light conditions or areas with heavy traffic, reducing the risk of accidents.
Coveralls in English and Around the World
In English-speaking countries, coveralls are widely referred to as "coveralls." However, in some regions, they may be known by different names such as boiler suits (UK), jumpsuits (Australia), or overalls (North America). Despite these regional variations in terminology, the functionality and purpose of coveralls remain consistent across borders.
Coveralls in Pakistan
In Pakistan, coveralls are commonly used in industries such as manufacturing, construction, and agriculture to protect workers from workplace hazards. They are available in various styles and materials to suit different job requirements and environmental conditions.
Coveralls in the Tech World
In the tech industry, "coveralls" also refers to a popular code coverage tool used by software developers to measure the effectiveness of their tests and identify areas of code that require additional testing. Coveralls, along with other tools like GitHub and Codecov, play a crucial role in ensuring the quality and reliability of software applications.
Conclusion
Coveralls are essential protective garments worn by workers across diverse industries to ensure their safety, comfort, and productivity. With various types available to suit different work environments and requirements, coveralls play a vital role in maintaining workplace safety standards and protecting workers from potential hazards. Whether it's for insulation against the cold, resistance to flames, or visibility in low-light conditions, there's a coverall designed to meet the needs of every worker, ensuring they can perform their duties safely and effectively.
0 notes
elmardott · 5 years
Video
youtube
Grazer Linuxtage 2019 :: Challenges to create your own Open Source Project
Publishing your own source code on GitHub is a first step toward your own open source project, but by far not the only one. Using TP-CORE as an example, the journey goes through the various stations of an open source project.
A small licensing primer
Promotion on GitHub
Publishing to Maven Central
Continuous integration in the cloud - Travis CI
Public code coverage with Coveralls
Alongside these topics, many small aspects that come up in the course of a project are also touched on.
1 note · View note
Text
Coveralls: Support Engineer Lead
Headquarters: Venice, CA
URL: https://coveralls.io
Coveralls is a profitable startup that since 2013 has provided a service that helps developers confidently deliver code by showing which parts of their code base aren’t covered by automated testing. Hundreds of thousands of developers rely on our code coverage tracking to make sure that their projects' test suites don't have blind spots, and that new code added to their project is properly tested. Coveralls is also something of a community service - developers working on open source projects, large or small, can use Coveralls for free. We're built on open source software tools, and we are proud to give back to the community by providing our service for free, forever.
We are looking to hire our first support engineer to help us shape our product as we grow / add new features, while making sure that the existing service meets our customers' needs. The position will be contract to start, but with the intention of making it full-time.
It isn't all just customer support though...we see this role as the main link between our customers and the development team. Since you will be the first to hear back from our existing and potential customers, the support role is a key element in our UX and product design cycle - so it is important that you be able to collect feedback about what works, what doesn't, and what might need improvement so that we can make Coveralls better for everyone.
As a small company, we will all wear many hats, but we aren't expecting that you'll magically become a full stack developer. We are just hoping to find someone creative, flexible, and interested in developing new skills to be able to handle support for a DevOps tool, and also contribute to helping us deliver a better product and experience for our customers. That means we want your input on how we can create and improve our own processes so that we all don't have to do the same things over and over - we are strategically lazy and hope you are too. If you come up with a way to spend a bunch of time and effort now so that we don't have to do something repeatedly in the future, we want those ideas!
What is the role?
You will be the first point of contact for customers, current and potential, and also for the greater developer community
Help create and update documentation, and internal processes, based on the issues that come up in the course of supporting customers
What sort of things would you be doing?
Manage our GitHub issue board
Respond to incoming support requests, technical and billing, and if needed escalate bugs/issues up the chain to the development team
Answer questions from potential new customers and assist with customer onboarding
Help create and maintain product documentation
Reach out to other members of the community to educate them about the product
What would make you a good fit?
Coding experience
As a support engineer, you need to have a general understanding of major programming languages, with solid familiarity with at least one (our service was written in Ruby), but we're open to anything as long as you are interested in expanding your knowledge
Excellent communication
This role requires a lot of written communication, so you will need to be an effective and clear communicator
Familiarity with DevOps services
You have experience with devops tools, e.g. CI (Travis, CircleCI, etc)
Customer support background
Experience with community management/outreach and customer support
Able to work remotely and effectively
Comfortable working remotely; we're based in Los Angeles but remote-first. Daily communication is done mainly via Slack, Zoom, and email
Source: We Work Remotely: Remote jobs in design, programming, marketing and more (https://ift.tt/2FWoNk2) via Work From Home YouTuber Job Board Blog (https://ift.tt/2NDb31N)
0 notes
anglelyre2-blog · 5 years
Text
This Week in Jobs: Pre-turkey jobs and jitters
Editor’s note: Every week we ship an email newsletter featuring the region’s most exciting career opportunities. We’ve lovingly called it This Week in Jobs (aka TWIJ — “twidge” — here at Technical.ly HQ). Below is this week’s edition. Here’s the last one we published on the site; it’s meant to live in your inbox.
Sign up for the newsletter here.
It’s the calm before the storm, people. By this time next week you’ll most likely be knee deep in cranberry sauce, clutching your beer as Aunt Candy and Uncle Jim corner you with stories of their most recent bird-watching excursion in exquisitely slow detail.
So enjoy this week of pre-holiday madness. Revel in the silence. Soak up the moments when no one is asking why you haven’t had kids yet. And of course, take some time to search for that perfect tech career opportunity. We’ve got some gems for you below.
Center-city based Guru just launched a new voice recognition feature on its AI-driven team knowledge platform. The Guru AI Suggest Feature uses artificial intelligence to help professionals become more effective, rather than using AI to replace their jobs. Aww, see, isn’t it nice when people and robots put aside their differences and work together? Become one with the bots — apply for a gig at Guru here.
In “brilliant products we wish were around when we lived with our parents,” Lia Diagnostics just earned $2.6 million in venture capital for its revolutionary flushable pregnancy test. The women-led team is in the process of bringing its plastic-free test to the consumer market. If creating better products for women and the earth is your jam, stay up-to-date on job openings or reach out to the Lia Diagnostics team here.
Project managers, put one of these on your to-do list:
Custom software developer PromptWorks is looking for a Senior Software Project Manager. You bring Agile experience to the table, PromptWorks brings the ergonomic seating. Win win.
Engine Room is hiring a Project Manager to serve as the point of contact for clients.
Adtech company Vistar Media seeks an engineering Project Manager passionate about “greasing the wheels of productivity.” So pull on those coveralls and show ‘em you’re not afraid to get greasy.
A few careers for the engineers:
Motorcycle eCommerce retailer RevZilla is looking for a Cloud Operations Engineer with experience on Docker/Kubernetes infrastructure platforms. Don’t be shy—share that GitHub URL of yours. RevZilla is into it.
ShopRunner is hiring a Lead Software Engineer. Beyond providing generous parental leave and ample room for upward mobility, they also provide robust snacks. A ‘lil something for everyone.
IntegriChain seeks a Data Scientist willing to “boldly go where no IntegriChain data-focused resource has gone before.” Maybe bring a compass?
For pros ready to step it up a notch, allow us to show you something in a director-level position:
The Rest
Thanks for joining us this week, friends! Happy job hunting!
-30-
Source: https://technical.ly/philly/2018/11/15/this-week-in-jobs-pre-turkey-jobs-and-jitters/
0 notes
javascriptw3schools · 6 years
Photo
RT @rwieruch: There it is 🎉 An extensive guide to testing in #ReactJs ✅ Mocha & Chai for test runner/assertions ✅ Sinon for async logic ✅ Enzyme for unit/integration tests ✅ Jest for snapshot tests ✅ Travis for CI ✅ Coveralls for test coverage ➕ GitHub Badges https://t.co/idFMc9Ali4
0 notes
bjonasn · 7 years
Text
Badges!? - we NEED stinking badges!
Spending a lot of time on GitHub, I observed that a lot of projects have cool badges on their pages indicating:
- build status
- coverage
- code climate
And so on…
I have been using Travis CI for my GitHub projects and it works like a charm. So when I stumbled upon Dave Cross' presentation and blog post on Travis CI and GitHub integration, I had to read it in case there was something useful I did not know…
0 notes
mlbors · 7 years
Text
Angular 2, Travis CI, Coveralls and Open Sauce
In this post, we are going to see how we can make an Angular 2 app work with Travis CI, Coveralls and Open Sauce.
In a previous post, we saw how to set up a PHP project using Travis CI, StyleCI and Codecov. We are now going to do much the same, but with a small Angular 2 app. Our purpose here is to test our app in multiple environments on each commit.
So the first thing we need to do is to sign up for a GitHub account, then we will have access to Coveralls with that same account. The next thing is to open another account on Open Sauce. We are now ready to begin! Let's create our repository, commit and push!
To achieve our goal, it is important that our project runs with angular-cli. We are going to assume that is the case and that Git is also installed.
Dependencies
For a start, we need to install several dependencies with NPM. Let's do it like so in the terminal:
npm install karma-coverage karma-coveralls karma-firefox-launcher angular-cli-ghpages --save-dev
Install dependencies
With that last command, we installed karma-coverage that is responsible for code instrumentation and coverage reporting. karma-coveralls will help us to transmit the report to Coveralls. karma-firefox-launcher is a Karma plugin that will help us with our tests. Finally, angular-cli-ghpages will help us to deploy our app on GitHub Pages.
Karma
Now we need to set a few things in the karma.conf.js file that is included in the root of our folder. The file will look like so:
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine', '@angular/cli'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-firefox-launcher'),
      require('@angular/cli/plugins/karma'),
      require('karma-coverage')
    ],
    client: {
      clearContext: false // leave Jasmine Spec Runner output visible in browser
    },
    files: [
      { pattern: './src/test.ts', watched: false }
    ],
    preprocessors: {
      'dist/app/**/!(*spec).js': ['coverage'],
      './src/test.ts': ['@angular/cli']
    },
    mime: {
      'text/x-typescript': ['ts','tsx']
    },
    coverageReporter: {
      dir: 'coverage/',
      reporters: [
        { type: 'html' },
        { type: 'lcov' }
      ]
    },
    angularCli: {
      config: './angular-cli.json',
      codeCoverage: 'coverage',
      environment: 'dev'
    },
    reporters: config.angularCli && config.angularCli.codeCoverage
      ? ['progress', 'coverage']
      : ['progress'],
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['Chrome', 'Firefox'],
    singleRun: false
  });
};
karma.conf.js file
Karma is a test runner that is ideal for writing and running unit tests while developing the application.
Protractor
We are now going to configure Protractor for e2e testing. There is already a file for that in the root of our folder, but it is only suitable for local tests with a single browser. Let's create a new one in a folder called config. We will name this file protractor.sauce.conf.js, and it will look like the following:
var SpecReporter = require('jasmine-spec-reporter').SpecReporter;
var buildNumber = 'travis-build#'+process.env.TRAVIS_BUILD_NUMBER;

exports.config = {
  sauceUser: process.env.SAUCE_USERNAME,
  sauceKey: process.env.SAUCE_ACCESS_KEY,
  allScriptsTimeout: 72000,
  getPageTimeout: 72000,
  specs: [
    '../dist/out-tsc-e2e/**/*.e2e-spec.js',
    '../dist/out-tsc-e2e/**/*.po.js'
  ],
  multiCapabilities: [
    { browserName: 'safari', platform: 'macOS 10.12', name: "safari-osx-tests", shardTestFiles: true, maxInstances: 5 },
    { browserName: 'chrome', platform: 'Linux', name: "chrome-linux-tests", shardTestFiles: true, maxInstances: 5 },
    { browserName: 'chrome', platform: 'macOS 10.12', name: "chrome-macos-tests", shardTestFiles: true, maxInstances: 5 },
    { browserName: 'chrome', platform: 'Windows 10', name: "chrome-latest-windows-tests", shardTestFiles: true, maxInstances: 5 },
    { browserName: 'firefox', platform: 'Linux', name: "firefox-linux-tests", shardTestFiles: true, maxInstances: 5 },
    { browserName: 'firefox', platform: 'macOS 10.12', name: "firefox-macos-tests", shardTestFiles: true, maxInstances: 5 },
    { browserName: 'firefox', platform: 'Windows 10', name: "firefox-latest-windows-tests", shardTestFiles: true, maxInstances: 5 },
    { browserName: 'internet explorer', platform: 'Windows 10', name: "ie-latest-windows-tests", shardTestFiles: true, maxInstances: 5 },
    { browserName: 'MicrosoftEdge', platform: 'Windows 10', name: "edge-latest-windows-tests", shardTestFiles: true, maxInstances: 5 }
  ],
  sauceBuild: buildNumber,
  directConnect: false,
  baseUrl: 'YOUR_GITHUB_PAGE',
  framework: 'jasmine',
  jasmineNodeOpts: {
    showColors: true,
    defaultTimeoutInterval: 360000,
    print: function() {}
  },
  useAllAngular2AppRoots: true,
  beforeLaunch: function() {
    require('ts-node').register({ project: 'e2e' });
  },
  onPrepare: function() {
    jasmine.getEnv().addReporter(new SpecReporter());
  }
};
protractor.sauce.conf.js file
We can notice that there are two environment values: SAUCE_USERNAME and SAUCE_ACCESS_KEY. We can set these values in our Travis CI account in the settings section of our project. The information can be found in our Sauce Labs account settings.
Protractor is an end-to-end test framework for Angular. Protractor runs tests against our application running in a real browser, interacting with it as a user would.
e2e configuration
In the e2e folder of our application, we need to place a file called tsconfig.json.
{ "compileOnSave": false, "compilerOptions": { "declaration": false, "emitDecoratorMetadata": true, "experimentalDecorators": true, "module": "commonjs", "moduleResolution": "node", "outDir": "../dist/out-tsc-e2e", "sourceMap": true, "target": "es5", "typeRoots": [ "../node_modules/@types" ] } }
tsconfig.json file in e2e folder
We also need to place a similar file at the root of our application.
{ "compileOnSave": false, "compilerOptions": { "outDir": "./dist/out-tsc", "baseUrl": "src", "sourceMap": true, "declaration": false, "moduleResolution": "node", "emitDecoratorMetadata": true, "experimentalDecorators": true, "target": "es5", "typeRoots": [ "node_modules/@types" ], "lib": [ "es2016", "dom" ] } }
tsconfig.json file
End-to-end (e2e) tests explore the application as users experience it. In e2e testing, one process runs the real application and a second process runs Protractor tests that simulate user behavior and assert that the application responds in the browser as expected.
Coveralls
In our folder, we are now going to create a file called .coveralls.yml with the token corresponding to our repository, which can be found in our Coveralls account.
repo_token: YOUR_TOKEN
.coveralls.yml file
Travis CI
Now it is time to tell Travis CI what to do with our files. Let's create a file called .travis.yml and fill it like so:
language: node_js
sudo: true
dist: trusty
node_js:
  - '6'
branches:
  only:
    - master
env:
  global:
    - CHROME_BIN=/usr/bin/google-chrome
    - DISPLAY=:99.0
cache:
  directories:
    - node_modules
before_install:
  - ./scripts/install-dependencies.sh
  - ./scripts/setup-github-access.sh
after_success:
  - ./scripts/delete-gh-pages.sh
  - git status
  - npm run build-gh-pages
  - npm run deploy-gh-pages
  - git checkout master
  - sleep 10
  - tsc -p e2e
  - npm run e2e ./config/protractor.sauce.conf.js
notifications:
  email: false
.travis.yml file
GitHub Token
Before we go any further, we need to create an access token on our GitHub account. We can do that in the settings of our account.
Scripts
In the previous section, we told Travis CI to use three bash scripts: install-dependencies.sh, setup-github-access.sh and delete-gh-pages.sh. We are now going to create a folder called scripts and create those three different scripts like so:
The first script, as its name suggests, just installs our dependencies.
#!/bin/bash
export CHROME_BIN=/usr/bin/google-chrome
export DISPLAY=:99.0

# Install Chrome stable version
sh -e /etc/init.d/xvfb start
sudo apt-get update
sudo apt-get install -y libappindicator1 fonts-liberation
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome*.deb
rm -f google-chrome-stable_current_amd64.deb
install-dependencies.sh file
The second script ensures that we can access our GitHub repository during the build. We can see in this script that a variable called $GITHUB_TOKEN is used. This environment variable can be set in Travis CI by clicking on Settings in our repositories list.
#!/bin/bash
set -e
echo "machine github.com" >> ~/.netrc
echo "login [email protected]" >> ~/.netrc
echo "password $GITHUB_TOKEN" >> ~/.netrc
setup-github-access.sh file
The last script deletes the gh-pages branch to let us deploy our app on GitHub Pages (we can't deploy if the branch already exists).
#!/bin/bash
set -e
git ls-remote --heads | grep gh-pages > /dev/null
if [ "$?" == "0" ]; then
  git push origin --delete gh-pages
fi
delete-gh-pages.sh file
Now, we need to tell GitHub and Travis CI that these files are executable. We can do that with the following command:
chmod +x the_file
Command to make the file executable
Perhaps the following command may also be needed:
git update-index --chmod=+x the_file
Git command to update index
package.json
We now have to make a few adjustments in our package.json file, more specifically in the scripts section. We can see that a GitHub token environment variable, $GH_TOKEN, is used here as well.
"scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test --code-coverage true --watch false && cat ./coverage/*/lcov.info | ./node_modules/coveralls/bin/coveralls.js", "lint": "ng lint", "pree2e": "webdriver-manager update --standalone false --gecko false", "e2e": "protractor", "build-gh-pages": "ng build --prod --base-href \"/YOUR_REPOSITORY/\"", "deploy-gh-pages": "angular-cli-ghpages --repo=https://[email protected]/YOUR_USERNAME/YOUR_REPOSITORY.git --name=YOUR_USERNAME --email=YOUR_EMAIL" }
package.json file
app.po.ts
We need to make one last adjustment in the file called app.po.ts that we can find in the e2e folder. Let's make it like so:
import { browser, element, by } from 'protractor';

export class BookreaderPage {
  navigateTo() {
    browser.ignoreSynchronization = true;
    return browser.get(browser.baseUrl);
  }

  getParagraphText() {
    return element(by.css('app-root h1')).getText();
  }
}
app.po.ts file
Here we go!
Now, we can push (again) our files and, if everything is alright, the magic will happen! We just have to check Travis CI, Coveralls and Sauce Labs.
0 notes
kalbarczyk · 5 years
Text
Tools to help with Python code quality
Recently we've decided to separate ObjectPath from the Asyncode Framework (ACF) and make it usable for a wider audience as a standalone tool. The problem we faced was that ACF's test suite has been written in BLSL (our own language that ACF interprets), which isn't suitable for Python environment (for which ObjectPath is intended). Therefore, we needed to start from scratch with testing the code base.
The positive thing about starting again was that it's given us a chance to try a few cloud-based tools that help to maintain code in good shape, including TravisCI, Landscape.io and Coveralls.
This post is intended to answer questions about why anyone should do automated code testing including coverage tests and code quality assurance. In other words it's stating the obvious for experienced devs.
The experience all of us have had (the problem)
Why bother with testing at all? The answer to this question is obvious to many. If, however, you are wondering whether it's worth your while, consider the following situation.
Imagine you have an idea for a cool piece of software. You write something overnight or during the weekend and you're so proud of yourself that the software solves a certain problem and saves hours of time. You're happy the job is done, show results to friends and even start to get requests for new features.
How cool! Isn't it? You have proven your creation is useful not only to you! You add new features, you fix bugs. After some time you run the new version of the software to deal with the problem that we began with in the first place and... it fails!
Why? The thing here is that every alteration to the code results in the software doing something slightly different than before. Consider the following scenario:
Let's say you want to create an intelligent way to add numbers that also works for numbers stored in strings (I have a good explanation of why anyone would want to do that, if you ask :) ) and you implement it like this:
def add(x, y):
    return int(x)+int(y)

add(2, 2) -> 4
It works, but after a while you think that it would be cool to support edge cases when add() takes True, False and None. You write something like:
def add(x, y):
    if x in [True, False, None]:
        return int(y)
    if y in [True, False, None]:
        return int(x)
    return int(x)+int(y)
And so you have a nice function that deals with all edge cases. Then someone else requests support for floats. Why not?
def add(x, y):
    if x in [True, False, None]:
        return float(y)
    if y in [True, False, None]:
        return float(x)
    return float(x)+float(y)

add(3.1, 3.2) -> 6.3
You might think at first that you'll get away with using float() here because indeed:
add(2, 2) == 4 -> True
But just a few minutes after the release somebody files a bug because a function stopped working. After a brief investigation it turns out that the person used your function in a different way than you had expected:
add(2, 2) is 4 -> False
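The reason is that the float-based version now returns 4.0: the value is equal to 4, but it is no longer the same object as the integer 4, so the identity check fails. A quick sketch using the add() defined above:

result = add(2, 2)   # the float-based version returns 4.0
print(result == 4)   # True  - equal in value
print(result is 4)   # False - a float object is never the int object 4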
Now it's even hard to think about fixing the whole code. It has to be rewritten from scratch.
The time for writing tests
The code above doesn't make sense at all, because we just accepted all requests one after another without any consideration. The problem gets worse and worse, rendering the code complex and extremely hard to fix after some time. That's why before even opening the code editor we should define the scope of the problem, then write how the code should be used (in a real programming language) and only then implement it.
This is called test-driven development. Of course no one can think about every possible use case at the beginning of the software development journey, but starting with any test can result in much better code quality. Then, before accepting any enhancement request, first ask about the specific usage of your function. Instead of "I need support for False, True and None", the requester should provide the exact code that requires it. For example:
from random import random

d = {}
if random() > 0.5:
    d["a"] = 33
add(2, d.get("a"))
d.get("a") will return either 33 or None in this case.
If you have this piece of code, put add(2, None) is 2 into a test suite such as Python's unittest package, and before any push check that this particular code's results aren't broken.
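A minimal sketch of such a test (the module and test names here are invented for illustration; assume the add() function above lives in a file called calc.py):

import unittest
from calc import add  # hypothetical module containing the add() above


class AddRegressionTests(unittest.TestCase):

    def test_add_with_none_keeps_int_identity(self):
        # mirrors the exact identity-based usage the bug reporter relies on
        self.assertIs(add(2, None), 2)

    def test_add_plain_ints(self):
        self.assertEqual(add(2, 2), 4)


if __name__ == '__main__':
    unittest.main()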
Automating tests - TravisCI
Running tests before every push can be troublesome. It's normal that we sometimes forget about this; in other cases we think our changes aren't harmful and skip the tests. TravisCI can help us with this (and with a lot more things!).
All we need to do is sign in with our GitHub credentials, turn on Continuous Integration for the repos we want to cover, create a .travis.yml file in the repo and push the code to GitHub. Travis will automatically test what we pushed and generate a report. The cool thing here is that we can test our software in many environments at the same time. ObjectPath is tested in Python 2.7, 3.2, 3.3 and PyPy on each push! I don't even need to have PyPy on my box.
If we want to create a package and upload it to PyPI, that can be done automatically by Travis as well. Compared to doing it manually, it's game-changing functionality.
The code quality - Landscape.io
There are some tools available for checking code quality, but with landscape.io I don't even need to know them - this service combines the best of them. Using the service is as simple as with TravisCI - just sign in with GitHub.
The big drawback of the underlying tools is that they are too picky. Comment out a block of code and your score drops a few points. It also doesn't know some obvious Python tricks, but in the end the quality of ObjectPath's code went up significantly because of the automatic checks and easy-to-read reports.
You might wonder why to use Landscape instead of relying on PyCharm's integrated code checks. The answer is: do both. PyCharm hints at what to do, but Landscape gives you an overall code quality score and informs you when that score goes up or down. Don't underestimate this feature!
Coverage - Coveralls.io
Not that easy to configure in the case of Python, but definitely worth it!
Code coverage is a very important concept in testing your code. None of the above tools are all that relevant until you have good code coverage. Because it's a little hard to find a simple definition of this concept, I'll try to craft my own:
Code coverage is a measure of how many, and which, lines of code are exercised by unit tests.
Tests are run in an environment that can check which lines of your code are used to run each test and then create a report - in the case of Coveralls, a nice HTML-based report. Red indicates that the line was not used during tests; green is good. Red means that you are probably facing the problem from the beginning of this article, or that it's a dead piece of code (I found many lines that were no longer needed in ObjectPath using this tool).
WARNING! Green doesn't indicate that your code is well tested. It just tells you that the line was executed at least once during tests. Providing relevant tests is still in your hands.
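A tiny, made-up illustration of that warning:

def classify(n):
    if n >= 0:
        return "non-negative"
    return "negative"   # stays red: no test ever reaches this line

def test_classify():
    # this single test paints the first branch green,
    # yet it says nothing about negative inputs
    assert classify(5) == "non-negative"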
What next?
These tools are just the beginning of the quality testing you need to perform. Programming is a discipline that only humans can practice, and nothing is better for your code than an audit by another person. Show your code to friends and ask them if they understand it. When you write commercial code, professional code reviews or audits are a good option. The eyes of a person who is not involved in the project can always find bugs that you would never be aware of!
0 notes
vivocha-tech · 7 years
Text
Continuous Integration
In the previous posts we wrote about Testing the APIs, E2E Testing and Code Coverage. We wrote about why they are relevant and why they play a central role in a serious software development cycle, helping developers to discover bugs early and to deliver high quality code.
In this post we briefly explain how to complete and automate the development cycle by adopting a Continuous Integration flow, and we provide a brief list of the tools we are using to do that.
Thus, about Continuous Integration... from Wikipedia:
In software engineering, continuous integration (CI) is the practice of merging all developer working copies to a shared mainline several times a day.
[…]
Continuous integration involves integrating early and often, so as to avoid the pitfalls of "integration hell". The practice aims to reduce rework and thus reduce cost and time. A complementary practice to CI is that before submitting work, each programmer must do a complete build and run (and pass) all unit tests. Integration tests are usually run automatically on a CI server when it detects a new commit.
From our point-of-view, a feasible software development cycle, which includes the Continuous Integration practice, can be represented as in the next figure.
[Figure: the software development cycle, including the Continuous Integration practice]
Briefly:
during Fast Development Cycle (FDC), each developer works on his/her copy of the code base in fast iterations: writes code and tests, runs tests locally and checks Code Coverage, then iterates.
At the end of each significant result reached in the FDC, developers Commit changes and push the code to the shared repository.
The latter event triggers the Continuous Integration (CI) tool, which: checks out the updated code from the repository, builds it and runs all the tests; if all tests pass and Code Coverage thresholds are met, it increases the version of the code using Semantic Versioning. Finally, depending on the company's adopted software delivery strategy, the CI tool can also make a new release, and publish or deploy it to production servers.
In this way, the code is always updated, built, tested and merged in a consistent way, and all the written integration tests are run over and over again at the end of each Fast Development Cycle.
Tools
Some tools we are using in Node.js-based flows at Vivocha Labs, include:
Istanbul/nyc: code coverage tool;
husky: git hooks that prevent bad commits and pushes; executes pre-commit and pre-push tasks;
commitizen: committing with commitizen prompts you to fill out any required commit info fields at commit time, like the commit type (new feature, bug fix, refactoring, etc…), a description of the commit and so on...
semantic-release: automates publishing node packages to NPM, setting a version based on the git commit type and strictly following the SemVer specification. After every successful Continuous Integration build of your branch, it publishes the new version to the repos for you.
Travis CI: continuous integration service, free for open source projects. Triggered by a git push to a repository, it executes builds/testing by retrieving code from the git repository and running the tasks configured in your .travis.yml configuration file. You configure Travis with GitHub and NPM access tokens to allow it to publish and release new versions of your code to the repositories through the semantic-release tool.
Coveralls: code coverage reporting service; it shows which parts of your code aren't covered by your test suite. Also, it produces reports like nyc does, but they can be read online through the Coveralls website in a nice layout.
Configuring the project
For a Node.js project, in addition to (obviously) installing all the dev dependencies for the modules mentioned above, the package.json scripts section could be configured as follows:
"scripts": { "test": "./node_modules/.bin/mocha -t 20000", "cover": "./node_modules/.bin/nyc --reporter=lcov --reporter=text npm test", "check-coverage": "./node_modules/.bin/nyc check-coverage --statements 100 --branches 100 --functions 100 --lines 100", "precommit": "npm run cover && npm run check-coverage", "commit": "./node_modules/.bin/git-cz", "semantic-release": "semantic-release pre && npm publish && semantic-release post", "report-coverage": "cat ./coverage/lcov.info | ./node_modules/coveralls/bin/coveralls.js" }
This configuration enables performing Fast Development Cycles + Commit activities, through the following steps:
Write Code & Tests;
npm run precommit;
If OK, git add files;
npm run commit;
git push.
The git push command triggers a build event on the configured Travis CI repository, which runs our tasks as configured in the project .travis.yml file. A .travis.yml configuration can be written as follows:
language: node_js
cache:
  directories:
    - node_modules
branches:
  only:
    - master
notifications:
  email: false
node_js:
  - '7'
before_script:
  - npm prune
script:
  - npm run cover
  - npm run check-coverage
after_success:
  - npm run report-coverage
  - npm run semantic-release
This configuration drives the following automatic CI process:
Run all Tests and do Code Coverage; 
Check Code Coverage against the defined thresholds
If successful: 
Send Code Coverage results to Coveralls
Publish the new version of the software in the NPM registry
Create a new release in the git repository.
Adopting a Continuous Integration practice in your software development cycle closes the development/release loop, automating all the testing steps and also running integration tests every time an update to the code base is committed and pushed to the repository. The proposed methodology can also contribute to speeding up a Continuous Delivery strategy, releasing software faster, more frequently and in a more secure way.
0 notes
mbaljeetsingh · 7 years
Text
Tips For Building Your First Laravel Package
Laravel is a powerful and modern framework. It has tons of different features, which make our work faster and easier. But you can't push everything into a single box. At one time or another, we've all been in need of something not implemented in the framework out of the box. So, you're writing code by yourself; it works, and you are happy. Yep?
But what if you need the same functionality in another project? Will you write it again? Copy-paste it? Both solutions are bad because you’re solving the same problem again and again. And what about bugs? Imagine finding a bug in that code a few days later. Now you have to fix it in N places. Ufff, not fun!
A package can be a solution. Write your code once and use it in any number of projects. Maybe you found a bug, or want to make some changes? Do it just once in your package code and then pull required changes in all of your projects. Sounds good?
The First Step
Before diving headfirst into the work, check Packagist first. It is possible someone has already solved your problem. If you can't find an existing package, try to explore the question a little bit deeper. Maybe you're not using the most accurate keywords in your search. Don't be afraid to spend another 30 minutes; it could save you 30 days eventually.
Development
You’ve searched, but no luck and now you’re willing to create a new Laravel package. What are the next steps?
First of all, take a look at the Laravel documentation. The Package Development page is a great starting point. You'll find that in most cases the heart of your package will be a service provider. Here you can register event listeners, middleware, routes, translations, views, etc. Maybe your package has its own config file, maybe even its own database migrations. You have to describe all of that in your service provider. It is the connection point between your package and Laravel.
Your head is full of knowledge; you are creating new GitHub repo and starting to write code. Great! What tips can I give to you?
I think one of the most important things when you're writing a framework package is to make the package close to the framework. In everything: architecture, naming, code style, documentation, every aspect of it. It will be much easier for people to understand and use your package if it looks like part of the framework. But it's not always easy to choose the best names for classes, methods, variables, etc. I really like Jeffrey Way's method, which I learned at Laracasts. When you feel a name isn't very good, delete it and write it as you'd want it to be in a perfect world, then use it. Simple enough, but it works. Naming is not the easiest thing in programming, and there are a lot of good books about it.
I’ve received the following feedback from someone using my package: “Nice readme file! It’s really easy to read it, it feels like I’m reading one of the Laravel docs!” That was my victory!
If people feel your package is a natural part of the framework — you’re on the right path.
Testing
Package testing can be really tricky. And the first question I had, before even starting to write tests was this: how should I test it? Should I create a new instance of my Laravel project, install my package there, and write my tests? These were the real usage conditions of my package, but my tests would be located outside of the package, which is really bad.
Luckily, there is Testbench Component to solve that problem. It provides a way to test your package without a standalone Laravel install. You can easily register your package service provider, aliases, set up the database, load migrations, etc.
The great thing about open source is you can use cool GitHub integrations for free. They can help you in different ways. My favorites are:
SensioLabs Insight – for code quality;
StyleCI – for code style;
Travis – for running tests automatically;
Coveralls – for measuring code coverage by tests;
Documentation
Your code is ready and, your tests are green. You think you’re done at this point? Nope.
Don’t be lazy; create a nice readme file. It can be really useful for you because you’re taking another look at your package in general. There have been so many times while writing a readme file that new ideas have come to me. And having a good readme allows others to be able to use your package as well. Isn’t it great to know your code is useful for someone?
I think a readme file is really important, and it should be as simple as possible. Use images, where possible. They are easier to understand. Write your documentation as close to Laravel Documentation as you can.
There is a really great article “How to write a readme that rocks“, take a look! There are a lot of cool tips and tools described there. For example, Michael Dyrynda’s Readme Generator, Grammarly, Hemingway App, etc.
Release
Don’t forget to add your package to Packagist, so it will be available through Composer. That will make your package installation really easy, and make it searchable through Composer. Use keywords in your composer.json file. You can also set up GitHub Topics to make your package even more searchable.
Use GitHub Releases for your package. Read and follow Semantic Versioning rules.
It doesn’t matter how cool is your package if people don’t know about it. Tell them!
Write short notes on community forums, for example, Laracasts Forum, Larachat, etc. Add a link to your package on Laravel News. Don’t be afraid to tell!
Conclusion
Creating your own packages can make your code cleaner, preventing you from repeating yourself over and over in different projects. I’m not saying you have to extract everything into the packages, no. It’s all about the balance, like in coding. Should I extract that code into the separate method? Or, maybe even into the separate class? Remember those questions? I think “Should I extract that code into the separate package?” is something similar to these questions. It depends. Feel it, and make a decision according to your context and experience.
There are a lot of aspects, which are not covered in this short article, but I really hope it will be useful for someone. Please, contact me if you have any thoughts or questions.
Wish you tons of stars on your own packages!
via Laravel News http://ift.tt/2kQczRF
0 notes