vrenjith
Random revelations
11 posts
vrenjith · 5 years ago
Minimalist Mesh for Micro Services
So the story goes like this:
You have container workloads running in production (nomad).
You are on a bare metal environment.
Multiple container networking software solutions are in use in different data centers: Contiv and Flannel.
The perimeter is secured for the cluster (firewalls, WAF).
Service-to-service communication within the cluster is non-secure (the journey started before service mesh concepts were in place).
The customer insists that service-to-service communication within the cluster be over HTTPS whenever it crosses machine boundaries, even inside the perimeter.
An incremental, service-by-service migration is mandatory.
Options
Introduce a full-fledged service mesh
A complete networking and software stack upgrade is impossible without downtime.
Replace the existing container networking with one that supports encryption
Do we have one such solution which is usable in production?
Solution
“Introduce a lightweight sidecar proxy that can do this job”
Details
Nginx as a sidecar.
We added it to the base images we supply, and with a single flag in the Dockerfile, the sidecar is launched.
For launching the sidecar, we extended the configuration file of containerpilot, which we were already using.
The certificates are auto-created during the launch of the container. How did we achieve this without a control plane? We off-loaded that to the startup scripts of the container itself, which generate the certificate.
Well, the next question is: how can we use the same certificate authority across the cluster? The answer is to inject the intermediate CA certificate and key into the container during the startup sequence using Vault, and use that intermediate key to sign the dynamically created certificate.
The sidecar uses the existing variables that specify the service ports to run a reverse proxy that sends traffic to the application process inside the container.
The containerpilot configuration also switches its behaviour to register the service using the new TLS port instead of the non-TLS port it was using before.
Overall, we got our first service up in production in a week running TLS without the application team doing anything, other than setting a single variable in their configuration file.
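The startup-time certificate flow can be sketched in Python. This is a minimal illustration, not our actual startup scripts; it uses the third-party cryptography package, and the hostnames are hypothetical. Here a stand-in intermediate CA is generated locally, whereas in our setup the intermediate CA certificate and key come from Vault:

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa


def new_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)


def issue(subject_cn, key, signer_cert=None, signer_key=None, is_ca=False):
    """Issue a certificate; self-signed when no signer is given."""
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)])
    issuer = signer_cert.subject if signer_cert is not None else subject
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=30))
        .add_extension(x509.BasicConstraints(ca=is_ca, path_length=None), critical=True)
        .sign(signer_key if signer_key is not None else key, hashes.SHA256())
    )


# Stand-in for the intermediate CA; in our setup Vault injects this pair.
ca_key = new_key()
ca_cert = issue("cluster-intermediate-ca", ca_key, is_ca=True)

# Minted during the container startup sequence.
svc_key = new_key()
svc_cert = issue("my-service.internal", svc_key, signer_cert=ca_cert, signer_key=ca_key)
```

The service certificate, chained with the intermediate, then becomes the sidecar's TLS server certificate.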
vrenjith · 7 years ago
Etiquette in Channels
These are a set of guidelines that help everyone play nice and fair while communicating, especially on mediums like Slack, Teams or Mattermost, be it at your workplace or in other online communities.
Remember, while we are communicating over these mediums, the people reading our messages can neither see our body language nor hear our voice. This requires us to put in extra effort while writing replies or initiating new conversations.
Direct vs Indirect Messages
Always try to avoid overly direct language. Readers might find directness rude and possibly offensive. By adjusting your tone, you are more likely to get a positive response from your reader.
Consider these:
Direct – You are doing it wrong.
Indirect and polite – The way it is being done might not be the right way. Do you mind checking it once?
Direct – You are using a non-standard practice. You need to follow the steps as documented.
Indirect – I’m afraid that what is being followed might not be the standard practice. It will be beneficial to follow the steps as documented.
Direct – It’s a bad idea.
Indirect – To be honest, I’m not sure if that would be a good idea.
We instead of You
Try to avoid the usage of “you” and substitute that with “we”. It can create wonders. The moment we use “we”, it becomes an inclusive conversation and the team feeling is higher and collaboration increases.
Consider these:
You – You need to look into the logs and figure out.
We – We need to look into the logs and figure out.
You – You have to request it from another team. Raise a JIRA ticket.
We – We have to request it from another team. Please raise a JIRA ticket.
And instead of But
Avoid the usage of “but” and substitute that with “and”. Try it out and see how the negative connotation of “but” just disappears.
Consider this:
But – The logs are placed in a directory but you need to browse to the path to get the details.
And – The logs are placed in a directory and we need to browse to the path to get the details.
Complete and Simple
Write complete and simple sentences.
Do not pollute the conversation thread with one-liners sent one after the other.
It is not ideal for the reader to get ten notifications on their phone, one for every line you type and send.
Instead, send a single message with multiple lines and complete sentences.
Basic English Rules
Be very careful of capital letters, punctuation, spelling and basic grammar.
Nothing is more unpleasant than a reply that the reader has to read multiple times to figure out where each sentence starts and ends, on top of its grammatical and spelling mistakes.
vrenjith · 8 years ago
Story Points :facepalm:
Why is it that many leaders still don't get story-point-based estimation?
An attempt to explain the basics.
What Are Story Points?
A story point is a relative value for the size of a story in comparison with another story. A team can keep multiple such base stories to ease estimation, since every team member might not be aware of every type of work. The absolute values we assign are unimportant; what matters is only the relative values.
What Values To Use For Story Points?
In practice, we can use a modified Fibonacci sequence, which takes these values: 0, 1, 2, 3, 5, 8, 13.
What If The Story Point is Higher Than 13?
That indicates that the story is actually too big to be handled as a single item. Discuss and work with the product owner and split into multiple stories as functional slices. Do not split into pieces like "Write Code" and "Testing" as two different stories. That introduces waterfall into Scrum.
Who Does Story Point Estimates?
It's the team. There are multiple ways to do this when there are many scrum teams. The core idea is that a story point is a relative value, so to stay uniform it needs to be estimated in a consistent way across sprints. Two common approaches:
Team representatives sit together and do these estimates during the grooming meeting for the top 10-15 items across the 3-4 scrum teams (use planning poker).
Delegate this to an individual who is well aware of the complexities and technical details of the project.
What's the Use of Story Point Estimates?
This is to achieve what is known as the burn-down of Epics and Stories (this is very different from the Sprint burn-down, which is based on effort estimates of tasks, in hours). This is critical for Release planning, which in turn helps to make customer commitments.
Let's take an example.
An Epic has a total of 100 story points. (Remember, this value of 100 itself changes over time as the team starts the implementation and more stories surface.)
The team has burned an average of 10 story points per sprint over the last three sprints (it's always a rolling average of the last three sprints). This is also called the velocity of the team.
Let's assume that out of 100 story points, 40 is already completed.
So to burn the remaining 60, it will take 60/10 = 6 more sprints to complete the Epic.
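The arithmetic above fits in a few lines (a toy sketch; the per-sprint numbers are illustrative):

```python
import math

def sprints_remaining(total_points, completed_points, last_three_velocities):
    # Velocity is the rolling average of the last three sprints.
    velocity = sum(last_three_velocities) / len(last_three_velocities)
    return math.ceil((total_points - completed_points) / velocity)

# Epic of 100 points, 40 already done, roughly 10 points burned per sprint:
sprints_remaining(100, 40, [9, 10, 11])  # -> 6
```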
vrenjith · 8 years ago
The so called Hi messages
Many times we get greeted by just a Hi or Hey message from our colleagues and friends in our various messaging systems, be it Slack, Mattermost, Teams, Hangouts or whatever messaging system you use. This is one of the most irritating (sorry to use that word; I couldn't find a subtler one) messages that you can ever receive.
Messaging systems are meant for asynchronous messages. Expecting that the person at the other end of the conversation is always available to answer the hi/hey messages is too much of an expectation. Remember messaging someone is NOT like making a phone call. You cannot expect an immediate response there.
Follow these simple etiquettes next time you message your friend or colleague.
Write the complete content that you are planning to ask the person in a single message (as much as possible)
Maybe you can start with a Hi, but make sure it is a multi-line message with the actual content. There is nothing more irritating than receiving ten notifications on a mobile or laptop when someone starts messaging you.
If you do not get a response, do not start nudging them immediately. There are many reasons why someone decides not to respond; they might be busy; they might have to check something before they respond; they might be away from their mobile/laptop; they might be on vacation.
Happy messaging.
vrenjith · 8 years ago
Testing Thoughts
I am not the original author of this post’s contents.
I recently found this email reply from John Mitchell while we were discussing the testing aspects of micro services in our project. It is so informative that I didn't want it to get lost in email. Adding it here.
(Names have been changed to keep the focus on the topic under discussion)
Hi PersonA,
I think our discussions about the Quality Criteria were pretty clear…
And to be even more precise, they must be able to be automatically run as part of the appropriate phases in the pipeline.
APIs
Message / file formats, etc.
UI’s which they provide
Telemetry which they generate for business/application/operation/etc. monitoring/alerting/tracking/etc.
Martin’s slide that you pointed to is no excuse to get out of any of the above. He’s pointing out, as I did, that false metrics such as “code coverage” percentages are misleading and inefficient ways to measure the actual quality of the software (as opposed to trying to force the team to spend time creating “tests”).
Again, as we discussed in our last meeting, if you achieve anything even remotely close to my list above (and on the quality criteria page) you will have created software that is quite high quality.
As I noted, people must stop over-indexing on this so-called end-to-end testing to the exclusion of the rest of the criteria. That’s what’s led to the current mess that we have with the AppA bundle today. It’s precisely the fact that each of those various angles/levels of testing is done (automatedly) that provides what we have talked about: being able to bracket problems for much faster triage and debugging, achieving testability for multiple uses, and providing reality-based assurances to InfoSec, the app teams, management, etc. that our stuff really does work.
Thanks,
John
vrenjith · 9 years ago
Mac OS X - Useful Utilities
Day-O
Shows a calendar along with the time in the menu bar.
Download
BetterSnapTool
For those Windows users who miss the maximize and automatic snapping of windows. It's worth its price (Rs. 110).
Download
iTerm
A much better terminal than the built-in one that comes with the Mac.
Download
ClipMenu
Multiple clipboards to save you from losing your work.
Download
Skitch
A much better screen-capture tool.
Download
vrenjith · 9 years ago
Scrum - Collection
(Embedded YouTube videos on Scrum.)
vrenjith · 9 years ago
Time Estimates and Story Points
I would like to share an excellent write-up by Shawn Clowes, the JIRA Agile Product Manager at Atlassian. This was written a while ago, but the depth and detail in this reply from Shawn to one of the questions on the Answers site still amazes us. This was his explanation of time estimates and story points in the agile context.
Here it goes:
I'd like to provide a full explanation of why we've offered 'Original Time Estimate' as an 'Estimate' value and not 'Remaining Estimate'. Some of my discussion refers to agile concepts that anyone reading probably knows well, but I've included it because the context is important. Note that the discussion refers to the best practices we've implemented as the main path in GreenHopper; you can choose not to use this approach if you feel it's really not suitable.
Estimation is separate from Tracking
In Scrum there is a distinction between estimation and tracking. Estimation is typically performed against Primary Backlog Items (PBIs, usually stories) and is used to work out how long portions of the backlog might take to be delivered. Tracking refers to monitoring the progress of the sprint to be sure it will deliver all of the stories that were included. Tracking is often performed by breaking down stories in to tasks and applying hour estimates to them during the planning meeting then monitoring the remaining time in a burndown during the sprint.
Estimation is all about Velocity
The primary purpose of applying estimates to the PBIs is to use that information to work out how long it will take to deliver portions of the backlog.
In traditional development environments teams would estimate items in 'man hours' and these would be assumed to be accurate. They could then count up the hours in the backlog for a project, divide by the number of people on the team and hours in the week to reach a forecast date. Of course, these estimates often proved to be wildly inaccurate because they did not take into account the natural estimation characteristics of the team (over/under estimation), unexpected interruptions or the development of team performance over time. The inaccuracy of the estimates, combined with the significant cost of the time spent trying to 'force' them to be accurate, makes the 'man hours' approach difficult if not impossible to make work.
So in the Scrum world most teams do not try to achieve estimation accuracy, instead they aim to achieve a reliable velocity. The velocity is a measure of the number of estimation units that a team tends to complete from sprint to sprint. After their first few sprints most teams will achieve a reasonably consistent velocity. Armed with velocity and estimates on the PBIs in the backlog teams can look forward to predict how long portions of the backlog will take to complete.
The key is that it does not matter what the estimation unit is, just that from sprint to sprint it becomes reasonably predictable. For example, teams can choose to use 'ideal hour' estimates, but it's neither necessary nor expected that those hours will have any close relationship to elapsed time. If a team has a 'man hour' capacity of 120h in each sprint but a velocity of 60h, that makes no difference, because the 60h velocity can still be used to estimate the number of sprints that portions of the backlog will take to complete and therefore the elapsed time. Many people then start wondering where 'the other 60 hours' went, implying that there is something wrong with team productivity. But that usually has nothing to do with it; a team's estimates merely represent their view of how hard items will be, and they're always polluted by the team's natural behaviour (for example over/under estimation) as well as organisational overhead etc. The velocity is all that matters from a planning perspective.
Since the units are not related to time, most teams now choose to use story points (an arbitrary number that measures the complexity of one story relative to others) as their estimation unit. Story points clearly break the mental link with time.
Inaccurate Estimates are good, as long as they are equally Inaccurate
Velocity will only reach a stable state as long as the team estimates each backlog item with the same level of accuracy. In fact, it's probably better to say that each item should be estimated to exactly the same level of inaccuracy. At the risk of repeating the obvious, the goal of velocity is to be able to look at a backlog of not particularly well understood stories and understand how many sprints it will take to complete. This requires a similar level of uncertainty for all of the estimates that are in the backlog.
One of the counter intuitive implications is that teams should estimate each item once and not change that estimate even if they discover new information about the item that makes them feel their original estimate was wrong. If the team were to go ahead and update estimates this 'discovery of new information' will happen regularly and result in a backlog that has some items that have higher accuracy but most that don't. This would pollute velocity because sprints with a larger percentage of high accuracy estimates will complete a different number of units compared to those with a lower percentage of high accuracy estimates. As a result the velocity could not be used for its primary purpose, estimating the number of sprints it will take for a set of not well understood stories in the backlog to be completed. Therefore it's critical to use the first estimates so that the team's velocity realistically represents their ability to complete a certain number of units of not well understood work far ahead in to the future.
But what about when teams realise they've gotten it wrong?
Consider the following scenario:
Issue X has an original estimate of 5 days.
The estimate was too optimistic, and before the next sprint is planned the team realises it's actually 15 days.
Some people would argue that using the original estimate will endanger the sprint's success because the team will take what they think is 5 days of work into the next sprint when it's actually 15.
However, the inaccurate estimate of 5 days is unlikely to be an isolated occurrence; in fact the estimates are always going to be wrong (some very little, some wildly so). Often this will be discovered after the sprint has started rather than before. As long as the team estimates the same way across the whole backlog, this will work itself out over time. For example, if they always underestimate, they may find that for a 10 day sprint with 4 team members they can only really commit to 20 days of their estimation unit. If they have established a stable velocity this has no effect, because from a planning perspective we can still estimate how much work we'll get done in upcoming sprints with good certainty.
But doesn't that break the Sprint Commitment?
When it comes time to start a sprint, the team can use the velocity as an indication of items from the backlog they can realistically commit to completing based on the amount they have successfully completed in the past. However, many people immediately question how that can be right when the original estimates will not include information about work that might have already been done or discovered information about how hard the work is.
As an example, consider the following scenario:
Issue X has an original estimate of 10 days.
The team works 5 days on the issue in the current sprint.
The team discovers a bad bug somewhere else in the project and decide that fixing that bug in the current sprint is far more important than completing issue X as planned.
The sprint gets finished and the issue returns to the backlog.
In the next sprint the team would be tempted to update the estimate for the issue to 5 days and use that to decide whether to include it in the sprint. The implication is that they might not include enough work in the next sprint if they used its original estimate of 10d. However, the reason the task was not completed previously is unplanned work, and it's unrealistic to assume that won't happen again in the future, perhaps even in the next sprint; thus 10d is a realistic number to use in the absence of certainty. As a result, the cost of unplanned work that may happen is eventually accounted for in the original estimate. Even if the work does turn out to be insufficient for the next sprint, the team will correct that by dragging more work into the sprint.
In the same example, consider if this were the only issue in that sprint and will be the only issue in the next. If the issue is completed in the second sprint and we use the remaining estimate the velocity will be (0d + 5d) / 2 = 2.5d, but the team can clearly complete more work than that in future sprints. If we use the original estimates the velocity will be (0d + 10d) / 2 = 5d. The use of the original estimate accounts for the fact that the team cannot commit to 10d in every sprint because unplanned work will likely make that impossible, it also realistically accounts for the fact that unplanned work will not happen in every sprint.
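Shawn's two velocity calculations are easy to replay (a toy sketch; the values are the day-unit figures from his example):

```python
def velocity(completed_per_sprint):
    # Average estimation units completed per sprint.
    return sum(completed_per_sprint) / len(completed_per_sprint)

# Sprint 1: the issue spills over, nothing completes. Sprint 2: it finishes.
velocity([0, 5])   # remaining-estimate accounting -> 2.5
velocity([0, 10])  # original-estimate accounting -> 5.0
```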
Why not estimate on sub-tasks and roll that up for Velocity and Commitment?
Many teams break down stories in to sub tasks shortly before the sprint begins so they can use the stories for tracking. This raises the possibility of using the sum of the estimates on the sub-tasks as a way to decide which issues to commit to in the sprint (and potentially for velocity).
As described above, tracking is really a separate process from estimation and velocity. The estimates that are applied to the sub-tasks are clearly higher accuracy than those that were originally applied to the story. Using them for velocity would cause the velocity to mix high and low accuracy estimates, making it unusable for looking further out in the backlog where stories have only low accuracy estimates. In addition, only items near the top of the backlog are likely to have been broken into tasks, so using task estimates for velocity means that the velocity value could only ever predict the time to complete the backlog up to the last story that has been broken into tasks.
Using the sub-task rollup to decide the sprint commitment would also be dangerous because, unlike the velocity value, it does not take into account the overhead of unplanned work or interruptions.
Conclusion
Many industry leaders are moving away from hour estimates of any sort. This makes sense because the main questions to be answered are 'How much work can we realistically commit to completing this sprint?' and 'How long will this part of the backlog take to deliver?'. A story point approach based on original estimates can deliver the answers to these questions without the anxiety around 'accuracy' that teams feel when asked to estimate in hours.
The GreenHopper team itself uses the approach described in this article and has established a reliable velocity that we have used to plan months in advance, even when new work has been encountered during those months.
We recommend this approach because while it is sometimes counter-intuitive it is also powerful, fast and simple.
All of that said one of the key precepts of Agile is finding the way that works for you. So GreenHopper does support the alternatives described above including the use of remaining estimates for sprint commitment, hours for estimation and hour estimates on sub-tasks.
vrenjith · 10 years ago
How To Become A Famous Blogger
vrenjith · 11 years ago
Using GCViewer For GC Analysis
GCViewer, developed by Tagtraum Industries, is one of the free tools you can use to visually analyse garbage collection logs.
Running GCViewer
The latest version is now maintained by chewiebug and available for download at http://sourceforge.net/projects/gcviewer/files/
Download the gcviewer-x.xx.jar to your local system
Launch the viewer by running: java -jar path_to_gcviewer-x.xx.jar (you need Java version >= 1.7)
Using GCViewer
Open the GC log file from the test run using GCViewer
Adjust the zoom levels (dropdown at the top) so that the graph fits the window and there are no scroll bars (to get an overview)
Check the trend of the 'Total Heap' usage of the VM. As long as it does not show an upward trend, the VM is considered fine.
Check the right side of the tool for the information related to the run - Summary, Memory and Pause.
Summary
The 'Number of full gc pauses' is of concern; a healthy VM should generally not be doing full GCs (ideally this should be zero). Full GCs result in the VM being inactive till the GC completes.
Memory
The 'Total heap' gives an indication on how much the VM memory is loaded.
Pause
The total time spent on 'gc pauses' is of interest.
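For the same numbers without the GUI, a rough script can scrape pause times out of a GC log. This is only a sketch: it assumes the classic pre-Java-9 HotSpot log format, and the sample lines are made up; real logs vary with JVM version and flags:

```python
import re

# Matches the trailing ", 0.0016 secs]" of classic HotSpot GC log entries.
PAUSE_RE = re.compile(r", ([0-9.]+) secs\]")

def summarize(lines):
    """Total pause time and full-GC count from GC log lines."""
    pauses = [float(m.group(1)) for line in lines for m in PAUSE_RE.finditer(line)]
    full_gcs = sum(1 for line in lines if "Full GC" in line)
    return {"total_pause_secs": round(sum(pauses), 4), "full_gc_count": full_gcs}

sample = [
    "0.273: [GC (Allocation Failure) 65536K->1180K(251392K), 0.0016 secs]",
    "1.342: [Full GC (System.gc()) 1180K->1056K(251392K), 0.0123 secs]",
]
summarize(sample)  # -> {'total_pause_secs': 0.0139, 'full_gc_count': 1}
```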
vrenjith · 11 years ago
How Popchrom And Lazarus Save My Day
What is Popchrom and Where To Get It
Save time and effort with Popchrom by creating your own shortcuts for text phrases! Whether it's a simple email signature or…
To transform your abbreviation, place the cursor on it and hit your personal shortcut (the default is 'ctrl + space').
Get Popchrom
What is Lazarus and Where To Get It
Lazarus Form Recovery is a free browser add-on for Firefox, Chrome and Safari that autosaves everything you type into any given web-form. We've all had the frustrating experience of spending ages getting a form entry just right, only to suffer rage and disgust when all that hard work is destroyed, whether it's a website timeout, a browser crash, a network failure, or just a cat wandering across the keyboard. Lazarus makes recovering from such mishaps painless in most cases, and we're working on the rest of them!
Get Lazarus