brentpabst
Brent Pabst
151 posts
Transformational Architect and Strategic Technologist
brentpabst · 9 years ago
Text
Got a #ContinuousX environment running on my local machine for NCDevCon... find out how!
Continuous X in a pinch
I had the great opportunity to speak at NCDevCon recently about how we implemented Continuous Delivery at Dude Solutions.  Last year, we published a post about our process and toolset.  Recently, we've made a lot of progress at the Dude in improving our #ContinuousX process; we've even added a bunch of DevOps engineers (#JoinTheDude).  I had a bit of a different challenge for NCDevCon, though: how do you take an entire continuous integration and delivery system and demo it to a room full of people from a single laptop?
We touted the fact that we separated the build and deployment roles in our tools, which is great for our team but not so great for a single-laptop demo environment.  Imagine spinning up eight or more virtual machines just to get the job done!  Needless to say, I wanted to take a stab at making it as simple as possible.  I'll try to run you through what I did…
VMs, Vagrant, Docker, Cloud Foundry, oh my!
I had fiddled around with Vagrant a bit during one of our hackathons to stand up an Active Directory Windows server.  It worked well at the time.  However, here at the Dude we're having serious discussions around using Docker and Cloud Foundry, so I wanted to get as much of the environment running in Docker as possible.  Cloud Foundry was doable but can be a pain to run locally, and Docker images for our tools are mostly readily available.
First I needed to find some good Docker images to use for the project.  To recap, the tools we use internally are:
Atlassian’s Bitbucket Server
Jetbrains’ Teamcity Server
Octopus Deploy
With TeamCity and Octopus I also fully expected to need additional servers to handle builds as well as deployment targets.  Here's where the first hurdle appeared.  Bitbucket and TeamCity are both Java/Tomcat-based apps with vendor-published Docker images.  Octopus is currently a Windows-only tool.  The deployment targets (tentacles) are capable of running Windows or Linux, but the core server needs Windows.  I also discovered that a core TeamCity plugin we use to handle communications with Octopus also only ran on a Windows box.
Alright, so I couldn’t just get by with Docker Linux images.  I checked out some of Microsoft’s Server 2016 documentation on using Docker on Windows but honestly did not have the time to try it out.  I ended up deciding to use Docker on OS X for all Linux apps and Parallels Desktop to host my Windows VMs.
Build All The Thingz
To start, I grabbed the latest install of Docker for OS X.  The install was straightforward and the CLI tools were added to my local PATH.  A simple check of the Docker version from the terminal and things were all set.
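For reference, that check is just the standard Docker CLI version command (the output will vary with whatever release you installed):

docker --version   # prints the installed client version
docker info        # optionally, confirms the daemon is up and reachable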
Setting up Parallels was even more straightforward.  I grabbed a Windows 2012 R2 ISO from my Dude-provided MSDN subscription and started spinning up VMs.  It's very important to make sure you set reasonable CPU and RAM limits on your VMs; Parallels by default allocates your entire machine to each VM.  I was able to get by with 1 CPU and 1 GB of RAM assigned to each VM.  Bad things happened to my Mac when I didn't do this (see Cmd+Option+Esc).
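If your Parallels edition ships with the prlctl command-line tool, the same limits can be set from the terminal; a quick sketch with a made-up VM name:

prlctl set "Win2012-Build" --cpus 1         # pin the VM to a single CPU
prlctl set "Win2012-Build" --memsize 1024   # cap the VM at 1 GB of RAM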
So I spun up all of the needed machines and ended up with a few.  You could change or scale this out as needed to support whatever test lab you need or, better yet, whatever your machine can run.
Ok, but Docker?
Docker was the easy part.  Each of our tools has a vendor-published image.  This means that whenever Atlassian or JetBrains publishes a new version of the product, a new Docker image shows up around the same time.  They even provide you with the basic commands to get started.  I ran into a problem with networking at first, though.  Docker attaches all of your containers to a single default virtual network (the default bridge, docker0 on the host), and on that network containers cannot discover each other by name.  This doesn't work well in a CI/CD environment where every tool is talking to another.
I created a new bridge network that allowed communication between the instances.  Moreover, since Parallels also creates its own subnet, the Parallels machines could talk to the Docker containers and vice versa.  Pretty sweet!  Setting up the bridge network was as easy as running this command:
docker network create --driver bridge ncdevcon
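To confirm the network exists and, later, see which containers are attached to it, the standard inspection commands work (shown here purely for illustration):

docker network ls                 # the new ncdevcon network shows up alongside bridge, host, and none
docker network inspect ncdevcon   # shows the subnet and any containers joined to the network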
Then I started running docker run commands to begin standing up containers.  Both Atlassian and JetBrains recommend mounting volumes from the host for persistent data such as configuration files and logs.
Bitbucket
docker run --name=bitbucket \
  -itd \
  --network=ncdevcon \
  -v /<your datadir>:/var/atlassian/application-data/bitbucket \
  -p 8080:7990 \
  -p 8081:7999 \
  atlassian/bitbucket-server
TeamCity
docker run --name=teamcity \
  -itd \
  --network=ncdevcon \
  -v /<data dir>:/data/teamcity_server/datadir \
  -v /<log dir>:/opt/teamcity/logs \
  -p 8085:8111 \
  jetbrains/teamcity-server
TeamCity Linux Agent
docker run --name=teamcity-agent-2 \
  -itd \
  --network=ncdevcon \
  -v /<data dir>:/data/teamcity_agent/conf \
  -e SERVER_URL="http://teamcity:8111" \
  -e AGENT_NAME="NCDevCon-Agent02" \
  brentpabst/teamcity-agent
Wait, that last Docker Image is from you!
Yep, it is!  My sample application for the demo was a .NET Core web app (grab the source here).  We're also starting to toy around with moving to .NET Core, so I wanted to base the sample app on it.  I ran into an issue, however: the Docker image JetBrains currently uses for the TeamCity agent is based on Ubuntu 15.x, and .NET Core currently only supports Ubuntu 14.x or 16.x.  I spun up the JetBrains image first and then ran a distro upgrade on the container to get it to Ubuntu 16.  Once I did that and installed .NET Core on the container, I packaged the image and shipped it to Docker Hub for others to use.  If you don't need .NET Core on your agent, just grab jetbrains/teamcity-agent!
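The "package and ship" part was the usual commit-and-push flow; roughly like the following, with a made-up container name (the image was built interactively rather than from a Dockerfile):

docker commit upgraded-agent brentpabst/teamcity-agent   # snapshot the upgraded container as a new image
docker push brentpabst/teamcity-agent                    # publish the image to Docker Hub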
I should also point out the references to other containers in some of the docker run commands, specifically the SERVER_URL parameter passed to the TeamCity agent.  Docker is able to resolve other containers in the same network by name; essentially, the DNS server exposed to the containers resolves any other container on the same network, hence the custom bridge network I created first.  Note, however, that the port number is the port the service listens on inside the container, not the port that is exposed back to me, the user, at the OS X layer.  A little meta for sure!
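If you want to see that name resolution in action, you can query it from inside one of the containers; a quick check, assuming the Ubuntu-based agent image has getent available:

docker exec teamcity-agent-2 getent hosts teamcity   # prints the teamcity container's address on the ncdevcon network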
The End Result
Between Parallels and Docker, I was able to get enough containers/VMs running to give the demo.  There is always a fair amount of work that goes into configuring each tool, let alone getting the tools to talk to each other.  The cool thing with the Docker containers and the mounted folders is that the configuration can be copied and cloned to other containers or saved for later, regardless of how many times you rebuild the containers.  Parallels allows for some snapshotting but it’s not perfect.  Hopefully Octopus will head down the .NET Core path and have their own Docker image in the future so a few more steps could be removed from this process.
Missed the Talk?
Check the NCDevCon.com site for updates and eventually the video recording of the session.
You can also check out the sample app and grab the slide deck from https://github.com/brentpabst/ncdevcon16
brentpabst · 9 years ago
Text
Raspberry Pi Weather Camera – Part 1
I’m a tinkerer.  I like to try new things and learn how they work.  I’m also one of those weird people who have an obsession with the weather.  So weird in fact that for the past year or so I’ve had a personal weather station reporting data around our home (view it on WeatherUnderground.com).
I’ve placed an AcuRite 5-in-1 Pro Weather Station on a 10-foot stainless steel pipe in our backyard.  This station includes a display unit we keep in the house.  The display outputs data via USB to any compatible PC.  The PC can then upload the data streams to different sources.  Back in April I purchased a Raspberry Pi 3 to obtain more fine grained control over the data streams.  Luckily, the folks over at WeeWx have built an awesome little Python application that can interface with various AcuRite stations.  Instead of a dedicated PC running 24/7 to upload weather data the Raspberry Pi now handles this small workload.
The weather station works great, but I want to add more.  I've recently seen weather cameras like BloomSky pop up.  Essentially, these cameras take high-definition still images and upload them where they can be viewed in a mobile app.  BloomSky takes that a bit further by compiling each day's images into a short time-lapse video clip.  Moreover, weather sites like Weather Underground also accept still images to associate with each personal weather station.  BloomSky charges a pretty hefty fee for their device, and there are little to no integration capabilities currently available for it.
Enter Part 1
I figured this would be a fun project to learn some new technologies.  This series of posts will cover building both the hardware and the software needed to take weather photos.  So, here’s the plan:
Plan – Figure out what needs to be built, then design it
Buy stuff – Amazon to the rescue
Assemble – Build the hardware
Test – Make sure the hardware works
Code – Build the code needed
Pretty basic, and these five steps will be the basis for the next few posts.
The Plan…
So my initial thoughts have the design looking something like the image below.  Essentially, the Pi will run off of battery power and recharge from a solar panel.  A small app on the Pi will wake up every few minutes and capture a still image from the camera module.  It will then upload that image to an AWS S3 bucket.  The plan is to then have two Lambda functions or apps running in AWS process the images.  One will run periodically and upload the image to sites like Weather Underground.  The other app will run nightly and build out a time-lapse video.  I'm not really sure what it will do from there yet.
[design diagram]
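The capture-and-upload step itself should be simple; a rough sketch of what the Pi will run on a timer, assuming the stock raspistill tool and the AWS CLI are installed, with a made-up bucket name:

raspistill -w 1920 -h 1080 -o /tmp/sky.jpg   # grab a still from the camera module
aws s3 cp /tmp/sky.jpg s3://weather-camera-uploads/$(date +%Y%m%d-%H%M%S).jpg   # push it to S3 for the AWS apps to pick up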
Next Steps!
So based on the design I need to buy some stuff.  While Amazon does their thing, I'll think through the design a little more and check for spare parts I might need.  I'm most worried about ensuring the battery doesn't overheat and that the solar panels have sufficient wattage to recharge the battery.  I'll have to do more research into that.
brentpabst · 9 years ago
Text
Automating API development is hard, but it saves us a ton of time and effort to focus on the important things that matter... our software.
Don't suck at building an API
The engineering squads here at Dude Solutions spend an enormous amount of time building out APIs.  The primary consumers of these APIs are our first-party applications.  However, our platform has grown and external parties are beginning to use those same APIs.  We kept bumping up against the same question as more and more entities needed access to our data and resources:
How do we make our APIs easily consumable?
There are a lot of factors that go into the above question so I’ll break down a few of the major considerations that we took into account.
Stakeholders. The obvious ones are our first-party applications and external partners.  If they cannot get to the data easily, the API provides zero value to them and therefore no business value to the Dude.  Aside from the obvious, there are our internal stakeholders.  Our engineering squads must be able to easily test the APIs.  They are the front line in judging how difficult it is to consume what we build.  The hard part here is that the closer you are to the problem, the harder it is to see the flaws.  Imagine you are in a cold room with a small heater, along with one other person who is much closer to the heat source than you are.  While you are trying to figure out why the heater isn't more efficient, the other person is content because they have better access to the source.  This is often how external parties feel when trying to understand how a platform works.  How do we get them closer to the source?
Contract first.  Our engineering teams write the documentation for the API first as part of the design, then move on to implementation.  We create one definitive source for how our API works (the contract).  This documentation is used not only by internal stakeholders but is also pushed (in the exact same format) to our external clients.  If there is pain in how our API works, we all feel it.  This strategy makes our teams extremely efficient.  Before adopting a contract-first approach, our teams ran into two issues.  First, we would create an initial pass at a design and put it on a wiki page.  The page would sit stale and never get updated, or worse, a different page with the updated content would be created.  The engineers building the UI would look at one page while those implementing the API would look at another.  We also saw cases where a team would keep the design in their heads but never on a written page.  This created tribal knowledge and technical debt because somebody would have to go back and document it, and the hard part there is that you have to reverse engineer the code to understand the reasoning behind certain logic.  These two issues cost our teams a lot of time.  We went from an agile team running in parallel to a waterfall team where steps were executed in series: design, review, API implementation, API testing, small tweaks, UI implementation, UI testing, small tweaks on both sides, done.  It was very tedious and curtailed our velocity.  Moving to a contract-first approach allowed our teams to work in parallel.  The API contract would be created and reviewed by the team, and then all development efforts (implementation and tests) could proceed because everyone knew exactly how the interactions would work.
Cost. Engineering teams are your most expensive asset.  You need them focused on building great software.  More specifically, minimize the distractions and keep the focus on your areas of specialty (great APIs and user experiences).  Features get complex.  Oftentimes our engineering teams build out microservices that are used by our external-facing systems, so we needed a way for our APIs to communicate with each other.  For the first time we faced the challenge of eating our own dog food.  We had to decide: what was more important, building great APIs or building software to interact with our own APIs?  We decided to stick to building great APIs and looked for solutions to auto-generate the code we needed to interact with our API.  If we had not decided on a contract-first approach, this would not have been possible.  Our definitive source of documentation for our API provided the blueprint for code generation.
Putting it all together.  Committing to a contract-first approach to API design is a great idea, but without the right tools and processes in place it is just that, an idea.  We looked at a few different tools for documentation but came across a company called Apiary, pronounced ˈāpēˌerē.  It leverages an open-source specification called API Blueprint.  The specification allows for the documentation of APIs in a format very similar to traditional Markdown.  Apiary provides a set of tools on top of the blueprint to edit and publish your API specification.  You can see a working example on our developer site, here.  Apiary's ability to sync with GitHub means that we can open source our blueprint as well as follow our normal development process (Git Flow) when modifying it.
Generating the specification is only the first step.  We wanted to take away the burden of writing code to interface with our APIs.  We partnered with Apimatic to generate our SDK code; it is well worth the monthly cost considering how often we are adding to our platform.  Apimatic gives us the capability to pass them our API Blueprint, and they handle the heavy lifting of creating code to interface with our APIs.  The best part: they support a variety of languages and frameworks.  This means that it no longer matters what language your external partners are writing their code in, and there is no need for you to have experts in each language to build out an SDK.  Best of all, with a continuous integration system like TeamCity, every change to your API contract is captured and generates a new build of your SDK in any or all of the languages that Apimatic supports.
Dude Solutions uses TeamCity for continuous integration.  Closing the feedback loop is a critical step in our development process.  The age-old philosophy of “It works on my machine” is no longer an acceptable answer.  With feature branch builds and commit watching, it is only seconds before you know if everything is working as expected.  We use the same process when generating our API blueprints and generating our SDK code.  Here is the process (a rough script equivalent follows the list):
An engineer branches from our release branch to make the necessary changes to the blueprint document
The changes are made either in a local editor or in Apiary’s online editor
A pull request in GitHub is created to merge the feature changes into the release branch.
Once merged, TeamCity sees the commit and starts an SDK build
We execute a tool called Dredd in dry-run mode to validate our blueprint
We call Apimatic, handing it the raw GitHub URL to the blueprint file.  They return a download URL to a zip file
We extract the SDK source files from the zip file and compile them.
Then we package the binaries and send them to our NuGet package manager.
The SDK is now ready for consumption by our teams and external partners.
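Condensed into a shell sketch, steps 5 through 8 look roughly like the following; the file names, download URL variable, and NuGet feed are made up, and the real pipeline runs these as individual TeamCity build steps:

dredd apiary.apib http://localhost --dry-run   # validate the blueprint without running live transactions
curl -L -o sdk.zip "$APIMATIC_DOWNLOAD_URL"    # fetch the generated SDK from the URL Apimatic returned
unzip sdk.zip -d sdk                           # extract the generated source
# compile the extracted project here (tooling depends on the target language, e.g. msbuild for a C# SDK)
nuget pack sdk/DudeSolutions.Sdk.nuspec        # package the compiled binaries
nuget push DudeSolutions.Sdk.1.0.0.nupkg -Source https://nuget.example.com/feed   # publish to the internal feed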
The engineering teams at the Dude were battling inefficiencies.  Implementing a contract-first approach to API design allowed our teams to start API implementation, UI implementation, and automation testing all at the same time, decreasing the amount of time spent revisiting the design.  Tools like Apiary, Apimatic, and TeamCity made this possible by reducing our overall development cost and decreasing our time to market for complex features.
Related Content:
Automate all the things: How Dude Solutions saved $20,000 in Development costs using APIMATIC
brentpabst · 9 years ago
Text
Challenging Assumptions
Organizational transformation is often a significant challenge for a variety of reasons across all levels of the organization.  Transformation is stressful, tough, and, quite honestly, can really suck.  The stress of transformation affects all levels of the organization, albeit with different focuses and types of concerns.  From an architecture perspective, change is full of potential and opportunity.  I prize change!  As an architect, when change is suggested or needed, it provides numerous architectural opportunities to identify, find, and build new solutions that accommodate, support, and challenge current assumptions.  Nevertheless, what about the things that people do not think need to change?
I have been lucky to participate in a few large organizational transformations.  The most recent, which has been in process for about three years now, represents the ultimate in “green field” transformation.  Nothing is off limits.  We have evaluated major changes to software, people, process, and, most recently, hardware.  The largest hurdle is simply challenging current assumptions.  Just because things work or behave the way they do today does not mean they cannot be reviewed, challenged, or changed.  Difficult conversations?  Sure, but ultimately they are the right conversations to have.
There are numerous situations, I have found, where processes or systems are in place because of some obscure business or personal requirement that existed five to ten years ago but has never been reviewed.  The best part of these kinds of scenarios is finding out the process is never used or was never even needed!  These optimizations often lead to reduced complexity, less technical debt, and reclaimed time through the elimination of processes, an easy win architecturally and for the organization.
Many architects, whether they realize it or not, like to build up the perfect system: the right people, the right processes, and the right technology.  The thing they easily lose sight of is that the “right” thing changes too!  As the business transforms, what was contextually right last week may no longer be right.  Look at large United States metropolitan areas like New York City.  The city planner or architect who originally laid out the roadways for the island of Manhattan probably thought they were pristine, perfect, and easy to remember.  Fast-forward to the current day, and the roads now scare some of the most proficient drivers due to weird or poorly marked interchanges and intersections.  Growth physically changed the contextual meaning of right.  Just as with cities like New York, organizations are constantly growing and changing as well.
The Dude
My current organization has been providing SaaS solutions to our clients since before “the cloud” was an architectural design pattern.  Our current production systems are measured in thousands of servers.  Logins hit a decent-sized pool of servers, and I challenged the notion that this was appropriate two years ago.  The folks over at StackExchange are, in my opinion, doing a great job of telling folks you do not need large infrastructure to support large workloads.  Instead, you need smart infrastructure and quality software.  Last week, for the first time, we measured our new login service handling upwards of 200k requests across two rather small servers, without blinking.  That was a good win for the organization.  We did the “right thing” for today and probably the next few years.  I'm already working with folks on the next version of the service.
I argue that all organizations are constantly in a state of transformation.  After all, systems continuously evolve and change over time.  The rate of transformation may differ significantly, thereby affecting the level of change the organization perceives.  Regardless, do not allow yourself to pass over components or parts of your organization simply because “it's always been done that way” or “if it ain't broke, don't fix it.”  Evaluate the current state and ensure it aligns with the desired future state; if not, plan the realignment or other transition states.  Most importantly, do not make change for the sake of change; make it for the betterment of the organization and the people who keep it alive.
brentpabst · 10 years ago
Text
Some great information on why we’re holding Tech Day at Dude Solutions...
Announcing Dude Solutions Tech Day 2016
Intro
Just over 18 months ago, five Dudes charted a course into the unknown.  We were given the unique opportunity to operate as a startup, but without the normal startup financial issues (AKA zero money).  Today, we announce a conference that chronicles the first chapter of our journey.  Dude Solutions Tech Day serves as an open retrospective of that first chapter with our local community.  Join us on February 20, 2016 to gain insights from our journey.
Agenda
Software-powered startups generally have to build three key things from scratch to be successful:
Team
Product
Process
Under normal circumstances, startups are forced to do that with minimal financial backing.  At The Dude, we were lucky enough to have the opportunity to build all three of those from scratch, supported by healthy financial backing.  Delivering across each of those areas required dedicated focus and energy over the past 18 months, so each area gets dedicated focus in our conference.  Within each area, multiple talks will be delivered to provide insights into our wins, losses, and learnings.
In true agile fashion, we continuously iterate on the details of each session as we get closer and closer to shipping our conference.  Check out our agenda and expect more details to come.
Who can attend?
Anyone!  While a large portion of the content is software-focused, we believe our unique approach can be applied to any group of people trying to “get stuff done”.  Two thirds of our content focuses on team and process, further proving that there is value for any team there.
Sounds great! How do I sign up?!?!
We only have 70 total seats, so make sure you don’t wait around too long before signing up!!
Why do this?
We’ve all heard the saying that “it takes a village to raise a child”. When you turn that into an analogy about software, our products/projects are the children, and the whole world is the village. The info we’ll share in our sessions are our attempt to throw our ideas, theories, mistakes, and whatever else we can think of back into the village. We hope it helps, even if it is just a little bit…
The other motive is content marketing.  You probably weren’t expecting that honest of an answer, were you? We are doing some really great stuff here at The Dude, and we need amazing engineers to help us do that. Hopefully some smart folks out there are searching for a new way to build their widgets and find our learnings helpful. After sitting in on a couple more sessions, they realize that we may be a great place for them to come work. Voila. Content marketing FTW!
Bonus Content!!!
[YouTube video]
brentpabst · 10 years ago
Text
An article I wrote about how Dude Solutions handles continuous delivery.
Delivering the Dude
Agile Continuous Delivery
The Dude Circa 2013
Let’s go back to a few years ago when Dude Solutions was preparing to expand and grow thanks to significant financing from Warburg Pincus.  Our development workflow was pretty common.  We had a version control system (VCS), a task management tool, and a pseudo-automated deployment mechanism.  Most of the tools were based on Microsoft’s Team Foundation Server (TFS).  We used the built-in functionality of TFS to handle source control and task management.  At the time, the Dude ran development projects using a traditional waterfall SDLC as it supported the way we funded development projects.  In addition to TFS, we used a handful of homegrown tools and utilities to assist in automatic deployment of code from TFS to various development environments.  We also ran CruiseControl.NET to deploy some of our .NET applications and services.
Introduction of Continuous X
At this point, you may be thinking to yourself, good lord, that must be horrible to work with.  The reality is yes, it was a hodgepodge environment, but it worked for us.  At the time, it did a superb job of supporting the needs of the development team and the company.  There is always room for improvement, however, and that time quickly caught up with us.  Our process looked something like the diagram below.
[diagram]
I had already started refreshing myself on continuous integration and, more recently, continuous delivery.  The cool thing is that the concept of continuous deployment had really started to catch on at the end of 2013.  I credit Amazon and Microsoft with a lot of this push, given their Platform as a Service (PaaS) offerings in AWS and Azure.  Their online tools allow you, with the click of a few buttons, to roll out new versions of your code.  Many of the continuous integration toolset vendors started to catch on as well.  I built out a proof-of-concept environment in December of 2013 to test a couple of tools and see how they could assist or replace our existing homegrown deployment tools.
DudeStorm = Continuous Delivery
One of the cool things about working on the technology team at Dude Solutions is the periodic meetups we hold with all of our technology teams and tribes (#JoinTheDude).  We hold these meetups using a traditional conference style and call them DudeStorms.  At one of the 2014 DudeStorms I presented to senior technology team members the proof of concept and a rollout plan to replace most of our build and deployment tools.  The result: a green light to roll it out!
This was important for us as a company.  We had just closed new funding, and during the summer of 2014 we began to transition the company to true Agile development and to building our next-generation software platform.  Making all of this successful required brand new people, tools, and processes.
What We Did
We ended up ripping out and replacing our entire set of development tools and processes.  We replaced TFS with the Atlassian suite of tools (Confluence, JIRA, and HipChat), along with Slack.  We also added two new tools: TeamCity from JetBrains and Octopus Deploy from the company of the same name.  One of the interesting decisions we made was to focus TeamCity on just providing build services and to let Octopus focus on deployments and environment management.  There are many opinions on this, but we think that if you really want true separation of concerns and ease of use without a bloated build server environment, these should be two different systems.
We ended up hosting all of our new development environments in AWS, our first time doing anything in AWS.  There are enough lessons learned from that build-out process for a whole slew of blog posts!  In the end, we created a common build and deployment pipeline with various tools injected along the way.  For the first time we had clear separation of concerns in our build process and a clear definition of environments and of release promotion between them.
TeamCity does the heavy lifting for most of our pipeline.  All of our build logic, scripts, and manipulations are triggered from TeamCity.  Our build processes are fairly simple and broken down into small steps.  We use runners for the various code platforms we build on; right now, that consists of .NET and a few Gulp builds.  TeamCity also talks to Octopus to hand off deployment packages and to request automated deployments to the deployment environments.  This is where some cool stuff happens!  The communications look something like this:
[diagram]
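In our setup a TeamCity plugin handles that hand-off, but the equivalent calls with the Octopus command-line tool look roughly like the following; the package, project, server, and API key values are made up:

octo push --package=Dude.Web.1.0.42.nupkg --server=https://octopus.example.com --apiKey=API-XXXXXXXX
octo create-release --project="Dude Web" --deployto=Development --server=https://octopus.example.com --apiKey=API-XXXXXXXX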
After the deployment completes, TeamCity spins up an agent (in AWS) to run automation tests against the deployed environment.  This is great because it lets us test the deployment of the package as part of the build process, so any deployment issue causes the entire pipeline to go red and acts as a sort of Andon cord.  This mechanism also allows us to deploy the package and then point the automation test suite at an actual environment instead of some local web server running on a build agent that will inherently not look like a real deployment environment.
Lessons Learned
Piece by Piece
Setting up a full development pipeline is no small task.  Each tool along the way has its own set of buttons, features, and options, and each requires you to learn how it works.  Setting up a full pipeline means learning not just one system but potentially “n” others.  We had to learn how to use five different tools, and this simply took time; I took about two weeks of dedicated time to learn each tool.  I also started at the front of the pipeline and worked my way down the line.  We were able to get developers moving quickly just by giving them Stash access first, then we moved on to TeamCity.
Sh*t Happens
As with any software integration project, stuff breaks, a lot.  We have five different systems talking to each other, and they do not always like to talk to each other.  Our most infamous event was a misconfiguration in TeamCity that caused the service account it uses to access Stash to be locked out, which requires manual administrator intervention.  Every time we unlocked the account it would just lock again a minute later; that was fun to figure out.
Waterfall your Build and Deployment Configs
This sounds somewhat counterintuitive, but if you do not put upfront thought into how you will structure your build and deployment configurations, you will end up with a bunch of refactoring and breaking changes.  Since we got to start from scratch, it was worth the extra upfront time to plan out how things would work.  I am talking about a whiteboard, maybe one of the software and QA engineers, and an hour or so thinking through various scenarios.  I am confident this saved us a month's worth of rework after we got the pipeline working as desired.
Variables, Variables, Variables & Templates
DO NOT HARDCODE ANYTHING!  Use the variable capabilities provided by the tools to their fullest extent.  Spend the time to create the variables and inject them into your configurations.  You will reap significant time savings down the road.  We are now able to spin up complete configurations in about ten minutes by duplicating existing configurations and updating variables.  Ta-da!
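For a sense of what that looks like, each tool has its own reference syntax; the variable names below are made up, purely for illustration:

# TeamCity build parameters are referenced with %name%, e.g. in a build step's arguments:
#   --configuration %BuildConfiguration% /p:Version=%build.number%
# Octopus variables are substituted with #{name}, e.g. in a config transform:
#   Server=#{DatabaseServer};Database=#{DatabaseName};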
Hire a DevOps Engineer
Depending on your team size, do not underestimate the value of having a dedicated person assigned to manage these systems.  Regardless of whether you call them a DevOps or a Build Engineer, we have had great success with folks who know servers well and can script the heck out of anything.
Extras!
Continuous Delivery is sometimes misunderstood.  Here is a good short video that explains what it really means:
[YouTube video]
brentpabst · 10 years ago
Photo
It’s time to see women of all sizes across all media and culture. It’s time to represent. #plusisequal http://thndr.it/1HXOmrG
brentpabst · 11 years ago
Text
Removing Erroneous Certificate Stores with PowerShell
We're in the process of scripting out the installation of certificates at work, and I made a mistake: I successfully created the certificates, but under a new store that won't help at all.  I removed the certificates using MMC, but that doesn't help in removing the stores that remain.
It turns out you can easily do this in PowerShell; however, I couldn't find a whole lot of links on the Internet about how to do it.  So, this is all it takes (a full example follows the list):
Fire up PowerShell in Administrative Mode
cd cert:\<Root Store> (either CurrentUser or LocalMachine)
Get-ChildItem to see the list of stores
Remove-Item "Name of store you want to remove"
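Put together, a session looks roughly like this; the store name here is made up:

cd cert:\LocalMachine                    # or cert:\CurrentUser
Get-ChildItem                            # list the certificate stores under this root
Remove-Item "MyMistakenStore" -Recurse   # remove the erroneous store and anything left inside it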
It really is that easy.  If the store still has certificates in it, you will be prompted to recurse through each item.  Be careful: if you get the name wrong you could delete one of the real Windows certificate stores, and that would be very bad!
brentpabst · 11 years ago
Photo
Two squads up and running.
brentpabst · 11 years ago
Photo
Guess it's official!
brentpabst · 11 years ago
Video
#JoinTheDude We've got an awesome place to work with some great people... are you our next coworker?
brentpabst · 11 years ago
Photo
Hey Delta... Where's that TV I'm supposed to have and you keep advertising?
brentpabst · 11 years ago
Text
Fluffy Radio
I love music of all types, sizes, and shapes.  Sure, there are some songs that I cannot stand or that burn holes in my ears, but hey, who does not?  Many years ago, I started an Internet radio station with some friends and had a blast.  We streamed music to the world, but primarily around the college campus.  After leaving, I did not have enough time to dedicate to keeping things running.  More important, Congress and the Copyright Royalty Board really suck.
Fast forward a couple of years and I'm about to gain back a little bit of free time thanks to finally wrapping up my school studies.  I need a hobby, and this seems like an easy way to kick that back off.  Better yet, it allows me to try out some cool new technology and mobile development tools.  So with that being said… here comes Fluffy Radio.  Yes, the name is awesome!
The official kick-off will come sometime in early 2015 after I have a chance to get things ready to roll.  If you have music or programming suggestions, feel free to leave comments.
brentpabst · 11 years ago
Text
NancyFX, AngularJS, Active Directory... oh my!
It has been a long time coming, and I finally got a chance today to play with some new tools to help rebuild uManage.  It has needed a refresh for a while and I just have not been able to get around to it.  I spent some time tonight with NancyFX and the self-hosted model, which looks pretty cool.  The guys over at Octopus Deploy have built a great tool on this stack, and I think the same model will work well with uManage.
brentpabst · 11 years ago
Photo
Doesn't get much better, or more German, than this.
brentpabst · 11 years ago
Photo
Pretty sure this is the worst thing you could possibly buy someone.
brentpabst · 11 years ago
Text
Participants Wanted! No Seriously, I Need Your Help.
I've been preparing to run a research study as part of my dissertation for a little over a year now.  I've finally gotten all the necessary approvals and need to find a panel of participants to help conduct the study itself.  There are a few requirements to participate.  If you are interested, feel free to contact me through Facebook, Twitter, or LinkedIn.  You can also leave a comment below.
Requirements:
Must have been involved with the implementation of, or currently work with, assist with, or participate in, ERP (enterprise resource planning) systems for small businesses in the United States
Either an employee of an organization that implemented an ERP, a consultant for ERP implementations, or an employee of an ERP software vendor
If you're interested in participating, or at least in learning more, let me know!