smithatlanta
Mind shaving (and random development thoughts)
26 posts
I help teams migrate their applications to the cloud (not just lift and shift, but actually re-architecting them to use cloud products appropriately)
smithatlanta · 6 years ago
So I'm going to get certified (or try to) this year.
I remember back in the day, when I was doing Windows SDK programming and then Visual C++ programming, Microsoft came out with its MCSD certification and I balked at it.
Well, times have changed, and now that I'm doing more cloudy things, I've decided to try to complete the Professional Cloud Architect for GCP (https://cloud.google.com/certification/cloud-architect) and the AWS Certified Solutions Architect - Professional (https://aws.amazon.com/certification/certified-solutions-architect-professional/) this year.
Google has given me a big head start by putting many of its training courses on Coursera (https://www.coursera.org/specializations/gcp-architecture). I've also been doing a lot of the Qwiklabs and some of the Codelabs. I've got my GCP exam scheduled for mid-May, so I'm hoping to knock that out, start studying for the AWS exam immediately afterwards, and complete that certification by late July.
In doing these certifications, I've decided to blog on a weekly basis about taking a simple one-instance architectural design and expanding it to fit into a cloud architecture (both AWS and GCP).
Each week I'm going to expand the architecture until I have something that's secure, scalable, global, and resilient.
Hopefully I can stick to it.
smithatlanta · 6 years ago
My take on Plume wifi mesh networking after nearly 2 years of use.
I want to start off by saying that the support staff at Plume(plumewifi.com) are first class.  I’ve submitted a variety of tickets asking questions ranging from pod signal strength to algorithm questions and they have always been very helpful.
My take covers my pre-SuperPod setup, although I'll discuss the SuperPods at the end since I've added two of them since Plume announced them.
I started out by buying one of the 3-pod setups Plume offered and running it in parallel with an Asus AC1900 router. I have a 2-story house with a basement, and the cable modem is located in the basement, so my main backhaul (wired) pod and the router were both located in the basement. I had one non-wired (satellite) pod on the main floor in the middle of the house and the other satellite pod on the top floor in the middle of the house.
I initially ignored what Plume had said about needing a pod in every room, so I was initially underwhelmed with my results compared to my old AC1900 setup. If I was in a room near the main floor or basement pod, the speeds were great, but if I moved 30 feet or more away from a pod the speeds dropped off a lot.
One of the main reasons I moved to this setup was to increase my range upstairs since my AC1900 router was in the basement and I had really spotty access up there.  Once again if I was near the pod I would get pretty good speed but not the same speed I could achieve on the main floor or the basement.
One really cool thing about Plume is that they have an awesome iOS application that provides a lot of insight into how they are mapping your pods. I quickly realized that you want to do your best to keep the hops from your hardwired pod(s) to your satellite pods to one hop in order to get better speeds. My setup at the time was two hops (hardwired basement pod -> main floor pod -> upstairs pod), so I needed to make some modifications.
At this point I could have just shelved it and sold everything, but I decided instead to invest in three more pods to see if I could improve things. Since the key to getting ideal speeds was reducing the hops to the wired pods, I ran some ethernet cable up to my main floor in two spots and kept the setup I had. So I now had three hardwired pods and three non-wired pods. One wired pod was in the basement and the other two were spread out across the main floor. One of the non-wired pods was on the main floor and the other two were upstairs. I also turned off the Asus AC1900 because I wasn't sure if it was interfering with the pods.
Once I moved to this setup everything improved substantially. I was getting 500Mbps in most of the house, but there were one or two places where I still wasn't getting decent access, so I bought one last pod and that got me pretty much where I wanted to be. The key phrase being “pretty much”.
One really useful feature that Plume added to their mobile app in the last year shows how strong the signal is between pods. There are 4 ranges (excellent, good, fair, and poor), which provide a lot of feedback as to whether you have placed a pod in a good location or not. Every night (or when you unplug a pod or two) Plume will run an optimization on your network to try and improve these signals (key word being try). I have moved pods around to spots in my house that I thought would improve the signal but actually made it worse. I'm still trying to figure out what constitutes a good spot or a bad spot, but I know that you want to keep the pods away from things with motors, such as ceiling or oscillating fans. You also want to move them away from any electrical interference, such as your TV or stereo. The number of other Wi-Fi signals in your area comes into play as well.
With all these things coming into play, I imagine it's pretty hard to get things perfect, but it's definitely better than my single AC1900. It's also a good bit more expensive, though.
smithatlanta · 7 years ago
My Favorite Sessions From Google Next 2018 (that I attended)
Leverage AI on the Cloud to Transform Your Business
https://youtu.be/32Q3U6-6eMo
- This video answers the question: what kinds of problems can I use ML to solve? Everyone should look at the slide at about 3 minutes in.
Architecting Live NCAA Predictions: From Archives to Insights
https://youtu.be/rd3KFJx7ubI
- This video describes how Google used all the NCAA tournament data and ML to drive the dynamic advertisements we saw during March Madness this year.  
Continuous Deployment Platform for ML Models
https://youtu.be/kfXFyv1zNuw
A Modern Data Pipeline in Action
https://youtu.be/EN_RJ428i1g
From Zero to ML on Google Cloud Platform
https://youtu.be/QU7_eU8HzAQ
- This video gets into all the ML products Google has released for application developers, data scientists, and all those people in between.  Lots of good demos.
ML in Production: Architecting for Scale
https://youtu.be/ZNqY4CsE4FA
Cloud Functions Overview: Get Started Building Serverless Applications
https://youtu.be/JenJQ6gc14U
- This video provides you with a good overview of what kinds of headaches Serverless computing solves and where Cloud Functions fit in.
Machine Learning with Scikit-Learn and Xgboost on Google Cloud Platform
https://youtu.be/uSjgTFdEbSY
smithatlanta · 7 years ago
How to rotate a Key Pair on an AWS instance
This is something I don't have to do too often but if your company has policies on key rotation then this might be useful.
I'm assuming you already have the ability to ssh onto your current EC2 instance with an existing key pair you created in AWS.  
This will walk you through creating a new key pair in AWS and getting it applied to that instance and removing the old key pair from the instance.
The first step is to ssh on to the instance that you want to replace the key on.
The second step is to login to your AWS console and go to the EC2 service.
Click on the Key Pairs menu item on the left side.
Click Create Key Pair and go through the steps to create a new key pair and download it to your  computer.
On the command line type the following to get the public key from the AWS Key Pair you created:
ssh-keygen -f your_download_key.pem -y
It should produce something like this:
 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDq8nNujE18FYcBOz3w6M2wDzvmYgHaAreK3QqCAOXCo/UGS1wBNY/KCQvQRSkpZvNVlv26P/4KhlEArwq75cqpFwejcdsDgVMaUoAE6DzotuaLa+cGUxuMXcSYsebyYGZtJMZKlXHrme9Qxb3+95aC291KbEHpzuoNreScKg/qEqu6W2dDZRsdL9GT2uJA5b3kT7fszKjTNbZaaeTdWrOMdpnrA+ZZm7izHt5c0wWnoIOrnYk6G+xEsdSSKnmtRKMdtH6OPXBSrfGG0V16D8PAVk466DXu7plztmdSZnSDKnaRveJhZPcBnANPmOuLnCYZ/jR3qYkARSwlV1yDbyId
Copy this output to your clipboard.
Go back to your EC2 instance and edit ~/.ssh/authorized_keys (via vi or your favorite editor).
Go to the bottom of the file and paste the new public key info from your clipboard and save the file.
Try using the new key to ssh onto the instance.
If it works, you are good to go and you can delete the old key pair from AWS if you are sure you don't need it anywhere else.
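For reference, here's a condensed sketch of the same steps run entirely from your local machine. The key file names, the ec2-user login, and the instance address are placeholders you'd swap for your own setup:

# extract the public key from the newly downloaded key pair
ssh-keygen -y -f new_key.pem
# append it to the instance's authorized_keys using the existing key
ssh-keygen -y -f new_key.pem | ssh -i old_key.pem ec2-user@<instance-ip> "cat >> ~/.ssh/authorized_keys"
# confirm the new key works before deleting the old key pair in AWS
ssh -i new_key.pem ec2-user@<instance-ip> hostname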
One item to note on the AWS console’s EC2 instance screen is that the Key Pair name will remain as the old key pair name since there is no way to update that metadata.  If you are afraid you will forget the key pair name you used, you might try adding a tag with the key pair name.
smithatlanta · 7 years ago
How to create a KMS-encrypted S3 bucket and how to access it from another account (because I always seem to mess up one step).
I'm going to preface this post by saying that everything I'm describing here should be automated via either Terraform or CloudFormation templates.
The flow will be to create a KMS key, then create an S3 bucket encrypted with that KMS key, then create a user in another account that can put to and get from that bucket (and see what happens when it tries to delete).
 KMS key creation
Login to the AWS console then go to the IAM service.
Click on Encryption keys on the left side of the page.
 Click on Create key and enter the alias and the description and make sure the KMS radio button is selected as the Key Material Origin under the Advanced Options.  Click Next Step.
 Enter all the required Tags and click Next Step.
 Choose the IAM users or roles that can administer the key and click Next Step.
 Choose the IAM users or roles that can use the key and click Next Step. We'll come back to this later on.
 Click Finish and make a note of the ARN(something like arn:aws:kms:us-east-1:1234567890:key/12345678-9012-3456-7890-123456789012 )
 S3 bucket creation
 First off login to the AWS console then go to the S3 service.
 Click Create bucket and enter the Bucket name and click Next.
 Click on Default encryption.  Click on AWS-KMS and select your ARN(or alias) and click Save.
 Click Next.
 Adjust any permissions and click Next.
 Click Create bucket.
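If you'd rather script those two sections than click through the console, a rough CLI sketch looks something like the following. The alias, description, and bucket name are just examples, you'd substitute the key ID / ARN returned by the first command, and note that aws kms create-key doesn't prompt for administrators and users the way the console wizard does, so the key policy still has to be handled separately:

aws kms create-key --description "test-bucket-1234 encryption key"
aws kms create-alias --alias-name alias/test-bucket-1234-key --target-key-id <key id from previous output>
aws s3api create-bucket --bucket test-bucket-1234 --region us-east-1
aws s3api put-bucket-encryption --bucket test-bucket-1234 --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "<key ARN>"}}]}'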
 Test some s3 operations with a user in the bucket’s account
 At this point you should be able to do the following as a user on this account:
aws s3 ls s3://test-bucket-1234
aws s3 cp test.txt s3://test-bucket-1234/test.txt
If you look at the object in the S3 console, you should notice that the Encryption property on the object is set to AWS-KMS.
 Create a user in another account with s3 access
Login to the other AWS account's console, then go to the IAM service.
Click on Users on the left side of the page.
Click Add user, enter a User name (i.e. srv_test_kms), check the Programmatic access check box, and click Next: Permissions.
Click Next: Review (we will add an inline policy in another step).
Click Create user. Make a note of the Access key ID and the Secret access key.
Applying the user policy to give user access to bucket
 Apply a policy similar to the following to the user just created to limit access to the bucket in the other account:
 {
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "VisualEditor0",
           "Effect": "Allow",
           "Action": [
               "kms:Encrypt",
               "kms:Decrypt",
               "kms:ReEncrypt*",
              "kms:GenerateDataKey*",
               "kms:DescribeKey"
           ],
           "Resource": "arn:aws:kms:us-east-1:1234567890:key/12345678-9012-3456-7890-123456789012"
       },
       {
           "Sid": "VisualEditor1",
           "Effect": "Allow",
           "Action": [
               "s3:ListBucket",
              "s3:GetBucketLocation",
               "s3:GetObject",
               "s3:GetObjectAcl",
               "s3:PutObject",
               "s3:PutObjectAcl"
           ],
           "Resource": [
              "arn:aws:s3:::test-bucket-1234",
              "arn:aws:s3:::test-bucket-1234/*"
           ]
       }
   ]
}
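A rough CLI equivalent of the user creation and inline policy steps above, run with credentials for the second account (the policy name and local file name are arbitrary):

aws iam create-user --user-name srv_test_kms
aws iam create-access-key --user-name srv_test_kms
aws iam put-user-policy --user-name srv_test_kms --policy-name s3-kms-cross-account --policy-document file://user-policy.json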
Providing the user with access to the KMS encryption key on the other account
 Login to the account with the S3 bucket / KMS key and go to the IAM service.
 Click on Encryption keys and select the key that you created earlier.
 Click over to the right of the screen where it says "Switch to policy view".
 The policy will initially look something like this:
 {
 "Version": "2012-10-17",
 "Id": "key-consolepolicy-3",
 "Statement": [
   {
     "Sid": "Enable IAM User Permissions",
     "Effect": "Allow",
     "Principal": {
       "AWS": "arn:aws:iam::1234567890:root"
     },
     "Action": "kms:*",
     "Resource": "*"
   }
 ]
}
You will need to modify this policy to add your user's ARN from the other account. When completed it will look something like this:
 {
 "Version": "2012-10-17",
 "Id": "key-consolepolicy-3",
 "Statement": [
   {
     "Sid": "Enable IAM User Permissions",
     "Effect": "Allow",
     "Principal": {
       "AWS": [
        "arn:aws:iam::1234567890:root",
        "arn:aws:iam::0123456789:user/srv_test_kms"
       ]
     },
     "Action": "kms:*",
     "Resource": "*"
   }
 ]
}
 Click Save Changes.
 Providing the user with access to the S3 bucket on the other account
 Login to the account with the S3 bucket / KMS key and go to the S3 service.
 Click on the S3 bucket you created earlier.
 Click on the Permissions tab for the bucket.
 Click on the Bucket Policy and add a policy similar to the following to the bucket:
 {
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "Example permissions",
           "Effect": "Allow",
           "Principal": {
               "AWS": "arn:aws:iam::0123456789:user/srv_test_kms"
           },
           "Action": [
               "s3:ListBucket",
              "s3:GetBucketLocation",
               "s3:GetObject",
               "s3:GetObjectAcl",
               "s3:PutObject",
               "s3:PutObjectAcl"
           ],
           "Resource": [
              "arn:aws:s3:::test-bucket-1234",
              "arn:aws:s3:::test-bucket-1234/*"
           ]
       }
   ]
}
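Both the key policy change from the previous section and this bucket policy can also be pushed from the CLI in the bucket's account; a rough sketch, assuming you've saved the JSON documents above to local files (the file names are placeholders, and the key ID is the example from earlier):

aws kms put-key-policy --key-id 12345678-9012-3456-7890-123456789012 --policy-name default --policy file://key-policy.json
aws s3api put-bucket-policy --bucket test-bucket-1234 --policy file://bucket-policy.json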
 Testing puts, gets, and deletes with the user on the account that does not contain the S3 bucket
Using the access key / secret key for that user, you should be able to run the following commands.
 aws s3 ls s3://test-bucket-1234
The following command will appear to work, but you'll notice an issue when you go look at the file in the S3 console: the Encryption and Tags properties will show "Access Denied". What happened here is that the object's ACL belongs to the user uploading the file and not the bucket owner.
 aws s3 cp test.txt s3://test-bucket-1234/test.txt
To fix this issue, copy the file up again with the bucket-owner-full-control canned ACL, and then verify you can copy it back down:
aws s3 cp test.txt s3://test-bucket-1234/test.txt --acl "bucket-owner-full-control"
aws s3 cp s3://test-bucket-1234/test.txt test.txt
If you do the following, you should get an access denied error because the user does not have the DeleteObject action in the bucket policy above:
aws s3 rm s3://test-bucket-1234/test.txt
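One convenient way to run those tests as the cross-account user is with a named CLI profile; a small sketch (the profile name is arbitrary):

aws configure --profile srv_test_kms   # paste in the access key ID / secret access key noted earlier
aws s3 ls s3://test-bucket-1234 --profile srv_test_kms
aws s3 cp test.txt s3://test-bucket-1234/test.txt --acl "bucket-owner-full-control" --profile srv_test_kms
aws s3 cp s3://test-bucket-1234/test.txt test.txt --profile srv_test_kms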
smithatlanta · 7 years ago
Yik Yak thoughts
On my drive home from work today I had a random thought that led me back to something I learned at Yik Yak. Then my brain went into diffuse mode and I thought about several other things as well. Since I left Yik Yak 2 years ago now, I thought it would be a good time to write down some of this stuff before I forget it.
So here are the topics.
- Drinking from the fire hose.
- Deployment evolution.
- How I spent my Christmas in 2014.
- So you thought that selfie of your butt was on the feed and got past moderation.
- Conversations are hard when you don't know who you're talking to.
- The simplest photo upload workflow ever.
- Why developers don't know crap about experimentation.
- Canaries, Dark launches and Mount Everest.
- You get smarter working around smarter people.
I know they sound kind of odd but each one will make sense once you read it. I will post once a week about each of these as time allows.
smithatlanta · 8 years ago
My 2017 year in review(so I can remember what I did this year)
This was the year of doing things I’ve never done before.  Normally my review is a “I learned this technology or wrote something in this programming language” but this year was different.  Both at work and outside of work I was forced to leave the comfort of my introvert lifestyle and branch out to take on some leadership roles I wasn’t sure I could do.
Scout related stuff
First off, let's talk about Scouting. In 2016 I took on the role of Committee Chair at my son's Cub Scout pack. I knew what I was getting into. We had about 60 scouts and an active group of volunteers. Well, in the fall of 2016 the pack near us dissolved, so we took on another 35 scouts and suddenly had 95. It wouldn't have been a big deal except I had also taken on the responsibility of handling the fall camping at the district level along with 2 other people. It seemed to be a fall of record numbers because we had 1500 people sign up to camp as well.
So I was in way over my head, having never done much people managing. The first thing I learned from all this was to ask for help. You may not always get the help you need, but sometimes you do, and it's always a good first step when you know you don't have the expertise or experience to do something.
I was able to get volunteers to help run the pack activities as well as many of the camping activities so I could focus on managing the large group going camping. We ended up having a successful fall camping event and I learned a great deal about managing people. Since then I've helped to run two more large scout camping events using the knowledge I gained from running that event. My next challenge is to help the pack that dissolved in 2016 rebuild itself and become self-sufficient again. We also have the new challenge of possibly adding another 20 girls to our pack next year, so it should be an interesting 2018 in scouts.
Work related stuff
At work I started off 2017 by trying to digest all the technologies that came out of AWS re:Invent, the big conference in Las Vegas where Amazon Web Services announces all their new products. There were a substantial number of new AWS products released in 2016 and I wanted to make sure I understood where I could use each one of them, since my job depends on me knowing this kind of stuff. I went back and counted nearly 30 new products, and I went through each one making notes and trying to test out those products that were GA (generally available).
After reviewing these products in December and January, I was pulled in to do a presentation on Redshift using Kinesis Firehose.  We used Redshift at Yik Yak but I hadn’t used it at my current job so I spent a week or two making sure I understood those products well and that I could create a Redshift stack using Terraform.  During this time I realized that I didn’t know as much as I’d like to know about all the “big data” offerings from AWS so I spent another week or two trying to understand EMR, Athena and Data Pipeline as well.  
I was then asked to help out with a couple of analytics projects using Redshift so the time I spent researching Redshift came to good use(although now that I know more about the actual amount of data going into those Redshift clusters, I would have probably been better off cost-wise to use Postgres).  During this time I met with many of the big data teams and learned about many of the challenges they were having.  I also learned a great deal about all the offerings from third parties as well as the big 3 cloud providers.  I feel much more confident discussing big data architectures now than I did this time last year.
Another item that I was tasked with in 2017 was managing the architectural reviews of all the applications being migrated to the cloud.  We came up with 3 required artifacts for every review(high level doc, architectural diagram, monthly / yearly costs, etc) and I came up with a checklist for those reviewing each architecture.  So far we’ve had about 10 applications reviewed and they have ranged from AWS Lambda stacks to AWS ECS clusters to GCP Cloud Functions to GCP App Engine stacks to Azure App Services.  It’s been pretty exciting to see what some of the teams have come up with and I think the teams are getting more accurate with their cost estimates as they are reviewed more.
Looking ahead
Overall, 2017 was a year of learning and iterating. I think 2018 will be similar. In 2018 I would like to strengthen my knowledge in the data science area and start to understand all the machine learning options available from the big 3 cloud providers. I've played around with the managed services (Vision and Image APIs, etc.) but I'd like to learn more about TensorFlow, Keras, and things in that realm.
I also need to spend more time understanding blockchain and how we can use it at work. I think 2018 will be the year companies start to take a more serious look at this technology as more than just something for mining cryptocurrency.
In 2018 I'd also like to do some more side projects in the IoT realm. I have a slew of Raspberry Pis (and other IoT devices) collecting data around my house, and I'd like to surface more of this data using some of the new tools just announced at re:Invent this year.
I’m also going to try and stick to these 5 “difficult” items. https://www.inc.com/neil-patel/5-difficult-things-that-powerful-people-do.html .  Number 2 and number 3 are the tough ones for me.
smithatlanta · 8 years ago
Analytics / Statistics refresher for developers.
Back in my early days of creating corporate applications using the waterfall approach, our main form of feedback was asking the user what they thought about the feature we just implemented, and then we assumed (and hoped) they would use that feature. If the functionality ran slowly, we would try to log timings to a log file and then hope we could find the log to determine what was up (and sometimes, if we were lucky, we could profile the code). Obviously this type of methodology and feedback loop has many, many issues. The focus of this short blog post is to make sure you don't do what I did back then.
The first thing I've learned over the years is that you should attempt to measure every part of your application. Not only should you measure every click your users make, but you should also know how long the different functions of your application take to execute and how many times certain critical functions are executed. So as you add a new feature to your application, you should have a line item somewhere in your design document that says "How am I going to prove to the business that this feature was worth the money we spent on it (maybe clicks, swipes, time on page, etc.), and how long do the functions in that feature take to execute?" The first part of the question is analytics and the second part is statistics. Of course you could say both are statistics, but it's easier to differentiate between the two types of data this way (at least in my view of things).
Let's discuss analytics first. These are things like user location, logins, clicks, swipes, session times, and anything else that might provide more understanding into what your users are doing in your application. This type of data can be streamed or batched up, and there are several tools that fill this gap. In AWS, for streaming data you can use Kinesis Firehose (https://aws.amazon.com/kinesis/firehose/), and for batch you can use Data Pipeline (https://aws.amazon.com/datapipeline/) to write the data to S3, Redshift, and Elasticsearch. In GCP, for both streaming and batch you can use Cloud Dataflow (https://cloud.google.com/dataflow/) to write the data to Cloud Storage, Cloud Pub/Sub, Cloud Datastore, Cloud Bigtable, and BigQuery. Once the data is loaded into one of these tools, the business can make a better determination on where new functionality should be added and whether a piece of functionality is even being used at all.
Now on to statistics. Statistical data is for the developer kind of like analytical data is for the data scientists. It's how you determine whether the function you wrote is being used and performing consistently. You could write your own stats to a log file or maybe a database, but there's a nice tool written by Etsy called StatsD that makes it very easy to keep track of this kind of information. StatsD allows you to do counting, timing, gauges, and sets. You can either create your own Graphite server to receive the stats or, if you are paying for Datadog, you can take advantage of their agent to send stats directly to a Datadog dashboard. You can also create custom stats in AWS CloudWatch Metrics and Stackdriver as well. Once you have these stats in place, it should be much easier to diagnose slow-running functions.
I know these two items seem like pretty basic concepts, but I see so many developers cut these things when they have to hit a deadline. Don't cut them out! My next post will delve a little more deeply into the analytics side of things since that's the part I'm learning more about at the moment.
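To make the statistics side a bit more concrete, here's a rough sketch of what emitting a few stats can look like. StatsD listens for a simple plaintext protocol over UDP (port 8125 by default), and CloudWatch custom metrics can be pushed from the CLI. The metric names and values below are made up for illustration, and the sketch assumes a StatsD daemon running locally:

# StatsD wire format is metric.name:value|type  (c = counter, ms = timing, g = gauge, s = set)
echo -n "checkout.completed:1|c" | nc -u -w1 127.0.0.1 8125
echo -n "checkout.duration:320|ms" | nc -u -w1 127.0.0.1 8125
echo -n "active.sessions:42|g" | nc -u -w1 127.0.0.1 8125

# a custom CloudWatch metric for the same timing
aws cloudwatch put-metric-data --namespace "MyApp" --metric-name "CheckoutDuration" --unit Milliseconds --value 320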
smithatlanta · 8 years ago
The hardest job I ever had(and the one that changed my life the most)
Back in the summer of 1987 I was a neophyte to the world of hard work. I had just graduated from high school and was headed to the University of Georgia in the fall but I had a whole summer of nothing to do so my father suggested I apply for a job at the company he worked at.
My father was the head of Human Resources at this company, so the odds were pretty high that I would get a job. Did I mention that he worked for the largest dairy in the area? They processed quite a large amount of milk as well as other products.
So I went ahead and worked on my half page resume which included my 1 month pizza job and 3 summers of cutting lawns and submitted it for a summer job. And guess what, I got the job!
So working at the dairy starts pretty early. I actually had to be at work at 6am, which was a bit of a shock after not even having to be at school until 7:30. My first day was spent going through a little orientation in the morning, and then I met my boss at lunchtime to go over what I would be doing that summer. I found out I would be working on the five gallon bulk machine as a catcher.
The five gallon machine would fill a bag with five gallons of milk (all percentages and chocolate, of course), orange drink, fruit punch, lemonade, and, on certain days, frosty mix for Wendy's. Once the bag was full, it would be dropped onto a conveyor belt that traveled to the 'catcher', who would attempt to grab the bag and place it into a specially designed milk crate that then had to be picked up and stacked five high on another conveyor going into the cooler, to be put on the milk trucks for delivery.
So my first full day was quite a learning experience. I realized quickly that the five gallon machine would randomly not put the lids on the bag tightly and when you went to grab them you could get doused with whatever product was in the bag pretty easily. I also learned that 5 gallons of liquid in a bag is pretty heavy and when wet could be a pretty slippery thing to grab. And after a couple of spills, it got pretty easy to slip in the area you were working in.
But as bad as the job sounds it did have some perks. I got to drink fresh, ice cold milk right off the line and all the ice cream I could eat. I was also done with my shift at 3pm so I had my afternoons free to do a whole lot of nothing.
So this job was hard labor. I would grab, pull and stack nearly 1500 five gallon bags a shift. I would come home completely wiped out and my mom would stop me at the door and make me strip down because the smell of 1%, 2%, whole, and chocolate milk as well as the assorted orange, fruit punch, and lemonade mix as well as the stench of a teenage boy was a little overwhelming. For some reason though it didn’t bother me so much.
As the summer progressed I continued doing this job because I was making ok money and I didn’t want to let my dad down because he got me the job. One funny memory I had was when I decided to walk down to the Krystal hamburger restaurant to grab lunch during my 30 minute lunch break. I remember walking in, ordering, and sitting down to eat my lunch. I started noticing people moving away from where I was sitting and then the manager walked up to me and asked me if I wouldn’t mind taking my food to go because customers were complaining how I smelled. I quickly grabbed my food and left because I was kind of embarrassed. All the other times I went there I got my order to go.
At the end of the summer I was very happy to be going off to school and not continuing my work on the line. The biggest kick I got was on my last day on the job. The plant manager took me off to the side and wished me the best and told me that most full grown men only lasted a week or so as the ‘catcher’ and that he typically only hired temps for the job. He couldn’t believe I had gone the whole summer without quitting.
Now that I’m much older and look back at that summer 30 years ago, I know that working my butt off for those 3 months had a huge impact on the future me. Learning how to grind through things even though you are mentally and physically wiped out has helped me to get through many a task I’ve run into in my work and personal life.
The last thing I want to do is thank my Dad. I don’t know for sure if he told the plant manager to put me in that job but I want to thank him for giving me that opportunity to work. I definitely learned the value of persevering through hard times. :).
Thanks, Pop!
smithatlanta · 9 years ago
2 months in after cutting the cord
After cutting the cord back in November I had my doubts as to whether I would be able to stick to it but I’ve found out that I really don’t miss anything at all.
I do have a couple of things I need to edit on my last post. I ended up returning the $95 AmazonBasics external antenna and used the $35 Amazon internal antenna instead because the external one was overkill. I receive all the local OTA channels cleanly with the $35 antenna, so why waste the money.
Now as far as content goes, I have to admit that we watch more stuff on Amazon Prime than anything else. I had no idea how good their quality of programming was. If I could only pick 2 of the following services (Amazon Prime, Netflix, Hulu, HBO Now, and SlingTV), I would go with Amazon Prime and SlingTV.
I have had a couple of people tell me that they have had issues with their Amazon Fire TVs.
I have the following model of Fire TV:
http://www.amazon.com/gp/product/B00U3FPN4U/ref=s9_acss_bw_fb_odsbnd4s_b3?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=merchandised-search-3&pf_rd_r=0C4RDKWWRB2YDQ78J68H&pf_rd_t=101&pf_rd_p=2360663422&pf_rd_i=8521791011
And I have it wired directly into my router(not wireless).  I think having it wired in makes a huge difference.  I had issues streaming shows on an Apple TV earlier last year so I ran ethernet to the device and it worked much better.  
smithatlanta · 10 years ago
I’m cutting the cord and this is what I’m doing
About a month ago, after talking to some friends who had already done it, I decided to cut the cord myself, and here's what I'm planning to do and what I've done.
I wanted to make sure I could get the 4 free broadcast networks OTA(Over the Air) so I went here: http://www.antennaweb.org/Address.aspx and decided to try out this antenna here: http://www.amazon.com/AmazonBasics-Ultra-Thin-Indoor-HDTV-Antenna/dp/B00DIFIO8E/ref=sr_1_sc_2?ie=UTF8&qid=1448667631&sr=8-2-spell&keywords=hd+anten .  I ran a channel scan and was surprised that I picked up the 4 broadcast networks and many more.  I’m going to end up using this antenna upstairs on our main TV: http://www.amazon.com/AmazonBasics-Ultra-Thin-Indoor-HDTV-Antenna/dp/B00DIFIO8E/ref=sr_1_sc_2?ie=UTF8&qid=1448667631&sr=8-2-spell&keywords=hd+anten
My next thoughts were focused on what services offer what.  I’m still learning about things here but it looks like Hulu($7.99) has more TV programming while Netflix($7.99) has more movies.  You can try this service here: http://www.canistream.it/ to see which services have what you want.
We love HBO so it’s great that HBO Now($14.99) is available. We will definitely be purchasing this since we’ve gotta watch Game of Thrones.
I have to have my football in the fall, so I thought this would be the one hole I couldn't fill, but SlingTV ($20): https://www.sling.com now streams ESPN and ESPN2. I can also add ESPNU, ESPNNEWS, and the SEC Network for $5 more. BTW, you also get AMC, HGTV, IFC, and all the Turner networks for $20.
I will be using an Amazon Fire TV to stream everything: http://www.amazon.com/gp/product/B00U3FPN4U?refRID=G7YCQV3JFMQASF7SQ7RH&ref_=pd_cart_vw_2_3_p  I am using the Fire TV because I already have an Amazon Prime account, so I can take advantage of the offerings they have as well. I will probably use an AppleTV in the basement since I already have one.
So now that I have the content and the device to stream it, who will provide my internet access? I am going with Comcast Business Class and their Deluxe 50 plan. I could go with the Deluxe 25 plan, but I work a lot from home and want the faster speed, and I think the extra $10 is worth it. You need to go with Comcast Business Class to get unlimited internet. Normal Xfinity plans are limited to 300GB per month, and you'll reach that very quickly streaming movies.
My wife also likes to stream music from the Xfinity receiver to our stereo so we are going to replace this with a Chromecast Audio streamer here: https://www.google.com/intl/en_us/chromecast/?utm_source=chromecast.com since we already each have Google Play Music Unlimited accounts.
One time costs:
$40 for antenna to get ABC, CBS, NBC, and Fox(and many weird channels, too)
$75 for Amazon FireTV(this was a Christmas deal)
$35 for Chromecast Audio Streamer
Yearly costs:
$100 Amazon Prime(Free shipping, Free tv and movies, Free monthly books)
Monthly costs:
$109 Comcast Business Class 50/10 dedicated(unlimited internet)
$7.99 Hulu
$14.99 HBO Now
$20 SlingTV
smithatlanta · 10 years ago
So what’s it like going from corporate life to startup life after 13 years
I’ve been wanting to write something about this for a while but it’s been super busy as startup life can be.  So this is it. 
I left a large media company last November after deciding to test the waters(not because I disliked the company but felt like it was the right time to make a change).  After looking back, it was the best decision I made for my career and my mindset.
Why did I leave?  I think I had grown too comfortable at the last company.  I was still learning new technologies and using them on a daily basis but I felt like I was on auto-pilot(kinda like a robot).  When I left the corporate world and jumped back into startup life it was like someone took the shock paddles to my chest.  I felt a new responsibility to make this work and I couldn’t just do what I had done before to make things work(kinda like trying to appease a newborn during those first 3-4 weeks when you bring them back from the hospital).  I had to adapt and learn quickly.
So is startup life perfect?  No, it’s not.  But if you like coming in every day and having a different fire to put out then you’ll thrive.  There is no way you can auto-pilot at a startup.  These fires can also impact your family life.  Fires don’t care if you are at your son’s football game or not.  You may have to run back to your car and make a quick code change which your spouse or kids may not like.  It doesn’t happen too often but it is a reality when you have 3 back end developers that run the show and “the app” is your responsibility.  If it goes down you may be looking for a new job and you will impact everyone in the company.  No pressure.
That doesn't sound that great. Why would you want to do that? Well, I love what I do. I love it when I see people using the parts of the app that I coded, and I see every day how it impacts people's lives. I can't say that about any other app at any other company I've worked at. Oh yeah, I've also learned a new skill or two as well. All joking aside, I've learned more about architecting and scaling out an application in the last 8 months than in the last 20 years of my career. You just don't get those skills at most corporations unless you work at Facebook, Google, Twitter, or some other large social media company.
So are there any perks? Yes. This startup provides free lunches and unlimited coffee, drinks, and snacks to its employees. It also provides standing desks and the latest high-end Apple laptops. And it provides you with an opportunity to make a huge impact at the company.
So leave your corporate life and make the move to startup life.  The water is warm.
smithatlanta · 11 years ago
I started looking into this last year but got side tracked with life and projects at work. I finally decided this past week to try and get this working after a co-worker said he needed this for an application he was working on. So here are the steps I took.
Step 1. Build a packer json file.
{...
smithatlanta · 11 years ago
Using the Node AWS-SDK with Riak-CS
I went through several hours of messing around with a couple of Node modules that seemed to work but that I had issues with, so I ended up using the Node AWS-SDK as is. Here are some examples of a PUT, a DELETE, and a GET.
PUT
fs.stat(parsedFile.path, function(err, file_info) {
  if (err) { throw err; }
  var AWS = require('aws-sdk');
  var s3 = new AWS.S3({
    endpoint: "http://<host name without bucket prepended>",
    accessKeyId: <accessKeyId>,
    secretAccessKey: <secretAccessKey>
  });
  fs.readFile(<full path to file>, function(err, data) {
    if (err) { throw err; }
    var params = { Bucket: <bucketName>, Key: <fileName>, Body: data };
    s3.putObject(params, function(err, result) {
      if (err) { throw err; }
    });
  });
});
DELETE
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
  endpoint: "http://<host name without bucket prepended>",
  accessKeyId: <accessKeyId>,
  secretAccessKey: <secretAccessKey>
});
var params = { Bucket: <bucketName>, Key: <filename> };
s3.deleteObject(params, function(err, result) {
  if (err) { throw err; }
});
GET
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
  endpoint: "http://<host name without bucket prepended>",
  accessKeyId: <accessKeyId>,
  secretAccessKey: <secretAccessKey>
});
var params = { Bucket: <bucketName>, Key: <filename> };
// streaming from riak-cs to express response
s3.getObject(params).createReadStream().pipe(res);
I hope this helps anyone else out there.
smithatlanta · 11 years ago
Layoffs... What can you do?
This is the 4th time I've been through layoffs. The first layoff was a complete surprise, but the next two had telltale signs that they were coming. I've realized that the biggest sign of impending change is a major slowdown in workload. I remember going to my bosses at the last 2 companies and asking for work, and their responses were to go find a technology I'm interested in and learn about it. I thought initially, "Hey, that's cool." But when you go back a week or so later and the response is the same, you start to wonder "Now how is this really helping the company?" I'd suggest documenting everything during this downtime. Your boss is much more likely to give you a good recommendation if he doesn't have to open up the code to figure out what the heck it is doing and where it's been deployed.
I've learned several things after going through these layoffs. One thing is that you should always keep your resume up to date.  It doesn't take much time to run through it once a year and update it.  It's also good to just review what you've done each year.  I have a hard time remembering what I did 6 months ago much less 6 years ago.  Keep that resume up to date.
Another good thing to do is to be nice to recruiters that reach out to you.  Things can change quickly in the technology industry so you always want to have someone to reach out to when bad things happen.  It only takes a minute or two to respond to an email that you aren't currently looking but that you appreciate them searching you out.  They'll remember that when you start looking.
Another thing that I've started doing the last 2 years is attending Meetup groups. This is a great opportunity to network and keep up to date with technologies you are interested in. Several of the groups I attend even open their meetings by asking who's looking for work and who's hiring. Meetups are also a great opportunity to work on your presentation skills.
The final thing I recommend is to learn something new every month (if not every day). If you are a .Net developer during the day, learn Go. If you are a Java developer during the day, learn Python. If you write tons of SQL in a relational database during the day, learn how to do MapReduce in Hadoop. Do something you don't do at work. It will make you well-rounded and will provide you with a different point of reference, not to mention that you might find a way to do something much better.
Now on to the task of dealing with the knowledge that layoffs are coming. This is tough. It's kind of like pulling the bandaid off slowly. Do you wait to see how things pan out, or do you look for work and leave before the layoffs occur? If you leave early, you could miss out on a severance or possibly an opportunity to move into a higher position that may have been vacated. Of course if you don't leave, the market could be flooded with candidates and it could be tougher to get a job if you are let go. I guess it comes down to a personal decision, but you really need to weigh how much you like working at the company. You also need to think about things like 401k matching, insurance costs, and flexibility of work schedule. You also need to take a step back and look at the company objectively. Is the company going to be around in 10 years? Is the company going to continue to be a player in the market that it's in?
The best you can really do is prepare for the worst, hope for the best and live every day like it's your last.
smithatlanta · 11 years ago
Dealing with Docker Data volumes in MongoDB
If you tried to do what I said in my previous blog post, you'll realize that the data volume does not get copied if you try to export / import a MongoDB container.
In this instance, you'll want to create a dedicated data container (https://docs.docker.com/userguide/dockervolumes/). In fact, for any data store you may want to use this pattern. It will allow you to do updates and not lose your data.
// Here's how I created a data container for mongoDB
docker run -d -v /data/db --name dbdata dockerfile/mongodb echo Data-only container for mongodb
// And then how I used it in my mongoDB container
docker run -d -p 27017:27017 --volumes-from dbdata --name db1 dockerfile/mongodb
You can test it by creating a collection in MongoDB, then stopping the container and removing it. Then create another MongoDB container via the second command above and look for the collection you added previously. You'll see it. :)  I think you may be able to export the data container and then import it on another machine.
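Here's a rough sketch of that test sequence; the collection name and the db2 container name are just placeholders, and it assumes the mongo shell is available inside the dockerfile/mongodb image:

# write something through the first container
docker exec db1 mongo --eval 'db.widgets.insert({name: "test"})'
# throw the implementation container away
docker stop db1 && docker rm db1
# start a fresh container against the same data container
docker run -d -p 27017:27017 --volumes-from dbdata --name db2 dockerfile/mongodb
# the data should still be there
docker exec db2 mongo --eval 'printjson(db.widgets.find().toArray())'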
If you think about it, it's pretty cool. The data container is separated from the implementation container.
smithatlanta · 11 years ago
Copy a Docker container from one machine to another
I ran into a situation this weekend where I needed to copy an Elasticsearch Docker container (already populated with data) from one Mac to another, so I did some research and this is what worked for me.
1. Get a list of all my containers: docker ps -a
2. Take the container and make an image of it: docker commit <container_id> <mynewimage>
3. Save the image to a tar file: docker save <mynewimage> > <mynewimage.tar>
4. On the new machine, load the tar file as an image: docker load -i <mynewimage.tar>
5. Run the image as a container (Elasticsearch ports, for example): docker run -i -p 9200:9200 -p 9300:9300 <mynewimage>
The size of these images could be large depending on your datastore size so you may want to do a docker rmi <mynewimage id> on the machine you are copying from to free up drive space.
I've now found a use for the Terabyte of space I now have on Dropbox.  I use it to store these tar files(gzip'd of course).