Everything that I'm working on. Heavy on the programming; heavy on the opinionated commentary.
Quote
Austerity is not eight years of spending cuts, as in the UK, or even the social catastrophe inflicted on Greece. It means driving the wages, social wages and living standards in the west down for decades until they meet those of the middle class in China and India on the way up.
http://www.theguardian.com/books/2015/jul/17/postcapitalism-end-of-capitalism-begun
Well that’s a slap in the face.
Link
This isn’t the first time something like this has happened.
Perhaps the first step isn't for Firefox to block Flash, but to block all these horrible ad networks instead, since they are the primary attack vector.
Flash is a problem, but nowhere near as big a one as Yahoo/Google/etc. spreading malware.
Text
Why you shouldn't use Cloudflare
This is on support.cloudflare.com itself. Random errors like this are not uncommon. I would never use Cloudflare in production.
Text
nginx upstream proxy with trailing slash
For whatever reason, when using nginx as a reverse proxy with an upstream block and proxy_redirect off, nginx would 301-redirect requests without a trailing slash to the name of the upstream rather than the public hostname.
That is...
```nginx
upstream my_upstream { ... }

location /something {
    ...
    proxy_pass http://my_upstream;
}
```

This would result in requests to http://example.com/something getting permanently redirected to http://my_upstream/something/ instead of http://example.com/something/.
The solution is to create these redirects manually with additional location { ... } blocks that do the redirect.
```nginx
# Issue the trailing-slash redirect ourselves, against the public host
# (note the trailing slash in the return, or you'll redirect in a loop):
location ~* /(something|somethingelse)$ {
    return 301 http://$host$request_uri/;
}

location ~* /(something|somethingelse) {
    ...
    proxy_pass http://my_upstream;
}
```
Link
Created an example repo that shows how to set up an API web server with restify that can trigger webhooks via the free hook.rest service I created.
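For flavor, here's a minimal sketch of the shape such a server takes. The hook.rest endpoint and payload below are placeholders for illustration, not the service's real API; see the repo for the actual details.

```js
// Sketch: a restify API server that fires a webhook when a resource is created.
var restify = require('restify');
var request = require('request');

var server = restify.createServer();

server.post('/widgets', function (req, res, next) {
  // ... create the widget here ...

  // Notify subscribers. The URL and body shape are placeholders.
  request.post({
    url: 'https://hook.rest/your-hook-id',
    json: { event: 'widget.created' }
  }, function (err) {
    if (err) console.error('webhook failed:', err);
  });

  res.send(201, { ok: true });
  return next();
});

server.listen(8080, function () {
  console.log('listening on 8080');
});
```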
Text
It’s time for SaaS freeware
The time is right. For what, you ask? Despite the image above... not warez. I think it's time to take free software to the cloud, of course.
Servers are nearly free
Servers now cost less than a cup of coffee, and you can throw in enterprise-grade infrastructure with a few clicks of the mouse.
Gone are the days of needing a team of sysadmins to keep a service humming. Today services nearly run themselves.
It’s time for free services
It's because of this that I've decided to put an idea into words and release something I've been working on as the first donationware-as-a-service (DaaS).
Not a freemium service. Not growth hacked. Not ad supported. Free.
What is DaaS?
So what is it? Well... you may be able to guess from that initialism up there that it's software-as-a-service, but instead of subscription payments it uses the donationware model: if you find it useful, pay what you want -- or don't.
Why?
Because I built the service I wanted and I'm releasing it for free in the hopes it'll be useful to others.
If that sounds familiar it's because it's the open source mantra, but this time applied to a software service.
Standardizing DaaS
I think as cloud services continue to mature and prices continue to fall, DaaS will begin to flourish and I’m writing some guidelines that I’d like to see put into practice.
Text
Maybe this question has been answered but...
What happens to all these IP blocking/throttling/whatever technologies when I can literally have 16 million IPv6 addresses on my machine?
IPv6 is going to bring the scraping/spamming wild west.
Heuristics are the only way forward in that world.
Text
Tracking down misbehaving elements causing overflow/scrollbars
Working on a mobile site and using the more-awesome-than-annoying Chrome mobile emulator in dev tools.
The site had overflow and nothing I did would fix it -- not even `html, body { overflow: hidden !important; }`
Wth.
The problem turned out to be that, on refresh, the emulator's zoom reset itself to 1.1 instead of 1.0.
The zoom in the emulator is kinda buggy and I've had it happen before, but not noticing it this time led me to create a snippet to help find elements that overflow, so I'm posting the idea here in the hope someone gets some use out of it even if it didn't help me.
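A minimal sketch of that idea, assuming all you want is to log anything that pokes outside the viewport (paste it into the dev tools console):

```js
// Log every element wider than the document -- the usual culprits
// behind an unwanted horizontal scrollbar.
(function () {
  var docWidth = document.documentElement.offsetWidth;
  var all = document.querySelectorAll('*');
  for (var i = 0; i < all.length; i++) {
    var rect = all[i].getBoundingClientRect();
    if (rect.left < 0 || rect.right > docWidth) {
      console.log('overflowing:', all[i], rect.left, rect.right);
    }
  }
})();
```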
Text
Stored procedures are bad and you shouldn’t use them
They’re pretty much like eval. If you don’t know if you should be using them then you probably shouldn’t be using them.
Application code has no business living in the database.
Text
So, I wanted to persist/save iptables rules on ubuntu...
I made the mistake of using ubuntu for no other reason than I’m using vagrant and that’s what they use in their documentation.
I’m usually a CentOS kinda guy.
Whenever I set up a new server I like to set iptables rules, because.. you know... firewalls are a good idea.
And when I set these rules, I want them to load every time the server starts. Because, you know... firewalls are a good idea.
The good people over at CentOS (RH) seem to agree and have made this very easy. `service iptables save`. Boom. Done.
Now... let’s talk about ubuntu (debian). Fuck debian.

So how do you get the rules to save?
Well, you could read the ridiculously long community entry on the Ubuntu wiki. I did, and tried the first approach, only to find it didn't work.
Why didn’t it work? I have no idea, but it didn’t. I didn’t try the second, because... it’s the second, and it seemed hackish to me.
Let’s give lucky #3 a go because I’ve seen iptables-persistent mentioned elsewhere.
The first thing you’ll notice about iptables-persistent is that it doesn’t install silently, and thus can’t be scripted as-is. Groan.
Really?
After some searching, I learned that you’ll need to do this to get it to STFU.
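The standard recipe is to preseed debconf so the package never gets a chance to prompt. A sketch:

```sh
# Answer the "save current rules?" questions ahead of time,
# then install with no interactive frontend.
echo iptables-persistent iptables-persistent/autosave_v4 boolean true | debconf-set-selections
echo iptables-persistent iptables-persistent/autosave_v6 boolean true | debconf-set-selections
DEBIAN_FRONTEND=noninteractive apt-get install -y iptables-persistent
```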
It seems to work, except that the apt-get install breaks the vagrant provisioning process by causing it to hang. Provisioning still carries on, but vagrant never sees it complete. (This is a clue, I think, but I'm still coming up empty.)
No manner of bash trickery can unfuck what the iptables-persistent install fucks during the vagrant provisioning process.
And that's where I'm at. I'll post an update once I have it fixed...
Or I’ll just rewrite everything to work on CentOS instead, which I probably could’ve done by now if I only knew all this going in.
UPDATE:
The final vagrant provisioning hanging was related to the SSH connection going away after the firewall rules were applied. (i.e. firewall blocked ssh but vagrant still thought it was connected.)
My iptables rules included a default DROP policy, which seems like it was at least part of the problem. After removing it I was able to get things working, though I don't like not having it there...
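If you do want to keep a default DROP policy, the usual trick (a sketch; I haven't verified it against this exact vagrant setup) is to allow established connections and SSH before the policy can bite:

```sh
# Keep existing connections (including vagrant's SSH session) alive,
# allow new SSH and loopback traffic, then default to DROP.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -P INPUT DROP
```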
I wasn't able to find any info on why this doesn't work on ubuntu, but I did find this page on debian's wiki whose example doesn't set a default ACCEPT or DROP policy at all.
I care not, because this is my first and last experience with ubuntu server. :)
===
Here’s some reading material on saving iptables rules:
http://askubuntu.com/questions/339790/how-can-i-prevent-apt-get-aptitude-from-showing-dialogs-during-installation
https://www.thomas-krenn.com/en/wiki/Saving_Iptables_Firewall_Rules_Permanently
Text
Dyn traffic director pricing
I’ve been in the market for a good DNS geo / failover / health check service.
I’ve been pretty happy with dyn as a regular dns provider, so I figured I’d look at their traffic director service.
tl;dr: It's $300/mo per hostname with a 12-month commitment. Yes, seriously.
To sign up and give it a shot, you'll need to talk to a sales rep. ...or you could just do what I did and sign up for a 14-day Managed DNS Express trial and add it as a service to one of your hostnames.
What’s the difference between the Managed DNS and Managed DNS Express services? Shut your stupid face.
I’ve never used any other similar service, but I came away pretty impressed with traffic director.
But I still won't be using it. $300/mo is more than all four of my global datacenters of load-balanced linodes cost combined. That's just crazy.
Probably going to use Route 53 instead. It seems to have had some early performance issues, but it's been getting better, and lately I've heard nothing but good things. (Source: the Internet)
Text
Something is fundamentally wrong with nodejs and you probably don’t know about it.
How’s that for linkbait? But seriously... this is just bad and there isn’t a solid way to work around it.
What’s the problem?
Node will only ever use the first matching record for any given hostname, no matter what.
Huh? When you make an http request via require('http') to a hostname, e.g. www.google.com, node uses dns.lookup to resolve that hostname to a single IP.
But... dns.lookup, per its own docs, "Resolves a hostname (e.g. 'google.com') into the first found A (IPv4) or AAAA (IPv6) record". (emphasis added)
This makes sense if your hostname-to-IP mapping is 1:1, but most sites use multiple A records and multiple IPs for redundancy and load distribution.
As an aside, this means that if your DNS has entries for both v4 and v6, dns.lookup will never return the v6 version, which is probably not what you're expecting.
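You can see the difference for yourself (www.google.com is just an example hostname):

```js
var dns = require('dns');

// What the http module uses under the hood: a single address.
dns.lookup('www.google.com', function (err, address, family) {
  console.log('lookup:', address);
});

// The full picture: every A record the resolver returns.
dns.resolve4('www.google.com', function (err, addresses) {
  console.log('resolve4:', addresses);
});
```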
But what if that single IP is down?
All modern web browsers, and a good chunk of libraries, will just go on to the next IP in the list and try that one instead. It'll most likely be up, and you'll most likely be happy that you got what you requested.
However, in node... you’ll get an ECONNRESET error.
At least it’s not cached
The good news (?) is that node also has no DNS caching, which means the next time you make your request to google.com, the DNS server will likely have shuffled the results and you'll get a different IP. If it hasn't, though, you'll never be able to connect.
(Yup. No cache. Which is also the answer to your "why are my http requests taking so long?" question.)
Not getting fixed any time soon
So what does this all mean? Well... since it seems unlikely that this will ever be fixed, there isn't really a good option.
Libraries use the core modules, so nothing you do will fix things for your deps. You could monkey-patch dns.lookup, but that's a pretty terrible idea, since it could break things in very subtle ways when code expects the documented behavior.
So that leaves each library author building in their own solution on top.
Which is exactly what was suggested 3 years ago.
Which is stupid. Period.
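For the record, here's roughly what that per-library workaround looks like: a sketch that resolves every A record and walks the list on connection errors. Not production code.

```js
var dns = require('dns');
var http = require('http');

// Resolve all A records, then try each IP in turn until one answers.
function resilientGet(hostname, path, cb) {
  dns.resolve4(hostname, function (err, addresses) {
    if (err) return cb(err);

    function tryNext(i) {
      if (i >= addresses.length) return cb(new Error('all IPs failed'));
      var req = http.get({
        host: addresses[i],
        path: path,
        headers: { Host: hostname } // keep virtual hosting intact
      }, function (res) {
        cb(null, res);
      });
      req.on('error', function () {
        tryNext(i + 1); // e.g. ECONNRESET: move on to the next IP
      });
    }

    tryNext(0);
  });
}

resilientGet('www.google.com', '/', function (err, res) {
  if (err) throw err;
  console.log('status:', res.statusCode);
});
```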
Conclusion
I’m not saying net.js -> dns.lookup should try every IP in series/parallel by default.
But at the very least, there should be some option or hook or something that lets you toggle this behavior on when you want it -- which is probably just about all of the time.
If 0.12 can up the agent socket pool from 5 to Infinity -- O.o -- I think this can be fixed...
#node#nodejs#node.js#dns#request#lookup#net#socket#connect#a records#multiple#dns round robin#ECONNRESET#cache#resolve
Link
I accidentally just hit Ctrl + Z in the terminal and... holy shit. How did I not know this was a thing?
Ctrl + Z will instantly *suspend* the process (it sends SIGTSTP). Then you can `fg` to resume it in the foreground, or `bg` to let it keep running in the background.
I feel like I vaguely remember someone talking about this 10 years ago because I remember the bg/fg stuff... Oh well.
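A quick illustration (bash; the exact output varies by shell):

```sh
$ sleep 1000
^Z
[1]+  Stopped                 sleep 1000
$ jobs                # list suspended/background jobs
[1]+  Stopped                 sleep 1000
$ bg %1               # resume it in the background...
$ fg %1               # ...or bring it back to the foreground
```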
Link
A protip by suvash about osx, mac, quicklook, finder, plaintext, and preview.
Been suffering through the lack of this on a new computer for months because I remembered it being more difficult than this to set up.
It’s not.
Make Finder in OSX quicklook/preview all unknown files as plain text by default. It’s as easy as copy/paste.
Link
Getting a list of issues opened by a user (e.g. yourself) on github is as easy as adding `author:[username]` to the issue search. For example: `is:issue author:octocat`.
Text
Don't use nano, the nodejs couchdb client
If you're here, you're probably asking yourself whether you should use nano, the couchdb client for nodejs.
No. You shouldn’t.
I have a number of reasons why, but I don't have the time to outline them here right now. That said, I did take the time to write this post, which should in itself tell you something.
Just use request instead and write your own wrapper. You'll thank me. I promise. (This was advice I myself ignored, and here I am.)
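To give a sense of how little you need, here's a minimal sketch of such a wrapper. The base URL and function names are mine, purely illustrative:

```js
var request = require('request');

var couch = 'http://localhost:5984'; // your CouchDB base URL

// GET /db/id
function getDoc(db, id, cb) {
  request({ url: couch + '/' + db + '/' + encodeURIComponent(id), json: true },
    function (err, res, body) { cb(err, body); });
}

// PUT /db/id with the doc as the JSON body
function putDoc(db, doc, cb) {
  request({ method: 'PUT', url: couch + '/' + db + '/' + encodeURIComponent(doc._id), json: doc },
    function (err, res, body) { cb(err, body); });
}
```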
Text
Increasing maxfiles and maxproc on OSX 10.10 Yosemite (for real)
This is the answer, with a couple notes, which are included below.
Things changed in Yosemite from previous versions with regard to how to increase file and user process limits.
Lucky for us, there are an abundance of helpful support documents from apple for how to perform this task. (Spoiler alert for that link: there are zero helpful support documents from apple.)
After following along with that link at the top, I was able to get it working with a couple tweaks:
- I had to restart before the change would take effect
- I ignored the bash_profile ulimit modification
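As I read the linked answer, the fix boils down to a pair of LaunchDaemon plists, one for maxfiles and one for maxproc. A sketch of the maxfiles one, with example limits (the maxproc one is analogous):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <!-- Save as /Library/LaunchDaemons/limit.maxfiles.plist; 65536 is an example. -->
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>65536</string>
      <string>65536</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
```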
And finally.... joy:
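Checking, assuming the example limits above (your numbers will differ):

```sh
$ launchctl limit maxfiles
	maxfiles    65536          65536
$ launchctl limit maxproc
	maxproc     2048           2048
```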
#osx#yosemite#10.10#mac#macbook#ulimit#maxfiles#maxproc#max files#file descriptor#256#apple#love how things just work with apple products