Syncing strategies with Docker and Mac
osxfs
This is the old default way of mounting the local directory inside a Docker container on Mac.
Due to operating system differences, Docker for Mac cannot run apps and containers the same way it does on Linux. It works around this problem by running a tiny Linux kernel in a virtual machine and running all the containers there instead. This means we cannot natively mount the host filesystem in the container, since it's a completely separate operating system. To work around this, the Docker for Mac developers came up with osxfs.
Unfortunately, this solution is insufficient for anything beyond very simple applications. The moment you run a mature Rails application, this approach falls apart completely - not only does it use quite a lot of CPU, it's also painfully slow. We're talking several minutes just to launch the app and then several seconds for every request - completely unusable!
gRPC FUSE
Knowing the problems of the osxfs approach, the Docker developers tried to find an alternative. After a brief trial run with Mutagen (more on that later), they decided to roll their own solution instead, and gRPC-FUSE was born.
Unfortunately, it turned out the cure was worse than the disease. gRPC-FUSE is not only still painfully slow but also unreliable: it uses lots of CPU, fails randomly, and keeps breaking between even patch versions of Docker for Mac. On top of that, it was created mainly to address the reliability and CPU issues of the previous solution rather than to drastically improve I/O performance, so it doesn't help us much with larger applications either.
NFS
Another common alternative that works without bringing in any external tools is NFS. It's supported by Docker out of the box and requires just a bit of extra setup to work.
Usually it's also pretty reliable, but it has one big drawback - disk access for the app running in the container is still slower than native, which makes both the boot time and every request slower. As such, it's fine for moderately sized projects or apps that don't require much I/O, but it definitely was not enough for our Rails application.
The other problem is that it does not propagate any filesystem events, meaning you need to rely on polling to detect changes within your application (spring, the Rails autoloader, etc.), which is always slower and requires more CPU.
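For reference, an NFS-backed volume in Compose looks roughly like this - a sketch, not our exact setup; paths and mount options are illustrative, and you also need to export the directory in /etc/exports on the Mac side:

```yaml
# docker-compose.yml (sketch) - mount the project over NFS instead of osxfs
version: "3"
services:
  app:
    image: ruby:2.6
    volumes:
      - app_code:/app
volumes:
  app_code:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3"
      device: ":/Users/me/code/app"
```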
Mutagen
As I've mentioned before, the Docker for Mac developers tried using Mutagen for a while. Mutagen is a fast file synchronization utility, inspired by Unison and partially by Syncthing, that tries to require as little configuration as possible. You can either set up synchronization sessions yourself or use its docker-compose integration.
The way it works is that it creates a Docker volume, listens for file changes on the host machine, and whenever it detects any, it synchronizes the two. This gets the app running inside the container as close to native read-write I/O as possible, at the cost of hard drive space (we store the files twice: once on the host and once inside the Docker volume).
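With the docker-compose integration, the configuration lives in an x-mutagen extension section of the Compose file. A rough sketch (service, volume, and session names are illustrative):

```yaml
# docker-compose.yml (sketch) - sync the project into a named volume with Mutagen
version: "3"
services:
  app:
    image: ruby:2.6
    volumes:
      - app_code:/app
volumes:
  app_code:
x-mutagen:
  sync:
    defaults:
      ignore:
        vcs: true  # don't sync .git metadata into the container
    app-code:
      alpha: "."                 # host directory
      beta: "volume://app_code"  # Docker volume inside the VM
      mode: "two-way-resolved"
```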
Unfortunately, Mutagen is not a silver bullet either. To use the docker-compose integration we need to dig into beta versions (beta3 at the time of writing this post) that are not production-ready and are still buggy. Sometimes conflicts get resolved in an unexpected manner; sometimes Mutagen also messes up during heavy git operations (like rebase) and can, in extreme cases, ruin your local copy of the code.
There are also several unresolved issues around random high CPU usage. I tested this solution for months before giving up, because sometimes my laptop got so hot I could no longer hold it on my lap, not to mention that other apps kept being affected, slowing my entire environment to a crawl. I still believe Mutagen will one day be the best solution for syncing, but it's simply not there yet.
docker-sync
The last and also very popular solution is docker-sync. It's a tool built back when osxfs was the only option, aimed at solving the problem in the same fashion Mutagen did later.
There are several strategies for docker-sync:
unison - uses Unison for watching file changes and syncs them back to the volume in a similar fashion to how mutagen does it
native_osx - tries to merge the best of unison and osxfs by mounting the volume as osxfs container and then copying files from there to regular Docker volume with unison; unfortunately, it's no longer reliable due to Docker for Mac bugs related to switching to gRPC-FUSE
rsync - super simple strategy that is unusable for anything other than testing - it's unidirectional only and never deletes any files
Since native_osx is currently unreliable and rsync is not very useful, the best strategy to use is unison. Unfortunately, it's not without its flaws. The biggest issue is that unison does not ship with a way to monitor filesystem changes on Mac (a tool called unison-fsmonitor). There is an implementation written in Python called unox that uses Python's watchdog library, but regrettably it's pretty slow - for our codebase it sometimes takes up to 5 seconds between a file change and it being synced to the container.
There is an alternative though - another unison-fsmonitor implementation written in Rust, which as a result is much faster and uses fewer CPU resources than unox. Starting with version 0.7.0, docker-sync will no longer complain if this implementation is installed, so you should definitely try it out and see if it improves your developer experience too.
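A unison-based setup in docker-sync.yml looks roughly like this (a sketch with illustrative sync names and excludes, not our production config):

```yaml
# docker-sync.yml (sketch) - unison strategy with common excludes
version: "2"
syncs:
  app-sync:
    src: "./"
    sync_strategy: "unison"
    sync_excludes: [".git", "log", "tmp", "node_modules"]
```

The app-sync volume is then referenced like any other external volume from your docker-compose override file.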
As you can see, there is no good solution when it comes to Docker and Mac, only trade-offs. This is the unfortunate side effect of not being able to run Docker in the same native containerized fashion as on Linux, and it does not seem we'll ever nail it perfectly (it's a similar problem to the one with WSL). Hopefully this research will help you make the right choice for your particular set of requirements. Good luck!
Improving developer experience with custom CLI tools
As developers, we often have to manage complexity in code or architecture, but what happens when complexity creeps up on us in our favorite safe and comfortable space - the development environment?
When we started our journey of migrating from Vagrant to Docker and Docker Compose, we felt that problem acutely. Our application runs in several different countries, each with a slightly different configuration, different ENV variables, and a separate database. When all of this was tucked inside Vagrant it was relatively easy to manage, but after the switch even the commands to start working became unwieldy.
We tried to fix the problem by creating a Confluence page with the most common snippets and copy-pasting them every day whenever we needed to start the app, stop it, or access the console, but the problem kept growing and became more and more annoying. At that point we realized we could use one standardized command-line tool that would allow everyone to be productive without having to be an expert in all the fine details of our local stack.
Since we’re huge fans of Kaizen days (1 day where the entire developer team can decide to improve whatever they want in our application or stack on their own), we decided to build that tool.
We called it Helpling Toolbelt.
The rewrite
The 1.0 version of the tool was written using Thor and was relatively simple - just our Confluence snippets wrapped in a simplistic command runner. But our hunger for amazing and useful solutions kept growing and could no longer be tamed by something this simple. It was time for a rewrite.
Helpling Toolbelt (or HT, as we call it) 2.0 was rewritten from scratch using libraries that felt more adequate to the task. We switched from Thor to dry-cli (formerly hanami-cli), as it gave us better control over longer and more complex commands with options and arguments. We also introduced plenty of TTY Toolkit libraries like tty-prompt, tty-spinner, and tty-markdown for nice changelog printing after an update. We are still using the colorize gem instead of pastel though, mostly because it was used in the 1.0 version and we never felt the need to switch.
But enough about the technicals. What does Helpling Toolbelt bring to the table internally, and why do we bother developing, upgrading, and maintaining it for so long (and plan to do so for the foreseeable future)?
Starting the app
Since we’re using Docker Compose with multiple projects, each project being a separate country, to start an instance we had to copy-paste this snippet of code every time:
COUNTRY="de" docker-compose -f docker-compose.docker-sync.yml -p de up
As you can imagine, this was far from perfect – not only did you have to copy-paste this every time (or set up a custom alias), but if something went wrong while starting the instance, you had to sort it out yourself. This may not be a problem for our full-stack and backend developers, but for a front-end developer with almost no knowledge of Ruby it was really cumbersome and ended up wasting everyone's time.
Initially, Helpling Toolbelt did just that - after running ht up it ran this exact snippet, only replacing the country with whatever country the developer wanted to run. A major part of the HT 3.0 release was changing the way ht up works.
Rather than only starting docker-compose and letting the developer deal with the technicals, we decided to switch to a multi-step approach. Currently, our tool runs several steps and sanity checks:
checks that you're connected to our VPN
makes sure the testing database is fetched and up to date
starts docker-sync or mutagen (more on that later)
starts the containers
ensures that all Ruby gems are up to date
waits for Nginx to respond with a 200, meaning the app is fully up and running
runs any pending database migrations
starts the spring server for faster command running (more on that later)
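Conceptually (and stripped of the tty-spinner polish), the pipeline above is just an ordered list of named steps that aborts on the first failure. A rough Ruby sketch - the step names and checks here are made up for illustration, not HT's actual internals:

```ruby
# Illustrative sketch of a sequential step runner like the one `ht up` uses.
Step = Struct.new(:name, :action)

# Run each step in order, printing its status; stop on the first failure.
def run_steps(steps)
  steps.each do |step|
    print "#{step.name}... "
    if step.action.call
      puts "done"
    else
      puts "failed"
      return false # abort the whole pipeline
    end
  end
  true
end

steps = [
  Step.new("Checking VPN connection", -> { true }),
  Step.new("Starting file sync",      -> { true }),
  Step.new("Starting containers",     -> { true }),
  Step.new("Waiting for Nginx (200)", -> { true }),
]
run_steps(steps) # => true
```

In the real tool each lambda would shell out to docker-compose, ping the VPN, and so on, with a tty-spinner wrapped around each step.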
As you can see, using tty-spinner gives us a very polished feel while keeping all the important information intact. There's obviously a --verbose flag for more complex debugging too, which disables hiding the output.
Front-end apps support
Our application is not just a Rails monolith. Over time we extracted parts of it into separate React and React Native (Expo) apps. This means the local environment got even more complex but fortunately, we solved this by introducing a suite of tools to make things easier.
There are several common things that you need to take into account when running an app in a multi-country setup. There are ENV variables that you need to set, caches that sometimes require busting and webpack instances that need restarting. We made it all work under one command called ht fe up.
The way ht fe up works is once again multi-step:
When running ht fe up on a project that does not exist locally, we fetch it automatically from GitHub (to simplify things you need one directory for all projects; ht asks you for it and you can change it at any time). We also run yarn check --integrity on the node modules and run yarn (install) if any of the modules are missing.
We introduced a custom .htconf.yml file that holds project-specific configuration. It lets us clear the cache when needed (for example when ENV variables change because of switching countries, or when node modules change) and describes how to start the project. The last part is a simple watcher that starts webpack and keeps checking whether the ENV configuration has changed so it can restart the process when needed.
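To give a flavour of it, such a project configuration could look something like this - a purely hypothetical sketch, since the actual .htconf.yml schema is internal and every field name below is made up:

```yaml
# .htconf.yml (hypothetical example - field names are illustrative)
name: customer-frontend
start: "yarn start"
env:
  REACT_APP_API_URL: "https://de.app.test"  # swapped per country
clear_cache_on:
  - env_change           # e.g. after switching countries
  - node_modules_change
```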
All that in one simple command!
Different synchronization strategies
You might've noticed the non-standard docker-compose.docker-sync.yml file in the Starting the app section – that's because we currently support three different syncing strategies internally: NFS, docker-sync, and mutagen. Each has its own set of pros and cons, and since developers differ in the work they're doing and their individual preferences, we decided it's best to keep our options open. This also allows us to experiment with different syncing strategies and see which one works best for us – for example, mutagen support was added only recently and is still considered experimental since it requires some custom hacks to work properly.
Connecting to remote servers
Our infrastructure consists of different types of servers, and there are different methods of accessing each. We made it simple with a set of ht remote commands. We have separate commands for console and ssh, but also log for server logs. This makes it much easier to figure out what needs to be done to debug an issue on a testing instance or check a specific bug on staging.
Making things fast(er)
As you can imagine, a Rails application is not exactly a demon of speed, and adding Docker on top definitely doesn’t help. How did we improve the speed of common developer actions?
The most important trick was to run the spring server. But how does one run a spring server using Docker - isn't every docker run a new container? Well, yes, but you can spawn a session in an already running container with docker exec!
The way it works is that when starting a country with ht up, we connect to our www container with docker exec and spin up a spring server there, detached from the running terminal (so it keeps running after we close the session):
docker exec -it (www container id) bash -c "nohup spring start >/dev/null 2>&1"
Then, every time someone needs to run a command like ht bundle install or ht rake some_command, we check if the www container is still running. If it is, we once again connect with docker exec and prefix the command with spring; if it's not, we run the command normally (so you can still run bundle install even when your server is not running).
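A rough sketch of that dispatch logic - the container/service name www is from the post, but the function and exact flags are assumptions, not HT's actual code:

```shell
# Hypothetical sketch: route a command through spring when the www container
# is running, otherwise fall back to a one-off container without spring.
run_in_www() {
  container_id=$(docker ps --quiet --filter "name=www")
  if [ -n "$container_id" ]; then
    # Server is up: reuse the running container so spring can serve it fast.
    docker exec "$container_id" spring "$@"
  else
    # No running server: boot a throwaway container and run it plainly.
    docker-compose run --rm www "$@"
  fi
}
```

Usage would then look like `run_in_www rake db:migrate`.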
The difference was massive – running ht console (Rails console) now takes several seconds instead of almost a minute!
Setting up the local environment
One of the major problems of long-lived projects is the number of things you need to set up on your machine before you're even able to launch the app. Ruby, additional libraries, setting up /etc/hosts, or even installing Homebrew on Mac - all of these take time, require explaining, and often need manual hand-holding. One of the important features of HT these days is autoconfiguration of the environment. The main idea is as follows: if you need something to start the app, the CLI tool should install it, or propose installing it, for you.
The “Don’t make me think” mantra applies here too!
Packaging the tool
One of the frustrating aspects of using Bundler in your CLI tool - whether you develop it as a gem or just a plain Ruby project dropped into a directory - is that it depends on the Ruby version you're running locally. This can cause friction if you're using a tool like rvm or asdf and have multiple Ruby versions around. To solve this problem, we build a binary package using ruby-packer on our CI before every new version release. This has the extra benefit of no longer requiring Ruby locally at all.
Interestingly enough, it also seems to make the tool slightly faster (around ~40ms of boot time).
There are various other tools and utilities in Helpling Toolbelt that were not listed in this blog post: attaching to a running container to connect to binding.pry, running rspec tests with an open VNC connection to selenium (with an information box reminding you how to connect, including a clickable link to the VNC session), a development database image manager, a markdown-formatted CHANGELOG that shows up whenever the tool updates (with ht update doing all the work)… There are almost 60 subcommands available right now, and new features are added (or removed) periodically.
All of this may sound like a lot but keep in mind we’ve been adding new features and organically growing this tool for 3 years now. Which brings us to the main point of this post – the sooner you introduce something like this in your project, the better! Happy coding!
This post has been originally posted on Helpling Engineering Blog
Using redux-thunk efficiently with async/await
A very common issue I see with people writing code using redux-thunk is falling into callback hell.
// actions/auth.js
const login = (dispatch, getState) => {
  const { username, password } = getState().signInForm
  dispatch({ type: SIGN_IN_REQUEST })
  fetch('myapi/sign_in', {
    method: 'POST',
    credentials: 'include',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password })
  }).then(data => {
    data.json().then(({ data, errors }) => {
      if (errors) {
        dispatch({ type: SIGN_IN_FAILURE, errors })
      } else {
        dispatch({ type: SIGN_IN_SUCCESS, data })
      }
    }).catch(exception => {
      dispatch({ type: SIGN_IN_FAILURE, exception })
    })
  })
}
The problem begins when you have to do more than one asynchronous thing in the same flow. Either the code grows quickly, or you split it into multiple smaller actions that dispatch other actions, which dispatch yet more, making the whole logic harder to comprehend. The execution of the application becomes a graph that you have to mentally rotate in your head, and because of the splitting you sometimes lose important information and have to pass it around in convoluted ways - for example, triggering a store change to pass data to a component via props and then continuing the flow inside componentDidMount or componentWillReceiveProps.
But there is an easier way!
One of the lesser-known features of redux-thunk is composition. Whatever the function passed to dispatch returns will also be returned from that dispatch call - or, in simpler terms:
const foo = dispatch(dispatch => {
  return 'bar'
})
// foo === 'bar'
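To see why this works, here's a tiny stand-in for the thunk middleware - a sketch of the idea, not redux-thunk's actual source: when the dispatched action is a function, its return value (including any Promise) is passed straight through to the caller.

```javascript
// Minimal sketch of the pass-through behaviour redux-thunk gives dispatch.
function createThunkDispatch(getState, baseDispatch) {
  function dispatch(action) {
    if (typeof action === 'function') {
      // The thunk's return value (possibly a Promise) bubbles up to the caller.
      return action(dispatch, getState)
    }
    return baseDispatch(action)
  }
  return dispatch
}

const dispatch = createThunkDispatch(() => ({}), action => action)
dispatch(async () => 'done').then(result => console.log(result)) // logs "done"
```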
This means that if you return a Promise from a dispatch, you can then act on it and chain your callbacks to it with .then. This alone allows you to easily orchestrate multiple operations into one action creator but it gets even more powerful with async/await syntax:
const login = async (dispatch, getState) => {
  const { username, password } = getState().signInForm
  dispatch({ type: SIGN_IN_REQUEST })
  try {
    const { data, errors } = await signInApiCall(username, password)
    if (errors) {
      dispatch({ type: SIGN_IN_FAILURE, errors })
      return false
    }
    dispatch(setCurrentUser(data))
    await dispatch(fetchAppSettings()) // can throw an exception on failure
    await dispatch(updateLanguagePreferences())
    return true
  } catch (exception) {
    dispatch({ type: SIGN_IN_FAILURE, exception })
    return false
  }
}
You can then use this code in your component and keep the navigation logic away from your actions and in the components, making it more reusable:
async onSubmit(e) {
  const { login, navigate } = this.props
  e.preventDefault()
  if (await login()) {
    navigate('/sign-in-success')
  }
}
This will also make your code easier to follow and simpler to understand which is usually a good idea since you'll be the one maintaining it in the future ;)
Ready, set, wash your hands - gamification of office etiquette
We've all been there - someone left a dirty coffee mug on display, turned off the light in the bathroom while we were inside, or forgot to replace the toilet paper roll with a fresh one when we most needed it. We are all people, and office etiquette can sometimes be a bit lacking, especially when we're in a hurry or absent-minded.
After being left in the darkness one time too many, I decided to jokingly remind people of the basic office rules that should be upheld by everyone. What started as a little tongue-in-cheek joke turned into a pretty interesting side project, so I decided to write this blog post and tell a little more about it.
The Idea
This was not my first office project like this, so I was tempted to just rehash my old idea of making atmospheric posters. After a bit of consideration I decided against it, as I wanted to do something more original. In the meantime I'd had a great idea for an office game based on Lifeline but with more complicated, spatial mechanics (I'm not writing more about it here because I still hope I'll build it one day) - that one, however, would take way too much time.
I decided to mix those two ideas, and that's how it all began. The mechanics of the game were super simple - there were 8 stickers hidden around the office, each with a QR code, an image, and a (hopefully) funny description. They looked like this:
Once you scan the QR code, you are taken to a website that adds a badge to your backpack. Once you collect all of them, you get an extra final badge. Every badge has an icon, a name (not visible on the stickers), and a description.
The Design
Since I'm usually a web developer, I don't have a huge amount of graphic talent, so I had to improvise. All of the icons are based on flat design with shadows, something that was super popular relatively recently (not sure about now). I also very generously "borrowed" a lot of shapes from the amazing Font Awesome 5 (buying the commercial license as an early bird on Kickstarter was one of the best decisions ever!).
The App
The main focus of the badge-collecting app was performance. I wanted it to feel instantaneous, with almost zero loading time, even on a slow internet connection. When I started working on the project my idea was to just host a bunch of static HTML pages, but I've been coding in React for so long that it felt like a chore. I also tried experimenting with both preact and inferno, but neither felt right and the bundle size was still not satisfactory (after all, the best code is no code at all).
In the end I went with a pretty interesting approach: on the server side I use React with Razzle to render the pages, then I push raw HTML to the client and apply super-minimal vanilla JS on top of it. This was the perfect mix because it allowed me to code in something I feel proficient in (and use all the component-based goodness I've learned to love), while what was delivered to the user did not require a 100+ KB bundle that would slow everything down to a crawl.
The extra bonus of it all was that it was now impossible to search the app code for the badge URLs ;)
Animations
The other selling point of the app was the animations. I put a lot of effort into making them silky smooth at 60 fps. The most interesting part was animating SVG using only CSS transforms - I created every icon in Affinity Designer, then exported it to SVG and used SVGOMG to minify the file and simplify paths. I then modified all the files by hand, adding HTML IDs and wrapping certain elements in groups with the <g/> tag. This allowed me to highly optimize all the animations to my needs.
There were several tricks that I used to orchestrate the animation properly:
Because the animations had to run the moment the page loaded, delayed loading of the images ruined the composition, so I inlined all SVGs using svg-inline-loader instead of the <img/> tag.
When animating multiple objects, I learned that it is much, much easier to set the entire animation length to something longer (like 10s) and then control duration and delay using @keyframes and percentages rather than raw seconds. This made everything more predictable because it felt like moving clips on a timeline in a video editor, and the individual animations were never out of sync. The frustrating part was that increasing or decreasing the overall animation time required recalculating all the % values, but if I had to maintain it longer, I'm sure I'd fix this with custom Sass mixins.
The trickiest part was animating the shadow on the badge. At first I thought it would just be a case of rotating a square and using it as a mask, but that was really unpredictable and didn't look great. In the end, I used a neat little trick with clip-path: instead of animating a square, I'm actually animating a 5-sided polygon. Why 5 sides? Well:
Text rendering was surprisingly difficult to control, and font loading was the most unreliable part, so I created all the texts as SVGs with the fonts converted to paths. This took less space than loading an entire font, I was able to inline it, and I got predictable scaling as a bonus.
Responsiveness
The last goal for this app was full responsiveness on all possible devices. This was done using viewport width and height units and abusing the fact that both the SVGs and the texts are fully scalable. The app works in both portrait and landscape mode on mobile, desktop, and tablet, and everything is dynamically scaled to 100% width and height.
There is an interesting bug… sorry, "feature" on iPhone where 100% viewport height is actually a bit taller than the visible viewport due to the browser UI on top, which means you have to scroll a bit to see the whole picture. Fortunately it wasn't too bad in this case, but it was definitely a bit frustrating.
The Final Result
As the stickers are no longer in the office and I've decided to make the whole thing public, you can check out the badges yourself (apologies for SSL being broken but it seems that Heroku ACM only works for paid dynos and I've switched the dyno to Free already):
Squeaky Clean
Fear of the Dark
Om Nom Nom
It's All About Paperwork
Let's Make It Clean
Blah Blah Blah
Rinse And Repeat
Jenga!
and the final badge: Office Revolution!
Unfortunately I won't be open-sourcing this, as I didn't keep a list of all the icon inspirations and I'd rather not breach any license agreements, sorry! The most interesting part is in the CSS anyway, and you can recreate that yourself as homework (or ping me on Twitter and I'll send you the SCSS file) ;)
Now go forth, figure out your own fun stuff and troll your coworkers. Enjoy!
Debugging tests with DevTools in Javascript
If you're debugging your JavaScript tests with console.log, then you've been doing it wrong for too long!
Install this Chrome extension first. I recommend you disable the AUTO mode and stick to MANUAL (or disable AUTO when you end your debugging session), as it unnecessarily drains CPU power when not in use.
Then add this to the scripts section of your package.json:
"test-debug": "node --inspect-brk ./node_modules/.bin/jest --runInBand"
(If you're not using jest then replace it with whatever command you have in "test")
Now when you want to debug your tests, just leave a debugger statement anywhere you need and run them with npm run test-debug (or yarn run test-debug). If you want to run just one file, add it to the command:
# yarn doesn't require --
npm run test-debug -- __tests__/foo.js
The --inspect-brk flag stops the execution of the file and waits for you to connect with DevTools, so don't forget to unpause it when you're ready.
Enjoy your debugging!
Testing React components
If you're writing your front-ends in React, are you writing tests?
Testing is hard. We all know it. It's practically a separate skill that needs to be constantly trained and improved in addition to your usual programming abilities. That's why people find it scary, especially when they're coming from a back-end environment. In this blog post I want to prove to you that testing your React front-end may actually be easy to set up and quite pleasant to do.
What this blog post does not touch on are the rules and best practices of testing. I'm afraid there are entire books dedicated to this topic, and it takes years of practice to start noticing certain patterns on your own. I will, however, try to help you begin your journey.
Before we begin, let's discuss our tooling. We are going to use Jest as the platform. Its biggest advantage over other tools is that it's (almost) zero configuration - we can drop it into our project without extra complications and it will take care of configuring and compiling our tests with Babel for us. If you're using create-react-app, it's already added to the project, so there's one less thing for you to worry about ;)
Other than Jest, I recommend the amazing Sinon.js for all your stubbing, mocking, and XHR-faking needs, and Enzyme for rendering your React components and poking them to check how they work and why. The last ingredient of our secret testing sauce is enzyme-to-json - it provides seamless integration between Jest snapshot testing (more about that in a moment) and Enzyme.
Add all of these to your project and keep on reading:
yarn add --dev jest sinon enzyme enzyme-to-json
Different kinds of rendering
Before we start our test-writing adventure, there's something you need to understand first. When we test React components, we are not actually rendering them in the browser (OK, we do, but only in very specific kinds of tests). Often we don't even want to touch the DOM and generate HTML, as it doesn't make sense and only makes our tests slower.
There are three ways we can render our components using Enzyme:
Shallow rendering
Shallow rendering is the kind of rendering you want to use as often as possible. It takes your component and renders only the first layer: native elements (like <div></div>) and placeholders for other components with their passed props, but without actually rendering them (or running any of their lifecycle methods). It also doesn't touch the DOM, opting to return simple objects instead.
This is the best kind of rendering to use in your tests - it's the fastest, and it keeps your tests decoupled from child components, so if any of those children break, your other tests don't break with them, making it easier to pinpoint the problem.
Since this kind of rendering does not generate DOM, it does not run all the lifecycle methods, so you don't have to worry about your componentDidMount crashing. This also means that if for some reason you're doing something there other than additional manipulation of the rendered output (for example, fetching data in componentDidMount instead of componentWillMount), it will never get executed in your tests.
The entire API is available in the documentation, and you will probably want to keep that tab open until you're confident you remember what you need from it.
Full rendering (mount)
If for some reason you need access to the full DOM API (for example because you're testing a component that wraps an external library, you're testing browser events, or you want componentDidMount to run), Enzyme has your back. Full rendering using mount() renders your component and all of its children, using JSDOM (a browserless JavaScript implementation of the DOM) to make sure all those extra manual addEventListener calls work properly.
You might be inclined to use this as often as possible, but unless you have a very good reason you should avoid doing a full mount. It's much slower than shallow rendering, and it introduces coupling between your parent component's test and its children. Imagine a situation where you always do a full mount and then introduce a bug in an <Icon/> component. Suddenly all your tests crash, and it's much harder to figure out where, when, and why it happened.
The API is similar to shallow and it's also available in the documentation of Enzyme.
Static HTML rendering
This is the last kind of rendering. It uses ReactDOM's static rendering underneath, so it works like server-side rendering, but the result is then wrapped in the Cheerio.js library to make the HTML output easier to traverse and analyze.
In this case only example usage is in the docs, and you should consult the cheerio documentation instead. I haven't yet felt the need to use this kind of testing, but it might be useful in your particular case, so it's good to know you have the option.
Unwrapping components
One of the things most people starting out with component testing trip over is components wrapped in a HOC (Higher-Order Component). If you're using something like Redux or MobX, you're probably used to exporting your component wrapped in either connect() or @inject. Those functions work by creating an extra component above yours that passes extra data down via React context.
Unfortunately, this will cause a problem when using shallow rendering because it will only render an empty container and not the thing we want to test.
How can we avoid this problem? There are three ways:
Dive!
The solution most people try first is to use .dive() to render one component deeper. Regrettably, this can be very confusing and throw hard-to-debug errors, as it does not respect context being passed from the wrapping component. I would avoid it if possible.
Use escape hatches
Almost every library I've seen so far implements some kind of escape hatch to give you access to the original component. The problem with this solution is that you always need to remember what the escape hatch was and in which order it should be applied. For example, if you use withRouter and inject on one component, you would have to unwrap it like this:
const TestedButton = Button.wrappedComponent.WrappedComponent
As you may imagine, this is not a perfect solution and it does not scale very well.
Export unwrapped component
My favorite way of dealing with this extra complexity is to export the component without any wrappers, in addition to the default export treated with inject, connect, withRouter etc.
The only real drawback is that we cannot use @decorators on the entire class and we need to do the wrapping ourselves on export:
export { MyComponent as UnwrappedMyComponent }
export default inject('store')(withRouter(MyComponent))
I don't consider this a huge hassle (the export is only a bit more verbose) and it allows us to test actual components, not their coupling with the rest of the system.
All right, we have the theory, we have the links to documentation, we're ready to start writing tests.
But what exactly do we want to test? How do we want to do this? What kinds of tests are there?
Types of component tests
There are multiple naming conventions regarding tests so don't treat the following list as the one and only possible way of categorization.
Snapshot tests
These are the simplest tests to write and can be used as a quick (and brittle) way of getting as much test coverage as possible. They work by serializing the result of rendering to a JSON file called a snapshot and then, during future runs, comparing it with new render results to make sure they are unchanged.
import React from 'react'
import { shallow } from 'enzyme'
import toJson from 'enzyme-to-json'
import { UnwrappedButton } from 'components/button'

describe('Button', () => {
  it('matches the snapshot', () => {
    const component = shallow(<UnwrappedButton store={{ label: 'foo' }}/>)
    // this line is important
    expect(toJson(component)).toMatchSnapshot()
  })
})
I've mentioned that these tests are brittle. Because of the way they work, they will fail the moment anything in your component changes visually, which means even fixing a typo or changing a className will break them. They are still very useful as a sanity check, especially when doing larger refactorings.
The other thing you need to be very careful about is that all props passed to your components will also be serialized. This means that if you pass a huge object (like an entire, non-mocked store) during your tests, the resulting JSON might be enormous, which will make the test run (seemingly) forever - we had this problem when someone passed a non-mocked store that kept an instance of HLS.js into a test. The resulting JSON was 5 MB (!)
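Mechanically, a snapshot test is nothing more than "serialize the render result, store it, compare on the next run". A stripped-down sketch of the idea in plain JavaScript (the matchSnapshot helper and the in-memory store are hypothetical illustrations — Jest writes to __snapshots__ files on disk instead):

```javascript
// In-memory stand-in for the __snapshots__ file Jest keeps on disk.
const snapshots = {}

// First run: record the serialized render. Later runs: compare against it.
function matchSnapshot(name, rendered) {
  const serialized = JSON.stringify(rendered, null, 2)
  if (!(name in snapshots)) {
    snapshots[name] = serialized // first run: record and pass
    return true
  }
  return snapshots[name] === serialized // later runs: compare
}

// Pretend render result for a <Button label="foo"/> component.
const render = (label) => ({ type: 'button', props: { label } })

console.log(matchSnapshot('Button', render('foo'))) // true  (recorded)
console.log(matchSnapshot('Button', render('foo'))) // true  (unchanged)
console.log(matchSnapshot('Button', render('bar'))) // false (anything changed = failure)
```

This also makes the brittleness obvious: any difference at all in the serialized output, however cosmetic, fails the comparison.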
Rendering tests
Rendering tests are (in this case) tests that test... rendering. More specifically, they're a more precise version of snapshot tests: they render a component and then poke around it to check that props were passed correctly and that all the necessary UI elements were rendered and are available.
import React from 'react'
import { shallow } from 'enzyme'
import { UnwrappedButton } from 'components/button'

describe('Button', () => {
  it('renders label', () => {
    const component = shallow(<UnwrappedButton store={{ label: 'foo' }}/>)
    expect(component.find('button').text()).toEqual('foo')
  })
})
They provide better insight into what is happening inside the component, which makes them more useful when working in teams - if someone needs to know how a component works, they can just look at these and the behavior tests (more in a moment) and make an educated guess, which is much harder with snapshots.
Behaviour tests
The most important tests of all, and the ones you absolutely have to write even if you slack off and ignore all the other categories, are behavior tests. They're the bread and butter of your application's test suite - they test how your application behaves when user interactions happen.
const sandbox = sinon.createSandbox()

describe('EmptyForm', () => {
  afterAll(() => sandbox.restore())

  it('triggers form object onSuccess on submit', () => {
    const component = shallow(<EmptyForm/>)
    const instance = component.instance()
    const stub = sandbox.stub(instance.form, 'onSuccess').returns(true)

    component.find('form').simulate('submit')

    expect(stub.calledOnce).toBe(true)
  })
})
Behaviour tests mostly simulate browser events and focus on checking if event handlers are attached properly. Side effects, like network requests, timers etc. should be mocked/stubbed to avoid test coupling.
There is an important caveat when writing behavior tests with Enzyme - you may think that .simulate() simulates actual browser events, but that is not entirely true. What it actually does is find the event handler and call it, passing along any extra data we provide. It does not support things like event bubbling or calling onChange when simulating keypresses. If you need any of those advanced features, you have to code them yourself.
If it makes it easier to wrap your head around it, remember that those two are more or less equivalent:
component.simulate('change', { target: { value: 'abc' } })
component.prop('onChange')({ target: { value: 'abc' } })
Integration tests
Integration tests test the communication between components. They benefit the most from a full mount, as they need to actually run more than one component in a nested tree and see how all the parts fit together. They are basically behavior tests, but for groups of components.
describe('Article', () => {
  it('displays comments after clicking a show comments button', () => {
    const article = mount(<Article store={mockedStore}/>)
    article.find('ShowComments').simulate('click')
    expect(article.find('Comment').length).toBe(5)
  })
})
Most of the time, every part of an integration test can also be written as a series of smaller behavior tests with the behavior between them mocked. Integration tests should check that this mocked behavior is actually connected properly.
System tests
Last on our list are system tests. They are very similar to integration tests but instead of running in a simulated environment, they run in an actual browser. They are also, unfortunately, the slowest of the bunch, so it's a good idea to separate them from your main test suite and launch them, for example, only on a CI server (instead of every time you change something).
import Nightmare from 'nightmare'

describe('App', () => {
  it('renders the initial app', async () => {
    const nightmare = Nightmare()
    return nightmare
      .goto('http://localhost:4000')
      .type('#search_form_input_homepage', 'github nightmare')
      .click('#search_button_homepage')
      .wait('#r1-0 a.result__a')
      .evaluate(() => document.querySelector('#r1-0 a.result__a').href)
      .end()
      .then((link) => {
        expect(link).toBe('https://github.com/segmentio/nightmare')
      })
  })
})
System tests require an extra library that takes care of opening your application and passing your commands to the browser. The two I found most interesting are Nightwatch.js, which uses Selenium, and Nightmare, which runs on Electron. Historically Selenium was usually a bit tricky to configure properly, but a lot has changed since PhantomJS got abandoned and headless Chrome became the new standard, so your mileage may vary.
Configuring and running system tests can be complicated depending on the environment so I've released jest-str, a simple system test runner that contains preconfigured presets for popular boilerplates (at the moment of writing this blog post there are two - for create-react-app and razzle). If you want your favorite boilerplate to also get an official preset, feel free to send a PR :)
Great, we now know the different ways of testing components. If you've been reading the code examples carefully, you probably have a general idea of how we're doing it already, but let's get more specific.
Using Jest and Jest matchers
As you might've noticed already, Jest uses a spec-like syntax to define tests. Tests can be grouped using describe blocks for easier navigation, for marking the boundaries of the behaviors being tested and for better error messages. The tests themselves live in it blocks and wrap tested values in expect().
There are 26 matchers which may seem overwhelming at first so when beginning your journey, focus on these 4:
toBe - checks for strict equality (===), useful for comparing numbers, true/false values, exact strings etc.
toEqual - performs deep value comparison, useful when checking for objects that have the same fields but are not necessarily the same object (ex. expect(myObj).toEqual({ foo: 1 }))
toThrow - checks if the function passed to expect() has thrown an exception (ex. expect(() => something()).toThrow(/fail/)); remember to always create an anonymous function in expect() to avoid problems with scoping
toMatchSnapshot - used for snapshot testing
Once your tests get more complex and you've got the general gist of it, you will notice that you need more than the above. It can then be very helpful to keep the matcher documentation page around.
Using spies and stubs and sandboxes
Sinon.js gives us tools to observe the inner workings of our application and change them when necessary. Those tools are spies and stubs.
Spies
Spies are functions that keep track of how many times, and with what arguments, they were called. They can also be used as a wrapper around the original function, returning its value. Spies are most useful for testing behaviors, for example to check if a component has triggered an action in a store.
I don't want to artificially make this blog post longer, so check out the documentation to see everything that's possible with them.
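Conceptually, a spy is just a wrapper function that records its calls before delegating to the original. This hand-rolled makeSpy (a toy illustration, not Sinon's actual implementation) shows the idea:

```javascript
// Minimal spy: wraps fn, records arguments and return value of every call.
function makeSpy(fn = () => undefined) {
  const spy = (...args) => {
    const returned = fn(...args)
    spy.calls.push({ args, returned })
    return returned
  }
  spy.calls = []
  spy.calledOnce = () => spy.calls.length === 1
  return spy
}

// Spy on a pretend click handler without changing its behavior.
const onClick = makeSpy((x) => x * 2)
onClick(21)

console.log(onClick.calledOnce())      // true
console.log(onClick.calls[0].args)     // [ 21 ]
console.log(onClick.calls[0].returned) // 42
```

Sinon's real spies expose the same information through properties like callCount, calledWith and returnValues.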
Stubs
Stubs are spies with controlled behavior. They share the same API as spies, but can also be made to return specific values, run their own fake function, call the original function (or wrap it), throw errors, reject promises etc.
They are most useful for disabling parts of the application we do not want to test at the moment (XHR, complex operations, side effects etc.) or when we want to make sure a certain code path is being run (like making sure an if() somewhere gets true/false, or when testing error handling).
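A stub is the same recording wrapper, but with behavior dictated by the test instead of a real implementation. A toy makeStub (again, an illustration rather than Sinon's API) that returns a canned value:

```javascript
// Minimal stub: records calls like a spy, but returns a fixed value
// instead of running any real implementation.
function makeStub(returnValue) {
  const stub = (...args) => {
    stub.calls.push(args)
    return returnValue
  }
  stub.calls = []
  return stub
}

// Force the "network succeeded" path without touching XHR at all.
const fetchUser = makeStub({ id: 1, name: 'Ada' })
const user = fetchUser('/users/1')

console.log(user.name)              // Ada
console.log(fetchUser.calls.length) // 1
```

With Sinon you get the same effect via sinon.stub().returns(...), plus extras like throws() and rejects() for exercising error handling.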
Sandbox
Mocking functions inside stores is nice, but what about test isolation? If we replace a method inside a pre-existing object with a stub, it may leak into other tests, making them dependent on the order in which they run. This is not an ideal scenario, but fortunately Sinon implements sandboxes.
A sandbox is basically an orchestrating object that keeps track of all the spies and stubs and allows us to reset them after every test. It's usually set up like this:
const sandbox = sinon.createSandbox()

describe('something', () => {
  afterEach(() => sandbox.restore())
  // ...
})
When using a sandbox, remember to create spies and stubs using sandbox.spy() / sandbox.stub() instead of sinon.spy() / sinon.stub().
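What the sandbox buys you is automatic restoration: every replaced method is put back in one call, so a stub from one test cannot leak into the next. A bare-bones sketch of that bookkeeping (a hypothetical createSandbox, not Sinon's internals):

```javascript
// Tracks (object, key, original) triples so everything can be undone at once.
function createSandbox() {
  const replaced = []
  return {
    stub(obj, key, fake) {
      replaced.push({ obj, key, original: obj[key] })
      obj[key] = fake
    },
    restore() {
      for (const { obj, key, original } of replaced) obj[key] = original
      replaced.length = 0
    },
  }
}

const store = { save: () => 'wrote to real backend' }
const sandbox = createSandbox()

sandbox.stub(store, 'save', () => 'stubbed')
console.log(store.save()) // stubbed

sandbox.restore() // this is what afterEach(() => sandbox.restore()) does
console.log(store.save()) // wrote to real backend
```

Without the restore() step, the stubbed save would silently stay in place for every test that runs afterwards.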
Stubbing network requests
There are multiple techniques to achieve that, depending on the library you're using for network requests.
If you're using something like $.ajax then check out Sinon's fake XHR server.
For axios you might be inclined to use moxios, the official testing library from the same author. I wouldn't recommend it, as moxios is rarely updated and lacks some features that can be necessary for more complex flows. Personally, I use axios-mock-adapter, as it has a much better and more powerful API.
Finally, if you're using the Fetch API, consider fetch-mock. It has a feature set similar to axios-mock-adapter. Keep in mind that fetch() is a browser API, so you will need a polyfill like isomorphic-fetch to make it work in tests!
General guidelines on testing
There are a couple of things you should remember while writing tests. If you don't, you will trip, hurt yourself and drop front-end testing once again.
Avoid testing wrapped components. I mean it - always make sure your components are unwrapped, otherwise you will encounter a world of pain and arcane errors. This is where most of the people I've talked with bounced off testing altogether.
The only good moment to test wrapped components is during integration tests with full mount.
Mock your stores whenever possible. It's tempting to just instantiate your entire Store class in tests, but that tightly couples your tests to the current shape of your store and, in the case of complex apps, makes them significantly slower.
Keep tests simple. You may be tempted to test 10 different things in one test case (it will be faster!) but it will once again make your tests more brittle and prone to failing. It's better to keep a laser focus, so that every time your test suite fails you get a clear message about what went wrong, where and why.
Group your tests. There's nothing worse than a huge file full of ungrouped, unordered tests. Be careful not to overdo it though, one or two levels of nesting are usually more than enough.
Keep your tests in one describe() block. It will make your life easier with before / after blocks, especially when it comes to sandboxing or stubbing requests.
And that's it! As you can see, testing your application components is not that hard, and hopefully you now have a better idea of where to begin :)
Keep your code bugs free!
RubyMine and Jest
Do you use RubyMine and write Jest tests? Is this driving you nuts?
Go to Preferences > Languages & Frameworks > JavaScript > Libraries. Click Download and select TypeScript community stubs on top. Scroll down a little bit, find jest and Download and Install it.
Make sure you enable it; on the Libraries list it will show up as jest-DefinitelyTyped.
So much better...
Use promisify() for more readable code
If the function or method you're using accepts a callback with two arguments in this exact order - an error and a positive result - then you're dealing with the node-style callback pattern. This was a very common pattern for async methods before the advent of promises. The good thing is, you can convert such functions to promises automatically by using promisify. The native util.promisify is available only in Node 8+, so if you're not using a third-party Promise implementation like Bluebird, you may need a polyfill such as the util.promisify package.
Be careful, as different libraries implement different argument sets for promisify. A previous version of this blog post assumed they're all more or less identical and contained code mistakes because of that.
Here's an example of where promisify can simplify code:
const client = s3.createClient({ s3Options })

client.s3.headObject({
  s3Params: { Bucket: 'some-bucket', Key: 'filename.jpg' }
}, (error, data) => {
  console.log(data.ContentLength)
})
As you can see, the async function in this example uses the (args, callback) pattern, where the callback receives (err, value). This means we can use promisify!
const promisify = require('util.promisify') // import promisify from 'util.promisify'
const client = s3.createClient({ s3Options })

// be careful: if the function uses `this` inside, you need to bind it yourself correctly:
const headObjectAsync = promisify(client.s3.headObject.bind(client.s3))
Now we can use it either with plain Promise, or even better, async/await syntax:
const getContentLength = async function(filename) {
  try {
    const data = await headObjectAsync({
      s3Params: { Bucket: 'some-bucket', Key: filename }
    })
    return data.ContentLength
  } catch(error) {
    console.error(error)
  }
}
Now, isn't that more readable? :)
How to make Touch Bar actually useful for web developer
When the new MacBooks with the Touch Bar were released in 2016, I was struggling. The new Touch Bar was pretty useless in my workflow; I kept accidentally touching action keys and missed the good ol' ESC. It's been over half a year now and barely anything has changed in this area - most of the applications I use daily either don't support the Touch Bar or just add a few gimmicky buttons to cross out a TODO task called "support latest MBP". This situation sucked, I wanted to do something about it, and I finally found a way.
Everything in this blog post is possible thanks to a great tool called BetterTouchTool. It's an amazing piece of software and the author is a decent human being, so spare those extra few bucks and buy it, even if just to support a fellow developer (I have no connection to the author and this is not a paid advertisement; there are no paid advertisements on my blog).
There are two things that I've done to make my Touch Bar a bit better but the idea is generic enough anyone can modify it to suit their needs: I've added Webpack and Codeship buttons that show me the status of both my development build and CI tests.
Finally something I can use this "Touch Bar" thing for...@codeship pic.twitter.com/ZPZVKNneae
— Michał Matyas (@nerdblogpl) July 4, 2017
We are going to begin by configuring BTT to take over our Touch Bar and replace it with its own menu. After installing BTT, open it and go to Gestures > Touch Bar. Click "General Touch Bar Settings" and configure it like this:
This will completely replace the Touch Bar and make BTT control it system-wide. You might be tempted to leave Show macOS Control Strip checked, but this will cause an ugly close button to show up on the left side of Esc, moving the key to the right and screwing with your muscle memory. If that's OK with you then sure, go ahead, but it was personally driving me crazy ;)
We are now ready to create our first Widget. Unfortunately, since the Touch Bar support is new, our selection of widgets is really limited. I wish there was a more universal and nice looking way of adding custom hacks but since there is not, we're going to use Run Apple Script and Show Return Value to do our bidding.
For starters, we need a simple script that will ping Codeship and return the build status. Since I don't really enjoy writing AppleScript, I've decided to side-step the problem by writing a Ruby script instead. The script is available as a gist, and it was written in a way that works with the system Ruby. It should also be pretty straightforward to set up: just copy everything to a file on your hard drive and insert your API key in the 6th line. It does very simple caching to make sure we are not hitting Codeship's servers more than we need to.
After you've configured the script and dropped it somewhere in your home directory, we need to configure it in BTT. Go back to the Touch Bar section of Gestures, click + Widget, select the previously mentioned Run Apple Script and Show Return Value and click Advanced Configuration. In Name type anything (Codeship, for example), set the script to return do shell script "/usr/bin/ruby /Users/me/codeship.rb" (or wherever you've put the script) and set it to execute every 1s. You can also add the Codeship logo and set an Alternate Color - on the right, under Result, just type 'failed' as the regex and set the icon and color as you wish.
And that's it!
As I mentioned, the general idea is very simple. I've used the same technique to add webpack status as well: first I installed bitbar-webpack-status-plugin (I could've written my own, but judging by the amount of code, I'd just be duplicating someone's work), then I wrote a tiny script that translates -1/0/1 into human-readable messages and voila!
Let me know in the comments if you find any other good tips and hacks that could be applied with this :)
Non-blocking command-line I/O in Ruby using nio4r and Open3
If you're writing an integration with external tooling through the command line, you probably want control over how long the CLI command is going to run. You can of course do what some do - spawn another thread and keep a counter there, while monkey-patching a core Ruby class in an unrelated gem, because why the hell not...
You can also use the bundled Ruby library called Timeout - that is, of course, if you're not worried about crashes or about stopping your code in the middle of doing something else.
And of course you can use asynchronous non-blocking evented I/O. If you know how to do it, obviously.
While experimenting with implementing my own library for communicating with ffmpeg, I was looking for a solution that didn't involve either the Timeout module or spawning another Ruby thread for no good reason. I came across nio4r - a nonblocking, low-level I/O gem based on libev / Java NIO that is used as a building block for larger software packages like Celluloid or ActionCable. Imagine my surprise when I couldn't find anything - really, anything! - about how to use this library for something other than listening for incoming TCP connections.
Something like monitoring the I/O of a spawned process, for example, with the ability to time it out if it doesn't respond in time, while keeping the number of threads to a minimum?
require 'nio'
require 'open3'

selector = NIO::Selector.new

stdin, stdout, stderr, thread = Open3.popen3("ffmpeg -i file.mp4 file.mp3")

monitor_stdout = selector.register(stdout, :r)
monitor_stderr = selector.register(stderr, :r)

monitor_stdout.value = proc { puts "Got some data: #{monitor_stdout.io.read_nonblock(4096)}" }
monitor_stderr.value = proc { puts "Got some error: #{monitor_stderr.io.read_nonblock(4096)}" }

timeout = 30 # seconds

loop do
  begin
    ready = selector.select(timeout)
    raise 'Command timeout' if ready.nil?
    ready.each { |m| m.value.call }
  rescue EOFError
    break
  end
end
There, isn't that better?
There is one important caveat to consider with this approach. When using synchronous read/write with Open3, your callback / block of code yields on every line read. This approach, however, does not wait patiently for the IO to pipe anything, but rather uses your system's polling mechanism (epoll/kqueue/inotify etc.). This means that if the CLI tool you're using doesn't play nice and doesn't buffer its output (ffmpeg, for example, doesn't), your callback may be yielded a partial chunk of data rather than something more tangible, like a single line of output. Be wary and test often.
If you need more than just a code example to get you going, consider reading Tony Arcieri's excellent post, A gentle introduction to nio4r: low-level portable asynchronous I/O for Ruby.
Hope I could help!
Testing command line integrations with URLs as input
If you're using some kind of external integration that communicates through the command line rather than a direct API, testing that integration may pose problems. You can of course use the excellent aruba gem, but it will not save your bacon if the library takes URLs as input.
This is exactly the problem I had when trying to add remote URL support to streamio-ffmpeg, the gem we use to communicate with ffmpeg and ffprobe in one of our projects. I wanted to test that ffprobe fetches metadata and ffmpeg transcodes from a URL correctly, and I needed something simple and fast so it could be added to an open source project without adding an unnecessary maintenance burden.
Webrick to the rescue!
Remember WEBrick? That default Ruby server you used for about 15 minutes when starting with Rails, just before you switched to puma or unicorn? It turns out it has been part of the standard library since MRI 1.9.3 and it can be used for much simpler things than starting a whole app. It can be used in any script at any time to provide a simple HTTP server with file download support. Exactly what we need!
Let's copy some code from documentation and try it out:
context "given an URL" do
  context "that is correct" do
    before(:context) do
      @server = WEBrick::HTTPServer.new(Port: 8000, DocumentRoot: "#{fixture_path}/movies")
      @server.start
    end

    after(:context) do
      @server.shutdown
    end

    let(:movie) { Movie.new("http://127.0.0.1:8000/awesome%20movie.mov") }

    it "should be valid" do
      expect(movie).to be_valid
    end
  end
end
We run this test suite aaaand... it flops. We forgot that WEBrick::HTTPServer#start starts our HTTP server in the same thread as the tests, effectively stopping the test suite from running. Don't worry, it's very simple to fix:
before(:context) do
  @server = WEBrick::HTTPServer.new(
    Port: 8000,
    DocumentRoot: "#{fixture_path}/movies"
  )
  Thread.new { @server.start }
end

after(:context) do
  @server.shutdown
end
We don't need to keep a handle to the thread we created and call Thread#join at the end, as shutting down the server or finishing the test suite will kill the thread anyway. The only problem left is a lot of unnecessary messaging coming from WEBrick and polluting our test suite's output. Let's fix that:
before(:context) do
  @server = WEBrick::HTTPServer.new(
    Port: 8000,
    DocumentRoot: "#{fixture_path}/movies",
    Logger: WEBrick::Log.new(File.open(File::NULL, 'w')),
    AccessLog: []
  )
  Thread.new { @server.start }
end

after(:context) do
  @server.shutdown
end
This is a very simple and naive way of testing command line tools with URL inputs. It does not proxy anything, meaning that if the library we're using sends HTTP requests to hosts other than the ones we provide, we're not able to catch and stub them. It is, however, a simple and effective solution for simpler cases and integrations, so feel free to build upon it if you need something more complex.
Happy hacking!
My Spotify work-oriented sleep-killing playlist
I've been updating this playlist slowly for the past two years now and all I've got is 13 followers and worn out headphones.
Office savoir-vivre
If you work with more than a few people in one space, it is common to run into several problems with the local meatware. This post is a guideline to help you make the non-virtual part of your life smoother.
Wash your #&(@*$ dishes
The problem with putting your dishes in the sink at the end of the work day and hoping that magical gnomes will come out of their hiding places to clean up after you is that this isn't high school anymore, creatures from fables and legends don't exist, and it was your mom doing the hard work all along.
Not to mention the more dishes you put into the sink, the harder it is to actually clean them, because you know, piles stack up, block access to the faucet etc.
If you have a dishwasher in your office, don't think for a moment this doesn't apply to you. Make using the dishwasher a team responsibility, for example as part of the daily entering/leaving-the-office routine. And don't you ever forget to take all the dishes out in the morning, otherwise you'll end up with a half-clean, half-dirty guesswork mess.
Keep your coffee maker clean
Do you have a coffee maker in the office? Of course you have a coffee maker, we are not animals. But did you know that this complicated piece of machinery does a horrible job of keeping itself clean? Yes, yes, those automated cleaning programs do a "more or less OK" job of making sure you don't get food poisoning, most of the time. Not to mention they actually have to be run once in a while.
And then there's the milk foaming system.
Have you ever had the chance to smell old milk that was kept out of the fridge too long? Now imagine for a moment that in most automatic coffee machines the same magical fluid is not cleaned up properly at the end of the day. It rests peacefully in the corners and hard-to-reach places and dries, getting all crusty, just waiting to drop into your mug with the stream of freshly foamed milk.
Are you grossed out yet? Then better grab a sponge and start scraping.
Keep the paper rolling
Since we're on the subject of food poisoning - did you ever get a stomach ache from undercooked or unsanitary street food? Did you feel that rush of adrenaline as you sprinted to the bathroom, unzipping your pants just as you opened the door?
Now imagine some a-hole used up all the toilet paper and "forgot" to put out a fresh roll. Do I really need to keep going?
You are "someone else"
Everyone generates trash, right? You drink a Red Bull and you have a leftover can. You ate a sandwich and now you have wrapping paper in your hand. But when you finally go to throw it out, the trash can is already overflowing with garbage and it's pretty obvious that it needs to be taken out.
So you just put your paper on top of it and run away, right? You are not a sucker; someone else will do it.
Taking out the trash is not "someone else's" job. The bag is not that heavy and it doesn't make you a lesser person if you actually take the initiative, grab it and toss it into the dumpster outside. It has to be done eventually, and letting it pile up and stink is not going to make the problem go away. Be a responsible adult, your arms will not come off.
YANA (You Are Not Alone)
If you have the immense pleasure of working in a modern XXI century cattle house open space, then you must know how wonderful it is at making sure people can communicate freely and collaborate more openly in the spirit of bonding, friendship and magic.
Unfortunately if you have 20 people in the room and two of them are loudly collaborating, this means 18 people don't.
When you are talking with someone - it doesn't matter if you're working or telling jokes - if you are in an open space, you are loud enough for everyone to hear you. This means the laughter of your buddies is probably interrupting somebody's work, crippling their concentration and often kicking them out of the zone. Sure, everyone has headphones these days, but sometimes when you have to think really, really hard, drowning out other people's idle chatter with loud music is not exactly the perfect solution, and sometimes it can even backfire (if you want to know how it feels, try reading a novel while listening to an audiobook at the same time).
If you haven't guessed already, this list is only half serious. I've been to many places and seen many offices, and the problems I've listed are common in most of them. It's surprising how often seemingly mature people act like spoiled, inconsiderate children when anything cleaning-related is involved.
At some point someone has to grow up and start being a responsible adult. Might as well be you.
wroc_love.rb 2016
Hey, this is a very late after-wroclove post. The funny thing is, I had it written two weeks ago already but simply forgot to post it. Oops! Anyway, here it goes:
The first day of the conference was mostly centered around amazing React.js/Redux workshops by Marcin, an amazing person and the author of one of the best Rails books we've ever read, Rails meets React.js. During the workshops we learned a bit more about ES6 and Redux, a JavaScript architecture based on ideas from functional programming and immutability of state. I wasn't convinced at first, but after seeing how it can actually look and work in practice (thanks Wiktor!), I must say I'm at least a bit intrigued. Another important lesson from the workshops was that more and more people are dropping CoffeeScript and moving to Babel/ES6, and considering that the development of CS has also slowed to a crawl, it's about time we start moving on or risk being left behind. Too bad, since we really loved CoffeeScript at Untitled Kingdom :(
After the workshops, Andrzej gave his talk "From legacy to DDD". It's not new material, so you can already watch it on YouTube (and we highly recommend it). It gives you an idea of how you can start publishing events in your application with no drawbacks and without removing any existing code, and then leverage those events later on to progressively make your application better. He also emphasized how important it is to *not* go async until your application actually requires it: a microservices-based architecture, while having huge benefits like modularity and strong boundaries between components, also has huge costs, like the risk of failure due to message passing, or not being able to keep data consistent and having to maintain eventual consistency. A microservices-based architecture should be your final destination, but writing your application using DDD from the ground up is hard, and it's much easier if you first learn your domain and the problems that come with the territory.
The last big talk that day was a live coding session in Opal.rb by Elia.
Even though I must admit I find writing Ruby code much more pleasant than JS, I am still far from convinced that Ruby-transcompiled-to-JS is the solution. One of the things that caught my attention in a negative way was how problematic finding mistakes and typos in the code was. What the debugger spewed out and the actual problem were sometimes two completely different things, which made it hard to go through the code and reason about it. I'm afraid that even passing 11 thousand RubySpecs is not going to help you with that :P
The second day of the conference started with Something Completely Different - Basia's talk about the R programming language. Unfortunately she missed the great opportunity to make a running joke. Too bad! The talk was very interesting, and even though I was so sleepy I had a hard time following, I think I'll give R a chance next time I need to analyze any larger data set.
Next up was something I had been eagerly waiting for, and I really can't wait until they put it on YouTube so I can link it here - Robert Pankowecki gave an amazing talk about the Saga pattern. It's one of the cornerstones of CQRS: in a nutshell, it allows you to process user stories by coordinating and routing messages between bounded contexts and aggregates, while at the same time making it possible to handle failures without writing if (or rescue) ladders.
I still don't feel like an expert in this field (DDD is hard) so I'm not going to go into details, but Robert gave us some amazing ideas on how to take event sourcing in our apps to the next logical step. He also showed how to create async handlers and reminded us of the pitfalls we need to avoid, like making sure your async code is not run before the transaction is committed.
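To give an idea of the pattern (this is my own minimal sketch in plain Ruby, with made-up event and command names - not code from the talk): a saga listens to events and dispatches the next command only once all its preconditions are met, so the failure handling lives in one place instead of being scattered across nested if/rescue ladders.

```ruby
# Hypothetical saga / process manager sketch. The command bus here is just
# anything callable; in a real app it would dispatch to an aggregate.
class OrderSaga
  def initialize(command_bus)
    @command_bus = command_bus
    @paid = @shipped = @completed = false
  end

  # React to domain events; issue a command once every precondition holds.
  def handle(event)
    case event
    when :order_paid     then @paid = true
    when :order_shipped  then @shipped = true
    when :payment_failed then @command_bus.call(:cancel_order)
    end

    if @paid && @shipped && !@completed
      @completed = true
      @command_bus.call(:complete_order)
    end
  end
end
```

No conditionals leak into the calling code - you just feed the saga events in whatever order they arrive, and it decides when to act.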
The next item on the agenda was a panel about code editors. It felt less like a panel and more like an ad stream, to be honest, with multiple people taking turns showing off their favourite stuff from various code editing tools. There was "no blood, no leader, no winner". There were also two presentations after this one, one about NixOS and one about ROS, but unfortunately I did not attend either of them (sorry!) and can't really comment.
Last but not least were obviously lightning talks:
I really enjoyed the talk from Gosia about keeping junior programmers happy. She pointed out that people are often discouraged if not motivated enough, and programming is very hard, so we should always make them feel their efforts are appreciated - something that is sometimes easy to forget. I also absolutely loved Nicolas's Sonic Pi kata - Sonic Pi is great software for combining programming and creating music, and I wholeheartedly recommend playing with it!
I even won a t-shirt for sharing horrible code (sorry for the creepy smile, I always look bad in photos):
The third day started strong with Oskar's talk about Ruby Object Mapper (ROM). It turns out that ActiveRecord is not (gasp!) the only database solution for Ruby, and ROM can be successfully used in production. After that, Kacper told us about Consumer-Driven Contracts and provided us with some additional research material.
After much turbulence, the topic of Friday's panel was "Rails Deployment". Many questions were asked during this panel, but the overall feeling was that unless you have a dedicated DevOps person on your team, it's almost always better to go with a cloud solution like Heroku or Amazon. The main reason is that it takes time to configure a server and even more time and effort to maintain it and keep it secure and up to date, and that eats up time we could spend on actually developing things. The additional price of a hosted solution is a bit painful, but it does pay off in the long run. Another interesting opinion, shared by all the panelists, was that it doesn't matter what kind of deployment management tool you use - Chef, Ansible, Puppet etc. all share the same goal and solve similar problems, so it's more a matter of taste than anything.
There was also one person invited from the crowd who was a huge Amazon fanboy (his words, not mine!), and what he said about OpsWorks and all of the Amazon tooling made me rethink how I feel about it - I really hate overly complex tools when I don't see an immediate benefit from that complexity, and all of the Amazon solutions I've worked with so far have been a tangled mess of arcane config files, horrible interfaces and complex relationships between services. Considering how many smart people there were in the room, I guess it's time to clench my teeth and try again ;)
The next talk probably felt the most professional and RubyConf quality. Part of the reason was probably the sound of Peter's amazing voice, but almost every slide felt eye-opening as well. He talked about LSP - the Liskov Substitution Principle - and what constitutes a violation of that principle. The slides, additional materials and soon the video of the talk are available on his website, and I urge you to listen, read and learn as much as you can to improve your code and make it more SOLID :)
The last talk was about payments - Sebastian told us a story about a female developer and the problems that arise when dealing with payments and multiple gateways. It was less about the code and more about the storytelling, which was a nice change of pace after three days of hard knowledge.
Then it was time for lightning talks! Unfortunately I don't have a photo of the whiteboard because my phone was acting up and I had to wipe it and go back to stock Android. I think one of the biggest highlights of the lightning talks was a very brave presentation by Tatiana about procrastination and using meditation to cope with it. Procrastination and depression are two very hard and very painful topics in the development world, something we'd rather forget existed, and it's nice to see people who aren't afraid of talking about them and reminding us we're not alone. It was also nice to learn about systemd syntax and how to use it properly to write your own daemon.
And that's it! Three very intensive and fun days spent with bright and amazing people. Huge and personal "thank you" to all the organizers, volunteers and sponsors for making this happen!
See you on wroc_love.rb 2017!
Escaping CarrierWave versions
This post was originally posted on Untitled Kingdom Company Blog
For one of the projects we have been working on lately, we needed to upload videos to the server, convert them to formats supported by browsers via the <video> tag, and serve them to users. Since Ruby and Ruby on Rails are our tools of choice, we used CarrierWave for file uploading and storage. CarrierWave supports processing input files into various versions, so we decided to give that feature a try.
Since processing video files takes a lot of time, we had to do it in the background, so we used carrierwave-backgrounder. Looking for a way to do the video processing itself, we found the carrierwave-video gem, which uses streamio-ffmpeg for transcoding, and it seemed fine for our task.
Our Gemfile looked like this:
```ruby
gem 'carrierwave', '0.10.0'
gem 'carrierwave-backgrounder', '0.4.2'
gem 'carrierwave-video', '0.5.6'
```
To encode videos to the proper formats we used the encode_video method from carrierwave-video:
```ruby
version :mp4 do
  process encode_video: [:mp4, { progress: :on_progress }]
end

version :webm do
  process encode_video: [:webm, { progress: :on_progress }]
end
```
After some time the project needed the ability to cut out small pieces of videos and serve them independently. At first we tried to do it the same way as the previous processing:
```ruby
version :mp4 do
  process encode: [:mp4, { progress: :on_progress }]
end

version :webm do
  process encode: [:webm, { progress: :on_progress }]
end

def encode(format, opts = {})
  encode_video(format, opts) do |movie, params|
    params[:custom] = "-ss #{model.playback.start_in_hms} -t #{model.playback.duration_in_hms}"
  end
end
```
Unfortunately this had a very serious drawback - since all of the versions were processed from the single file passed to the uploader, they had to be transcoded every time. That was far from perfect, as it took additional time and unnecessarily used resources, so we hacked our way around it a bit: since the original video was already in the formats we needed, we could just pass those versions as sources instead and copy the audio and video streams without re-encoding.
To make the code less cluttered, we sprinkled it with a bit of DSL:
```ruby
support_formats custom: proc { |model| "-ss #{model.start_time} -t #{model.duration}" },
                source: proc { |model, version| model.parent.file.versions[version].file },
                mode: :copy

def support_formats(support_opts = {})
  FORMATS.each do |version_name, opts|
    opts = opts.reverse_merge(support_opts)
    (conditional = opts.delete(:if)) && (conditional = conditional.to_sym)
    uploader = version(version_name, if: conditional) { process encode_format: [opts] }
    uploader[:uploader].class_eval <<-RUBY, __FILE__, __LINE__ + 1
      def full_filename(file)
        file + ".#{version_name}.#{opts[:extension]}" # forcing the extension, otherwise ffmpeg got confused
      end
    RUBY
  end
end

def encode_format(opts = {})
  cache_stored_file! if !cached?

  if opts[:mode] == :copy
    opts[:video_codec] = 'copy'
    opts[:audio_codec] = 'copy'
  end
  opts[:custom] = opts[:custom].call(model) if opts[:custom].respond_to?(:call)

  source = opts[:source] # resolve the source: a proc, a file, or the uploader's own file
  source = source.call(model, version_name) if source.respond_to?(:call)
  source = file if source.nil?
  source = source.path if source.respond_to?(:path)
  # etc
end
```
This setup kind of worked, but it posed a lot of problems: we didn't have control over which versions were transcoded, and we had to recreate every version if any of the transcodings failed. On top of that, if any transcoding was in progress during a deployment, Sidekiq had to be killed and restarted, and since it had no way of resuming from where it left off, the whole processing had to be either redone or ditched altogether and marked as crashed.
We tried various ways of mitigating this problem, but unfortunately using carrierwave-backgrounder made everything messier. That gem was great for simpler logic but choked a bit when we tried extending it further. It also caused the processing logic to be split across places it didn't belong: processing code ended up partially in the Sidekiq worker (because it was easy to set a custom worker when mounting an uploader), non-obvious or custom callbacks were thrown all over the place, and processing could start synchronously if we weren't careful enough. The API got brittle and the whole codebase gradually became a mess.
```ruby
class StreamUploadWorker < ::CarrierWave::Workers::StoreAsset
  def perform(*args)
    set_args(*args) if args.present?
    video = constantized_resource.find id

    # custom callbacks in model
    run_callback video, :before_upload
    super(*args)
    video.reload
    run_callback video, :after_upload_success
  rescue
    run_callback video, :after_upload_failure
    video.broken! if video.respond_to?(:broken!)
    # logging
  end

  def run_callback(video, callback)
    video.send(callback) if video.respond_to?(callback)
  end
end
```
The last straw, however, came with a new requirement. The project had matured enough to generate significant traffic, and serving multimedia content from a dedicated server no longer seemed like a viable solution. We needed to have files both on a local server (for processing) and on some kind of cloud solution (for hosting). Since CarrierWave versions are nothing more than differently named files in a directory, using them seemed like a bad idea considering the amount of patchwork needed. It was time to clean house.
We solved the problem by ditching versions altogether. We created a separate model for storing files that could have either a local or a remote (fog) uploader attached. Then we wrote our own transcoding logic - top-down, easy to understand - using streamio-ffmpeg directly, with a specified file as the source, putting the result in the path CarrierWave expected.
```ruby
class FileObject < ActiveRecord::Base
  belongs_to :owner, polymorphic: true

  class Local < FileObject
    mount_uploader :file, FileObjectUploader
  end

  class Cloud < FileObject
    mount_uploader :file, AwsUploader
  end
end

class Processor
  # partial, example code
  def recreate_versions!(video, file)
    formats_with_options(video, file).each do |format, opts|
      if video.parent.present?
        original = video.parent.version(format)
        file = original.file if original.respond_to?(:file)
      end

      file_object = FileObject::Local.create!(owner: video, version: format)
      filename = "#{SecureRandom.uuid}.#{opts[:extension]}"
      FileUtils.mkdir_p file_object.file.store_dir
      destination = "#{file_object.file.store_dir}/#{filename}"
      transcode!(source, destination, format, opts)

      file_object.update_column(:file, filename)
      S3UploadWorker.perform_async(file_object.id) unless Rails.env.development?
    end
  end
end
```
CarrierWave is a great solution for file uploading. It's also a great solution for simple processing. The moral of the story is that you have to use the appropriate tools: CarrierWave versions are not enough for complex processing, or for any processing that doesn't use the original uploaded file as its source. It may seem obvious in retrospect, but that's what happens when your codebase gradually evolves. When it does, always try to find some time to stop, look back and ask yourself: "Is this code still doing what it was originally made for?".
Happy coding!