Replace a database column with ActiveRecord with zero downtime
There are times in the life of a web application when you want to switch a database column. Perhaps you discovered that the data sits better on a different table, or perhaps the data stored in that single column needs to be split across two. This is a problem teams I’ve been on at GDS seem to face every now and again, and I thought I’d take the opportunity to document the process I recommend to developers.
Often developers approach this by trying to wrap all their changes into a single deployment, which is reasonable if you’re prepared to take your application down for maintenance. However, if you want to keep the application available and avoid serving errors you’ll need to follow a multi-step process involving a number of deploys with distinct migrations. Depending on the risks involved in switching your column you may also want to keep the door open to rolling the application back to using the old column.
The basic steps to this are: create the new column(s) in the db and set the application to fill this whenever it writes or updates the old value; create a task to loop through the old data to backfill the new column; switch the application to use the new column; then drop the old one. What follows are the considerations to make at each one of these steps and examples of how this is done using ActiveRecord with Ruby on Rails.
Step 1: Introduce new column and populate it when new records are entered
The first step is to add the new column to the database. This can be done with a standard Rails migration. At GDS we regularly consult Braintree’s handy guide to safe operations for high volume PostgreSQL to ensure our initial migration won’t block the DB. There’s also an interesting gem, strong_migrations, that can warn you when you’re about to run a dangerous migration against PostgreSQL. There are likely similar resources for other popular database flavours.
The application should then be adjusted so that this new column is populated whenever a record is created or updated. This step is required before backfilling the new column with existing data: during the backfill new data may still be written to the application, so without it the backfill would end up with missing or out of date data.
At the point of adding this new column there is also an opportunity to make amendments to the column that is being migrated away from, such as dropping a non-null constraint.
Example
Given we’ve decided the field document_type needs moving from the Document model to the Edition:
class Document < ApplicationRecord
  has_many :editions
  # Document has a document_type value that is moving to editions
end

class Edition < ApplicationRecord
  belongs_to :document
end
We can create the column with a migration:
class AddDocumentTypeToEdition < ActiveRecord::Migration[6.0]
  def change
    add_column :editions, :document_type, :string
  end
end
We can populate records that are saved with an ActiveRecord callback (to avoid changing application business logic) if our change is simple:
class Edition < ApplicationRecord
  belongs_to :document

  before_save { self.document_type = document.document_type }
end
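If, as mentioned above, you also want to amend the old column at this point - for example dropping a non-null constraint while it is migrated away from - a sketch of such a migration might look like this (assuming documents.document_type currently has a non-null constraint):

class RemoveDocumentTypeNullConstraintFromDocument < ActiveRecord::Migration[6.0]
  def change
    # Allow NULLs on the old column while we migrate away from it
    change_column_null :documents, :document_type, true
  end
end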
If migrations are run automatically as part of a deploy (before the application code switches over) this whole step can be done in a single deploy; if not, the migration will have to be deployed and run before the change to populate the column is deployed.
Step 2: Backfill the new column
Now that any new writes to the table populate the column, the next aim is to fill the column for all the existing records in the database. To do this some code is needed to iterate through all the records in the database and update each one.
You may choose to do this with another migration or a Rake task. Your choice here will depend on your attitude to what belongs in a Rails migration - a purist may only want commands that alter the database structure and nothing that alters data. Myself, I don’t mind doing it as a migration because it keeps it in a consistent place with the other database operations and doesn’t clutter parts of the app once it’s run.
Some things to consider with this task are: a backfill can take a long time and thus may fail in some way, so it’s good if it can pick up where it left off when that happens; doing everything in a single transaction may cause performance problems once the transaction becomes huge; and using the application’s ActiveRecord classes carries risks if those classes have changed by the time a different user runs the migration.
Example
class BackfillDocumentTypeOnEdition < ActiveRecord::Migration[6.0]
  # We use this to run the migration outside a transaction, which allows resuming and reduces DB load
  disable_ddl_transaction!

  # Create a private instance of an ActiveRecord object so that we don't need to use our application code
  class Edition < ApplicationRecord
    belongs_to :document
  end

  def up
    # By skipping populated records we can resume a failed run of this and only update what was missed
    to_update = Edition.includes(:document).where(document_type: nil)
    total = to_update.count

    to_update.find_each.with_index do |edition, index|
      # For a long running migration/task it's useful to provide progress
      puts "Updated #{index}/#{total} edition document types" if (index % 1000).zero?
      edition.update!(document_type: edition.document.document_type)
    end

    puts "Updated #{total} edition document types"
  end

  def down
    # Don't do anything - all these changes can be removed by rolling back the previous migration that added the column
  end
end
This backfill will need to be deployed and run before we move onto the next step of starting to use the new column.
Step 3: Switch to using the new column
The new column should now be fully populated and the application can have its logic switched to make use of it. This is also an opportunity to write any migrations that apply constraints (such as non-null) or defaults to the new column.
A key decision at this point is whether or not to provide a safety net to allow rolling back if the deploy has problems. Once the application stops populating the old column, rolling back is not possible and you’ll have to fix forward. In my experience I’ve tended not to set up a rollback path and instead been careful that my deploy isn’t coupled with other changes, but your circumstances may differ, as may the impact a problem would have on your application or business.
If you run staggered deploys (for example, gradually deploying to some machines before others) your hand is forced: you’ll need the rollback approach so that the application can work with both versions of the codebase.
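For example, a sketch of a migration that enforces a non-null constraint on the new column now that it is fully populated (continuing the earlier example; check your database’s guidance on applying constraints to large tables first):

class AddDocumentTypeConstraintToEdition < ActiveRecord::Migration[6.0]
  def change
    # Safe to enforce now that every edition has a document_type
    change_column_null :editions, :document_type, false
  end
end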
With no rollback
If a rollback is not considered necessary then the next deploy of the application is focussed on making use of the new column. This means changing the application logic to use the new column that was introduced and no longer making references to the old column.
Without a rollback we can combine the deploy to change application logic with step 4 of deprecating the old column to reduce the number of deploys needed in this process.
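Continuing the earlier example, a sketch of the no-rollback switch might look like this - the callback from step 1 is removed and the application now reads and writes editions.document_type directly:

class Edition < ApplicationRecord
  belongs_to :document
  # The before_save callback from step 1 is removed; document_type is now
  # set directly wherever editions are created or updated
end

class Document < ApplicationRecord
  has_many :editions
  # Application code no longer references documents.document_type
end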
With ability to rollback
To allow rolling back a deploy that switches the application to use the new column it is necessary to have a process which keeps the old column populated while the rest of the application switches to using the new column. To do this you’ll need to make all the necessary logic changes that favour the new column and then do a reversal of the process in step 1 to keep the old column populated.
Example
class Edition < ApplicationRecord
  belongs_to :document

  after_save do
    document.update!(document_type: document_type) if document.document_type != document_type
  end
end
Step 4: Deprecate the old column
Once the new column is in use and you are satisfied that the process has been successful, work can begin to get rid of the old column. Any code that was added to populate the old column can be removed.
It’s common for a developer to proceed straight to a migration that removes the column that is no longer needed; this, however, carries a risk of application errors, which can be alleviated with an intermediary step (which unfortunately requires another deployment). The type of error you may see if you remove the column is: "PG::UndefinedColumn: ERROR: documents.document_type does not exist". The reason you’d see this is that ActiveRecord caches the underlying columns when the application starts. It then uses this cache when an ActiveRecord object instance is loaded to select each column from the database.
This error can be avoided by using the ignored_columns method that was introduced to ActiveRecord in Rails 5. Once Rails has been deployed with the column ignored we are then safe to remove the column.
Example
class Document < ApplicationRecord
  has_many :editions

  self.ignored_columns = %w[document_type]
end
Step 5: Remove old column then relax and celebrate
Finally, the last step is the simplest one. All that is needed is the actual removal of the old column, which can be done with a simple migration. We can then let out a sigh of relief that the process is complete.
Example
class RemoveDocumentTypeFromDocument < ActiveRecord::Migration[6.0]
  def change
    remove_column :documents, :document_type
  end
end
Has this process helped you or do you do something different? Let me know on Twitter.
How to write an OpenAPI 3 parser
Two years ago I pushed my first commit towards openapi3_parser, which is a Ruby gem to convert a file conforming to the OpenAPI 3 specification into a graph of Ruby objects. I felt that this milestone marked an appropriate time to reflect on my learnings from this process and explain some key decisions. When I began working on this gem there were only a few, limited implementations of parsers for OpenAPI version 3 that I could take inspiration from, and none in Ruby. This helped drive my interest in the project as I felt there were fresh challenges ahead.
This article serves to share information I’d have found useful when I started this project, with the aim of helping others who are deciding how to work with OpenAPI files in any programming language. It’ll talk you through some of the key design decisions for an OpenAPI parser and explanations of the routes I took.
Do you even need an OpenAPI parser?
The first thing to ponder is whether anyone needs a specific tool to parse an OpenAPI file at all. After all, they are either YAML or JSON files and these file types can easily be converted into an associative array structure in most modern programming languages.
The answer to this is yes, though you probably could do without one if it wasn’t for one pesky - but rather central - feature: references. These allow you to reference data either in another part of the OpenAPI file or in a separate file. The referenced data can then contain further references, so you can quickly end up with a complex data structure.
As well as resolving references, a parser offers numerous advantages over working with the raw data: it allows you to convert the data into domain objects with documented interfaces, it offers the potential for elegant ways to traverse the OpenAPI structure, helper methods can be added to the classes representing individual objects, and data validity can be determined in advance of working with the data.
Were there to be a data format similar to YAML or JSON that offered a native ability to handle references it would be substantially easier to work with OpenAPI files and building a parser would be a massively simpler experience, but as far as I'm aware no suitable options exist.
Starting with the user experience
One of the first things to consider is how a user, whether yourself or a wider audience, is going to experience a parser. With openapi3_parser I aimed for the least friction between a user’s input and them working with the data.
For example:
document = Openapi3Parser.load_url("https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v3.0/petstore.yaml")
document.paths["/pets"].post.description
=> "Creates a new pet in the store. Duplicates are allowed"
On the root module, Openapi3Parser, there are different methods to load data in: load, load_file and load_url which respectively allow loading raw data, loading from the filesystem or loading based on a url. Each of these methods returns an instance of the Document class which is an object that represents the overall document and provides an interface into the OpenAPI data.
This, I believe, achieves a fluent user experience where the guts of the parser’s implementation are hidden from the user and they can quickly be working with the data.
Getting data into a parser
Given an interface of how someone will use a parser you then need to know how to get that data into it. The simplest approach would be to have the user of your parser convert the YAML or JSON data of an OpenAPI file into an object structure and pass this in.
For example:
openapi_data = YAML.load_file("openapi.yaml")
Openapi3Parser.load(openapi_data)
As previously mentioned, openapi3_parser has 3 individual methods for input, and these actually expand into a variety of ways a user can initialise the parser. The load_file method accepts a file path string, the load_url method accepts a URL string, and the load method can accept a Hash, a File, a string input and - given we’re working in the duck typing world of Ruby - objects that match the interfaces of these.
This amount of options may seem surprisingly large, and superfluously flexible. However, most of these are actually by-products of the need to work with a reference system that can support external files.
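For illustration, each of the following returns a Document (the file name and URL here are placeholders):

Openapi3Parser.load_file("openapi.yaml")
Openapi3Parser.load_url("https://example.com/openapi.yaml")
Openapi3Parser.load(File.open("openapi.yaml"))
Openapi3Parser.load('{"openapi": "3.0.0"}')
Openapi3Parser.load({ "openapi" => "3.0.0" })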
As a reference can link to a URL or a file we need the ability to load these items automatically. This leads us to the underlying code needed for load_file or load_url. The process of turning a file into parsed data involves us working with a File object, parsing the string input of that file and finally working with the contents of that file converted to a Hash. Thus, as all these steps are needed for working with a separate file, we can then use them to deal with a variety of input.
Behind the scenes in openapi3_parser these individual files (or raw data input) are represented as Source objects that encapsulate the concerns of that particular file. These Source objects are initialised with an instance of SourceInput, which has Raw, Url and File subclasses that reflect the 3 types of user input we anticipate. Common libraries are used to turn any string input (whether loaded from a file or given as actual user input) into a basic hierarchical data structure.
Many of the parsers available for OpenAPI don’t provide support for references that exist in separate files, which allows them to bypass this complexity. This is an attractive option for simplification of a parser as references to separate files don’t seem to be commonly used and the complexity they introduce is high.
Turning data into types
If we want to provide a user with classes that represent the various OpenAPI objects then we need to take the hierarchical data from an OpenAPI file and use it to build specific objects. This conversion process can be rather simple: input the underlying data into an object that represents the root OpenAPI object and have that embrace a decorator pattern to provide methods for its properties; each of these can then return their own decorated class for their object.
For example:
class Node::Openapi
  def initialize(data)
    @data = data
  end

  def info
    Node::Info.new(@data["info"])
  end

  # ..
end
There are challenges to resolve however and this approach can quickly increase in complexity.
The first challenge is references. Any referenced data needs to be resolved either prior to the object’s creation or at the point a reference is found. I opted to do this at runtime in openapi3_parser and used a Context class as a means to bridge an object and its underlying data source. This Context object can then be used to resolve references when they are encountered.
Another challenge is whether items exist or are valid. Since the data from an OpenAPI file is just sourced from YAML or JSON there are no guarantees that you will have data available at any point, or that it’ll be of the type you anticipate. My approach was to use a layer of validation that checked the input type. I felt this offered the ability to error early when an object had invalid data rather than a user receiving a cryptic Ruby NoMethodError; it also meant the documentation for return types could be accurate and reliable.
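As a rough illustration of that idea, continuing the Node::Info example above (the method and error here are hypothetical, not the gem’s actual code), the building step can check the type before doing anything else:

def build_info(data, location)
  # Fail early with a useful message rather than a NoMethodError later on
  unless data.is_a?(Hash)
    raise ArgumentError, "expected an object at #{location}, got #{data.class}"
  end

  Node::Info.new(data)
end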
Dealing with these complexities led me to switch from the simple system in the example above to a different one for building objects. Rather than having a single class per object with both usage and building responsibilities, I switched to a system where there are factory classes to build each OpenAPI object. This offered large gains in simplicity, where the classes for working with an individual object are mostly trivial in complexity; however it presented the disadvantage that for every OpenAPI object there are two classes to consider.
As there are many (~30) different types of OpenAPI objects I took advantage of Ruby’s metaprogramming capabilities to set up a DSL that could be used to define the fields and their respective types concisely.
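As a hypothetical sketch of that idea (this is not openapi3_parser’s actual internal DSL, just an illustration), a class-level field method can record each field’s name, expected type and whether it is required:

class NodeFactory
  def self.fields
    @fields ||= {}
  end

  # Declare a field along with its expected type and whether it is required
  def self.field(name, type:, required: false)
    fields[name] = { type: type, required: required }
  end
end

class InfoFactory < NodeFactory
  field "title", type: String, required: true
  field "version", type: String, required: true
  field "description", type: String
end

InfoFactory.fields.keys # => ["title", "version", "description"]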
Working with references
The most complicated part of writing an OpenAPI parser is working with references, particularly supporting those in different files.
There are a number of things to understand and consider in resolving them. First, references can reference other references, so the resolving process may need to look up multiple document locations; and second, references can be recursive, which can easily lead to infinite loops if they are not considered.
There are a number of things that can go wrong with references that may also need to be handled: the reference could point to a location that doesn't exist, the data at the location could be the incorrect type, a file may not be available and references could point to other references in a cyclic structure.
The approach taken in openapi3_parser is to have a number of classes with key responsibilities to handle a reference. There is a class to represent a reference object that facilitates looking up a reference and has a check for a cyclic pointer. The Source class has a method to determine the source file and return a ResolvedReference object that mediates between the resolved data and the initial reference. The ResolvedReference class has the ability to either build the object or communicate any issues that prevent building it. The final key class used is a ReferenceRegistry, this is an object associated with the document and stores all the source files and references used - this class allows the same objects to be used each time the same reference is encountered, which can vastly increase the speed of parsing.
Validating input
Parsers aren’t necessarily validators but they have to understand the source files in similar ways. In my experience with openapi3_parser I found that in order to provide accurate return types a substantial amount of validation logic was needed. I decided to take this further and aim to add the remaining validation rules so that the gem could provide full validation.
Whether this was a great idea or not I am not sure. A consequence of having a good understanding of the rules is that openapi3_parser is quite strict and will raise an exception to inform users of an error. Having seen that it isn’t especially rare to find small mistakes in OpenAPI files (a common one being a description alongside a reference) I do wonder if the strictness can be a burden rather than a help to users; it has led me to consider whether a lenient mode would be beneficial.
In the time since work began on openapi3_parser there has been momentum around providing an official JSON Schema file that can validate OpenAPI files, so it could be that over time the relevance of a parser providing validation decreases. However, it is also the case that JSON Schema cannot provide everything needed to validate to the OpenAPI spec (for example it cannot validate that a reference resolves, or resolves to the correct type), so it remains to be seen whether the JSON Schema approach is good enough to become the de-facto validation method.
With hindsight I would advise someone building an OpenAPI parser to only worry about performing validation on areas where it is necessary to have a parser conform to an API - which essentially would be just performing type checks.
Working with different versions of OpenAPI
At the time of writing the current version of OpenAPI is 3.0.2, with 3.1.0 on the horizon and changes coming. From the past there is OpenAPI 2.0, which shares a specification with Swagger 2.0. Currently, when trying to find an OpenAPI parser there is frequently confusion as to whether it will support OpenAPI v3.0 or v2.0 - with it being rare that both are supported.
In building a parser you’ll have to consider and communicate which versions of the specification are supported and how you’ll handle future versions. With openapi3_parser I chose to name it with the version number to make it distinct from the open_api_parser provided by Braintree that only supports Swagger 2.0 - at the risk of dooming the project when OpenAPI 4.0 comes out. I didn’t see value in providing a parser that supported both 2.0 and 3.0 since these are different specifications and 2.0 already had parsers.
Looking towards the future I’m anticipating newer versions of the specification that will need different parsing logic. To support this I set up a basic system that produces a warning if the version of OpenAPI doesn’t match. Into the future I’m hoping that pre-determining the minor version in use will make it not too difficult to handle multiple versions of OpenAPI with minimal changes to the parser.
Quirky things to handle
In the process of building an OpenAPI parser there are a few things that feel somewhat inconsistent with the rest of the spec that you may have to deal with. Here is a catalogue of the ones I encountered and how I approached, or intend to approach, them.
Path Item reference
Throughout most of OpenAPI an object can either be a reference or be an object; there is no middle ground, and you can’t include extra data in a reference to adjust a referenced object. However, an exception to this rule lies in the Path Item object, which allows a form of merge behaviour with the referenced object. In theory the referenced Path Item object could reference a different Path Item object too, so the merge could contain multiple layers of data.
The approach taken in openapi3_parser for this is to validate referenced objects separately and merge the data together at the point of building the object.
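As a very rough sketch of the idea (the data and reference target are illustrative, and the exact precedence is a parser decision since the spec leaves conflicts undefined), the fields defined alongside the $ref end up sitting on top of the resolved Path Item data:

referenced_path_item = {
  "summary" => "Pets",
  "get" => { "responses" => { "200" => { "description" => "OK" } } }
}

local_data = {
  "$ref" => "shared.yaml#/PetsPathItem",
  "description" => "Operations on the pets collection"
}

merged = referenced_path_item.merge(local_data.reject { |key, _| key == "$ref" })
# => summary and get come from the reference, description from the local definition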
The Link object
The Link object in the OpenAPI Specification features two mutually exclusive fields that associate a link with an operation: operationId and operationRef. The operationId field is the simpler of the two: an id value that must be represented on an Operation object within the document. The operationRef is more complicated as it is a reference that may refer to the current file or an external file.
At the time of writing this is an area of the OpenAPI specification that openapi3_parser does not support. My expectation is that operationId will be relatively simple to support with runtime lookups to find a particular operation whereas operationRef will be treated similar to other references.
Schema names
Schema objects can refer to a property outside the object: the schema name. This is data that is inferred from a schema’s location within the #/components/schemas map; for example a schema defined at #/components/schemas/Dog has a name of "Dog". This is somewhat problematic as schemas defined directly (within a Media Type object or another Schema object) do not have a name. The issue is confused further by the presence of a title field on a Schema object which is unrelated to the schema name.
Life would be simpler if the title field was the preferred way of naming schemas; this would allow any Schema to be named no matter where it is defined and would match the conventions of other referenced definitions in the components map, which don’t make any use of their identifier outside of references. However, by convention authors rarely use the title field and frequently define most schemas in #/components/schemas, so it would be superfluous to use the title field as well.
In openapi3_parser a method has been added to Schema objects to extract their name based on the location in the file they have been defined.
Servers
The Server object can be defined in 4 locations of an OpenAPI file: the OpenAPI object, Path Item object, Operation object and Link object. An unusual characteristic of this object is that it acts in a cascading form, where looking up the servers of an operation would return the operation’s servers if available, otherwise the path item’s, and failing those the ones from the root OpenAPI object. Unusually, too, the root servers property on OpenAPI defaults to an array with a single Server object; as far as I’m aware this is the only instance where OpenAPI defaults to an object within a collection.
This is a part of openapi3_parser that is yet to be implemented (it’s pretty much next on the list); creating the default object may well test the system in place for array defaults to provide the object. For the cascading properties I imagine the simplest approach would be to use a runtime lookup when accessing an operation’s servers, then fall back to the path item’s and then the root object’s. To provide this there’ll need to be a simple way to traverse down the graph back towards the root.
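A sketch of that cascading lookup might look something like this (the method names are assumptions for illustration rather than the gem’s implemented API):

def servers_for(operation, path_item, document)
  return operation.servers if operation.servers&.any?
  return path_item.servers if path_item.servers&.any?

  document.servers
end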
Default values for maps and arrays
An area that I don't think is cracked properly with openapi3_parser is the approach to take for default values of array and map objects, specifically whether the absence of a value indicates an empty collection or a null value. The OpenAPI specification doesn't define the behaviour of these so it is left to the interpretation of the parser.
Initially openapi3_parser approached these with an opinion that they should never be null and default to empty collections. This felt like an approach that was developer friendly as you could anticipate that they'd always return an instance of the same class (the collection) and you'd not have to consider whether it's null or empty.
This however didn’t work out, and was most noticeably flawed on the Schema object with the anyOf, allOf and oneOf fields. JSON Schema defines a valid value for these to be an array of at least one item, so an empty array was contravening the specification. A default value of an empty array also implied a meaning, in comparison to other fields that could be null.
For example:
if schema.oneOf # meets condition but is an empty array
  # we presume there is a oneOf
end

if schema.description # can be a string or null
  # we presume presence of a description
end
To handle this in openapi3_parser many of the map and array objects can be a collection or null and are not coerced into an empty collection. This is the case for any scenarios where the presence of this collection may imply a meaning such as on a Schema object. In places where presence doesn't imply additional meaning an empty collection can be defaulted. This applies for example to all the child properties of a Components object which makes them simpler to traverse.
Ideally it'd be great if the specification itself was clear on whether the absence of data should be treated as an empty collection or a null value. This would resolve ambiguity for parser implementors.
Reflection
So, like I said at the start of this article, this marks 2 years since I started working on openapi3_parser and I’m somewhat amazed that a) I’m still interested in it and b) it still hasn’t reached version 1.0. I’m pleased that during this period there haven’t been too many drastic changes in the OpenAPI specification, which has helped maintain the project’s relevance. It’s been a fun ride for me, learning more about the specification and trying to define an intuitive, maintainable API for developers to use.
My next aims for the project are to continue working towards 100% compliance with the specification and then to consider the 1.0 release of the project. After that I'd like to identify any methods that can make it easier to work with the API - like quick ways to look up particular paths or schemas. I'd also love to get more contributors involved, do get in touch if that's of interest to you.
In the time since I started the project I’ve noticed that at least 2 similar projects for Ruby have been released: openapi_parser and oas_parser, which are alternatives to consider if you are looking for a Ruby OpenAPI parser and don’t like mine :-). It does seem a shame to me that we’ve ended up with multiple projects solving similar problems, so if any of the developers of those are interested in collaborating and consolidating these into a single parser please get in touch.
I'm interested to learn if anyone finds this information useful for parsers they are creating or wants help with one they are working on, if this is you please ping me on GitHub so we can talk more.
Isomorphic Javascript with Rails 4, React Router, Browserify and Babel
Recently I've been digging into the world of React and ES6. I was keen to get these working in a Rails context. The key things I wanted to achieve were server-side rendering, ES6 syntax JavaScript, react-router and the ability to import node modules. On a heavily delayed train journey from London to Edinburgh I decided to try to crack it.
There are a bunch of gems that can get us most of the way with very little configuration. However, none of them get you quite to my desired feature combination, but with a bit of tweaking it is possible.
There is React-Rails
We can use the react-rails gem to get moving pretty fast. It adds both JSX and Babel transpilers to the asset pipeline so you can generate your React files just like any other Rails asset with the .jsx suffix. The gem is also maintained (or at least approved) by the React organisation, so it feels like an officially supported way to use React.
Most beneficial of all, it provides a simple view helper that can be used in your views to render your React component to HTML server side. It also provides a script that will start your React client-side app automatically based on data attributes from the component you rendered server side.
<%= react_component(component, arguments, options) %>
The downside of this automagic is both of these renderings happen before the point you'd hook up the react-router to your app. Luckily someone has already solved that.
React-Router-Rails to the rescue
Another gem very similar to react-rails has been created by mariopeixoto called react-router-rails. It provides very similar functionality to react-rails, however where react-rails renders a component react-router-rails renders a router.
<%= react_router(component, arguments, options) %>
Using this you're soon creating a react app with routing. It's very cool.
However what nagged at me was how the asset pipeline affected matters. By using this everything had to be shared via global variables. This didn't feel very natural in the context of React:
// Not very react
(function(Components) {
  Components.App = React.createClass({
    render: function () {
      return <ReactRouter.RouteHandler />;
    }
  });
})(Project.Components);
External libraries also needed to be introduced via global variables rather than the frequently seen CommonJS module system.
This left me at the starting point of a simple React-Router powered isomorphic JS app, but it didn't feel quite right. It felt like I had one foot in the present and one in the past (albeit the recent past, given the pace of client-side developments :-).
Enter browserify-rails
There are two popular projects bringing the world of node modules to the browser, Browserify and Webpack. Webpack is the one that aims to do a bit more, with the aim of bundling all your JS deps with some cool hot-reload functionality, whereas Browserify is a bit simpler only focussing on node modules.
In the realms of Rails, browserify seemed the more attractive bet. People have done cool integrations with Webpack but that struck me as problematic for server-side rendering.
Like most things Rails, someone has already built it and released the gem. There is browserify-rails out there which will run browserify against your JS as part of the asset pipeline. Instead of using sprockets to require files, it can all be done via CommonJS imports.
Browserify can also run babel (via babelify) over the JS files to transpile both the ES6 and JSX syntaxes.
Using the ES6 syntax does cause an initial problem: browserify-rails doesn't realise it is supposed to transpile the file, as it looks for the presence of CommonJS ES5 syntax. This can be resolved by including a comment of // require, by configuring the path, or by setting everything to transpile.
Bringing them all together
Each of these works pretty well together, but bringing them all together produced a few hurdles to overcome.
Firstly it felt a bit weird having some client side dependencies served by gems (react & react-router) and others via NPM. So I felt it was easier to switch those out to NPM and not use the bundled gem ones.
This switch revealed that I had both broken the existing code (by not importing React) and had an environment that wasn't easy to debug (it doesn't report which JS file has an error server side). In the heat of this I believed (incorrectly) that problems lay in react-router-rails and how it deviated from keeping up with react-rails.
I decided a safer route was to stick to just using react-rails and include the changes needed for routing myself, by changing the renderer via the react-rails config and introducing a new view helper method (mostly copied from react-router-rails). I'm not sure if I've made a great decision here or not; on one hand I've killed a dependency, on the other I've added a concern. Ideally it'd just be easier in react-rails to specify the JS that is used to initialise the app server-side.
Is it any good?
It all works and is cool but does have a couple of issues. The browserify stuff feels a bit fragile, particularly on the server-side. I've had a few issues where I've felt like restarting the server or clearing cache were required to get the most up to date code executed. I'm too early into this project to know if that'll be a consistent issue.
Whether Rails is the right tool in a stack like this feels contentious. For the project I'm working on, the freebies you get from the eco-system make it feel like the right choice. But for many projects it'd be smarter to use a node/express setup with webpack. Some of the pains should be eased when Sprockets ES6 comes with Rails 5, but at the pace of change within client-side development it might feel like it's behind in a different way.
Anyway, now to pick a Flux implementation. That can't be too hard right?
IE and Vary: Origin
We hit a tricky bug the other day where a font we were loading via @font-face wasn't loading on IE9-IE11 in production but was fine in all other browsers and fine in IE on local and staging environment. The only difference was that production was behind HTTPS.
After much searching and reading it turns out our issue was because our CORS plugin for rails was adding Origin to the Vary header (which is required to indicate that the cache depends on the origin). This seems to mess IE up on HTTPS for reasons I don't know - however it is covered in this Stack Overflow post.
We resolved this by returning the same response to all origins so that the response no longer varied by origin and thus did not need this header. If you need to do the same to rack-cors, you just need to add a credentials: false option to your resource declaration.
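For reference, a sketch of the kind of rack-cors configuration described (the paths and origins here are illustrative):

# config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins "*"
    # With credentials disabled the same response can be served to every
    # origin, so the Vary: Origin header is no longer needed
    resource "/assets/*", headers: :any, methods: [:get], credentials: false
  end
end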
Similar issues: On IE CSS font-face works only when navigating through inner links , @font-face EOT not loading over HTTPS
A Cleaner Backbone Mixin Strategy
I've found applying mixins to the MV* classes of Backbone can feel a bit verbose and difficult to read. In this post I'm going to explain the approach I have come up with for handling them.
So here's the out-of-the-box approach to mixins:
Backbone.Model.extend(MyMixin).extend(MySecondMixin).extend({}, MyStaticMixin).extend({
  // actual extend
});
By nature of the syntax you're invited to take a more traditional hierarchical view to class composition through Backbone. What I wanted to achieve is a nicer, easy to read approach to mixins where the mixin intention was clearer.
Backbone.mixins(
  MyMixin,
  MySecondMixin
).extend({
  // actual object
});
I wanted to be able to handle static properties too so I thought of two strategies for that. The first is a staticMixins method, the second would be the original mixins method accepting arrays of length 2 to specify prototype and static properties.
Backbone.mixins(
  MyMixin,
  MySecondMixin,
  [ {}, MyStaticMixin ]
).staticMixins(
  AnotherStaticMixin
).extend({
  // actual object
});
I implemented this with a simple object, Backbone.Etch.Mixins, that can be extended into classes.
Applying this to your classes is done via extending the static properties. I do this on a common super class model.
var MyCommonModel = Backbone.Model.extend({}, Backbone.Etch.Mixins);
You could even go dirty and patch it onto the Backbone classes
Backbone.Model.mixins = Backbone.Collection.mixins = Backbone.View.mixins = Backbone.Etch.Mixins.mixins;
Backbone.Model.staticMixins = Backbone.Collection.staticMixins = Backbone.View.staticMixins = Backbone.Etch.Mixins.staticMixins;
For more complex mixins (e.g. those that define prototype and static properties) I like to define my mixins as functions that return the objects to be mixed in.
var MyMixin = function() {
  var prototypeProperties = {
    // properties
  };

  var staticProperties = {
    // properties
  };

  return [ prototypeProperties, staticProperties ];
};
A problem that a mixin approach does cause in JavaScript is overriding methods and calling the super implementation. The advised approach to this in Backbone is to call the prototype of the parent object. This approach only works in a hierarchical system (and even then is a bit flaky), and if we apply it to mixins then they are tied to being in a particular order - defeating the opportunity to mix them into classes at will. Also, if we're defining mixins dynamically as functions we do not have a specific object available to access.
To counter this I pass all mixins that are defined as functions an argument of the current object, which acts as a super class. The mixin can then use this super class variable to call the parent's methods. Backbone itself defines a __super__ attribute, which references the super class' prototype. Because this references the prototype it cannot be used for static methods; for those the extend method will need to be overridden.
var MyCommonModel = Backbone.Model.extend({}, Backbone.Etch.Mixins);

var MyOddMixin = function(superClass) {
  return {
    set: function(key, value, options) {
      console.log('my odd mixin');
      superClass.prototype.set.call(this, key, value, options);
    }
  };
};

var MyModel = MyCommonModel.mixins(
  MyOddMixin,
  function(superClass) {
    return {
      set: function(key, value, options) {
        console.log('my inline mixin');
        superClass.prototype.set.call(this, key, value, options);
      }
    };
  }
).extend({
  defaults: {
    my_var: false
  },
  set: function(key, value, options) {
    console.log('my model');
    MyModel.__super__.set.call(this, key, value, options);
  }
});

var myModel = new MyModel();
myModel.set('my_var', true);
Running this will change the my_var attribute to true and log 'my model', 'my inline mixin', 'my odd mixin'. Nicely solving the super issue.
If you want you can take the mixins further so that they are generated by a function to specify particular methods/properties.
var MySpecificMixin = function(properties) {
  return function(superClass) {
    var mixin = {
      prop1: 'property',
      func1: function() {
        superClass.func1.call(this);
      },
      prop2: 'property',
      func2: function() {}
    };

    return _.pick(mixin, properties);
  };
};

var MyModel = MyCommonModel.mixins(
  MySpecificMixin([ 'prop1', 'func1' ])
).extend({
  // actual object
});
So I hope this helps inspire you in ways to neaten up your mixins with Backbone. If you have a nicer strategy let me know on twitter @el_kev.
Jumping into Responsive Web Design
Geek rocker Rivers Cuomo once sang "The world has turned and left me here" about a failed relationship however it's a phrase that resonates in me as a feeling I experience as a multi-disciplinary web nerd. One day you're a master of a craft, understanding the nuances and subtleties, and then something happens (a long project, learning new skills, etc) and when you return to the craft, it's like everything has changed and your knowledge has drifted into irrelevancy. This was me and frontend web development as I took on embracing the techniques of responsive web design.
I remember a time in the early 00s when to impress a web geek it wasn't enough to have just a really cool looking page; that was just step 1. The next was to view the source, and if you saw nested table elements that page had blown it and was not cool. A bit later a similar scenario occurred with cool interactive elements: you were only impressed if a right click revealed that Flash hadn't been used to create it. Now the browser resize is the test that a website design is actually cool. Yes, we are a fickle bunch.
Coolness factor aside, responsive is clearly a sane way to approach our multi-device landscape. It heralds an end to the need for a mobile specific domain (and how do you define mobile anymore anyway?) and brings requesting resources back to the original intentions of the HTML spec. If there's one thing I've learnt from my years of web development it's that once you're deviating from the spec's intentions you're going wrong and dragons await.
However jumping into responsive web design is not without peril. With any "write once, run everywhere" approach you have to be prepared to test everywhere and there is a whole lot of configurations to test, with not just form factor but bandwidth to consider.
Smaller screens first
In my past dabblings with responsive design I took an approach of large screen first, with media queries for smaller devices. This has the nice advantage that browsers that lack media query support (old IE - shock horror) get a large screen optimised version; the downside though is that it is a total nightmare to code, with frequent unstyling of elements. Luckily respond.js (nicely bundled with the ubiquitous modernizr) has come to the rescue with a handy polyfill that makes mobile first a reality.
Starting with targeting small screens first offers you the key advantage of focusing on presenting just the minimal required content for a document. The principle I follow is to present a document with minimal markup, and if something isn't needed on all device sizes, lazy load it. Of course, when considering something that isn't needed on all device sizes you have to question whether it is needed at all, which leads to simpler designs (yay).
SASS
Having used CSS pre-processors for a while I struggle to imagine life without them. Nesting rules alone is a killer feature and being able to abstract away vendor prefixes brings much more sanity to the stylesheet.
Recently I gave SCSS a go instead of my usual LESS, as it feels like SCSS is getting more community usage (and for both SCSS and LESS I feel community support is their most important feature). I tried out a responsive mixin which really made it so easy to adapt element patterns to responsive breakpoints without the mixin doing any indecipherable magic. The key advantage this gives is keeping media queries in context of the modules they are styling.
.no-icon-box {
  .copy-container {
    @include basic-copy-container;
  }

  &.double-copy {
    @include responsive($breakpoint-768) {
      @include clearfix();

      .copy-container {
        width: 50%;
        float: left;
        padding-left: 1em;
        padding-right: 0;

        &:first-child {
          padding-right: 1em;
          padding-left: 0;
        }
      }
    }
  }
}
Images
Images seem to be the biggest thorn in the side of responsive design, with rigid bitmap images being the square peg in the responsive round hole. For many cases SVGs can be a silver bullet. What I learnt though was that if you have a graphic with a texture the SVG file size can go insane (15MB plus) because of the amount of XML generated. I found I was forced to go back to PNGs for sane file sizes on my complex graphics. More recently I have delved more into the capabilities of the SVG format and do wonder if my problems could have been resolved with filters and similar. I have come away feeling there's a lot more to SVGs than just trusting an Illustrator export and that there are far more possibilities to explore.
Approaching bitmap images in responsive design is complicated; there are many things written about the problem, and almost as many hack based solutions. The approach I took was to (and this is nearly always a sign you are making a mistake) roll my own solution. My approach was to use a noscript element with data attributes to create an image element with a src that was modified at runtime to request the image from a PHP script that returned the image at the correct width.
<noscript class="img-replacement"
  data-img-path="/load-image?image=images/icons/e-icon.png&width=%width%"
  data-attr-alt="">
  <img src="/load-image?image=images/icons/e-icon.png&width=180" alt="" />
</noscript>
This approach solved me a few problems:
I was able to save only a single image size (at a huge file size) and have it rendered to each size in whatever dpi the device requested.
I served only the width the device used so no serving huge images to small screen devices.
There is only 1 initial request for an image.
I was able to give a fallback image to non script equipped agents.
This worked well and the images all look lovely on all devices tested. However I'm not happy with this approach and intend to rethink it. The problems I identified were:
It's not perfect to just provide a small screen with a small image. Sending a large one isn't entirely a waste of bandwidth as a user can zoom in.
It's messy using JavaScript to generate URLs on demand based on the styling. It breaks the separation of concerns principle between the content, styling and behaviour.
It feels unsemantic and ugly using noscript in this way.
I fear the image resize script could be poor for resources (uncompressing an image uses a lot of memory) and based off this could be a server vulnerability.
It doesn't bear any correlation with the proposed standards
To approach this again I think I'd loosen up more about what size images devices are sent, let the browser do more resizing itself, and just set a few sizes (maybe just hidpi and normal dpi). If I was to use a resize script I'd keep that separate from the JS and just use it in either HTML or CSS.
Complexity
After jumping back in as a relative outsider I was initially daunted by the complexity. There is a whole fresh bunch of tricks to learn and a large matrix of scenarios to test for. Thinking back though, I can't remember a time when there wasn't a daunting bag of tricks (framesets, tables, DHTML, misusing floats); it's just a new set to learn.
CSS is now very complex and difficult to navigate, with endless vendor prefixes and multiple media queries. Higher level CSS can abstract this and bring readability back (of course this requires a conscious effort - you can easily make these more indecipherable than CSS if you choose) in the way jQuery brought readability back to DOM manipulation.
So...
Jumping into responsive was a fun, refreshing experience. The techniques behind it seem to be constantly changing with hacks filling upcoming specs and hacks influencing specs. The fresh specs offer plenty of room for people to experiment with what's around the corner and how that can push the web forward. It is however important to remember that Responsive Web Design isn't something brand new, it's an evolution (or a full circle to some degree) and rather than considering a technique in its own right it is just an application of the HTML and CSS specs and how they have evolved to meet modern needs.
A fallback for when Google Web Font Loader fails
I spend a few hours a week working on a train without internet. For the most part it is blissful productivity. However, I often work on sites with Google Web Font Loader where, to avoid the dreaded FOUT, it is common for people to leave their web fonts as visibility: hidden; until Web Font Loader appends its wf-* classes (naughty naughty I know, but still a reality). This sucks if you're without connectivity as those classes never fire (even if the webfont.js script is cached) and you either have no fonts or you need to adjust the CSS entirely for offline use (and if you're me, that accidentally ends up committed).
Anyway, I bashed something out quickly for that scenario, which should also be nice for times of slow connectivity. It just waits half a second and, if no web fonts are loading, sets a class on the HTML element that we can use in CSS to show the fonts.
Git rm file but leave local copy
A trick I forget a lot is how to remove a file from git but leave it in my local copy (i.e. when I accidentally commit local config files):

git rm --cached <file>

Source: http://stackoverflow.com/questions/936249/removing-a-file-from-git-source-control-but-not-from-the-source
Your developer might not be a rock star, but is your start-up the next Van Halen?
Ok, so we've all seen companies that advertise for the rock star devs and we all know they're a bit silly. If Nikki Sixx had dabbled in a spot of Ruby in his heyday he'd be totally unemployable.
These new "rock stars" live a life of long hours hunched over a keyboard powered by caffeine, pizza and ambition. No one would want to trust their back-end to wild party animals, snorting coke off of strippers, too strung out to write a conditional.
Perhaps the “rock star” analogy fits in a different way - has the culture of the tech industry become a parallel to the music industry?
Silicon Strip
If you wanted to be a rock star in the 80's, the place to be was The Sunset Strip in West Hollywood. It was where legendary acts such as Guns n' Roses, Van Halen and Mötley Crüe cut their teeth and their last stop before international stardom. The dream was that you formed a band, you publicised your shows and then you would be rewarded with fame and fortune.
The sunset strip of today can be seen in Silicon Valley and (closer to me) the Silicon Roundabout of East London. The bands today are no longer musicians, they are tech start-ups. Instead of persuading people to attend shows, start-ups persuade you to use their product.
Like the Sunset Strip people are lured by the tales of fame, fortune and success of the big names. However, with such a huge amount of start-ups out there the chance of being a success is statistically just a lottery.
It's all about the Hits, man!
Where the music industry and tech start-up world converge most dramatically is in the quest for "the hits". Both industries thrive on the process of taking a product (be it a record or an application), getting the product in front of influential people (radio/magazines and twitter/blogosphere respectively) and then hoping that people connect to it and share it with others. We call this "going viral" now, but popularity spreads the same as it always has; only the channels and mediums have changed.
Like the music industry, the scale of the hit is what's most important. Determining how to react to the success of a hit is how you determine your longevity as a start up.
But how do you conjure a hit?
For many years the record industry has hunted for this perfect formula, and although they may not have completely cracked it, they have honed it into a fine art.
Take your average hit:
At the centre you have the hook (the catchy aspect that gets stuck in your head).
Next you have the sound, it's trendy, it's like what other current hits sound like but with a slight twist to keep it fresh.
Then there's the element of relatability, do the words resonate with audience or the sound compliment/trigger an emotion.
Boom! you have a hit.
This can be applied to tech products:
For the hook you have the reason the user keeps using your app
For the trendy sound, this is determined by the UI conventions and how they mirror the trends (as I write this it feels like Microsoft Metro is the latest trendy design paradigm).
For relatability, this is how the app fits into your life and what it solves for you.
The thing to watch out for in tech is that we don't make products too formulaic to get that hit. In music as people got better at making the perfect formulaic hits, the audience lost interest because they became dull and generic. When was the last time you cared what was going on in the charts?
Getting a record deal
The record companies of the tech start-up scene are the investors. If you want to make a great application someone is going to have to stump up some cash so you can afford to feed, shelter and clothe your team until it takes flight.
The business model most major record labels take is to invest in a large number of artists knowing most will fail, but if one is to succeed massively it will pay for the losses the other artists incur. Investors in tech start-ups do exactly the same. For this reason you will often be expected to win big, not just break even.
Choosing a record company and entering into a record deal is a tough decision for an artist and they have to carefully consider the pros and cons:
How far can the record company take the artist?
Do they have the resources to take the artist as far as they want to go?
What involvement will the label have?
Will they want creative control?
Are they experienced enough to provide advice and understanding?
How long will the artist be tied into the contract and can parties exit if things breakdown?
Most importantly, do they get the artist and have matching aspirations?
These questions are exactly the same questions you need to be answering when your tech start-up is looking at investors; with both you need to come to the conclusion that the pros sufficiently outweigh the cons to have a successful project.
The Decline
Where this (now near exhausted) analogy of the music industry and tech start-ups gets most interesting is when we consider the state of the music industry right now. That industry has lost its glow, people aren't buying records in the same way, the high street shops are closing and the labels are laying off staff.
There is still a lot of money being made through live music, however this is mostly going to the top, established live acts, and it's increasingly acknowledged that it is more difficult to establish yourself as a new act than it was in the past. The blame for this seems to fall on the labels (who themselves blame Internet piracy) for their inability to embrace technological changes and reluctance to meet consumer expectations.
What I find amazing about the labels' problem is that it was industry wide and all the major labels struggled and acted in the same way. This is akin to start-ups being paralysed because their investors had too much control and hindered experimentation and innovation. It happened in the music industry because times were good and artists never foresaw (or weren't in a position to negotiate) sufficient control to react to a change. Major acts like Radiohead and Nine Inch Nails reacted to the changed climate once free from the shackles of a label. Now many acts are embracing crowdfunding as a way to re-approach the music industry without the labels.
Where do we go now?
Could the current climate of tech start-ups decline the way of the music industry? Well yes and no.
It seems unlikely start-ups will ever be as trapped as artists were by monopolistic record labels, as there's just not the same need for a middleman in getting a product to an audience. But if there is more structure in investors and more big players dominating the field, start-ups could end up trapped.
Right now times are good; people are investing in innovation and believe innovation is the route to a financial return. With start-ups formed less than a decade ago now valued above behemoth brands, there is an air that anything is possible and owning a part of the next big thing is a golden ticket. But bubbles burst, and if the returns dry up, so will the investment and the innovation it helps fund.
Might as well jump

So going back to the title, is your start-up the next Van Halen? Probably not, but in this climate people are likely to pay for you to try, so why not enjoy it while it lasts?
When you look back at the bands of hair metal that didn't make it huge, they sure looked like they had fun trying. But do consider what happened to hair metal: while a million people tried to do what their predecessors had done before them, their successors (grunge and alt-rock) came from another perspective, with another set of rules, and changed everything.
This has happened in tech before and will happen again, and as always it'll come from the people at the fringe, the indie rockers, the people in it for the art and the people they inspire.
Incidentally when putting this together I did ponder who in tech is closest to the rock star, my conclusion: the serial conference speakers. They have the touring, the audience and (in some cases) the ego. If they were to embrace that connection more (spandex, feathered hair, a mean light show) tech conferences would be a better place for all :-)
All images courtesy of our Google overlord
Text
Where to programmatically lay out views in iOS 5 (and handling orientation changes)
One of the first hurdles you hit when getting started with developing apps for the iPhone and iPad is handling orientation changes. At first it's easy: the auto-sizing does the job for you nicely, but then you want to do something more complicated and you find it doesn't work in all places (arrgghh). To make sense of this I set up a little project (available on github) that does a ton of logging when events happen, to work out the best ways to proceed.

There are a whole bunch of places you can hook in the initial code for laying out your view. Some of them are better than others; here's a run-down of the methods called:

[UIViewController loadView] - Only called when the view controller is created, and only when the view is built programmatically. Bounds are incorrect (normally empty) at this point.
[UIViewController viewDidLoad] - Only called when the view controller is created. Bounds are incorrect (normally empty) at this point.
[UIViewController viewWillAppear:] - Called whenever the view is presented on screen (although not necessarily when the app returns from the background state). We have bounds at this point, however the device orientation hasn't been corrected yet, so these might be for portrait when we are in landscape (or vice versa).
[UIViewController viewWillLayoutSubviews] - The first method where we have the correct bounds for the device. Called whenever the view controller is changing the display of the view (device orientation change, view loaded) or the UIView is marked as needing a layout.
[UIView layoutSubviews] - Called on the UIView rather than the UIViewController. It has the correct bounds and is a great place to change layout.
[UIViewController viewDidLayoutSubviews] - Called after layoutSubviews is called on the UIView. Changes made in layoutSubviews can be animated; further changes made in this method can cancel those animations.
[UIViewController viewDidAppear:] - Called once the view has appeared. It has the correct bounds, but changes made here can happen after an animation has occurred and cause a slight jump.
If we were to launch the app in landscape, we'd find that before viewWillLayoutSubviews the following method is called:

[UIViewController willRotateToInterfaceOrientation:duration:] - Incorrect bounds

And after viewDidLayoutSubviews:

[UIViewController willAnimateRotationToInterfaceOrientation:duration:] - Correct bounds; changes made in here will animate
[UIViewController didRotateFromInterfaceOrientation:] - Correct bounds

From this we have a few places where we can do our laying out, but once we consider a couple more cases the number of places decreases.

When rotating the device the following methods are called:

[UIViewController willRotateToInterfaceOrientation:duration:]
[UIViewController viewWillLayoutSubviews]
[UIView layoutSubviews]
[UIViewController viewDidLayoutSubviews]
[UIViewController willAnimateRotationToInterfaceOrientation:duration:]
[UIViewController shouldAutorotateToInterfaceOrientation:]
[UIViewController didRotateFromInterfaceOrientation:]

And when presenting a view controller we get the following:

[UIViewController loadView]
[UIViewController viewDidLoad]
[UIViewController viewWillAppear:]
[UIViewController shouldAutorotateToInterfaceOrientation:]
[UIViewController viewWillLayoutSubviews]
[UIView layoutSubviews]
[UIViewController viewDidLayoutSubviews]
[UIViewController viewDidAppear:]

And finally, when dismissing a view controller (where the orientation was changed in the child view controller):

[UIViewController viewWillAppear:] - Note the correct bounds here
[UIViewController viewWillLayoutSubviews]
[UIView layoutSubviews]
[UIViewController viewDidLayoutSubviews]
[UIViewController viewDidAppear:]

This narrows us right down to the point where we only have two safe choices if we want to be able to animate the transition and have the view laid out correctly across all the orientations.

The first approach is to use the layoutSubviews method on the UIView. Pros of this technique:
+ Called every time the view is loaded or the orientation changes, and can be triggered manually
+ Separates the concern of positioning subviews into the view itself, meaning cleaner view controllers :)
+ Changes made here will animate when animating a rotation
Cons:
- Needs a subclassed UIView each time, which is another class to create.
- You don't have direct access to the orientation of the device here; you can use the bounds for laying out, or if you want the device orientation you can access it via [[UIApplication sharedApplication] statusBarOrientation].

The second approach is to use viewWillAppear: and willAnimateRotationToInterfaceOrientation:duration: on the view controller. Pros:
+ All the code can live in the UIViewController, so there's no need to subclass the UIView
+ Handles animation
Cons:
- You need to create an extra method that both methods call, or duplicate code
- Will often mean the interface is laid out twice for a single presentation; this shouldn't really be a performance problem unless you're doing a ton of work to calculate the layout.

This concludes the complications of orientation changes. No doubt there are some scenarios I haven't considered, but hopefully it will help someone out. If not, it helped me at least :-) Again, if you want the source it's on github.
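To make the first approach concrete, here's a minimal sketch of a UIView subclass that owns its layout in layoutSubviews (assuming ARC). The class name, the single label subview and the frame values are illustrative only and aren't taken from the example project; the point is simply that every frame calculation lives in one method that runs on load and on every rotation.

#import <UIKit/UIKit.h>

// Illustrative UIView subclass for approach 1; RotatingContentView and
// titleLabel are made-up names, not part of the example project.
@interface RotatingContentView : UIView
@property (nonatomic, strong) UILabel *titleLabel;
@end

@implementation RotatingContentView

@synthesize titleLabel = _titleLabel;

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        _titleLabel = [[UILabel alloc] initWithFrame:CGRectZero];
        [self addSubview:_titleLabel];
    }
    return self;
}

// Runs when the view first appears, on every rotation and whenever
// setNeedsLayout is called, so all the frame maths sits in one place
// and will animate during a rotation.
- (void)layoutSubviews
{
    [super layoutSubviews];

    CGRect bounds = self.bounds;
    // Lay out from the bounds rather than the device orientation where
    // possible; if you genuinely need portrait/landscape, ask
    // [[UIApplication sharedApplication] statusBarOrientation].
    self.titleLabel.frame = CGRectMake(20.0f, 20.0f,
                                       bounds.size.width - 40.0f, 30.0f);
}

@end

The view controller then only needs to create and assign this view; the second approach works the same way, except the frame calculations sit in a shared method that both viewWillAppear: and willAnimateRotationToInterfaceOrientation:duration: call.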
Text
Changing Form Type Guesser order on Symfony2
I'm still getting used to Symfony2, and one of the things that's got in my way a bit is that Doctrine's form type guesses seem to take precedence over the validator constraints (which seems the wrong way round to me, although I understand why: the validator is registered as the first guesser because it's part of the Symfony FrameworkBundle, the DoctrineBundle is registered later, and you'd naturally want any subsequent guessers you add yourself to come after the initial ones). I experimented with a few ways to get the constraints injected into the type class, but it was getting pretty messy and not something I'd want to repeat for every form.
Then I stumbled on a little trick that puts the validator guesser later than the Doctrine one, making its guesses more likely to win.
Simply in AppKernel.php move "new Symfony\Bundle\DoctrineBundle\DoctrineBundle()," to the top of your bundle list above "new Symfony\Bundle\FrameworkBundle\FrameworkBundle(),".
I know it's a total hack though; a less hacky alternative is to add the validator as a guesser again via:
<services>
    <service id="form.type_guesser.validator_2" class="%form.type_guesser.validator.class%">
        <tag name="form.type_guesser" />
        <argument type="service" id="validator.mapping.class_metadata_factory" />
    </service>
</services>
But then that means every validation guess goes through an additional cycle of the validation model.
I'm not sure what the best way forward is, but it works for now.
Text
Setting up Multiple PHP Versions with Plesk 10
One of my objectives in setting up a new work dev server was the option to allow us to test our applications against multiple versions of PHP. The server environment we have is Plesk 10 with Apache 2.2, running on Ubuntu 10.04. The way we run PHP by default is using mod_php, but for multiple versions we need to use a different method. I went with FastCGI, but played around with CGI and suPHP too and was able to get set up on all of them pretty easily (this guide covers just FastCGI). I found existing guides got me pretty close but I hit problems with Plesk; the main resource I used is here: http://dbforch.wordpress.com/2010/05/21/apache2-fastcgi-multiple-php-versions-ubuntulucid-10-04/
As per the guide, I installed phpfarm while logged in as root using:
# mkdir -p /opt/php
# cd /opt/php
# svn co http://svn.php.net/repository/pear/ci/phpfarm/trunk phpfarm
When it came to compiling PHP I got errors like:
undefined reference to `mbstring_globals'
undefined reference to `php_ob_gzhandler_check'
collect2: ld returned 1 exit status
make: *** [sapi/cgi/php-cgi] Error 1
I resolved these by editing src/compile.sh in phpfarm to use make clean instead of make.
Next I encountered a painful problem trying to set up a wrapper in the global cgi-bin. It seems to me (although I couldn't find the documentation to prove it) that the docroot in suexec is set to /var/www/cgi-bin/cgi_wrapper/cgi_wrapper, as I could not get any file in that bin other than that path to work (and found errors about the docroot in /var/log/apache2/suexec.log).
The approach I took to solve this was to create a wrapper in /var/www/cgi-bin and, rather than calling that wrapper directly from the vhost.conf, set up a wrapper in the vhost's own cgi-bin that solely executes the /var/www/cgi-bin wrapper.
So to set up the vhost I used the following process:
# cd /var/www/vhost/[domain]
# mkdir -p cgi-bin/php_fcgi
# chown [user]:psacln cgi-bin/php_fcgi
# vim cgi-bin/php_fcgi/php-5.2.10_wrapper
Then enter:
#!/bin/sh
exec /opt/php/phpfarm/inst/php-5.2.10/bin/php-cgi
Save the file, then set the permissions and ownership:
# chmod 755 cgi-bin/php_fcgi/php-5.2.10_wrapper
# chown [user]:psacln cgi-bin/php_fcgi/php-5.2.10_wrapper
Then to edit vhost.conf:
# vim conf/vhost.conf
Enter the following (using the same FastCGI configuration as Plesk 10):
<IfModule mod_fcgid.c>
    <Files ~ (\.php)>
        SetHandler fcgid-script
        FCGIWrapper /home/rbdev/www/cgi-bin/php_fcgi/php-5.2.10_wrapper .php
        Options +ExecCGI
        allow from all
    </Files>
</IfModule>
You can add the following at the top to disable mod_php (as Plesk 10 by default sets the engine with php_admin_flag, you need to override it in a Directory definition using an admin flag; using php_flag won't work):
<Directory /var/www/vhosts/rb-dev.co.uk/httpdocs>
    php_admin_flag engine off
</Directory>
All done! The only downside is that mod_php runs as the apache user while FastCGI will run as the owner, and I'm not aware of a way around that at the moment; maybe using suPHP_UserGroup would do it, but I'd need to recompile to check.
Text
PHP UK Conference 2011
So last Friday I headed down to PHP UK 2011 to catch up on the recent goings-on in the world of PHP. The trip started off cursed, what with me coming down with a chest infection in the middle of last week, then missing my train to London (ouch, poor wallet) and then getting no sleep thanks to the world's loudest snorer versus the paper-thin walls of one of Kings Cross' more budget-friendly hotels.
Bleary-eyed, I headed down to the Business Design Centre in Islington just in time for the talks to start. First up was the keynote delivered by Marco Tabini, covering themes on understanding users and delivering some news on phparchitect magazine (which I plan to keep track of this year).
Next up I caught Ivo Jansch's talk on PHP in a Mobile Ecosystem. This was a good talk covering the considerations and approaches in using PHP for mobile. I got a few good links from it, in particular http://www.detectmobilebrowsers.mobi/ (for generating backend code to deal with particular handsets) and a sequence of blog posts on http://thoomtech.com introducing Objective-C for PHP developers (which admittedly would have been more useful for me a couple of months ago, but I'm still awful at Objective-C so there's plenty of room for improvement).
At the same time as Ivo's talk there were two other talks of note. The first was one on HTML 5 and CSS 3 delivered by (believe it or not) a Microsoft evangelist; the people I spoke to who were in the audience for it weren't very happy. On the other hand, the talk by Ian Barber called Zero MQ is the answer went down very well and seemed to be many people's highlight of the conference (not hugely surprising to me, as I really enjoyed his talk on full-text searching last year). Although Zero MQ is a technology I'm unlikely to use in production any time soon, it has inspired me to give it a go locally.
After Ivo's talk I went to see Jonathon Weiss talk about running web apps on Amazon EC2, which is something I've wanted to explore for a while; I just need a project to justify playing with it. Next I caught Thorsten Rinne introducing continuous improvement concepts and tools. Like every similar talk I have seen at conferences, I came away with a renewed enthusiasm to set up a Continuous Integration server for work and then slowly start employing all the cool techniques that help ensure and improve code quality. This time I'm going to do it, watch this space.
Next up I caught a great talk by Lorenzo Alberton called NoSQL: What, When and Why. Lorenzo focused first on the academic concepts behind NoSQL databases, which explained how they can work in distributed environments and be resilient to problems. Prior to this talk I only had brief knowledge of document databases like CouchDB and MongoDB; I had no idea that NoSQL covered four different kinds of database. I recommend checking out the slides for a great introduction to NoSQL that's not clouded by the buzzword hype bubble.
Last up I caught Advanced OO Patterns by Tobias Schlitt. The talk was actually quite different to what I expected (my expectation had been crazy design patterns pulled out of dusty computer science textbooks and applied to PHP). It actually focused on a few common design patterns: Dependency Injection, Active Record and Data Mappers. It did open my eyes to how much of an Active Record pattern Doctrine 1.2 is (despite the spin), and it will be interesting to see whether using Doctrine 2 will feel like Active Record or like a Data Mapper with less typing. We will see!
Overall a fun day. I don't feel I got as much out of it socially as I normally do out of a conference, but I'm blaming that on not being on top form. I've come away with fresh ideas and practices to try out, and that's the other half of the point, right?