100 Awesome Folkways LPs
Back in 2005 I had just dropped out of graduate school and returned to Edmonton, Alberta. I was waiting for the semester to start so I could begin my journey as a mathematician. I'm indebted to my friend Christopher Bateman (co-author of In Fine Style) who told me that there was a collection of Folkways Records at the University of Alberta. An entire collection?! Wow! The only other complete collection was at the Smithsonian Museum! And I had nothing to do all day (except for my string of terrible minimum wage jobs)!
I started visiting folkwaysAlive!, the small Folkways centre, regularly and digging through their collection. Most of the knowledge about Folkways at the time was centred on their incredible collection of ethnographic recordings. However, as I began looking through the catalog I was finding all kinds of weird and amusing records. There was one titled Speech After the Removal of the Larynx and another called Sounds of the Junk Yard. What's more, there were famous composers of electronic music who had published LPs on Folkways. What were those doing there? It turns out that Folkways founder Moses Asch was determined to capture all "sound", and in the pure pursuit of this philosophy all kinds of delightful collateral damage ended up in the catalog.
Some years later I had a radio show on CJSR where I played music entirely from the Folkways collection (the show was appropriately titled The Sounds of Folkways). At the same time, I tried to capitalize on my newfound knowledge by pitching a book and/or column titled 100 Weird and Wonderful LPs on Folkways to various people. I wasn't successful, but in the process I made a "list of weird Folkways LPs", and that's what I'd like to share today.
Here is my list. It's not complete. It's not a list of "the best Folkways LPs." It's just a list of Folkways records I happen to like and think are weird or cool. Thus, there are large omissions in categories like Blues and Ethnomusicology, which contain many of the most incredible Folkways records.
You can find music clips for most of these LPs on YouTube. This list is a decade old, so there are probably some errors, and some of the catalog numbers are likely incorrect.
I hope you enjoy.
PS - I still need a copy of Speech After the Removal of the Larynx!
PPS - I'm forever thankful to Lorna Arndt who so graciously let me hang out in the folkwaysAlive! office!
Avant-Garde / Electronic
FW 34006 – Music of Ann McMillan - Whale-Wail, In Peace, En Paix
FW 33451 – Gateway Summer Sound – Abstracted Animal & Other Sounds Composed by Ann McMillan
FW 33855 – Reelizations – Composed and Performed by Barton Smith
FW 33856 – Reelizations Vol. 2 – Composed and Performed by Barton Smith
FW 37464 – Dariush Dolat-Shahi – Electronic Music Tar and Setar
FW 37467 – Dariush Dolat-Shahi – Ostashagah
FW 03434 – Eight Electronic Pieces – Tod Dockstader
FTQ 33951 - Ilhan Mimaroglu - To Kill a Sunrise and La Ruche
FW 31313 – Gamelan in the New World Vol.1
FW 31312 – Gamelan in the New World Vol. 2
FSS 33878 - Israeli Electroacoustic Music
FS 003861 – Radio Programme No. 1: Audio Collage / Henry Jacobs' Music & Folklore
FT 03704? – Indeterminacy (John Cage Reading and David Tudor Music)
FW 47902 – Invocations – Richard Kostelanetz
FX 06250 - Science Fiction Sound Effects Record
FW 06160 – Sounds of New Music
FW 06241 – Travelon Gamelon – Music for Bicycles – Richard Lerman
FW 33440 – Outer Space Music by Vaclav Nelhybel
FW 33436 – Electronic Music
FW 06253 – Futuribile the Life to Come – Gianni Safred and his Electronic Instruments
FM 03349 – The Piano Music of Henry Cowell
FW 03438 – Electronic Music From Razor Blades to Moog – Produced & Composed by J.D. Robb
FW 33435 – J.D. Robb - Rhythmania & Other Electronic Compositions
FW 33439 – Music by Jean Ivey for Voices, Instruments, and Tape
FW 33445 – Jon Appleton – Music for Synclavier and other Digital Systems
FW 33437 – The World Music Theatre of
FW 33442 – The Dartmouth Digital Synthesizer
FW 37461 – Jon Appleton – Four Fantasies for Synclavier
FW 33450 – McLean: Electro-Symphonic Landscapes
FW 36050 – Electronic Music from the Outside In
FW 37465 – Computer Music from the Outside In
FW 37475 – Computer Music
FSS 6301 – Highlights of Vortex
FW 33431 – Extended Piano (Elliott Schwartz)
FW 33901 – New American Music Vol. 1
FW 33902 – New American Music Vol. 2
FW 33903 – New American Music Vol. 3
FW 33904 – New American Music Vol. 4
FW 33441 – Tract: A Composition of Agitprop Music for Electromagnetic Tape by Ilhan Mimaroglu
Ethnographic
FW 31306 – The Nuru Taa African Musical Idiom: Played on the Mama-Likembi (Nadi Qamar)
FW 08975 - Mushroom Ceremony of the Mazatec Indians of Mexico
FW 08512 – Music of Upper-Egypt
FW 04457 – The Pygmies of the Ituri Forest
FE 4377 - Modern Maya: The Indian Music of Chiapas, Mexico
FE 4379 - Modern Maya: The Indian Music of Chiapas, Mexico - Vol. 2
FC 7755 - Bilal Abdurahman – Echoes Of Timbuktu And Beyond
Folk
FG 3526 - Elizabeth Cotten - Folksongs And Instrumentals With Guitar
FTS 31003 - Elizabeth Cotten - Vol. 2: Shake Sugaree
FW 03537 - Elizabeth Cotten - Vol. 3: When I'm Gone
FW 07535 – Innovative Rhythmic and Tonal Textures for Developing Creative Motor Skill Activities (Bilal Abdurahman)
FW 07540 – Sound Rhythm Rhyme and Mime for Children (Bilal Abdurahman)
FW 03581 - Mike Hurley – First Songs
FW 07520 – Ghetto Reality (Nancy Dupree)
FW 32850 – Niss Puk Band – No More Nukes
FW 32852 – Roger Matura – The Outrage Grows
FW 32851 – Roger Matura – Times are Gonna Get Harder
BR 304 - The Village Fugs - Sing Ballads Of Contemporary Protest, Point Of Views, And General Dissatisfaction
PAR 01020 – A Grain of Sand: Music for the Struggle by Asians in America
FTS 31066 - Lucinda Williams - Ramblin' on My Mind
FTS 31067 - Lucinda Williams - Happy Woman Blues
FA 2951 - Anthology of American Folk Music Volume One: Ballads (Harry Smith)
FA 2952 - Anthology of American Folk Music Volume Two: Social Music (Harry Smith)
FA 2953 - Anthology of American Folk Music Volume Three: Songs (Harry Smith)
Jazz
FW 33867 – East New York Ensemble de Music at the Helm
FW 33866 – Entourage
FW 33870 – The Neptune Collection – The Entourage Music & Theatre Ensemble
FW 05403 – From the Cold Jaws of Prison
FW 09718 – Kenneth Patchen reads with Jazz in Canada
FW 02966 – Mary Lou Williams – The Asch Recordings 1944-1947
FW 02843 – Mary Lou Williams – Black Christ of the Andes
FSS 37462 - Bertram Turetzky - A Different View
New Age
FW 06195 – Clouds: New Music for Relaxation Vol. 1 (Craig Kupka)
FW 06916 – Crystals: New Music for Relaxation Vol. 2 (Craig Kupka)
FW 37463 – Charles Compo – Seven Flute Solos
Soul / Funk / R&B
FW 32863 – Into Morning – Charles Cha-Cha Shaw
FW 32870 – Kingdom Come – Cha Cha Shaw
FW 31037 – Climbing High Mountains – Juanita Johnson & The Gospel Tones
FW 33868 – The Montgomery Movement featuring The Montgomery Express
FW 09723 – Underground Streets – Words and Original Music by Norman Riley
FW 09710 – Boss Soul – 12 Poems by Sarah Webster Fabio set to Drum Talk, Rhythms & Images
FW 09715 – Together to the tune of Coltrane's “Equinox” (Sarah Webster Fabio)
FW 09714 – Jujus Alchemy of the Blues – Poems by Sarah Webster Fabio Read by Sarah Webster Fabio with Musical Background
Spoken Word
FC 7471 - Rev. Howard Finster - Man of Many Voices
FW 06134 – Speech After the Removal of the Larynx
FW 05401 – Angela Davis Speaks
FW 09871 – Dante: The Inferno
FW 06102 – Interview with Sir Edmund Hillary – Mountain Climbing
FW 09711 – Soul Ain’t Soul Is – Poems by Sarah Webster Fabio
FW 05404 – The End of the World
FW 09701 – The Psychedelic Experience (Timothy Leary)
FW 05538 – What If I am a Woman? Black Women’s Speeches Narrated by Ruby Dee
Strange
FW 06118 – Playing Music With Animals – The Interspecies Communication of Jim Nollman with 300 Turkeys, 12 Wolves and 20 Orca Whales
FW 06143 – The Sounds of the Junk Yard
SFW 45060 – Sounds of North American Frogs
FW 06142 – Sounds of the Office
FW 06121 – Sounds of the Sea Vol. 1
FW 06122 – Sounds of the Sea Vol. 2
FW 05589 – Street and Gangland Rhythms
FW 05580 – A Dog’s Life (Tony Schwartz)
FW 05562 – The World in My Mailbox (Tony Schwartz)
FW 05581 – Music in the Streets (Tony Schwartz)
FW 06200 – Voices of Satellites
FW 06123 – Vox Humana
Extensible Effects in the van Laarhoven Free Monad
Edit: you can find this code on Hackage at free-vl.
Algebraic effects seem to be a sort of holy grail in functional programming. What I mean when I say “algebraic effect” here is: treating any effect like a value or type in your program, while also having some simple operations (an algebra) to combine effects.
What does this look like practically? The two languages that come to mind are Idris and PureScript. When you program using their Effects support, you write monadic code, but essentially have a list of effects you can pull from the environment: logging, state, IO, etc. Further, you can program against a stack of effects, assuming only the ones you need are present, which lets you grow that effect stack arbitrarily as needed. It's very nice.
Unfortunately we don’t have access to these tools in Haskell. Instead, haskellers usually rely on mtl or Free Monads.
What I want to present today is an Effects library close to that of Idris and PureScript using the van Laarhoven encoded Free Monad armed with a Heterogeneous List (HList) of effects. I claim this has some of the benefits of Effect tooling in Idris and PureScript, the same expressiveness of regular Free Monads, a more performant encoding than Church, Fused, or Oleg encodings, and only costs us a few extensions. All in about 60 lines of code.
Motivating Example
First, an example of what we’ll end up with:
-- | we use the explicit `liftVL` combinator for illustrative purposes.
-- in real code you'd have your own combinators.

-- Make a post request
postReq :: HasEffect effects Http
        => Url -> RequestBody -> FreeVL effects StatusCode
postReq url body = do
  resp <- liftVL (\http -> put http url body)
  return (statusCode resp)

-- take any arbitrary free monad and wrap it with logging
withLog :: HasEffect effects Logging
        => String -> String -> FreeVL effects a -> FreeVL effects a
withLog preMsg postMsg program = do
  liftVL (\log -> infoLogger log preMsg)
  a <- program
  liftVL (\log -> infoLogger log postMsg)
  return a

-- a concrete list of effects used to define an interpreter
type MyEffects = ( Http ': Logging ': Random ': State ': '[] )

-- an interpreter as a value
ioInterpreter :: Effects MyEffects IO
ioInterpreter = httpIO .: loggerIO .: randomIO .: stateIO .: EmptyEffect

-- actually running our program
main :: IO ()
main = interpret ioInterpreter
         (withLog "POST!" "phew! made it!"
                  (postReq "https://weirdcanada.com" "rare=cool"))
The only part that’s missing from the above is what our effects (Http, Logger, etc.) look like. Here is an example:
-- the HTTP effect
data Http m =
  Http { get :: Url -> m Response
       , put :: Url -> RequestBody -> m Response
       -- etc.
       }
-- the Logging effect
data Logging m =
  Logging { infoLogger :: String -> m ()
          , debugLogger :: String -> m ()
          -- etc.
          }
The rest of this post is written in literate haskell. I encourage you to cut-and-paste this code and play with it yourself! To start, let’s get some extensions and imports out of the way!
> {-# LANGUAGE DataKinds #-}
> {-# LANGUAGE FlexibleContexts #-}
> {-# LANGUAGE FlexibleInstances #-}
> {-# LANGUAGE GADTs #-}
> {-# LANGUAGE KindSignatures #-}
> {-# LANGUAGE MultiParamTypeClasses #-}
> {-# LANGUAGE PolyKinds #-}
> {-# LANGUAGE RankNTypes #-}
> {-# LANGUAGE TypeOperators #-}
>
> module Main where
>
> import Control.Arrow ((&&&))
> import Control.Concurrent (threadDelay)
> import Control.Exception (catch)
> import Control.Lens ((^.))
> import Data.ByteString.Lazy (ByteString)
> import Network.Wreq (get, post, Response, responseStatus, statusCode)
> import Network.HTTP.Client (HttpException(StatusCodeException))
> import qualified Network.HTTP.Types.Status as S
> import System.Random (randomIO)
van Laarhoven Free Monad
I refer you to Russell O'Connor’s great blog post on the van Laarhoven Free Monad. It’s a short and succinct read. In some sense, the van Laarhoven Free Monad is dual to the usual one; instead of using a sum type to model operations we use a product.
Here is the usual Free Monad encoding:
> -- type aliases to make this look like real code.
> type Url = String
> type RequestBody = ByteString
>
> -- old-fashioned free monad encoding
> data Free effect a = Pure a
>                    | Free (effect (Free effect a))
>
> -- example http effect: using Strings to represent urls and responses for brevity
> data YeOldeHttp a = Get Url (Response ByteString -> a)
>                   | Post Url RequestBody (Response ByteString -> a)
>
> -- example interpreter
> freeIOInterp :: Free YeOldeHttp a -> IO a
> freeIOInterp (Pure a) = return a
> freeIOInterp (Free (Get url next)) = get url >>= freeIOInterp . next
> freeIOInterp (Free (Post url body next)) = post url body >>= freeIOInterp . next
>
> -- example combinator
> oldGet :: Url -> Free YeOldeHttp (Response ByteString)
> oldGet url = Free (Get url Pure)
Given an effect, which is itself a sum type (each branch a different operation, e.g. Get or Post), we can show that Free YeOldeHttp a is a monad (see Gabriel's blog post for more) and write interpreters against it, supplying it with the right semantics. The great part about Free Monads is that we can write different interpreters, each for their own specific use (testing, production, debugging, etc.).
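To make the "different interpreters" point concrete, here is a sketch of a second, pure interpreter for the classic encoding, written as a standalone module rather than as part of this post's literate source. To stay self-contained it re-declares Free and YeOldeHttp locally and swaps wreq's Response ByteString for plain String responses; the canned "GET:"/"POST:" bodies and the name freeTestInterp are illustrative inventions, not from the original post.

```haskell
-- self-contained re-declaration of the classic encoding, with String
-- responses standing in for wreq's Response ByteString
data Free effect a = Pure a
                   | Free (effect (Free effect a))

type Url = String
type RequestBody = String

data YeOldeHttp a = Get Url (String -> a)
                  | Post Url RequestBody (String -> a)

-- a pure "test" interpreter: answers every request with a canned body,
-- so programs can be unit-tested without doing any IO
freeTestInterp :: Free YeOldeHttp a -> a
freeTestInterp (Pure a)                 = a
freeTestInterp (Free (Get url next))    = freeTestInterp (next ("GET:" ++ url))
freeTestInterp (Free (Post url _ next)) = freeTestInterp (next ("POST:" ++ url))

oldGet :: Url -> Free YeOldeHttp String
oldGet url = Free (Get url Pure)

main :: IO ()
main = putStrLn (freeTestInterp (oldGet "weirdcanada.com"))
```

The same program value (e.g. oldGet "weirdcanada.com") can then be fed to an IO interpreter in production and to freeTestInterp in unit tests.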
Now, the van Laarhoven Free Monad is a different encoding, and requires you to represent effects as products instead of sums. The above example is equivalent to:
> -- (simple) van Laarhoven Free Monad encoding
> newtype FreeVL1 effect a =
>   FreeVL1 { runFreeVL1 :: forall m. Monad m => effect m -> m a }
>
> -- example Http effect
> data NewHttp m =
>   NewHttp { getNewHttp :: Url -> m (Response ByteString)
>           , postNewHttp :: Url -> RequestBody -> m (Response ByteString)
>           }
>
> -- example interpreter
> newHttpIO :: NewHttp IO
> newHttpIO = NewHttp { getNewHttp = get, postNewHttp = post }
>
> freeVL1IOInterpreter :: FreeVL1 NewHttp a -> IO a
> freeVL1IOInterpreter program = runFreeVL1 program newHttpIO
>
> -- example combinator
> newGet :: Url -> FreeVL1 NewHttp (Response ByteString)
> newGet url = FreeVL1 (\httpEffects -> getNewHttp httpEffects url)
The nice thing about FreeVL1 is that it's just a function. To interpret a program written in FreeVL1 NewHttp a, we only need to provide a value of type NewHttp m, as demonstrated above. This means that writing programs against FreeVL1 NewHttp a has the same runtime cost as function composition or the Reader monad. Contrast this with the regular encoding of Free, which performs horrendously under binds (it's basically a fancy linked list of operations). We can use the Church encoding to improve this substantially, but it has other trade-offs as well.
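To illustrate the "just provide a value of type NewHttp m" point, here is a standalone sketch (again separate from the literate source) that interprets a FreeVL1 program purely by handing it an effect record instantiated at Identity. String responses stand in for wreq's Response type, and newHttpPure is a hypothetical name:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity, runIdentity)

newtype FreeVL1 effect a =
  FreeVL1 { runFreeVL1 :: forall m. Monad m => effect m -> m a }

type Url = String

-- String responses stand in for wreq's Response ByteString
data NewHttp m = NewHttp { getNewHttp :: Url -> m String }

newGet :: Url -> FreeVL1 NewHttp String
newGet url = FreeVL1 (\http -> getNewHttp http url)

-- a pure interpreter is just another value of type NewHttp m;
-- here m ~ Identity, so interpretation is ordinary function application
newHttpPure :: NewHttp Identity
newHttpPure = NewHttp { getNewHttp = \url -> pure ("stub response for " ++ url) }

main :: IO ()
main = putStrLn (runIdentity (runFreeVL1 (newGet "weirdcanada.com") newHttpPure))
```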
Now, the downside of the simple van Laarhoven encoding is that we only have one effect at a time. Let’s see how we can improve that!
The van Laarhoven Free Monad With Arbitrary Effect Stacks
Our motivation now is to create new effects (for example, instead of just Http perhaps we want logging, random numbers, etc.) and combine them. One way of doing this in the traditional Free Monad encoding is to use co-products (see blog post here). Since each effect is a Functor, and a co-product of Functors is still a Functor, this is technically possible. However, it makes pulling effects out of the stack and writing and combining interpreters finicky.
In the van Laarhoven encoding, our effect is already a product type. What we want is to be able to add more “fields” to our effect. For example, if we could add the field log :: String -> m (), that would be almost like adding a logger to our effect stack!
An equivalent way of adding fields would be to create a Heterogeneous list of effects! If instead of “multiplying” our effect products we appended them to a heterogeneous list, then we’ve got a way to add more effects that is isomorphic to adding more fields.
Let us design such an HList and show how this empowers us to extend the previous van Laarhoven encoding!
> -- | our HList of effects
> -- note that as per the van Laarhoven encoding, our effects are parameterized
> -- by a monad m.
> data EffectStack a (m :: * -> *) where
>   EmptyEffect :: EffectStack '[] m
>   ConsEffect :: effect m -> EffectStack effects m -> EffectStack (effect ': effects) m
EffectStack now contains an arbitrary list of effects, each one parameterized by m. We are now ready to define the stack-driven van Laarhoven Free Monad:
> -- van Laarhoven Free Monad with Effect Stacks encoding
> newtype FreeVL effects a =
>   FreeVL { runFreeVL :: forall m. Monad m => EffectStack effects m -> m a }
>
> -- Yes, it is a monad
> instance Functor (FreeVL effects) where
>   fmap f (FreeVL run) = FreeVL (fmap f . run)
>
> instance Applicative (FreeVL effects) where
>   pure a = FreeVL (const (pure a))
>   (FreeVL fab) <*> (FreeVL a) =
>     FreeVL $ uncurry (<*>) . (fab &&& a)
>
> instance Monad (FreeVL effects) where
>   (FreeVL run) >>= f =
>     FreeVL $ \effects -> run effects >>= \a -> runFreeVL (f a) effects
As with the previous van Laarhoven encoding, interpreters are simple functions:
> -- interpret a van Laarhoven Free Monad with Effect Stacks
> interpret :: Monad m
>           => EffectStack effects m
>           -> FreeVL effects a
>           -> m a
> interpret interpreter program = runFreeVL program interpreter
Unfortunately we are not quite ready to write programs in our new fancy Free Monad. We need to construct programs with arbitrary effect stacks, and for that, we need a way to pull an effect from EffectStack and use it.
To achieve this I borrowed a trick from Julian Arni of haskell-servant (you can see his code here). Essentially, we create a type class capable of crawling the HList in EffectStack, searching for the effect we want, and returning it.
> -- define a type class that will only compile if a certain effect is
> -- present in the stack, and if it is present, return it.
> class HasEffect (effects :: [((* -> *) -> *)]) (effect :: ((* -> *) -> *)) where
>   getEffect :: EffectStack effects m -> effect m
>
> -- Let's provide some instances of `HasEffect` that can crawl EffectStack looking
> -- for an effect that matches and then return it.
>
> -- this first instance handles the case where our effect type doesn't match the
> -- head of the HList and recurses further.
> instance {-# OVERLAPPABLE #-}
>   HasEffect effects effect => HasEffect (notIt ': effects) effect where
>     getEffect (ConsEffect _ effects) = getEffect effects
>
> -- this instance matches the case where our 'effect' type matches the head
> -- of the HList. we then return that effect.
> instance {-# OVERLAPPABLE #-}
>   HasEffect (effect ': effects) effect where
>     getEffect (ConsEffect effect _) = effect
Those instances will likely bend your mind a little (they most certainly bent mine), but if you write them out yourself (which I encourage you to do) you kind of get the hang of it. (PS - I'm forever grateful to Julian for this idea because it's so handy!)
Now that we have tooling to pick our effects, we can start writing combinators that will allow us to write programs against an arbitrary effect stack.
> -- lift operations into the van Laarhoven Free Monad
> liftVL :: HasEffect effects effect
>        -- ^ constraint enforcing that our effect is in the effect stack
>        => (forall m. effect m -> m a)
>        -- ^ method to pull our operation from our effect.
>        -> FreeVL effects a
> liftVL getOp = FreeVL (\effects -> getOp (getEffect effects))
Programs in the VL Free Monad with Effects Stack
Let’s write some user code. We’ll start by defining three effects:
> -- HTTP Effect
> data Http m =
>   Http { getHttpEff :: Url -> m (Either Int (Response ByteString))
>        , postHttpEff :: Url -> RequestBody -> m (Either Int (Response ByteString))
>        }
>
> -- Logging Effect
> data Logging m = Logging { logEff :: String -> m () }
>
> -- random number effect
> data Random m = Random { getRandEff :: m Int }
>
> -- suspend effect
> data Suspend m = Suspend { suspendEff :: Int -> m () }
Now for some code. Let’s write combinators for each operator in each effect.
> getHttp :: HasEffect effects Http
>         => Url
>         -> FreeVL effects (Either Int (Response ByteString))
> getHttp url = liftVL (`getHttpEff` url)
>
> postHttp :: HasEffect effects Http
>          => Url
>          -> RequestBody
>          -> FreeVL effects (Either Int (Response ByteString))
> postHttp url body = liftVL (\http -> postHttpEff http url body)
>
> logMsg :: HasEffect effects Logging
>        => String
>        -> FreeVL effects ()
> logMsg msg = liftVL (`logEff` msg)
>
> getRand :: HasEffect effects Random
>         => FreeVL effects Int
> getRand = liftVL getRandEff
>
> suspend :: HasEffect effects Suspend
>         => Int
>         -> FreeVL effects ()
> suspend i = liftVL (`suspendEff` i)
With these combinators we can write programs! Let's write a program that makes a web request and, if it fails, suspends briefly and retries. It will retry a random number of times.
> repeatReq :: ( HasEffect effects Http
>              , HasEffect effects Random
>              , HasEffect effects Suspend
>              )
>           => Url
>           -> FreeVL effects (Either Int (Response ByteString))
> repeatReq url = do
>     numRetries <- (flip mod 10) <$> getRand
>     eResponse <- getHttp url
>     go numRetries eResponse
>   where
>     go 0 r = return r
>     go i _ = do
>       eResponse <- getHttp url
>       case eResponse of
>         r@(Right _) -> return r
>         Left _      -> suspend 100 >> go (i-1) eResponse
Now, let’s write a combinator that will add logging to any program!
> withLog :: HasEffect effects Logging
>         => String
>         -> String
>         -> FreeVL effects a
>         -> FreeVL effects a
> withLog preMsg postMsg program = do
>   logMsg preMsg
>   a <- program
>   logMsg postMsg
>   return a
And finally, let me show you that we can combine arbitrary programs and effect stacks by wrapping our previous repeatReq code with logging and supplying a url.
> -- let's combine some programs
> program :: ( HasEffect effects Http
>            , HasEffect effects Random
>            , HasEffect effects Suspend
>            , HasEffect effects Logging
>            )
>         => FreeVL effects (Either Int (Response ByteString))
> program = withLog "running request!" "done!" (repeatReq "http://aaronlevin.ca")
Note that if you remove one of those constraints (like Suspend for example), you will get a compile error:
01.lhs:313:49:
    Could not deduce (HasEffect effects Suspend)
      arising from a use of ‘repeatReq’
    from the context (HasEffect effects Http,
                      HasEffect effects Random,
                      HasEffect effects Logging)
      bound by the type signature for
                 program :: (HasEffect effects Http,
                             HasEffect effects Random,
                             HasEffect effects Logging) =>
                            FreeVL effects (Maybe (Response ByteString))
      at 01.lhs:(308,14)-(312,57)
    In the third argument of ‘withLog’, namely
      ‘(repeatReq "http://aaronlevin.ca")’
    In the expression:
      withLog "running request!" "done!" (repeatReq "http://aaronlevin.ca")
    In an equation for ‘program’:
      program
        = withLog "running request!" "done!" (repeatReq "http://aaronlevin.ca")
Interpreters in the van Laarhoven Free Monad with Arbitrary Effects Stack
Now that we’ve written some programs, we need to supply some interpreters. We’ll supply the main interpreter in IO and leave it as an exercise to the reader to create a pure one.
Recall that an interpreter in the van Laarhoven Free Monad is just a value of type effect m. Similarly, in the effect stack version, it’s a value of type EffectStack effects m, which is just an HList of our effects.
> -- a combinator to make creating HLists syntactically nicer.
> (.:.) :: effect m -> EffectStack effects m -> EffectStack (effect ': effects) m
> effect .:. effects = ConsEffect effect effects
> infixr 4 .:.
>
> -- interpret http actions in IO
> handleExcep :: HttpException -> Either Int a
> handleExcep (StatusCodeException status _ _) = Left (S.statusCode status)
> handleExcep _ = error "unhandled HttpException"
>
> httpIO :: Http IO
> httpIO =
>   Http { getHttpEff = \req -> (Right <$> get req) `catch` (return . handleExcep)
>        , postHttpEff = \req body -> (Right <$> post req body) `catch` (return . handleExcep)
>        }
>
> -- interpret logging actions in IO
> logIO :: Logging IO
> logIO = Logging { logEff = putStrLn }
>
> -- random number generator in IO
> randIO :: Random IO
> randIO = Random { getRandEff = randomIO }
>
> -- suspend in IO
> suspendIO :: Suspend IO
> suspendIO = Suspend { suspendEff = threadDelay }
>
> -- our effect stack
> type MyEffects = ( Http ': Logging ': Random ': Suspend ': '[] )
>
> -- our interpreter
> ioInterpreter :: EffectStack MyEffects IO
> ioInterpreter = httpIO .:. logIO .:. randIO .:. suspendIO .:. EmptyEffect
Now that we have an interpreter, we can run our program!
> main :: IO ()
> main = interpret ioInterpreter program >> putStrLn "exit!"
Conclusion
Hopefully by now you’ve been convinced that we’ve achieved our goal: we can program against effects in Haskell just like our comrades programming with Idris and PureScript (I say this fully tongue-in-cheek). Further, we can provide arbitrary effect stacks and combine interpreters in whatever way we want (so long as they share the same monad).
While this is all very exciting, there is still some work to do:
put this in a library on Hackage (done: see free-vl)
our EffectStack should obey some laws, but which ones?
start creating an ecosystem of effects and interpreters!
figure out how to store pure state when combining interpreters
investigate program analysis. The van Laarhoven Free Monad is just a function, but can we supply it with an effect stack built for program or static analysis?
Reasoning about Errors in Free Monads and Their Interpreters
Free Monads are a powerful abstraction for modeling operations in your program. While there are many articles about free monads, there are relatively few about using free monads.
At my current work we have a large Free Monad that abstracts various actions one might do against data in our system. We also have three interpreters: a pure interpreter for unit testing, a psql-backend interpreter for the back end of our data service, and a http-client interpreter for clients using the data service over HTTP.
One difficulty I’ve encountered several times has been how to properly model errors and exceptions. I’m not talking about the much-adored EitherT vs ErrorT vs ExceptT debate, but rather differentiating between:
Semantic Errors: attempting to perform an action that doesn’t make sense. For example, adding an edge between non-existent nodes in your graph.
Interpreter Errors: attempting to do something illegal within your interpreter. For example, submitting a malformed sql query which causes an error.
Runtime Errors: your code is correct but the outside world is not. For example, the database goes down causing an exception in your connection pool.
In this post I want to discuss a general approach to modeling and distinguishing these three classes of errors. I will then show my own approach that, combined with the Type Families extension I wrote about earlier, presents a simple and type-safe solution.
Worlds of Error
Semantic Errors
Interpreter Errors
Runtime Errors
Using Type Families to Encode Errors
Worlds of Error
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE TypeFamilies #-}

module FreeMonadErrors where

import Control.Monad.Catch
import Control.Monad.Error
import Control.Monad.Free (Free(Free, Pure))
import Control.Monad.Trans.Either (EitherT)
While cache invalidation and naming things take pride of place among hard problems, proper error handling is a close third. When to throw? Where to catch? What errors are kept pure? This problem is exacerbated in the context of Free Monads because we have a clean separation between program and interpreter.
To properly model errors in free monads, we first recognize that one advantage of Free Monads is the separation between program, interpreter, and outside world:
+--------------------+      +----------------+     +---------+
|                    | ---> |                | <-> |         |
| Programs described | ---> | interpreter(s) | <-> | outside |
| by our Free Monad  | ---> |                | <-> |  world  |
|                    | ---> |                | <-> |         |
+--------------------+      +----------------+     +---------+
Next we recognize that “errors” or “bad behaviour” may manifest in each world, reducing our problem to:
Identifying errors that stay within their respective zone and how they are handled
Identifying errors that are propagated into the interpreter
Identifying errors that are propagated to the outside world
Semantic Errors
Semantic errors are errors that are expected outcomes of your problem domain. Some examples of semantic errors:
fetching a User with id = 324, but that user does not exist.
adding an edge between non-existent nodes in your graph.
For semantic errors, you want the person writing the program within the Free Monad to handle this kind of error. These errors should not propagate into the interpreter. Often this involves ensuring the return type of a value in your algebra encodes the error. For example:
-- a simple user
data User = User { userId :: Int, userName :: String } deriving Eq

-- a functor describing the algebra of our programs.
data AlgebraF1 a = CreateUser1 Int String (User -> a)
                 | GetUser1 Int (Either String User -> a)

instance Functor AlgebraF1 where
  fmap f (CreateUser1 i s g) = CreateUser1 i s (f . g)
  fmap f (GetUser1 i g)      = GetUser1 i (f . g)
As you can see in GetUser1, we’ve ensured that when fetching a user, the program’s author must handle the case where a user with an id may not exist (via the Left part of Either String User). An example program may look like:
-- check if the user with the given id has the name.
fooProgram :: Int -> String -> Free AlgebraF1 (Either String Bool)
fooProgram fooUserId name = do
  user <- Free (GetUser1 fooUserId Pure)
  return $ fmap ((== name) . userName) user
Semantic errors should not propagate into the interpreter. This makes sense until you consider the fact that you may want to log such errors. To which I respond: logging should be part of your Free Monad transformer stack!
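A full transformer-stack example is beyond the scope of this post, but the idea can be sketched self-containedly by giving the algebra its own logging operation, so a program can log a semantic error and still handle it itself, without the interpreter ever seeing the failure. Everything below (the inlined Free boilerplate, LogError, runPure, the toy user table) is illustrative and not from the post:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- inlined Free boilerplate so the sketch stands alone
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap f (Pure a)  = Pure (f a)
  fmap f (Free fa) = Free (fmap (fmap f) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure f  <*> x = fmap f x
  Free fa <*> x = Free (fmap (<*> x) fa)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

data User = User { userId :: Int, userName :: String }

-- the algebra from before, extended with a logging operation
data AlgebraF a
  = GetUser Int (Either String User -> a)
  | LogError String a
  deriving Functor

getUser :: Int -> Free AlgebraF (Either String User)
getUser i = Free (GetUser i Pure)

logError :: String -> Free AlgebraF ()
logError msg = Free (LogError msg (Pure ()))

-- the program both logs and handles the semantic error
safeName :: Int -> Free AlgebraF (Maybe String)
safeName i = do
  eUser <- getUser i
  case eUser of
    Left err -> logError err >> return Nothing
    Right u  -> return (Just (userName u))

-- a pure interpreter over a toy user table, collecting log lines
runPure :: [(Int, User)] -> Free AlgebraF a -> ([String], a)
runPure _  (Pure a) = ([], a)
runPure db (Free (GetUser i next)) =
  runPure db (next (maybe (Left ("no user " ++ show i)) Right (lookup i db)))
runPure db (Free (LogError msg next)) =
  let (logs, a) = runPure db next in (msg : logs, a)

main :: IO ()
main = print (runPure [(1, User 1 "moe")] (safeName 2))
```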
Interpreter Errors
Errors in the interpreter arise when your interpreter does not behave correctly. For example, your interpreter may create inappropriate SQL queries or emit bad C programs. Ultimately, there is not much you can do with these types of errors aside from encoding as much as possible in your types, verifying the correctness of your interpreter (via encoding laws or equational reasoning), and testing. It’s likely errors of this form will not manifest themselves until runtime, as they will immediately be propagated to the outside world and reflected back at us rather harshly.
In some cases interpreter errors can be identified by catching the right runtime error. We’ll discuss this later.
Runtime / Outside World Errors
Runtime errors are bound to manifest. Database connections are dropped, or errors propagated from the interpreter are reflected back at us. A user writing a program within a Free Monad should not be concerned with these errors. They will be handled (or thrown) within the interpreter, usually via throwM or catch, and any program running an interpreter against a free monad will need to handle exceptions that may be raised.
-- foo interpreter
fooInterpreter :: (MonadError String m, MonadCatch m)
               => Free AlgebraF1 a
               -> m a
fooInterpreter _ = throwError "I'm a runtime error, can you catch me?"
Using Type Families to Encode Errors
Thus far we’ve discussed various scenarios where semantic, interpreter, and runtime errors can occur. How can we model these?
If we use the approach of encoding actions within our Free Monad as a universe of types from my previous blog post Type Families Make Life and Free Monads Simpler, then there exists a nice encoding of semantic, interpreter, and runtime errors.
First, let us describe our Free Monad:
-- our universe of actions
data Algebra = CreateUser | GetUser

-- singleton
data SAlgebra (a :: Algebra) :: * where
  SCreateUser :: SAlgebra 'CreateUser
  SGetUser :: SAlgebra 'GetUser

-- type family representing data required to produce action.
type family InputData (a :: Algebra) :: * where
  InputData 'CreateUser = (Int, String)
  InputData 'GetUser = Int
Semantic Errors
We want to encode semantic errors into our OutputData type family (which we've not defined yet) so that errors are handled within our free monad (via an action's return type). When we encounter an error, we will want to know what action in our algebra caused the error and which world (semantic, interpreter, runtime) we are in.
-- a universe of errors
data ErrorUniverse = Semantic | Interpreter | Runtime

-- singleton encoding of our error universe
data SErrorUniverse (e :: ErrorUniverse) :: * where
  SSemantic    :: SErrorUniverse 'Semantic
  SInterpreter :: SErrorUniverse 'Interpreter
  SRuntime     :: SErrorUniverse 'Runtime

-- | our custom error type, tagged by an error universe and carrying runtime
-- information about an action.
data AlgebraError (e :: ErrorUniverse) :: * where
  AlgebraError :: Show (InputData a) -- Show instance so we can log
               => SErrorUniverse e   -- error singleton
               -> SAlgebra a         -- the action causing the error
               -> InputData a        -- the input data supplied to the action (for logging)
               -> AlgebraError e
Note that our AlgebraError encodes all the information related to an action in our algebra: the specific action via SAlgebra and its input data via InputData a (handy for logging). With AlgebraError defined, we are now ready to define the OutputData type family.
-- output data for our free monad
type family OutputData (a :: Algebra) :: * where
  OutputData 'CreateUser = User
  OutputData 'GetUser    = Either (AlgebraError 'Semantic) User

-- our Algebra as a functor
data AlgebraF next :: * where
  AlgebraF :: SAlgebra a -> InputData a -> (OutputData a -> next) -> AlgebraF next

-- Functor instance
instance Functor AlgebraF where
  fmap f (AlgebraF a i o) = AlgebraF a i (f . o)

-- our free monad
type FreeAlgebra a = Free AlgebraF a

-- some smart constructors to make our life easier
createUser :: (Int, String) -> Free AlgebraF User
createUser d = Free (AlgebraF SCreateUser d Pure)

getUser :: Int -> Free AlgebraF (Either (AlgebraError 'Semantic) User)
getUser i = Free (AlgebraF SGetUser i Pure)
Let’s take note of a few things.
There is no semantic error for the CreateUser action. This is because we assume, semantically, that if you provide the correct data you will create a User. If any errors occur at this point they must be interpreter or runtime errors.
The GetUser action returns Either (AlgebraError 'Semantic) User, indicating that when “getting” a user, we may encounter an error. In this case, the error we’ll encounter is simply that the user does not exist.
It may be the case that there are multiple semantic errors for a single action. In this scenario, you can simply use a sum type in the error position of OutputData a to encompass the various kinds of error (there are other ways to do this as well).
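As a sketch of that idea (the GetUserError type, its constructors, and describeGetUserError are hypothetical names for illustration, not part of the post's algebra):

```haskell
-- A hypothetical sum type collecting the semantic failures of a single
-- action; one would then put it in the error position of the action's
-- output, e.g. OutputData 'GetUser = Either GetUserError User.
data GetUserError
  = UserNotFound Int    -- no user exists with this id
  | UserDeactivated Int -- the user exists but has been deactivated
  deriving (Show, Eq)

-- Clients can pattern match on the sum to handle each case distinctly.
describeGetUserError :: GetUserError -> String
describeGetUserError (UserNotFound i)    = "no user with id " ++ show i
describeGetUserError (UserDeactivated i) = "user " ++ show i ++ " is deactivated"
```

The point is that the sum type keeps all of an action's semantic failure modes in one place, visible in the action's return type.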
With the above we are able to write programs in our free monad and handle errors. For example, here is one that creates a user, fetches the user, and checks that they're the same:
createAndCheck :: Free AlgebraF (Either (AlgebraError 'Semantic) Bool)
createAndCheck = do
  newUser@(User newUserId _) <- createUser (37, "Heitkotter")
  fetchedUser <- getUser newUserId
  return (fmap (== newUser) fetchedUser)
Within createAndCheck the client was forced to handle the case where one might fetch a user that does not exist (a semantic error).
Runtime Errors
Where do runtime errors pop up? In the interpreter! I won't write an interpreter within this blog post, but I will state an example type signature:
-- free monad interpreter used at megacorp.
megacorpInterpreter :: Free AlgebraF a -> EitherT (AlgebraError 'Runtime) IO a
megacorpInterpreter = error "to be defined"

-- an alternative interpreter using mtl
megacorpMtlInterpreter :: MonadError (AlgebraError 'Runtime) m => Free AlgebraF a -> m a
megacorpMtlInterpreter = error "to be defined"
When the interpreter is run, errors encountered here are reflected in the type (via EitherT or MonadError in the above examples), therefore the client running the interpreter needs to handle these in whatever way makes sense for the program.
Translating Semantic and Interpreter Errors to Runtime Errors
You can imagine that a small program in our free monad may return a value with the type Free AlgebraF (Either (AlgebraError 'Semantic) User). When we run our megacorpInterpreter against this program, we'll get EitherT (AlgebraError 'Runtime) IO a. So, where did our semantic error go? Well, in your interpreter, there should be some code that translates these semantic errors into a meaningful type within the interpreter. For example, one might perform case analysis on the Either and respond to it, either by logging or by generating an interpreter-specific error (e.g. an HTTP error code like 400, or a retry for an HTTP client).
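Concretely, that translation step might look like the following sketch. The error types and the HTTP mapping here are assumed stand-ins for illustration; the post's AlgebraError carries more information (the action singleton and its input data):

```haskell
-- Hypothetical stand-ins for AlgebraError 'Semantic and AlgebraError 'Runtime.
data SemanticErr = UserMissing Int      deriving (Show, Eq)
data RuntimeErr  = HttpError Int String deriving (Show, Eq)

-- Inside the interpreter: case analysis on the semantic Either decides how
-- the error surfaces at runtime (here a missing user becomes a 404).
toRuntime :: Either SemanticErr a -> Either RuntimeErr a
toRuntime (Left (UserMissing i)) =
  Left (HttpError 404 ("user " ++ show i ++ " not found"))
toRuntime (Right a) = Right a
```

Whatever the concrete types, the shape is the same: the interpreter is the one place where semantic failures get reinterpreted in runtime terms.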
You may also have the opportunity to catch certain interpreter errors. For example, the postgresql-simple client has an error type for poorly formed sql queries. You may be able to catch this and translate it into a runtime error within the interpreter.
Conclusion
When using free monads it’s a good idea to clearly separate various kinds of errors. In this blog post we identified three kinds of errors: semantic, interpreter, and runtime. Semantic errors can be handled by the user writing programs in your Free Monad and should reference domain-specific nuances. Runtime errors are handled in your interpreter. Interpreter errors manifest at runtime but are interpreter-implementation specific, and therefore they should be caught and translated to runtime errors.
Additionally, we showed that by representing actions in our free monad by an algebra and by creating a universe of error types we were able to reflect action- and error-specific types in our free monad and interpreter.
I hope this gives you more insight into how to handle and represent errors in your free monads. May all your free monads be error free, but not free of errors.
Flexibility With Referential Data Using Type Families
The problem: you are writing the backend of an e-commerce site. Your Order data type references a Customer and a list of Products. Do you represent the list of Products as a list of product ids? Or do you use fully realized Products? When rendering an invoice, you may need customer data. But for data analysis, you may only need uuids. How do we encode all this in a flexible and safe manner?
This post will show you a neat trick using Type Families to safely (and sanely) define your Order data type to accommodate all varieties of referential data.
First, let's start with some extensions we'll need. This post is written in literate haskell.
> {-# LANGUAGE DataKinds #-}
> {-# LANGUAGE TypeFamilies #-}
>
> module SaneRef where
> import Network.URI (parseURI, URI)
Let's define some base, non-referential data types (they don't reference any of our other types).
> data Customer = Customer { customerId :: Int
>                          , customerName :: String
>                          }
> data Product = Product { productId :: Int
>                        , productName :: String
>                        , productPrice :: Double
>                        }
Naively, our Order will reference Customers and Products and may look like:
> data NaiveOrder1 = NaiveOrder1 { naive1Customer :: Int
>                                , naive1Products :: [Int]
>                                }
> data NaiveOrder2 = NaiveOrder2 { naive2Customer :: Customer
>                                , naive2Products :: [Product]
>                                }
Which one should we use? Perhaps we should use both and just move on? While the business decision about over-engineering a problem like this is outside the scope of this post, I believe there is a little solution that highlights a very practical usage of haskell's Type Families.
The basic idea: if we can tag referential data with a phantom reference, then we can use a type-level function to map a reference and data type to a new type. For example, we might say: "when we have a Customer referenced in the database, it will manifest as an Int; when we have a customer referenced in our REST API, it will manifest as a URI; but when we have no reference to a customer, it's a fully realized Customer object."
How do we accomplish this? With type families!
We start by defining a universe of references:
> data Reference = Database
>                | REST
>                | NoRef -- same as "fully realized"
The DataKinds extension will promote the values Database, REST, and NoRef to types. It will also promote the Reference type to a kind.
Next we define a type-level function to map a reference and type to its reference type:
> type family RefType (a :: *) (r :: Reference) :: * where
>   RefType Customer Database = Int
>   RefType Customer REST     = URI
>   RefType Product  Database = Int
>   RefType Product  REST     = URI
>   RefType a        NoRef    = a
Our RefType type family states, essentially: "if you're referencing a Customer or Product from the Database, it's going to be an integer; if you're referencing it via the REST API, you're going to get a URI; otherwise, you're going to get fully realized data."
How do we use this? Let's define our Order data type!
> data Order (r :: Reference) =
>   Order { orderId :: Int
>         , orderCustomer :: RefType Customer r
>         , orderProducts :: [RefType Product r]
>         }
We've defined Order in a way that requires programmers to tag values with a source. Did you get this Order from the database? Well, then you may need to do an application-level join to get a Customer or Products. Did you already do the join in the DB? Well, then you're going to get fully realized, unreferenced Customer and Products!
If we start using this Order type we will start noticing how handy the Reference type tag becomes. For example:
> -- | an Order we fetched from the database
> orderFromDB :: Order Database
> orderFromDB = Order 1 13 [141, 5594, 21]
>
> -- | an Order we fetched via our REST api.
> orderFromREST :: Maybe (Order REST)
> orderFromREST = do
>   customerUri <- parseURI "https://v1/customer/13"
>   product1Uri <- parseURI "https://v1/product/141"
>   product2Uri <- parseURI "https://v1/product/5594"
>   product3Uri <- parseURI "https://v1/product/21"
>   return (Order 1 customerUri [product1Uri, product2Uri, product3Uri])
>
> -- | a fully realized Order
> orderFull :: Order NoRef
> orderFull = Order 1
>             (Customer 13 "Aaron Levin")
>             [ Product 141 "Anne Briggs - ST" 299.99
>             , Product 5594 "Anne Briggs - The Time Has Come" 399.99
>             , Product 21 "Anne Briggs - The Complete Topic Recordings" 29.99
>             ]
Above we have three different definitions of the same Order, and in each case we can infer from the type what to expect from our referenced data (customer and products). The code is slightly more readable and we've explicitly stated the assumptions about the form of our referential data.
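To make the "application-level join" mentioned above concrete, here is a standalone sketch (separate from the literate module, with trimmed copies of the types; lookup functions passed as arguments are assumed helpers standing in for real DB queries):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}

-- Trimmed, standalone copies of the post's types.
data Customer = Customer { customerId :: Int, customerName :: String }
data Product  = Product  { productId :: Int, productName :: String }

data Reference = Database | NoRef

type family RefType a (r :: Reference) where
  RefType Customer 'Database = Int
  RefType Product  'Database = Int
  RefType a        'NoRef    = a

data Order (r :: Reference) =
  Order { orderId       :: Int
        , orderCustomer :: RefType Customer r
        , orderProducts :: [RefType Product r]
        }

-- The "application-level join": swap ids for realized rows using
-- caller-supplied lookup functions (assumed; a real one would hit the DB).
resolve :: (Int -> Maybe Customer)
        -> (Int -> Maybe Product)
        -> Order 'Database
        -> Maybe (Order 'NoRef)
resolve findC findP (Order i c ps) =
  Order i <$> findC c <*> traverse findP ps
```

The Reference tag means the join is a total, explicit step in the types: you cannot accidentally treat an id-referenced Order as a realized one.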
While it's possible to write a function without specifying the Reference, we will be forced by GHC to make no assumptions about the type of referential data! For example:
> getOrderId :: Order r -> Int
> getOrderId (Order i _ _) = i
And that's it! This is by no means a perfect solution, and it's definitely not the only solution, but hopefully it inspires you to investigate simple, practical uses of type-level programming!
Enjoy!
FizzBuzz - Continuation Passing Style
For some reason I've been thinking about Continuation Passing Style a lot lately. I was reading some old code and realized it could have been simplified had it been done in a continuation monad. Did I really understand continuation passing style?
When thinking about simple and unnecessary uses of CPS, FizzBuzz came to mind. FizzBuzz doesn't really lend itself well to continuation passing style because its control flow is so simple, but the idea of checking a set of conditions and returning early when they're met is still interesting.
There are a few ways you can approach FizzBuzz in CPS style:
Pass the continuation directly
Use a continuation monad and callCC to handle early exit.
Use the EitherT monad transformer to handle early exit within the continuation monad.
1. Pass the continuation directly
The best way to see how continuation passing style works is to write really simple functions that pass a continuation directly. Here is how we might write FizzBuzz in such a way:
fizzy :: Integer -> (String -> r) -> r
fizzy i continuation | i `mod` 15 == 0 = continuation "FizzBuzz"
fizzy i continuation | i `mod` 5  == 0 = continuation "Buzz"
fizzy i continuation | i `mod` 3  == 0 = continuation "Fizz"
fizzy i continuation = continuation (show i)

-- | our program which passes a continuation that will print the strings
-- yielded by `fizzy`
fizzyIO :: IO ()
fizzyIO = mapM_ (`fizzy` putStrLn) [1..100]
As you can see, the continuation is passed directly. After performing some basic logic we pass our result directly to the continuation. This means that we can pass a continuation capable of printing the strings or ignoring the results entirely. This is the power of continuation passing style: we yield control over our results to the caller.
2. Use callCC to handle early exit
callCC has an interesting signature:
import Control.Monad.Cont (Cont, ContT)
import Control.Monad.Cont.Class (callCC)

class Monad m => MonadCont m where
  callCC :: ((a -> m b) -> m a) -> m a
Basically what this says is: if you have a function that takes, as an argument, a function (((a -> m b) -> m a)) whose first parameter is an exit strategy ((a -> m b)) and this function returns a value in your monad (m a), then callCC will return a value in your monad and exit early using your exit strategy should it be used. It can be subtle and hard to grok, so let's see an example with FizzBuzz. We will use the Cont r a monad explicitly, as it implements MonadCont (Cont is a simple newtype wrapper around functions of the form (a -> r) -> r, which are exactly the types of continuations; it happens that they form a monad, which is nice!)
import Control.Monad.Cont (Cont, ContT, runCont)
import Control.Monad.Cont.Class (callCC)

fizzeroo :: Integer -> Cont r String
fizzeroo i = callCC $ \exit ->
  if i `mod` 15 == 0
    then exit "FizzBuzz"
    else if i `mod` 5 == 0
      then exit "Buzz"
      else if i `mod` 3 == 0
        then exit "Fizz"
        else return (show i)

-- | our fizzeroo program: map `fizzeroo` over the list of numbers and
-- run the continuation with a function that will print the result.
fizzerooIO :: IO ()
fizzerooIO = mapM_ (flip runCont putStrLn . fizzeroo) [1..100]
You can see we use callCC, and when we encounter a situation where we want to exit early, we use the exit continuation; otherwise we yield a stringified integer (show i).
3. Use the EitherT monad transformer to handle early exit within the continuation monad.
This is bordering on the ridiculous and unnecessary. One of the great monad transformers in the pantheon of Haskellian abstractions is EitherT, as it can add early-exit strategies to any monad stack. Can we use it with continuation passing style? Sure! It's an odd pairing, as one often uses CPS for early exit as we saw above. Nevertheless:
import Control.Monad.Cont (Cont, ContT, runCont)
import Control.Monad.Cont.Class (callCC)
import Control.Monad.Trans.Either (EitherT, left, right, runEitherT)

-- | a simple helper to generate our `mod` conditions. Yield the integer
-- to the continuation if it doesn't satisfy the mod condition, otherwise
-- exit early via `EitherT`'s `left`.
fizzed :: Integer -> String -> Integer -> EitherT String (Cont r) Integer
fizzed m msg n = if n `mod` m == 0 then left msg else right n

-- | Run our integer through several functions, each one checking if
-- the integer satisfies the mod conditions, exiting early if it does
-- or yielding the integer. See how the `fizzed` functions act like
-- guards.
fizzedIn :: Integer -> EitherT String (Cont r) Integer
fizzedIn i = fizzed 15 "FizzBuzz" i >>= fizzed 5 "Buzz" >>= fizzed 3 "Fizz"

fizzedInIO :: IO ()
fizzedInIO = mapM_ (flip runCont (either putStrLn print) . runEitherT . fizzedIn) [1..100]
What's interesting about the EitherT case is that you can replace EitherT String (Cont r) Integer with ContT r (Either String) Integer and almost nothing changes.
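To see how little changes, here is a sketch of that swap using Control.Monad.Trans.Cont from transformers (`fizzed'` and `fizzBuzz'` are assumed names; the structure mirrors `fizzed`, with ContT written out by hand):

```haskell
import Control.Monad.Trans.Cont (ContT (..))

-- The transformers swapped: the continuation transformer now sits on top
-- of Either. Short-circuit with Left on a hit, otherwise feed the integer
-- to the continuation.
fizzed' :: Integer -> String -> Integer -> ContT r (Either String) Integer
fizzed' m msg n = ContT $ \k -> if n `mod` m == 0 then Left msg else k n

-- Running the chain with `Right` as the final continuation yields
-- Left on a Fizz/Buzz hit and Right on a plain number.
fizzBuzz' :: Integer -> Either String Integer
fizzBuzz' i =
  runContT (fizzed' 15 "FizzBuzz" i >>= fizzed' 5 "Buzz" >>= fizzed' 3 "Fizz") Right
```

The chain of binds is identical to `fizzedIn`; only the way early exit happens (Left in the underlying monad, rather than EitherT's left) moves around.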
Conclusion
Continuation Passing Style is neat. FizzBuzz is neat. It's still a weird question to ask in a technical interview. Interviewers should ask harder questions like "has a co-worker ever said something that made another co-worker feel unsafe? If so, how did you respond?"
If your response is "I grabbed the conversation continuation, halted, removed the co-worker who made other people feel unsafe, and resumed working in a safer environment" then you understand continuation passing style.
Enjoy!
Using Data.Proxy to Encode Types in your JSON Strings
yo dawg, I heard you like strings in your types so I put a type in your string so you could type check your strings while you stringify your types - Proxy "XZibit"
The saying goes that one should encode as many invariants in the type system as possible. This way bad programs don't type check, thereby affording you hours (previously spent writing unit tests) to meander in the warm embrace of nihilism. With that in mind I was faced with the following problem: at work we have a json-encoded message object that has generic payloads. However, each payload should contain the keys "type" and "data" where "type" contains a string-encoded value to indicate how to route or parse "data". For example:
{ "from": "you"
, "to" : "me"
, "payload" : { "type" : "identity_validation_request"
              , "data" : { ... IdentityValidationData data ... }
              }
}
As time withers and our corporeal bodies float aimlessly through this cold, meaningless universe, the possible values of "type" (in this example: "identity_validation_request") will increase and be littered throughout our codebase as we add various types of payload. These values are a great example of invariants that should be encoded somewhere in our type system. But how?
The goal of this post is to create a data type that encodes our "type" value as a type-level string, holds this type-level string in its type, and successfully parses a json-encoded string if the "type" value in some json matches the type-level string. It should behave something like this:
-- | `s` will hold the value of our type, `a` is the data type of the payload.
data Payload s a

-- | creating a function like this is the goal
parse :: ByteString -> Maybe (Payload s a)

-- | here is an example json to parse
jsonB :: ByteString
jsonB = "{\"type\" : \"identity_validation_request\", \"data\": ... }"

-- | itWorks should evaluate to the value
-- 'Just (Payload "identity_validation_request" IdentityValidationData)'
itWorks = decode jsonB :: Maybe (Payload "identity_validation_request" IdentityValidationData)

-- | doesNotParse should evaluate to 'Nothing' as
-- the type level string "xxxx" does not match "identity_validation_request"
doesNotParse = decode jsonB :: Maybe (Payload "xxxx" IdentityValidationData)
Additionally, we will strive to maintain a global index of (type-level string, type) pairs using a type family, and also provide a simple, polymorphic container for clients to use.
To get there, we will:
serialize and de-serialize Proxy values of type Proxy (s :: Symbol).
serialize and de-serialize a Payload (s :: Symbol) (a :: *) datatype to associate arbitrary payloads with type-level strings.
introduce the TypeKey type family to maintain a global index of types and their assumed keys.
serialize and de-serialize values of type Payload (TypeKey a) a.
serialize and de-serialize values of type Message a, a polymorphic wrapper around Payload (TypeKey a) a, creating a nice interface for our clients.
show that Message a satisfies all our requirements.
table of contents:
Sum Types: The Simplest Thing That Works
(De)Serializing Data.Proxy
A More General DeSerialization of Data.Proxy
Stringly-Typed Programming for the Masses
A Global Index of Your String Types
Polymorphic Containers
Conclusion
For the adventurous, the full solution is here.
1. Sum Types: The Simplest Thing That Works
code: https://gist.github.com/aaronlevin/4aa22bd9c79997029167#file-01-simplest-thing-that-works-hs
Before we dive into type-level tomfoolery, let's create the simplest thing that works. The goal is to dispatch based on the key in "type". There are a few ways of doing this. We could forgo any sense of type-safety and write FromJSON instances that inspect "type" willy-nilly. However, to bring some sanity to our codebase, let's use a basic sum type called Payloads that contains all possible payloads. This will force us to put all "type" string assumptions in one place.
{-# LANGUAGE OverloadedStrings #-}
module SimpleThings where

import Control.Applicative ((<$>), (<*>))
import Control.Monad (mzero)
import Data.Aeson
import qualified Data.Aeson as A

-- | sum type containing all possible payloads
data Payloads = FooPayload Int String
              | InviteRequestPayload String
              | IdentityValidationRequestPayload Int Int String

-- | dispatch on the value of `"type"`
instance FromJSON Payloads where
  parseJSON (Object v) = v .: "type" >>= handlePayloadType
    where
      -- handle foo_payload key
      handlePayloadType (A.String "foo_payload") =
        v .: "data" >>= \sub ->
          FooPayload <$> sub .: "id" <*> sub .: "msg"
      -- handle invite_request key
      handlePayloadType (A.String "invite_request") =
        v .: "data" >>= \sub ->
          InviteRequestPayload <$> sub .: "msg"
      -- handle identity_validation_data key
      handlePayloadType (A.String "identity_validation_data") =
        v .: "data" >>= \sub ->
          IdentityValidationRequestPayload <$> sub .: "from"
                                           <*> sub .: "to"
                                           <*> sub .: "validation_msg"
      -- default
      handlePayloadType _ = mzero
  parseJSON _ = mzero
In this design we are expecting a json structure that looks like: {"type" : "...", "data": { ... }}. To parse this we inspect the value of the "type" key and dispatch accordingly. This approach is simple and forces us to keep the assumed json keys in one place. However, we lose a lot of generality. This solution is not polymorphic (i.e. we aren't working with an arbitrary, abstract container) and is prone to the expression problem (when we add a new payload, we have to dispatch in a bunch of places).
To improve on this we want a data type that looks like:
data Payload a = Payload { payload :: a }
How and where do we encode the "type" string in this polymorphic container? Note that we can no longer dispatch on the "type" value and return different types of a when writing our FromJSON instance (unless we wanted to forgo something that looked like instance FromJSON a => FromJSON (Payload a) and instead write FromJSON (Payload SomeType) for each possible payload).
To resolve this issue we're going to take a detour into Data.Proxy, a data type that will help us pass around values that encode type-level information.
2. (De)Serializing Data.Proxy
code: https://gist.github.com/aaronlevin/4aa22bd9c79997029167#file-02-data-proxy-hs
Data.Proxy is a great example of Haskell's expressive type system. It's an incredibly simple and essential data type in Haskell's type-level swiss army knife. Its definition:
data Proxy a = Proxy
Proxy is useful whenever you need value-level representations of information at the type level. Note that a can be of any kind, specifically, it can be of kind Symbol, which means that Proxy "foo" is a valid type (with the DataKinds extension enabled). Haskell also exposes some support for transitioning from the Symbol kind to a value of type String:
symbolVal :: KnownSymbol s => Proxy s -> String
$ ghci
Prelude> import GHC.TypeLits
Prelude GHC.TypeLits> import Data.Proxy
Prelude GHC.TypeLits Data.Proxy> :set -XDataKinds
Prelude GHC.TypeLits Data.Proxy> symbolVal (Proxy :: Proxy "foo")
"foo"
Armed with Proxy and symbolVal we can now attempt to serialize and de-serialize JSON into Proxy "foo". The ToJSON instance is pretty straightforward:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverloadedStrings #-}
module Proxy where

import Control.Monad (mzero)
import Data.Aeson
import Data.Proxy (Proxy(Proxy))
import GHC.TypeLits (KnownSymbol, symbolVal)

instance ToJSON (Proxy "foo") where
  toJSON p = object [ "type" .= symbolVal p ]
This will serialize Proxy :: Proxy "foo" into {"type": "foo"}.
Now, let's write a FromJSON instance for Proxy "foo":
instance FromJSON (Proxy "foo") where
  parseJSON (Object v) = v .: "type" >>= handleType
    where
      handleType (A.String "foo") = return (Proxy :: Proxy "foo")
      handleType _                = mzero
  parseJSON _ = mzero

jsonString :: BL.ByteString
jsonString = "{\"type\": \"foo\"}"
You can see this work in action by loading the code in ghci:
$ ghci
Prelude> :load 02-data-proxy.hs
[1 of 1] Compiling Proxy            ( 02-data-proxy.hs, interpreted )
Ok, modules loaded: Proxy.
*Proxy> :set -XDataKinds
*Proxy> decode jsonString :: Maybe (Proxy "foo")
Just Proxy
*Proxy> decode jsonString :: Maybe (Proxy "bar")

<interactive>:5:1:
    No instance for (FromJSON (Proxy "bar")) arising from a use of ‘decode’
    In the expression: decode jsonString :: Maybe (Proxy "bar")
    In an equation for ‘it’: it = decode jsonString :: Maybe (Proxy "bar")
Awesome! This works for Proxy "foo", and we get a compiler error when trying to deserialize into Proxy "bar".
3. A More General DeSerialization of Data.Proxy
code: https://gist.github.com/aaronlevin/4aa22bd9c79997029167#file-03-more-general-proxy-hs
Obviously we don't want to write a FromJSON instance for Proxy "bar" and every other type-level string that might appear. If you write the naive FromJSON instance you'll hit a wall:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverloadedStrings #-}
module Proxy where

import Control.Monad (mzero)
import Data.Aeson
import qualified Data.Aeson as A
import qualified Data.ByteString.Lazy as BL
import Data.Proxy (Proxy(Proxy))
import GHC.TypeLits (KnownSymbol, symbolVal)
import Data.Text (pack)

instance KnownSymbol s => ToJSON (Proxy s) where
  toJSON p = object [ "type" .= symbolVal p ]

instance KnownSymbol s => FromJSON (Proxy s) where
  parseJSON (Object v) = v .: "type" >>= handleType
    where
      handleType (A.String s)
        | s == pack (symbolVal (Proxy :: Proxy s)) = return (Proxy :: Proxy s)
      handleType _ = mzero
  parseJSON _ = mzero

jsonString :: BL.ByteString
jsonString = "{\"type\": \"foo\"}"
This instance naively checks whether the value corresponding to the "type" key matches the symbolVal of the Proxy; if they match, it returns a Proxy of the correct type.
Unfortunately, if you compile this you will get the following error:
03-more-general-proxy.hs:21:44:
    Couldn't match kind ‘*’ with ‘GHC.TypeLits.Symbol’
    Expected type: Value
                   -> aeson-0.8.0.2:Data.Aeson.Types.Internal.Parser (Proxy s)
      Actual type: Value
                   -> aeson-0.8.0.2:Data.Aeson.Types.Internal.Parser (Proxy s0)
    In the second argument of ‘(>>=)’, namely ‘handleType’
    In the expression: v .: "type" >>= handleType
The problem is that GHC distinguishes the variables that appear in a type signature from the variables that appear in a function's definition. Therefore, the s in Proxy s in the type signature is different from the s in (Proxy :: Proxy s) appearing in the definition. To resolve this, we can enable the ScopedTypeVariables extension, which extends the scope of a type variable throughout the function definition. This allows GHC to infer that s satisfies the KnownSymbol constraint and compile. Adding {-# LANGUAGE ScopedTypeVariables #-} to our list of extensions and loading our code into ghci:
$ ghci
Prelude> :load 03-more-general-proxy.hs
[1 of 1] Compiling Proxy            ( 03-more-general-proxy.hs, interpreted )
Ok, modules loaded: Proxy.
*Proxy> :set -XDataKinds
*Proxy> decode jsonString :: Maybe (Proxy "foo")
Just Proxy
*Proxy> decode jsonString :: Maybe (Proxy "bar")
Nothing
*Proxy> import Data.ByteString.Lazy
*Proxy Data.ByteString.Lazy> :set -XOverloadedStrings
*Proxy Data.ByteString.Lazy> let otherString = "{\"type\":\"bar\"}" :: ByteString
*Proxy Data.ByteString.Lazy> decode otherString :: Maybe (Proxy "bar")
Just Proxy
4. Stringly-Typed Programming for the Masses
code: https://gist.github.com/aaronlevin/4aa22bd9c79997029167#file-04-stringly-typed-programming-hs
Let's now try to accomplish our original goal: to create a data type Payload s a where s is a type-level string representing the value we expect in the json key "type". This will require one additional extension (KindSignatures). We'll also be updating our ToJSON and FromJSON instances for Proxy to look for specific strings as opposed to a full json object (this will simplify the Payload serialization). Preamble:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
module Proxy where

import Control.Applicative ((<$>))
import Control.Monad (mzero)
import Data.Aeson
import Data.Aeson.Types
import qualified Data.Aeson as A
import qualified Data.ByteString.Lazy as BL
import Data.Monoid ((<>))
import Data.Proxy (Proxy(Proxy))
import GHC.TypeLits (KnownSymbol, Symbol, symbolVal)
import Data.Text (pack)

-- | Instances for serializing Proxy
instance KnownSymbol s => ToJSON (Proxy s) where
  toJSON = A.String . pack . symbolVal

instance KnownSymbol s => FromJSON (Proxy s) where
  parseJSON (A.String s)
    | s == pack (symbolVal (Proxy :: Proxy s)) = return (Proxy :: Proxy s)
  parseJSON _ = mzero
And now for our Payload s a data type with its ToJSON/FromJSON instances. Note that in the FromJSON instance we first attempt to deserialize into Proxy s and if successful we discard the result and parse the rest of the payload.
-- | our new data type.
newtype Payload (s :: Symbol) a = Payload a

-- | ToJSON instance
instance (KnownSymbol s, ToJSON a) => ToJSON (Payload s a) where
  toJSON (Payload a) = object [ "type" .= (Proxy :: Proxy s)
                              , "data" .= a ]

-- | FromJSON instance
instance (KnownSymbol s, FromJSON a) => FromJSON (Payload s a) where
  parseJSON (Object v) = (v .: "type" :: Parser (Proxy s)) >> Payload <$> v .: "data"
  parseJSON _ = mzero

-- | Show instance for ghci
instance (KnownSymbol s, Show a) => Show (Payload s a) where
  show (Payload a) = "Payload " <> symbolVal (Proxy :: Proxy s) <> " " <> show a

jsonString :: BL.ByteString
jsonString = "{\"type\": \"String\", \"data\": \"cool\"}"
Now, if we load this in ghci we should see the following:
$ ghci
Prelude> :set -XDataKinds
Prelude> :load 04-stringly-typed-programming.hs
[1 of 1] Compiling Proxy            ( 04-stringly-typed-programming.hs, interpreted )
Ok, modules loaded: Proxy.
*Proxy> decode jsonString :: Maybe (Payload "String" String)
Just Payload String "cool"
*Proxy> decode jsonString :: Maybe (Payload "Int" String)
Nothing
*Proxy> decode jsonString :: Maybe (Payload "String" Int)
Nothing
*Proxy> let x = Payload 10 :: Payload "My Very Special Integer" Int
*Proxy> encode x
"{\"data\":10,\"type\":\"My Very Special Integer\"}"
This is exactly what we want! We're able to specify in the Payload type exactly the value of the "type" key we expect. You might think this feature is somewhat pedantic, and I hope to dispel that notion in the next section, but consider how much more readable code using Payload becomes. If you have a REST endpoint deserializing this Payload (or any other type you encode parameters into), you will see, immediately, the assumptions being made simply by analyzing the type (e.g. a function returning Payload "invite_request" InviteRequest).
5. A Global Index of Your String Types
code: https://gist.github.com/aaronlevin/4aa22bd9c79997029167#file-05-indexing-your-keys-hs
All is good in the land of types and strings, but we'd be remiss to wantonly throw strings in our types and hope for the best. What would be really nice is the following:
A global index of keys/type-level strings and their corresponding type.
A compile-time error when you make a bad assumption about a key and its type.
These two can be accomplished with a closed type family that will serve as our index and a few simple modifications to Payload s a.
We begin with a simple, closed type family, requiring the TypeFamilies extension:
type family TypeKey (a :: *) :: Symbol where
  TypeKey Int    = "int"
  TypeKey String = "string"
  -- other types you have
To incorporate this type family we need to update our Payload s a data type to use a Generalized Algebraic Data Type, requiring the GADTs extension:
data Payload (s :: Symbol) a :: * where
  Payload :: a -> Payload (TypeKey a) a
To write our ToJSON/FromJSON instances we will need to take advantage of equality constraints to work around the limitations of Haskell's type-level computations. Ideally we'd like to write instance (ToJSON a, KnownSymbol (TypeKey a)) => ToJSON (Payload (TypeKey a) a), stating that if there is a ToJSON instance for a and the TypeKey mapping on a results in a known symbol, then we can write a ToJSON instance for Payload. Unfortunately doing so will result in a compiler error that looks like:
05-indexing-your-keys.hs|50 col 28 error| Could not deduce (s ~ Proxy.TypeKey a)
|| from the context (GHC.TypeLits.KnownSymbol (Proxy.TypeKey a),
||                   aeson-0.8.0.2:Data.Aeson.Types.Class.FromJSON a)
||   bound by the instance declaration
||   at /home/aterica/dev/tmp/blogpost/05-indexing-your-keys.hs:47:10-72
|| ‘s’ is a rigid type variable bound by
||     the instance declaration
||     at /home/aterica/dev/tmp/blogpost/05-indexing-your-keys.hs:47:10
|| Expected type: a -> Proxy.Payload s a
||   Actual type: a -> Proxy.Payload (Proxy.TypeKey a) a
We can work around this by using the equality constraint s ~ TypeKey a hinted to us by GHC.
-- | ToJSON instance
instance (s ~ TypeKey a, KnownSymbol s, ToJSON a) => ToJSON (Payload s a) where
  toJSON (Payload a) = object [ "type" .= (Proxy :: Proxy s)
                              , "data" .= a ]

-- | FromJSON instance
instance (s ~ TypeKey a, KnownSymbol s, FromJSON a) => FromJSON (Payload s a) where
  parseJSON (Object v) = (v .: "type" :: Parser (Proxy s)) >> Payload <$> v .: "data"
  parseJSON _ = mzero

-- | Show instance for ghci
instance (KnownSymbol s, Show a) => Show (Payload s a) where
  show (Payload a) = "Payload " <> symbolVal (Proxy :: Proxy s) <> " " <> show a

jsonString :: BL.ByteString
jsonString = "{\"type\": \"string\", \"data\": \"cool\"}"

x :: Payload "int" Int
x = Payload 10
Here, (s ~ TypeKey a, KnownSymbol s, ToJSON a) should read as: if s is constrained to be equal to TypeKey a (i.e. s is a type of kind Symbol) and s is also a KnownSymbol, then we can create a ToJSON instance for Payload s a.
Loading up ghci, we should see that trying to compile Payload "string" String should pass, while Payload "int" String should fail (because TypeKey String was defined to be "string" not "int"):
$ ghci
Prelude> :set -XDataKinds
Prelude> :load 05-indexing-your-keys.hs
*Proxy> decode jsonString :: Maybe (Payload "string" String)
Just Payload string "cool"
*Proxy> decode jsonString :: Maybe (Payload "int" String)

<interactive>:5:1:
    Couldn't match type ‘"string"’ with ‘"int"’
    In the expression:
      decode jsonString :: Maybe (Payload "int" String)
    In an equation for ‘it’:
      it = decode jsonString :: Maybe (Payload "int" String)
As expected our TypeKey type family will ensure that we get a compile error if we assume the wrong key for a specific type!
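To see how the index grows, here is a minimal, self-contained sketch of extending the closed TypeKey family with another type. The InviteRequest type and the keyFor helper are made up for illustration; keyFor just uses symbolVal to recover, at the value level, the key a type was indexed with.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeFamilies #-}
module Main where

import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownSymbol, Symbol, symbolVal)

-- hypothetical user type added to the index
data InviteRequest = InviteRequest

-- the closed index: adding a type means adding an equation here
type family TypeKey (a :: *) :: Symbol where
  TypeKey Int           = "int"
  TypeKey String        = "string"
  TypeKey InviteRequest = "invite_request"

-- recover the key a type was indexed with, at the value level
keyFor :: forall a. KnownSymbol (TypeKey a) => Proxy a -> String
keyFor _ = symbolVal (Proxy :: Proxy (TypeKey a))

main :: IO ()
main = do
  putStrLn (keyFor (Proxy :: Proxy Int))
  putStrLn (keyFor (Proxy :: Proxy InviteRequest))
```

Because the family is closed, any type without an equation (say, a stray Bool) simply has no KnownSymbol (TypeKey Bool) instance, and uses of keyFor on it fail at compile time.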
Oh, but we are not yet done!
6. Polymorphic Containers
code: https://gist.github.com/aaronlevin/4aa22bd9c79997029167#file-06-polymorphic-containers-hs
Now, you might be thinking: ok, ok, ok, I can put my assumptions in the type, but really I don't want to specify these keys everywhere, I just want to keep this global index for a reference. So, you want a simple, polymorphic container that hides the underlying type-level machinery? I claim that with the help of a new GADT and a scary extension (UndecidableInstances) we can do this.
Here is our polymorphic container:
data Message a where
  Message :: (s ~ TypeKey a, KnownSymbol s) => Payload s a -> Message a

-- | ToJSON instance which serializes a message's payload
instance ToJSON a => ToJSON (Message a) where
  toJSON (Message payload) = object [ "payload" .= payload ]

-- | FromJSON instance
instance (s ~ TypeKey a, KnownSymbol s, FromJSON a) => FromJSON (Message a) where
  parseJSON (Object v) = Message <$> v .: "payload"
  parseJSON _ = mzero

instance Show a => Show (Message a) where
  show (Message p) = "Message ( " <> show p <> " )"
The Message a data type simply wraps the Payload s a for us, hiding the ugly details from the client. Nevertheless, it behaves exactly as we'd expect. Consider the following ghci session:
$ ghci
Prelude> :set -XOverloadedStrings
Prelude> :load 06-polymorphic-containers.hs
*Proxy> let message = "{ \"payload\": {\"type\": \"string\", \"data\": \"cool\"} }" :: Data.ByteString.Lazy.ByteString
*Proxy> decode message :: Maybe (Message String)
Just Message ( Payload string "cool" )
*Proxy> decode message :: Maybe (Message Int)
Nothing
*Proxy> Message (Payload 420) :: Message Int
Message ( Payload int 420 )
*Proxy> Message (Payload "420") :: Message String
Message ( Payload string "420" )
*Proxy> data Cool = Cool
*Proxy> Message (Payload Cool) :: Message Cool

<interactive>:15:1:
    No instance for (KnownSymbol (TypeKey Cool))
      arising from a use of ‘Message’
    In the expression: Message (Payload Cool) :: Message Cool
    In an equation for ‘it’: it = Message (Payload Cool) :: Message Cool
As you can see, Message a has the desired behaviour:
we can deserialize strings only if the "type" key has the right value.
the value of the "type" key, and thus the type-level string needed on our Payload s a type is not exposed to clients using Message.
if we try to create a Message with a type not indexed in our closed type family TypeKey, we get an error (e.g. Message (Payload Cool) :: Message Cool did not compile).
While this last part required a scary extension, it's somewhat safe to be used in this context.
Conclusion
To recap what we've accomplished so far, let's recall what we set out to do. We encountered a situation where we wanted to deserialize some JSON that required us to dispatch on a specific value of a JSON key ("type") and, based on that value, attempt to parse the JSON into a specific type. We discussed several attempts:
Ad-hoc
Using a sum type
Encoding the expected value of "type" in a type-level string
We spent most of the time exploring the last option. We were able to:
Serialize and de-serialize Proxy values of type Proxy (s :: Symbol). This allowed us to encode the "type" value as a type-level string in the proxy.
using 1 we created a Payload (s :: Symbol) (a :: *) datatype to associate arbitrary payloads with type-level strings.
we showed you could serialize and de-serialize values of type Payload s a.
we then created a global index of types and the assumed keys they would have by using the type family TypeKey.
using 4 we were able to serialize and de-serialize values of type Payload (TypeKey a) a, encoding our json-key assumptions at compile time in a global, unique index.
we then introduced a Message a datatype that wrapped our Payload (TypeKey a) a, creating a nice interface for our clients.
finally, we avoided run-time errors based on bad assumptions by having the compiler throw an error if we try to deserialize an instance of Message a where a has no entry in our TypeKey type family index.
There is a lot more you can do with these ideas. I hope this (lengthy) post inspires you to try encoding more invariants in the type system. For further inspiration, I recommend trying to grok the reflection library, which takes the idea of encoding information within types to the next level.
May all your strings be well-typed!
Data Families Make Types and Free Monads More Librarious
At the end of my post Type Families Make Life and Free Monads Simpler I wondered whether it would be possible to write a "CRUD" library around the Free CrudF monad. The exact meaning of a "library" in this context is somewhat nuanced: what libraries look like when you're doing type-level programming wasn't clear to me initially. After learning and working with dependent types in Haskell, I've settled on a better understanding that sometimes feels like "Java Spring for Types."
In this post I will outline how we can use Open Type and Data Families to write libraries that users configure at the type-level. We will do this by creating a generalized Free CrudF monad, continued from my previous post, whose types are configured by the user. Along the way we'll learn about Type and Data Families and how to assist GHC with type checking.
Recap
User-Defined Type and Data Families
Crud Library
User-Provided Types and Kinds
Y U No Compile, Dr. Interpreter?
Using Typeclass Constraints to Support the Type Checker
God's in its own heaven; all's right with the world
All code samples for this post can be found here.
1. Recap
code: https://gist.github.com/aaronlevin/ce387cd891503540a5fd#file-01-recap-hs
If you recall from our last adventure, the primary type-level machinery for the Free CrudF monad consisted of a data type to index the types we perform CRUD on, in addition to some type families:
-- | User code. Our types.
data Product = Product
data ProductData = ProductData Int
data Order = Order
data OrderData = OrderData

-- | a universe of data types that we perform crud on
data Crudable = ProductCRUD | ProductsR | OrderCRUD

-- | singleton support to traverse the type and kind level
data SCrudable (c :: Crudable) where
  SProductCRUD :: SCrudable 'ProductCRUD
  SProductsR   :: SCrudable 'ProductsR
  SOrderCRUD   :: SCrudable 'OrderCRUD

-- | a type family mapping elements from our Crudable universe
-- to the data required to Read them. This can be read as: "to
-- read a Product, we need an Int. To read Products we need a
-- String. To read an Order we need an Int."
type family ReadData (c :: Crudable) :: * where
  ReadData 'ProductCRUD = Int
  ReadData 'ProductsR   = String
  ReadData 'OrderCRUD   = Int

-- | a type family mapping elements from our Crudable universe
-- to the base types we return when performing certain crud
-- operations
type family CrudBase (c :: Crudable) :: * where
  CrudBase 'ProductCRUD = Product
  CrudBase 'ProductsR   = [Product]
  CrudBase 'OrderCRUD   = Order

-- | the CrudF functor. Remember how it uses the SCrudable singleton to
-- manifest a c of kind Crudable and then the type families to map that kind
-- to the required types.
data CrudF next :: * where
  Read :: SCrudable c -> ReadData c -> (Maybe (CrudBase c) -> next) -> CrudF next
We've omitted quite a bit of detail for the sake of a recap. We haven't included any interpreters, smart constructors, or even the full set of CRUD verbs (only Read is present). For the full code, see here.
The main question posed in this post is how do we turn the above into a library? We're used to writing libraries at the value level, but the type-level is a little different. There are a few places we need to introduce abstraction for the library's clients:
We need our universe of types (Crudable) to be user-defined.
We need our SCrudable singleton-support to be user-defined.
We need our type families to be user-defined.
Because we're doing a bit of refactoring, we'll ensure our CrudF GADT has a single constructor (instead of four).
When I first started thinking about this I had absolutely no idea how I would accomplish 1 and 2. Thankfully, @a_cowley had sent me some unrelated code a few months ago and I remembered seeing a data family in it. I figured this would be the best place to start (thanks, Anthony!).
2. User-defined Type and Data Families
code: https://gist.github.com/aaronlevin/ce387cd891503540a5fd#file-02-user-defined-type-and-data-families-hs
We begin with a little refactoring of our previous code. First, we define a universe of CRUD verbs used to index some of our type families.
-- | A universe of CRUD verbs used to collapse `CrudF` into a single constructor
data CRUD = Create | Read | Update | Delete

-- | singleton support for the CRUD universe
data CrudVerb (c :: CRUD) :: * where
  SCreate :: CrudVerb 'Create
  SRead   :: CrudVerb 'Read
  SUpdate :: CrudVerb 'Update
  SDelete :: CrudVerb 'Delete
In our original CrudF code, our type families were closed. By this we mean they were not extensible by user code: we defined them over a fixed universe (or index) of types of kind Crudable. In order to expose this code as a library, we need to use Open Type Families. Open Type Families allow a user to provide their own equations for the type family, satisfying its kind signature. This is not dissimilar to typeclasses, except at the type level.
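To make the open/closed distinction concrete, here is a minimal, self-contained sketch (the family names ClosedKey and OpenKey are made up for illustration):

```haskell
{-# LANGUAGE TypeFamilies #-}
module Main where

-- closed: every equation lives in the declaration itself;
-- nobody outside this module can add a case
type family ClosedKey a where
  ClosedKey Int  = Bool
  ClosedKey Char = String

-- open: users add equations later with `type instance`,
-- much like adding instances to a typeclass
type family OpenKey a
type instance OpenKey Int  = Bool
type instance OpenKey Char = Int

-- OpenKey Int reduces to Bool, so this definition typechecks
answer :: OpenKey Int -> String
answer b = if b then "yes" else "no"

main :: IO ()
main = putStrLn (answer True)
```

A library exporting OpenKey lets client modules register their own types in the index; a library exporting ClosedKey does not.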
We will define two open type families: one for input data and another for output data. This representation will capture the types of data required as input for an operation and the type of data required as output for an operation. By this we mean: "when you're performing a Create operation, input refers to the data required to perform that operation and output refers to the data returned by that operation." The types of input and output will depend on a type of kind CRUD and a user-defined type of kind k.
-- | an open type family mapping types of kind CRUD and types of a user-defined kind k
-- to the type of input data required.
type family InputData (c :: CRUD) (a :: k) :: *

-- | an open type family mapping types of kind CRUD and types of a user-defined kind k
-- to the type returned by the operation
type family ReturnData (c :: CRUD) (a :: k) :: *
You can think of InputData and ReturnData as functions at the type level with a signature like: InputData :: CRUD -> k -> *.
Now that we've created type families that can be defined by users of our library, we need a way to abstract the original universe of kind Crudable. This is where Data Families come in. Recall that our Crudable kind and its singleton support SCrudable served two purposes: one, it afforded us an index for types that we want to permit CRUD actions on, and two, SCrudable created a bridge between the type and the kind level.
How do we allow users to define kind universes and type-kind bridges? Data Families!
As a convention, the kind k will be reserved for the universe of types that the user will provide as an index. In the previous post, this would have corresponded to Crudable. Our user-implemented data family will encapsulate mapping from types of kind k to types of kind *. We name this data family CrudSing as it represents the user-supplied singleton support for our CRUD library.
-- | data family to map types of kind k to types of kind *
data family CrudSing :: k -> *
3. Crud Library
code: https://gist.github.com/aaronlevin/ce387cd891503540a5fd#file-03-crud-library-hs
With the CRUD universe of types, the InputData and ReturnData type families, and the CrudSing data family, we are now ready to define our CrudF functor! We present the program in its entirety thus far:
-- | A universe of CRUD verbs used to collapse `CrudF` into a single constructor
data CRUD = Create | Read | Update | Delete

-- | singleton support for the CRUD universe
data CrudVerb (c :: CRUD) :: * where
  SCreate :: CrudVerb 'Create
  SRead   :: CrudVerb 'Read
  SUpdate :: CrudVerb 'Update
  SDelete :: CrudVerb 'Delete

-- | an open type family mapping types of kind CRUD and types of a user-defined kind k
-- to the type of input data required.
type family InputData (c :: CRUD) (a :: k) :: *

-- | an open type family mapping types of kind CRUD and types of a user-defined kind k
-- to the type returned by the operation
type family ReturnData (c :: CRUD) (a :: k) :: *

-- | data family to map types of kind k to types of kind *
data family CrudSing :: k -> *

-- | abstract CrudF functor!
data CrudF :: [k] -> * -> * where
  CrudF :: CrudVerb v
        -> CrudSing c
        -> InputData v c
        -> (ReturnData v c -> a)
        -> CrudF fs a
This datatype might seem a little puzzling, so let's start by describing it at the type level. If you read the CrudF kind, it states: "given a type-level list of kind k and a type of kind *, return a type of kind *." The first question that comes to mind concerns the presence of the type-level list of kind k ([k]). There is a subtle reason for this. Recall that we quantify over types of kind CRUD and the user-provided k. As CRUD is closed, we don't have to parameterize our CrudF type with it, but since k is user-provided, we need to encode the types of kind k the user is defining somewhere. We also know there will be many types of kind k used in our data type, and we need this encoded somewhere as well. Therefore, we encode the types of kind k featured in CrudF by parameterizing our functor with [k]. If this nuance is tripping you up now, it should become more apparent as we continue.
At the value level, the CrudF type is mostly straightforward. Given a CrudVerb v and a CrudSing c, i.e. given a crud verb and a user-provided singleton, v and c will specify a type of kind CRUD and k respectively. These types are then passed to the InputData and ReturnData type families to specify the input and return data used to construct CrudF.
To use CrudF as a free monad, we need to provide a functor instance:
instance Functor (CrudF fs) where
  fmap f (CrudF v s i g) = CrudF v s i (f . g)
Now we can write some smart constructors. Here's an example for the create verb:
create :: CrudSing f -> InputData 'Create f -> Free (CrudF fs) (ReturnData 'Create f)
create s d = Free $ CrudF SCreate s d Pure
This basically completes our library. We will next show what user code looks like and this will lead us to a snake in the grass!
4. User-Provided Types and Kinds
code: https://gist.github.com/aaronlevin/ce387cd891503540a5fd#file-04-user-provided-types-and-kinds-hs
As a client of our CrudF library, we need to:
Define our universe of types (in the previous post this was Crudable and in our library this is the kind k).
Provide an instance of the CrudSing data family.
Provide instances for the InputData and ReturnData type family.
Let's start by defining some basic types that we will work with (as in the previous post, Product and Order). Then we'll define our universe and provide instances for the data families and type families.
-- | User code. Our types.
data Product = Product
data ProductData = ProductData Int
data Order = Order
data OrderData = OrderData

-- | A universe to index types we do crud over. This was `Crudable` in the
-- previous post and is `k` in our library.
data MyCrud = ProductCRUD | OrderCRUD

-- | instance of the `CrudSing` data family.
data instance CrudSing (a :: MyCrud) where
  SProduct :: CrudSing 'ProductCRUD
  SOrder   :: CrudSing 'OrderCRUD

-- | type family instances. This maps the Input and Output
-- data required and returned for each CRUD operation.
type instance InputData  'Create 'ProductCRUD = ProductData
type instance InputData  'Read   'ProductCRUD = Int
type instance InputData  'Update 'ProductCRUD = Product
type instance InputData  'Delete 'ProductCRUD = Int
type instance ReturnData 'Create 'ProductCRUD = Product
type instance ReturnData 'Read   'ProductCRUD = Product
type instance ReturnData 'Update 'ProductCRUD = Product
type instance ReturnData 'Delete 'ProductCRUD = Product

type instance InputData  'Create 'OrderCRUD = OrderData
type instance InputData  'Read   'OrderCRUD = Int
type instance InputData  'Update 'OrderCRUD = Order
type instance InputData  'Delete 'OrderCRUD = Int
type instance ReturnData 'Create 'OrderCRUD = Order
type instance ReturnData 'Read   'OrderCRUD = Order
type instance ReturnData 'Update 'OrderCRUD = Order
type instance ReturnData 'Delete 'OrderCRUD = Order
As a client, usage of the library is fairly straightforward. We provide some data family and type family instances, which feels very much like basic configuration (Java Spring for Types!).
So far so good. Now let's write some smart constructors:
-- | Smart constructor for creating products. Non-eta-reduced for clarity.
createProduct :: InputData 'Create 'ProductCRUD -> Free (CrudF '['ProductCRUD, 'OrderCRUD]) Product
createProduct d = create SProduct d
You can see here where the type-level list of kind k comes in ('['ProductCRUD, 'OrderCRUD]). We also use the fact that InputData is a type family and don't bother specifying ProductData as the input type to createProduct (even though InputData 'Create 'ProductCRUD = ProductData). This means that if we change the input type we don't have to update our constructors, reinforcing the idea that we configure our program at the type level (Java Spring for Types!).
5. Y U No Compile, Dr. Interpreter?
code: https://gist.github.com/aaronlevin/ce387cd891503540a5fd#file-05-y-u-no-compile-hs
Now, what does an interpreter look like? For brevity, we present a simple, non-exhaustive interpreter below:
-- | sample, non-exhaustive interpreter that won't compile :(
-- gratuitous pattern matching to show the correct type is inferred
interpreter :: Free (CrudF '['ProductCRUD, 'OrderCRUD]) a -> IO a
interpreter (Pure a) = return a
interpreter (Free (CrudF SCreate SProduct (ProductData _) g)) = interpreter $ g Product
interpreter (Free (CrudF SCreate SOrder _ g)) = interpreter $ g Order
If you try to compile this you will get a brutal and somewhat misleading message:
05-y-u-no-compile.hs:93:34:
    Could not deduce (k ~ MyCrud)
    from the context (v ~ 'Create)
      bound by a pattern with constructor
                 SCreate :: CrudVerb 'Create,
               in an equation for ‘interpreter’
      at crud2.hs:121:26-32
    ‘k’ is a rigid type variable bound by
        a pattern with constructor
          CrudF :: forall (k :: BOX) (fs :: [k]) a (k :: BOX) (v :: CRUD) (f :: k).
                   CrudVerb v -> CrudSing f -> InputData v f -> (ReturnData v f -> a) -> CrudF fs a,
        in an equation for ‘interpreter’
        at crud2.hs:121:20
    Expected type: CrudSing f
      Actual type: CrudSing a0
    In the pattern: SProduct
    In the pattern: CrudF SCreate SProduct (ProductData _) g
    In the pattern: Free (CrudF SCreate SProduct (ProductData _) g)
    (... repeat ...)
To my naive eyes, this message is intractable. The main point is that GHC is having a hard time with type inference. When I initially saw this I was dumbfounded. How could GHC not infer these types? After all, we're pattern matching on SCreate, so obviously GHC should know, right away, that v is 'Create. However, if we consider the type of our GADT, we know that the user-provided types we need have been quantified over and are not known to GHC after construction of the data type (i.e. when pattern matching). Recall that the fully quantified type for the CrudF constructor is:
-- | CrudF constructor with explicit quantification
CrudF :: forall (v :: CRUD) (f :: k) (fs :: [k]) (a :: *).
         CrudVerb v -> CrudSing f -> InputData v f -> (ReturnData v f -> a) -> CrudF fs a
As we quantify over (f :: k), we erase any knowledge of f. Most importantly, we do not know if it is even contained in fs :: [k]! For example, it's possible that f = 'OrderCRUD and fs = ['ProductCRUD].
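The same erasure happens with any existentially quantified GADT, so here is a deliberately tiny sketch of the problem (Some and describe are made-up names for illustration): at construction the concrete type of the wrapped value is forgotten, and pattern matching recovers only the packed constraint, never the type itself.

```haskell
{-# LANGUAGE GADTs #-}
module Main where

-- a simple existential: the concrete type of `a` is erased at
-- construction; only the Show constraint survives
data Some where
  Some :: Show a => a -> Some

-- pattern matching recovers the constraint, not the type:
-- GHC cannot conclude `a ~ Int` here, just as it cannot
-- conclude `k ~ MyCrud` when matching on CrudF
describe :: Some -> String
describe (Some x) = show x

main :: IO ()
main = putStrLn (describe (Some (42 :: Int)))
```

If describe tried to use x at type Int (say, x + 1), it would fail with exactly the kind of "rigid type variable" error shown above.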
Is all hope lost?
6. Using Typeclass Constraints to Support the Type Checker
code: https://gist.github.com/aaronlevin/ce387cd891503540a5fd#file-06-using-typeclass-constraints-to-support-the-type-checker-hs
How can we fix this? I was stumped on this for quite some time. Thankfully, I had seen the solution in some code @a_cowley sent to me a few months earlier.
The solution is to create a constraint forcing f :: k to be contained in the type-level list fs :: [k]. This is done by defining a special data type and a typeclass.
-- | type checking support. This is similar, but not the same, as an HList.
data Elem (x :: k) (xs :: [k]) where
  Here  :: Elem x (x ': xs)
  There :: Elem x xs -> Elem x (y ': xs)

-- | typeclass to assist with type checking.
class Implicit a where
  implicitly :: a

-- | if the first `x` of kind `k` in `Elem` matches the head of the list
-- then `Elem x (x ': xs)` is an instance of `Implicit`
instance Implicit (Elem x (x ': xs)) where
  implicitly = Here

-- | If `x` is not at the head of the list, then we constrain
-- instances to those for which `Elem x xs` is an instance. This guarantees
-- us to roll through `Elem`.
instance Implicit (Elem x xs) => Implicit (Elem x (y ': xs)) where
  implicitly = There implicitly
In our case, where [k] ~ ['ProductCRUD, 'OrderCRUD], if we constrain the CrudF constructor to Implicit (Elem f fs) then we will have the following class environment available (there will be many more permutations, but for the sake of our discussion, this is enough):
implicitly :: Elem 'ProductCRUD ('ProductCRUD ': 'OrderCRUD ': '[])
implicitly = Here

implicitly :: Elem 'OrderCRUD ('ProductCRUD ': 'OrderCRUD ': '[])
implicitly = There (Here :: Elem 'OrderCRUD ('OrderCRUD ': '[]))
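We can watch this instance resolution happen. The sketch below reuses the post's Elem and Implicit machinery verbatim, but adds two things that are my own for illustration: a stand-in universe K (playing the role of MyCrud) and a position helper that collapses an Elem proof to the index it points at. The {-# OVERLAPPING #-} pragma makes explicit the overlap resolution the post's OverlappingInstances extension provides.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeOperators #-}
module Main where

data Elem (x :: k) (xs :: [k]) where
  Here  :: Elem x (x ': xs)
  There :: Elem x xs -> Elem x (y ': xs)

class Implicit a where
  implicitly :: a

-- prefer the head-of-list instance when both match
instance {-# OVERLAPPING #-} Implicit (Elem x (x ': xs)) where
  implicitly = Here

instance Implicit (Elem x xs) => Implicit (Elem x (y ': xs)) where
  implicitly = There implicitly

-- collapse a proof to the position it points at, so we can
-- observe which chain of instances the compiler selected
position :: Elem x xs -> Int
position Here      = 0
position (There e) = 1 + position e

-- a stand-in universe; MyCrud from the post works the same way
data K = A | B | C

main :: IO ()
main = do
  print (position (implicitly :: Elem 'A '[ 'A, 'B, 'C ]))
  print (position (implicitly :: Elem 'C '[ 'A, 'B, 'C ]))
```

Asking for Elem 'D '[ 'A, 'B, 'C ] (a type not in the list) would recurse to Elem 'D '[], for which no instance exists, giving a compile error rather than a runtime one.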
Now, we just amend our data type and smart constructor:
-- | new CrudF constructor with type checking hints
data CrudF :: [k] -> * -> * where
  CrudF :: Implicit (Elem f fs)
        => CrudVerb v
        -> CrudSing f
        -> InputData v f
        -> (ReturnData v f -> a)
        -> CrudF fs a

-- | smart constructor also needs type checking hints.
create :: Implicit (Elem f fs) => CrudSing f -> InputData 'Create f -> Free (CrudF fs) (ReturnData 'Create f)
create s d = Free $ CrudF SCreate s d Pure
With this type-checking support, GHC can now infer that when you pattern match on CrudF '['ProductCRUD, 'OrderCRUD], f is constrained to be an element of the list by virtue of the Implicit (Elem f fs) constraint, and it can then infer the right type for f. And now our program compiles!
7. God's in its own heaven; all's right with the type checker
code: https://gist.github.com/aaronlevin/ce387cd891503540a5fd#file-07-gods-in-its-own-heaven-hs
And that's it! The full code, with client code is below and everything type checks!
It's worth revisiting what we have accomplished. First we took some dependently typed Haskell code and turned it into a library. Because we were dealing with user-defined abstractions at the type and kind level, this required the use of open Type Families and Data Families. Everything was going well until we hit a snag with the type checker, but we were able to use typeclass constraints to give hints to the compiler.
It's questionable how useful our Free CrudF monad is. However, the main takeaway is the nature of how type-level libraries are used. In a dialectical gesture that would impress even Hegel, our programs become a vehicle for the configuration of types; the core logic safely abstracted from us. This is a very empowering style of programming.
Now, go forth and create libraries at the type level!
-- | kitchen sink of extensions
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE OverlappingInstances #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE RankNTypes #-}

module FreeCrud where

import Control.Monad.Free (Free(Free, Pure))

-- the library portion is below

-- | type checking support, thanks to @a_cowley
data Elem (x :: k) (xs :: [k]) where
  Here  :: forall x xs. Elem x (x ': xs)
  There :: forall x xs y. Elem x xs -> Elem x (y ': xs)

class Implicit a where
  implicitly :: a

instance Implicit (Elem x (x ': xs)) where
  implicitly = Here

instance Implicit (Elem x xs) => Implicit (Elem x (y ': xs)) where
  implicitly = There implicitly

-- | Index of types that can be crudded / singleton support
data family CrudSing :: k -> *

-- | crud verbs used to collapse `CrudF` to a single constructor
data CRUD = Create | Read | Update | Delete

data CrudVerb (a :: CRUD) :: * where
  SCreate :: CrudVerb 'Create
  SRead   :: CrudVerb 'Read
  SUpdate :: CrudVerb 'Update
  SDelete :: CrudVerb 'Delete

-- | open type families to index data types
type family InputData  (v :: CRUD) (a :: k) :: *
type family ReturnData (v :: CRUD) (a :: k) :: *

-- | The new CrudF functor
data CrudF :: [k] -> * -> * where
  CrudF :: Implicit (Elem f fs)
        => CrudVerb v
        -> CrudSing f
        -> InputData v f
        -> (ReturnData v f -> a)
        -> CrudF fs a

-- | functor instance
instance Functor (CrudF fs) where
  fmap f (CrudF v s i g) = CrudF v s i (f . g)

-- | sample smart constructor
create :: Implicit (Elem f fs) => CrudSing f -> InputData 'Create f -> Free (CrudF fs) (ReturnData 'Create f)
create s d = Free $ CrudF SCreate s d Pure

-- | User code
data Product = Product
data ProductData = ProductData Int
data Order = Order
data OrderData = OrderData

-- | index of crudable types
data MyCrud = ProductCRUD | OrderCRUD

-- | data family instance
data instance CrudSing (a :: MyCrud) where
  SProduct :: CrudSing 'ProductCRUD
  SOrder   :: CrudSing 'OrderCRUD

-- | type family defs
type instance InputData  'Create 'ProductCRUD = ProductData
type instance InputData  'Read   'ProductCRUD = Int
type instance InputData  'Update 'ProductCRUD = Product
type instance InputData  'Delete 'ProductCRUD = Int
type instance ReturnData 'Create 'ProductCRUD = Product
type instance ReturnData 'Read   'ProductCRUD = Product
type instance ReturnData 'Update 'ProductCRUD = Product
type instance ReturnData 'Delete 'ProductCRUD = Product
type instance InputData  'Create 'OrderCRUD = OrderData
type instance InputData  'Read   'OrderCRUD = Int
type instance InputData  'Update 'OrderCRUD = Order
type instance InputData  'Delete 'OrderCRUD = Int
type instance ReturnData 'Create 'OrderCRUD = Order
type instance ReturnData 'Read   'OrderCRUD = Order
type instance ReturnData 'Update 'OrderCRUD = Order
type instance ReturnData 'Delete 'OrderCRUD = Order

-- | sample smart(er) constructor
createProduct :: InputData 'Create 'ProductCRUD -> Free (CrudF '['ProductCRUD, 'OrderCRUD]) Product
createProduct = create SProduct

-- | sample, non-exhaustive interpreter
interpreter :: Free (CrudF '['ProductCRUD, 'OrderCRUD]) a -> IO a
interpreter (Pure a) = return a
interpreter (Free (CrudF SCreate SProduct (ProductData _) g)) = interpreter $ g Product
interpreter (Free (CrudF SCreate SOrder _ g)) = interpreter $ g Order
Type Families Make Life and Free Monads Simpler
After watching Ollie Charles' talk on Strongly Typed Publish/Subscribe over WebSockets via Singleton Types, I felt very inspired to try some type-level programming in Haskell. Along the way I stumbled across a really handy design pattern I thought I would share.
Recently I've been working with another person on creating a client for the Shopify API in Haskell. Since there are a few different http clients in Haskell (http-client, http-conduit, and pipes-http), I wanted the library to remain independent of any particular http-client implementation. After discussing a few different approaches, I settled on the Free Monad approach for its flexibility.
In this post I will detail how I used type families and singletons to make a complicated algebra very simple, flexible, and safe. Furthermore, this approach, albeit somewhat naive and repeated in a few other contexts (e.g. indexed free monads), can be applied to many situations to help make your code safer and more flexible.
As a spoiler, the end goal will be to create a program like (note the varying data types for the create and read constructors):
createOrder :: Free CrudF Order
createOrder = do
  product1 <- create SProduct (ProductData ...)
  product2 <- read SProduct 13
  products <- read SProducts "select * from product"
  create SOrder (OrderData (product1:product2:products))
Contents:
A Naive Functor and its Free Monad
Failing to Minimize the Number of Constructors
Using Type Families to Index Our Types
Using GADTs to Generalize our Functor
Further: Not All Things Read Can Be Created
A General Algebra for describing CRUD for shopify types
Summary and Next Steps
1. A Naive Functor and its Free Monad
code: https://gist.github.com/aaronlevin/4fc1fcfdce947a41567b#file-01-naive-functor-hs
Shopify, being a powerful e-commerce platform, has a lot of data types available through its API. For each of these data types, we'll want our client to perform CRUD on them: create, read, update, and delete. We'll focus on two such data types throughout this post: Product and Order.
The first step to expressing the actions our client can perform via Free Monads is to create a Functor to describe each action. We will create two main data types (Product and Order) as well as the data needed to construct them (ProductData and OrderData). You can think of Product and Order as being what lives in the database, whereas ProductData and OrderData represent what we need to create a row in our database. Their internals are largely irrelevant; the only thing you should keep in mind is that the four of them are different from each other.
Our naive functor will look like this:
{-# LANGUAGE DeriveFunctor #-}

module Main where

import Control.Monad.Free (Free(Free,Pure))

-- | Product data type
data Product = Product
  { productId   :: Int
  , productName :: String
  }

-- | data we need to construct a product
data ProductData = ProductData String

-- | main Order data type
data Order = Order Int String [Product]

-- | data we need to construct an Order
data OrderData = OrderData String [Int]

-- | Our initial Crud Functor
data CrudF a
  -- Product CRUD
  = CreateProduct ProductData (Product -> a)
  | ReadProduct Int (Maybe Product -> a)
  | UpdateProduct Product a
  | DeleteProduct Int a
  -- Order CRUD
  | CreateOrder OrderData (Order -> a)
  | ReadOrder Int (Maybe Order -> a)
  | UpdateOrder Order a
  | DeleteOrder Int a
  deriving (Functor)

-- | Our Free Monad (uses the free library)
type ShopifyAlgebra a = Free CrudF a

main :: IO ()
main = print "01-naive-functor"
With our CrudF functor and the free monad it generates (ShopifyAlgebra), we can create smart constructors to express some programs. Here is an example that creates a product, reads a product, and then creates an order using the two products.
-- | An example mini-program that creates a product, reads a product,
-- | and uses those products to create an order.
-- | (requires Data.Maybe (catMaybes) and the smart constructors
-- | createProduct, readProduct, and createOrder)
programCreateOrder :: ShopifyAlgebra Order
programCreateOrder = do
  newProduct      <- createProduct $ ProductData "OG Bent Wind LP"
  existingProduct <- readProduct 17
  let productIds = catMaybes
        [ Just (productId newProduct)
        , productId <$> existingProduct
        ]
  createOrder $ OrderData "amazing new order!" productIds
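The smart constructors used above just wrap a CrudF case with Pure as its continuation, and an interpreter walks the resulting tree. Here is a minimal, self-contained sketch of both, trimmed to two constructors and with a hand-rolled Free monad so it runs without the free package; the toy interpreter (runPure) is my own illustration, not the post's code:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Minimal hand-rolled Free monad so this sketch needs no external
-- packages; the post itself uses Control.Monad.Free from `free`.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap f (Pure a) = Pure (f a)
  fmap f (Free g) = Free (fmap (fmap f) g)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure f <*> x = fmap f x
  Free g <*> x = Free (fmap (<*> x) g)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Free g >>= k = Free (fmap (>>= k) g)

data Product = Product { productId :: Int, productName :: String }
data ProductData = ProductData String

-- Two constructors of CrudF, trimmed for brevity.
data CrudF a
  = CreateProduct ProductData (Product -> a)
  | ReadProduct Int (Maybe Product -> a)
  deriving Functor

type ShopifyAlgebra a = Free CrudF a

-- Smart constructors: wrap a CrudF case with Pure as the continuation.
createProduct :: ProductData -> ShopifyAlgebra Product
createProduct pd = Free (CreateProduct pd Pure)

readProduct :: Int -> ShopifyAlgebra (Maybe Product)
readProduct pid = Free (ReadProduct pid Pure)

-- A toy pure interpreter: "creates" products with a fixed id and
-- finds nothing on reads.
runPure :: ShopifyAlgebra a -> a
runPure (Pure a) = a
runPure (Free (CreateProduct (ProductData n) k)) = runPure (k (Product 1 n))
runPure (Free (ReadProduct _ k)) = runPure (k Nothing)

main :: IO ()
main = putStrLn (productName (runPure (createProduct (ProductData "test LP"))))
```

A real interpreter would replace runPure with HTTP calls, but the pattern-match structure is the same.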
2. Failing to Minimize the Number of Constructors
code: https://gist.github.com/aaronlevin/4fc1fcfdce947a41567b#file-02-wont-compose-hs
This all looks great, and we can write interpreters against our free monad (ShopifyAlgebra). However, a quick look at the Shopify API shows we're going to need roughly 4 x 20 = 80 new CrudF constructors, which means 80 new smart constructors, and every interpreter we write will have to pattern match over all 88 constructors (our original 8 plus the 80 new ones)! This is unwieldy!
How do we get around this? Well, one naive attempt would be to generalize our CrudF functor. For example, we might try:
data CrudF2 a d i next
  = Create2 d (a -> next)
  | Read2 i (Maybe a -> next)
  | Update2 a next
  | Delete2 i next
  deriving (Functor)
But now our constructors won't compose! For example:
createProduct2 :: ProductData -> Free (CrudF2 Product ProductData Int) Product
createProduct2 productData = Free $ Create2 productData Pure

createOrder2 :: OrderData -> Free (CrudF2 Order OrderData Int) Order
createOrder2 orderData = Free $ Create2 orderData Pure
Notice the types after Free, namely CrudF2 Product ProductData Int versus CrudF2 Order OrderData Int. The Functors do not match, so we cannot create general programs that create Products and create Orders together! This is terrible!
3. Using Type Families to Index Our Constructors
code: https://gist.github.com/aaronlevin/4fc1fcfdce947a41567b#file-03-type-families-index-constructors-hs
Notice that our CrudF2 constructors follow a general pattern. Namely:
Create2 {type for creating data a} (a -> next)
Read2   {type to find a} (Maybe a -> next)
Update2 a next
Delete2 {type to find a} next
If only we could say: given some type a, map that type to the type required to create it, and to the type required to find it. This is exactly what Type Families are handy for!
Without going into the theory too deeply, we essentially want a type level function that maps our type to another type when used in the context of our CRUD functor. I will start by showing how you achieve this for the Create constructor, and we will add more later.
To start, we'll need a host of extensions in addition to our original data types. These extensions, most importantly TypeFamilies and DataKinds, will allow us to do type-level programming and promote our types to kinds and our values to types! Therefore, for (almost) each type in our module there will be an associated kind, and for each value constructor an associated type.
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE TypeFamilies #-}

module Main where

-- | we continue to use our original data types. However, with these
-- | extensions, each type is promoted to a kind, and each value to
-- | a type.

-- | Product data type
data Product = Product
  { productId   :: Int
  , productName :: String
  }

-- | data we need to construct a product
data ProductData = ProductData String

-- | main Order data type
data Order = Order Int String [Product]

-- | data we need to construct an Order
data OrderData = OrderData String [Int]
Now, the first thing we need is a way to index the types we can do CRUD with. The most naive way to capture this is to create a datatype (which, with DataKinds, will be promoted to the kind level):
data Crudable = ProductCRUD | OrderCRUD
Now Crudable represents the types we do CRUD with. With DataKinds we also get the kind Crudable and the types 'ProductCRUD and 'OrderCRUD (it's customary to prefix promoted types with a single apostrophe ').
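To see the promotion at work, here is a tiny self-contained example. The Endpoint wrapper and the URL paths are hypothetical, invented purely to show a promoted constructor used as a type-level index:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}

data Crudable = ProductCRUD | OrderCRUD

-- With DataKinds, 'ProductCRUD and 'OrderCRUD exist at the type level
-- (of kind Crudable), so they can index other types as phantom parameters.
newtype Endpoint (c :: Crudable) = Endpoint String

productEndpoint :: Endpoint 'ProductCRUD
productEndpoint = Endpoint "/admin/products.json"

orderEndpoint :: Endpoint 'OrderCRUD
orderEndpoint = Endpoint "/admin/orders.json"

main :: IO ()
main = let Endpoint path = productEndpoint in putStrLn path
```

The two endpoints have different types even though both wrap a String, so mixing them up is a compile error.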
As per Ollie's talk and the general singletons approach to type-level programming in Haskell, we'll need a glue or pathway between the type/value-level programming and the kind/type-level programming. For this we create a singleton using GADTs:
data SCrudable (c :: Crudable) where
  SProductCRUD :: SCrudable 'ProductCRUD
  SOrderCRUD   :: SCrudable 'OrderCRUD
Armed with our Crudable index and SCrudable singleton type, we can write some type families to define some type-level functions:
-- | type family that maps a Crudable type to the type required to create it
type family CreateData (c :: Crudable) :: * where
  CreateData 'ProductCRUD = ProductData
  CreateData 'OrderCRUD   = OrderData
What this type family tells us is that if we have a type of kind Crudable we can map it to the data required to create it. This encodes at the type level the fact that we need ProductData to create Products and OrderData to create Orders. However, we'll also need a map between Crudable and our base types Product and Order, so we'll create another type family for this:
-- | type family that maps a Crudable type to its base type. This binds
-- | our Crudable kind to the types we're actually interested in.
type family CrudBase (c :: Crudable) :: * where
  CrudBase 'ProductCRUD = Product
  CrudBase 'OrderCRUD   = Order
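You can convince yourself that these closed type families actually reduce at compile time with a small value-level witness (in GHCi, `:kind! CreateData 'ProductCRUD` shows the same reduction). The `witness` function below is my own illustration:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeFamilies #-}

data ProductData = ProductData String
data OrderData = OrderData String [Int]

data Crudable = ProductCRUD | OrderCRUD

type family CreateData (c :: Crudable) :: * where
  CreateData 'ProductCRUD = ProductData
  CreateData 'OrderCRUD   = OrderData

-- Because CreateData is a closed type family, GHC reduces
-- CreateData 'ProductCRUD to ProductData at compile time,
-- so `id` is accepted at this signature.
witness :: CreateData 'ProductCRUD -> ProductData
witness = id

main :: IO ()
main = let ProductData n = witness (ProductData "reduced") in putStrLn n
```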
Now we are ready to re-write our CrudF functor. For now, we'll just re-write the Create constructor:
-- | new CrudF functor. "c" represents a type of kind Crudable.
data CrudF3 (c :: Crudable) next
  = Create3 (CreateData c) (CrudBase c -> next)
Are we done? Unfortunately no. We're almost there, but we haven't made it all the way yet. In fact, at this point we haven't really gained very much. For example, if we tried to create a smart constructor to actually use this functor, you'll notice a problem:
-- | a new smart constructor. Use gratuitous pattern matching to show that
-- | the right types are inferred via the CreateData type family
create3 :: SCrudable c -> CreateData c -> Free (CrudF3 c) (CrudBase c)
create3 SProductCRUD productCreateData@(ProductData _) =
  Free $ Create3 productCreateData Pure
create3 SOrderCRUD orderCreateData@(OrderData _ _) =
  Free $ Create3 orderCreateData Pure
Do you notice the issue? We've gained nothing! Our type is still Free (CrudF3 c) (CrudBase c), which, after the appropriate type family is applied, becomes either Free (CrudF3 'ProductCRUD) Product or Free (CrudF3 'OrderCRUD) Order. Because 'ProductCRUD and 'OrderCRUD are different types (albeit of the same kind Crudable), we will not be able to combine smart constructors in the following way:
-- | will not compile
falseProgram = do
  newProduct <- create3 SProductCRUD (ProductData "heitkotter")
  newOrder   <- create3 SOrderCRUD (OrderData "chargeback" [productId newProduct])
This won't compile because the Functor in newProduct and the Functor in newOrder are different.
Is all hope lost?
4. Using GADTs to Generalize our Functor
code: https://gist.github.com/aaronlevin/4fc1fcfdce947a41567b#file-04-gadts-to-the-rescue-hs
GADTs to the rescue! What we want to do is quantify over types of kind Crudable. We can do this with GADTs. We keep all our types and type families, but update our CrudF functor:
data CrudF4 next where
  Create4 :: SCrudable c -> CreateData c -> (CrudBase c -> next) -> CrudF4 next

-- | our functor is sufficiently complex and GHC can no longer derive our
-- | Functor for us.
instance Functor CrudF4 where
  fmap f (Create4 c d g) = Create4 c d (f . g)
Awesome! Now our CrudF4 functor is parameterized over a single type! Which means we can write smart constructors and create a working version of the above mini-program:
-- | smart constructor. we won't do pattern matching here; we only did it
-- | above to highlight the usage of type families
create4 :: SCrudable c -> CreateData c -> Free CrudF4 (CrudBase c)
create4 c d = Free $ Create4 c d Pure

-- | mini program that creates two products and an order
workingProgram :: Free CrudF4 Order
workingProgram = do
  newProduct1 <- create4 SProductCRUD (ProductData "Lewis - L'Amour")
  newProduct2 <- create4 SProductCRUD (ProductData "Lewis - Romantic Times")
  let productIds = productId <$> [newProduct1, newProduct2]
  create4 SOrderCRUD (OrderData "Lewis gripper" productIds)
This is amazing. We now have a Free Monad that can create Products and Orders with the same Functor! Additionally, our smart constructor can handle different data types depending on the context (SProductCRUD or SOrderCRUD).
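An interpreter for this indexed functor shows where the singleton pays off: pattern matching on SCrudable refines the type families to concrete types in each branch. Below is a self-contained sketch of that idea, with a hand-rolled Free and an Order simplified to carry product ids; it is an illustration, not the post's actual code:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeFamilies #-}

-- Minimal hand-rolled Free monad so the sketch needs no external packages.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap f (Pure a) = Pure (f a)
  fmap f (Free g) = Free (fmap (fmap f) g)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure f <*> x = fmap f x
  Free g <*> x = Free (fmap (<*> x) g)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Free g >>= k = Free (fmap (>>= k) g)

data Product = Product { productId :: Int, productName :: String }
data ProductData = ProductData String
-- simplified Order carrying product ids rather than whole Products
data Order = Order { orderName :: String, orderProductIds :: [Int] }
data OrderData = OrderData String [Int]

data Crudable = ProductCRUD | OrderCRUD

data SCrudable (c :: Crudable) where
  SProductCRUD :: SCrudable 'ProductCRUD
  SOrderCRUD   :: SCrudable 'OrderCRUD

type family CreateData (c :: Crudable) :: * where
  CreateData 'ProductCRUD = ProductData
  CreateData 'OrderCRUD   = OrderData

type family CrudBase (c :: Crudable) :: * where
  CrudBase 'ProductCRUD = Product
  CrudBase 'OrderCRUD   = Order

data CrudF4 next where
  Create4 :: SCrudable c -> CreateData c -> (CrudBase c -> next) -> CrudF4 next

instance Functor CrudF4 where
  fmap f (Create4 c d g) = Create4 c d (f . g)

create4 :: SCrudable c -> CreateData c -> Free CrudF4 (CrudBase c)
create4 c d = Free (Create4 c d Pure)

-- Matching on the singleton refines CreateData c and CrudBase c to
-- concrete types in each branch, so the interpreter stays total.
runPure :: Free CrudF4 a -> a
runPure (Pure a) = a
runPure (Free (Create4 SProductCRUD (ProductData n) k)) = runPure (k (Product 1 n))
runPure (Free (Create4 SOrderCRUD (OrderData n ids) k)) = runPure (k (Order n ids))

main :: IO ()
main = do
  let program = do
        p <- create4 SProductCRUD (ProductData "LP")
        create4 SOrderCRUD (OrderData "an order" [productId p])
  putStrLn (orderName (runPure program))
```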
A sensible question might be: could we achieve this without Type Families and DataKinds, using a regular GADT? The answer is no. How could we write a smart constructor for such a type? More importantly, how would we write a valid interpreter for such an algebra? Our interpreter would have to handle every type in Hask.
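To make that concrete, here is what the plain-GADT attempt looks like. The names (NaiveCrudF, NaiveCreate, interpret) are hypothetical; the point is that the existentially quantified types leave an interpreter with nothing to work with:

```haskell
{-# LANGUAGE GADTs #-}

-- A plain GADT without a type-level index: `d` (the creation data) and
-- `b` (the created value) are existentially quantified and unrelated.
data NaiveCrudF next where
  NaiveCreate :: d -> (b -> next) -> NaiveCrudF next

-- An interpreter learns nothing about d or b when it pattern matches:
-- it cannot inspect the data it was handed, and it must produce a `b`
-- for *every* possible type b, which only `undefined` can do.
interpret :: NaiveCrudF next -> next
interpret (NaiveCreate _d k) = k undefined

main :: IO ()
main = putStrLn "this compiles, but no useful interpreter can be written"
```

The singleton plus type families are exactly what let the interpreter recover concrete types inside each branch.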
5. Going Further: Not All Things Read Can Be Created
code: https://gist.github.com/aaronlevin/4fc1fcfdce947a41567b#file-05-going-further-hs
Not only can we now express more general mini-programs indexed by different types, we can also express CRUD for types that only support the R (read) part of it! Let me show you.
Let's expand on our previous example by adding another Crudable type, ProductsR, which represents the fact that our API exposes an endpoint to read a list of products, but not necessarily to create a list of products. We will also add a Read constructor to CrudF, which means we'll need another type family to map our types to the type required to read them (usually an Integer used to fetch an object from a database).
-- | new stuff!
data Crudable = ProductCRUD | ProductsR | OrderCRUD

-- | the SCrudable singleton gains a matching SProductsR constructor
data SCrudable (c :: Crudable) where
  SProductCRUD :: SCrudable 'ProductCRUD
  SProductsR   :: SCrudable 'ProductsR
  SOrderCRUD   :: SCrudable 'OrderCRUD

-- | type family that maps a Crudable type to the type required to read it
type family ReadData (c :: Crudable) :: * where
  ReadData 'ProductCRUD = Int
  ReadData 'ProductsR   = String
  ReadData 'OrderCRUD   = Int

-- | type family that maps a Crudable type to the type required to create it.
-- | note that we don't include ProductsR as we can't create lists of products.
type family CreateData (c :: Crudable) :: * where
  CreateData 'ProductCRUD = ProductData
  CreateData 'OrderCRUD   = OrderData

-- | we add ProductsR = [Product] to our CrudBase type family
type family CrudBase (c :: Crudable) :: * where
  CrudBase 'ProductCRUD = Product
  CrudBase 'ProductsR   = [Product]
  CrudBase 'OrderCRUD   = Order

-- | Here we add `Read5` to our GADT
data CrudF5 next where
  Create5 :: SCrudable c -> CreateData c -> (CrudBase c -> next) -> CrudF5 next
  Read5   :: SCrudable c -> ReadData c -> (Maybe (CrudBase c) -> next) -> CrudF5 next

instance Functor CrudF5 where
  fmap f (Create5 c d g) = Create5 c d (f . g)
  fmap f (Read5 c d g)   = Read5 c d (f . g)
Note that our CreateData type family is purposefully non-total as we cannot actually create lists of products. The intuition is that Shopify's HTTP endpoint might expose something like GET /api/products which returns a list of products, but it might not expose POST /api/products. However, we'd still like to express the fact that we can read lists of products through our API.
The beauty in this approach is that it is impossible to write a program that creates lists of products. We will get a compile time error! Check for yourself:
-- | smart constructor. we won't do pattern matching here; we only did it
-- | above to highlight the usage of type families
create5 :: SCrudable c -> CreateData c -> Free CrudF5 (CrudBase c)
create5 c d = Free $ Create5 c d Pure

-- | smart constructor for reading
read5 :: SCrudable c -> ReadData c -> Free CrudF5 (Maybe (CrudBase c))
read5 c d = Free $ Read5 c d Pure

-- | mini program that reads a list of products and creates an order
workingProgram :: Free CrudF5 Order
workingProgram = do
  maybeProducts <- read5 SProductsR "select * from products"
  let productIds = maybe [] (map productId) maybeProducts
  create5 SOrderCRUD (OrderData "Lewis gripper" productIds)
The above program compiles. But the one below will not:
-- | this will not compile
falseProgram :: Free CrudF5 [Product]
falseProgram = create5 SProductsR [ProductData "Kit Ream"]
If you try to compile this you will get:
Couldn't match expected type ‘Main.CreateData 'Main.ProductsR’ with actual type ‘[Main.ProductData]’
It's not the most helpful error message, but the key is here: expected type ‘Main.CreateData 'Main.ProductsR’. Since CreateData is a non-total type family, CreateData 'ProductsR doesn't reduce, and GHC throws an error!
6. A General Algebra for Describing CRUD for Shopify Types
We are now ready for an entire CRUD algebra for Shopify types! You can see a full example with all constructors here: https://github.com/aaronlevin/haskell-shopify/blob/master/src/Network/API/Shopify/Client.hs
Near the bottom I've also provided an HTTP interpreter built on top of http-client for those curious as to what that might look like.
7. Summary and Next Steps
We started by attempting to wrap the Shopify API by using a Free Monad to express the possible actions against it. Starting with only two types, Product and Order, we saw that a naive functor expressing our actions would have eight constructors. As there are over twenty other types we'd need to wrap in the Shopify API, this would result in almost one hundred value constructors for our naive functor. Our first attempt at minimizing the constructors resulted in an algebra that wasn't composable across types, i.e., we were not able to express programs that created, updated, read, and deleted multiple types at once. To overcome this limitation, we used Type Families, Data Kinds, and GADTs to index our Functor, and the generated Free Monad, with types. This technique turned out to be flexible enough to capture actions against read-only endpoints (i.e. can't be created, updated, or deleted) as well.
There still remains a lot more to be done. For example, we could create a singleton type for each CRUD verb and reduce our CrudF to a single constructor. This would mean our type families might look like: type family CreateData (v :: CRUD) (c :: Crudable) :: *. Another step would be to expose this as a library via some Crudable typeclass, although I haven't thought too much about what this would look like (or if it's possible in broader generality).
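The single-constructor idea can be sketched as follows. Everything here is a speculative illustration of the "next steps" paragraph, not code from the post, trimmed to one Crudable type and two verbs:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeFamilies #-}

data Product = Product Int String
data ProductData = ProductData String

data Crudable = ProductCRUD
data CRUD = Create | Read | Update | Delete

data SCrudable (c :: Crudable) where
  SProductCRUD :: SCrudable 'ProductCRUD

-- a singleton for the CRUD verbs (only two shown)
data SCRUD (v :: CRUD) where
  SCreate :: SCRUD 'Create
  SRead   :: SCRUD 'Read

-- input and output of each (verb, type) pair, indexed by both
type family VerbInput (v :: CRUD) (c :: Crudable) :: * where
  VerbInput 'Create 'ProductCRUD = ProductData
  VerbInput 'Read   'ProductCRUD = Int

type family VerbOutput (v :: CRUD) (c :: Crudable) :: * where
  VerbOutput 'Create 'ProductCRUD = Product
  VerbOutput 'Read   'ProductCRUD = Maybe Product

-- the whole algebra collapses to a single constructor
data CrudF next where
  Crud :: SCRUD v -> SCrudable c
       -> VerbInput v c -> (VerbOutput v c -> next) -> CrudF next

instance Functor CrudF where
  fmap f (Crud v c i k) = Crud v c i (f . k)

main :: IO ()
main = putStrLn "single-constructor sketch compiles"
```

Read-only types would then simply omit the 'Create, 'Update, and 'Delete rows of the type families, exactly as ProductsR omitted CreateData above.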
8. Bye
I hope this was valuable and that it gives some insight into how type-level programming in Haskell can be practical and fun.
PS - much thanks to @acid2 for his inspiring post, @a_cowley for many encouraging comments, and @smdiehl for his formatting wizardry.
Running the Nix package manager within a Docker Container
I love Nix. I've been running NixOS for the past few weeks and it's been a great experience.
However, I started a new job recently and have been thinking about our deployment pipeline and how we will manage build and run-time environments. Finding the boundary between Nix and Docker in this area has been consuming me for the past few weeks.
I want to maximize the human element. I want one tool to declaratively (and deterministically) control our build- and run-time dependencies, while also gaining the portability (and industry traction) of Docker containers. I want to on-board new engineers by saying:
# want to work on this project?
# Step one: install nix
git clone $PROJECT
cd $PROJECT
nix-shell
and now they're in a shell with every run- or build-time dependency installed. But not all engineers will work on every service. This is where Docker shines. I also want to say:
# need to run $SERVICE?
# Step one: install docker
# (note: options like -p must come before the image name)
sudo docker run -t -p 8080:8080 $SERVICE
How do we combine these two, seemingly disparate tools? Is there a proper intersection? Nix excels at managing per-project dependencies and works across distros (and even on OSX and Cygwin (modulo some in-progress work being done)), so it should work within any Docker container!
Is it possible to run Nix within Docker?
/rant
Suppose you have the following minimal default.nix that specifies a build environment with the Haskell compiler GHC as a dependency:
# default.nix
{ nixpkgs ? (import <nixpkgs> {}) }:
let
  stdenv = nixpkgs.stdenv;
  ghc    = nixpkgs.haskellPackages.ghc;
in stdenv.mkDerivation rec {
  name    = "our-project";
  version = "0.0.1";
  src     = nixpkgs.fetchurl {
    url = "http://your.domain/run.tar.gz";
    # fetchurl also requires a hash (e.g. sha256) of the tarball
  };
  buildInputs = [ ghc ];
}
And suppose you want to create a Docker container that will run an executable (run) in a shell environment as defined by default.nix:
# Dockerfile
FROM debian:wheezy

# Install packages required to add users and install Nix
RUN apt-get update && apt-get install -y curl bzip2 adduser

# Add the user aaronlevin for security reasons and for Nix
RUN adduser --disabled-password --gecos '' aaronlevin

# Nix requires ownership of /nix.
RUN mkdir -m 0755 /nix && chown aaronlevin /nix

# Change docker user to aaronlevin
USER aaronlevin

# Set some environment variables for Docker and Nix
ENV USER aaronlevin

# Change our working directory to $HOME
WORKDIR /home/aaronlevin

# install Nix
RUN curl https://nixos.org/nix/install | sh

# update the nix channels
# Note: nix.sh sets some environment variables. Unfortunately in Docker
# environment variables don't persist across `RUN` commands
# without using Docker's own `ENV` command, so we need to prefix
# our nix commands with `. .nix-profile/etc/profile.d/nix.sh` to ensure
# nix manages our $PATH appropriately.
RUN . .nix-profile/etc/profile.d/nix.sh && nix-channel --update

# Copy our nix expression into the container
COPY default.nix /home/aaronlevin/

# run nix-build to pull the dependencies and build the project
RUN . .nix-profile/etc/profile.d/nix.sh && nix-build

# run our application (nix-build leaves a ./result symlink)
CMD ["./result/bin/run"]
This Dockerfile will:
install nix
add a non-sudo-privileged user aaronlevin (yay security)
copy the default.nix nix expression to a working directory
build the default.nix nix expression, installing all its dependencies (ghc) and unpacking the run executable in the process.
run your application.
With these two files in a working directory you can run:
sudo docker build -t myproject .
sudo docker run -t myproject
Why?
By placing all your run- and build-time dependencies in nix expressions you have a single, common language that can be used locally on your dev machines or within docker containers. You get the consistency of deterministic builds across dev machines and docker containers!
nix-docker?
There is an interesting project named nix-docker that creates Docker containers based on NixOS configurations. This is great if you're comfortable with NixOS being the base of your Docker image and managing everything from a single configuration.nix. I like this idea, but I'm not totally won over yet. Nix really excels at working across distros, and there's something to the simplicity of having any base image, installing Nix, and running from there.
Reducing Irreducibility: Towards Intuitive Reducibility
Irreducible operators are very important in Mathematics. For the case of Directed Graphs, the adjacency matrix is irreducible if and only if the graph is strongly connected. Unfortunately, the canonical definition of irreducible matrices has bothered me for years. If you search for reducible on Wolfram, Wikipedia, or Planet Math you will get one of two definitions:
Definition 1: An n x n matrix A is said to be reducible if and only if for some permutation matrix P, the matrix P^T A P is block upper triangular.
or
Definition 2: A square n x n matrix A = (a_ij) is called reducible if the indices 1, 2, ..., n can be divided into two disjoint nonempty sets i_1, i_2, ..., i_u and j_1, j_2, ..., j_v (with u + v = n) such that a_{i_α, j_β} = 0 for α = 1, 2, ..., u and β = 1, 2, ..., v.
What does this even mean? Block upper triangular? Disjoint, non-empty sets of indices? What does this have to do with reducibility? These definitions are over-complicated, overly-technical, and miss the simplistic beauty of what it means for an operator to be reducible.
What is Reducibility, Really?
As a graduate student I was tasked with extending the Perron-Frobenius Theorem to a more generalized setting. I had to think about what reducibility meant, in general. Several times my thesis supervisor hinted at such a generalization, but it never came to me intuitively until a few years ago.
Here is what the definition of a reducible matrix should be:
Definition: An N x N matrix A is reducible if there exists a non-trivial subspace invariant under A, i.e. there exists a subspace Y with {0} ≠ Y ⊊ ℝ^N such that A(Y) ⊆ Y.
Or, more generally for linear operators:
Definition: A linear operator T is reducible if there exists a non-trivial subspace invariant under T.
Non-Trivial Invariant Subspace?
In the above, more general definition, we are saying: if T : X -> X is a linear operator, then T is reducible if there exists some non-trivial subspace Y ⊊ X such that T(Y) ⊆ Y. This means we can "reduce" the behaviour of T onto some well-defined subspace Y ⊊ X. Therefore, it may be possible to analyze T in a restricted (or reduced) subspace Y. Essentially, we've "reduced" the behaviour of T to a subspace of X.
Conversely, the above definition means that irreducible operators have no special behaviour for any subspace of X. That is, their behaviour cannot be further reduced; T requires the entire space X to express itself.
While I am using very loose language (what does it mean for an operator to express itself, or what is an operator's "behaviour"?), it is hopefully clear that this definition has more interpretive meaning.
Are These Definitions Equivalent?
For matrices, the invariant subspace definition and the block upper-triangular / permutation indices definitions of reducibility are equivalent. It's a fairly straightforward proof: if a non-trivial, invariant subspace exists, then decompose your space into the direct sum of this subspace and its complement. Using a permutation matrix to transform your basis, it's fairly easy to see that the matrix can be represented in block-triangular form (the proof comes down to showing that one of your blocks satisfies A_3 * Y = 0 which, since Y is non-trivial, implies the block A_3 must be zero). Conversely, if your matrix is in block upper-triangular form, it's easy to see there is an invariant subspace.
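The smallest possible example makes the equivalence concrete (here a, b, d are arbitrary entries, and e_1 is the first standard basis vector):

```latex
A = \begin{pmatrix} a & b \\ 0 & d \end{pmatrix}, \qquad
Y = \operatorname{span}\{e_1\}, \qquad
A e_1 = a\, e_1 \in Y .
```

The zero below the diagonal is precisely what makes span{e_1} invariant; in higher dimensions, an invariant span of the first k basis vectors corresponds to a zero lower-left block in the same way.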
Why Does This Matter?
The initial definitions are overly technical and miss the point of reducibility: that the behaviour of reducible operators can be simplified, and that irreducible operators require the entire space to express themselves. Further, the block upper-triangular and permutation-indices definitions do not translate well to more general settings. For example, what does upper-triangular form mean in L_p spaces? It turns out there is a way to define upper-triangularity in L_p spaces (see this book), but reducibility in the context of invariant subspaces is much more intuitive.
What Next?
Please petition your local math authorities to adopt this much simpler definition. Thank you.
Dynamic Typing: A Local Minimum for Code Comprehension
I believe the most important value in a programming language is not performance, libraries, ecosystems, or tooling. While these are crucial to a language's adoption and success, a feature often overlooked is code comprehension: how easy is it for someone to read code and comprehend what it's doing? This line of reasoning has led me to believe that pure, functional languages like Haskell are the best tools to keep teams performant, productive, happy, and successful by maximizing code comprehension.
Preface: I recognize that my opinions are directly formed by my (limited) experiences, and are therefore biased and not to be perceived as any universal truth. I also recognize my incredibly privileged position as a white, cis-gendered male; my history and interaction with software has been free of institutional barriers, misogyny, racism, and other forms of oppression. I mention this because my thesis resides on software and coding being a highly "social" activity, and like all social activities, the experience of a person in a position of privilege is incredibly skewed.
Coding is Social
After completing my masters in Pure Mathematics and meandering around the digital media world, I got my first job as a Software Engineer. The first thing I realized was how social an activity coding has become. Perhaps it's always been this way, but I was inspired to try and name a single-author library, language, OS, or run-time of any significance. They don't exist. Every piece of software you use has been written by groups of people. Groups that need to communicate, document, test, and understand each other's code. As software engineers we principally do one of two things: read other people's code or write code that other people will read.
Therefore, the following questions are asked almost every single time you're looking at code:
What does this function do?
What kind of data do I pass to it?
What kind of data does it return?
Maximizing Comprehension
When I think about programming languages, I think about the importance of maximizing the ability to answer those questions as quickly and coherently as possible. This, to me, is the single most important function a programming language can offer. Everything else is secondary. If Google can make JavaScript reach speeds within a factor of C, and if Oracle can bring a Garbage Collected runtime to the same level, I'm confident any "comprehensible" language can be optimized (or interfaced with C).
Dynamic Typing
Dynamic Typing was my path to software liberty. After learning assembly, C, and C++ within a Numerical Methods context, I was profoundly inspired when I finally discovered Python. The sense of exploratory freedom that dynamic typing gives truly changed my life. However, the more I use dynamic languages, especially in a team setting, the more I realize they've hit a local minimum when it comes to comprehension. For example, this week I was working with a Sinatra-based web service that hadn't been documented. I was told the code should "document itself." I needed to know what kind of JSON was being returned from an endpoint and began reading the code. Because of Ruby's dynamic, object-oriented foundation and the pervasive use of monkey-patching and 3rd-party libraries, what a simple REST endpoint was returning became entirely obfuscated. I had to wrestle my way through various libraries, only to end up in a schema.rb file conjecturing that some .all method returned "all the sql rows" while simultaneously lacing tests with various puts calls to figure out what was happening. Eventually I found my answer, but not without much work and time. While documentation and deeper testing would have resolved my issues, the enforcement of these practices is left to developer discipline, rather than the language itself, raising the question: is there a better way? A smart language and compiler capable of keeping discipline in check without getting in the way?
When it comes to dynamic languages, here is how they score with respect to my comprehension questions:
What does this function do? Anything. You must read the code of the entire function to find out. Further, the function may perform side effects before completing, complicating the "do" part of its being.
What kind of data do I pass to it? Any data, but you must read the entire function to see what assumptions are being made about this data. For example, the function may make use of duck typing to assume the object you're passing in has a .to_bar method.
What kind of data does it return? Any data or no data. You must read the code of the entire function to find out.
This looks like a failure when it comes to comprehension. The programmer is required to read the entire method, tracing any dependencies and assumptions through deeper methods. This is hardly optimal and the complexity grows with abstraction (rather than the other way around). Dynamic languages grant zero guarantees to the programmer. This is their power, but also their curse. As codebases grow, so do the implicit, non-enforced assumptions made in every method/function. This is unsustainable.
Side-Effects in Non-Pure, Statically Typed Languages (C++, Java)
Side-effects are the real bane of code comprehension and they pollute both Dynamic and Statically typed, non-pure languages. By their very nature they are invisible, occurring "on the side," often in methods encapsulating their behaviour away from you. Here's how they answer those questions:
What does this function do? Whatever it's main purpose is + some unknown amount of side effects that you must read the entire code to understand.
What kind of data do I pass to it? Whatever Type is specified by the method, plus any implicit dependencies (e.g. via Dependency Injection frameworks)
What kind of data does it return? Whatever Type is specified by the method.
In the case of Static, Non-Pure languages, we gain slightly more comprehension, but not enough to reach a local maximum. We have a better idea of what our method takes and returns, but what our method does is still obfuscated. The method may require configuration for logging, or a database connection, or may interact with concurrent threads. It may be mutating global state that is important to the program, forcing us to maintain a growing stack of effects as we read the code.
We can rely on several ad-hoc methodologies to control this: code analysis, IDEs, discipline, documentation, testing, etc. But these are band-aids over a foundational flaw: the imperative nature of these languages requires you to perform side-effects in an uncontrolled manner. Further, these side-effects cannot be captured by the compiler and are only understood by reading all code. This puts a huge amount of pressure on engineers to maintain the logic of entire systems in their minds as they read code.
There must be a better way!
Haskell
Haskell is a pure, functional programming language. It has a powerful type system, compiler, and runtime. Data cannot be mutated and any side-effects are encoded in the type system. This may seem like a limitation, but thanks to the power of algebraic data types, higher-kinded types, higher-order functions, and type inference, the language ceases to get in the way.
Here is how Haskell fares against my questions:
What does this function do? Exactly what the type states, with specificity proportional to the abstraction of the type. A function with the signature a -> a can only do one thing (return its argument). A function with the signature Bool -> Bool has only four possible implementations.
What kind of data do I pass to it? Only data that conforms to the type signature. Period.
What kind of data does it return? Whatever data the type signature specifies.
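The claim about Bool -> Bool can be checked directly by tabulating every total function of that type on both inputs:

```haskell
-- The four inhabitants of Bool -> Bool, tabulated on [False, True].
allBoolFns :: [(String, Bool -> Bool)]
allBoolFns =
  [ ("id",          id)
  , ("not",         not)
  , ("const True",  const True)
  , ("const False", const False)
  ]

main :: IO ()
main = mapM_ describe allBoolFns
  where
    describe (name, f) = putStrLn (name ++ ": " ++ show (map f [False, True]))
```

Any total Bool -> Bool function must agree with one of these four tables, so there is nothing else it could be.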
This approaches the local maximum. I can get very close to understanding everything about a method simply by looking at its type signature. We can't get the whole way, but we're given guarantees and bounds, and every method within our function must also grant these guarantees. There can be no magic dependencies that are not specified in the signature. Every entity required by the function is passed in or provided via Monadic context (both encoded in the type). Further, any side effects will be encoded in the type as well. If we see a function like:
foo :: a -> b -> IO a
We know right away that:
foo performs a side effect (hence the presence of IO in the type signature)
Since foo makes no assumptions about b, it cannot inspect b: it can only discard it, or pass it along unexamined to some other polymorphic function.
foo makes no assumptions about a or b, and therefore cannot use any of their properties. This means that the kind of side-effects it can perform involving a and b is quite limited, if it can perform any at all. For example, foo cannot even log or print a or b, since the signature doesn't require that they implement the Show typeclass!
a and b do not "inherit" from a basic type (like Object in Java), therefore there can be no runtime reflection or other magic done to them (unless they are assumed to have an instance of the Typeable typeclass in context, which would appear in the signature, à la foo :: Typeable a => a -> b -> IO a).
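To see how tightly the signature constrains foo, here is a sketch of about all a function with that type can do (the implementation is hypothetical):

```haskell
foo :: a -> b -> IO a
foo x _ = do
  -- Allowed: side effects that don't involve x or the discarded b.
  putStrLn "performing some IO"
  -- Parametricity means the only a we can return is the one we were given.
  return x
```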
We get nowhere near this level of comprehension in a dynamic language or non-pure static language. Every piece of information we need to understand any method is in the type signature. We know exactly where and when side-effects happen and thanks to the immutability of data, we know exactly where state transitions take place and how they happen.
This is just the tip of the iceberg. In the case of Haskell, you are given incredible code comprehension for very little cost, and gain an advanced type system capable of abstractions unheard of in other languages. If you want a taste, see: stm, pipes, lens, and Haxl.
Why Should I Care?
As software engineers we should be concerned with efficiency and performance. As teams and companies grow, the need to read and comprehend code so that we can extend or debug it grows exponentially. Every bug written into our code, whether by someone on our team or by a library we're using, results in wasted time spent debugging. Any time spent not writing code is time spent not delivering a product. In a dynamic language we try to mitigate this by writing tests and documentation, but these methodologies are never complete and don't result in fast and certain code comprehension. In a pure, functional language like Haskell, we lean on the compiler to do the debugging for us by enforcing static guarantees, and we lean on the type system to document our code. Not only does this free programmers from the burden of comprehending ever-expanding code complexities, it also establishes a system of trust wherein engineers are no longer afraid of change in an uncertain codebase.
Ultimately, if you are a programmer you should care because the code you write will be used by others. If we want to maximize the ability to collaborate and enrich our ecosystems of abstraction, we need to maximize code comprehension. But mostly, if you're running a company, you should care because it will save you money and keep your customers happy.
Sane Keyboardin' with XMonad + Arch on a Mac
I wanted to accomplish something very simple with my keyboard mappings:
Caps Lock is mapped to mod4
Command is mapped to Ctrl
Unfortunately, the lack of idempotency when using xmodmap makes for dangerous keyboardin'. Using setxkbmap is much safer; if you really mangle your settings, you can always run setxkbmap -option to clear them.
This allows me to use my thumb for control on the apple keyboard, which is very natural for those addicted to the Kinesis Advantage keyboard.
I found it quite difficult to figure out how to accomplish these simple keyboard mappings, hence this post. Place the following command in your .xinitrc:
setxkbmap -option \
          -model apple_laptop \
          -layout us \
          -variant mac \
          -option altwin:ctrl_win \
          -option caps:super
The caps:super option maps Caps Lock to mod4, and altwin:ctrl_win maps the Command key to Ctrl (this one was hard to find). The leading bare -option clears any previously-set options, which is what keeps the command safe to re-run.
bonus
If you're trying to find a list of options you can pass to setxkbmap, see: /usr/share/X11/xkb/rules/base.lst
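For example (assuming the stock file path on Arch), you can grep that file for the two options used above, and confirm what's currently applied:

```shell
# find the relevant option entries in the XKB rules file
grep -E 'altwin:|caps:' /usr/share/X11/xkb/rules/base.lst

# print the currently applied model, layout, and options
setxkbmap -query
```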
The Banana Republic of NXNE
On Friday I sat on a panel titled Why NXNE Sucks? This was an NXNE initiative aimed at providing a space for members of the local arts community to air their grievances about NXNE. In an attempted coup d'état to quash negative sentiment and puzzle panellists, NXNE abolished their 45-day radius clause minutes before the panel started.
My thesis on NXNE's state of the union was simple: all of the current issues they face are symptoms of a foundational misunderstanding of, and lack of respect for, the role played by local and emerging artists in their festival. NXNE is huge and important not because of the large talent they draw or the relatively-new Yonge/Dundas Square shows, but because of the cumulative impact of engaging with the massive local and emerging arts community. It is the mayhem in the streets that defines NXNE, not the MiO/Fleshlight/Samsung-sponsored "mayhem" at Y/D Square.
Walking out of the panel I felt like a parliamentary candidate in a banana republic; NXNE staffers, having shown their hand by abolishing the radius clause, celebrated within their ivory towers and sky castles while the commoners debated their false democracy for forty-five minutes.
The truth of the matter is that the 45-day radius clause was a symptom of a larger problem. Weird Canada's petition to end the 45-day radius clause received over 3,000 signatures, but the real meat came with Executive Director Marie LeBlanc Flanagan's survey, containing hundreds of allegations about NXNE's treatment of artists, volunteers, venue managers, and pass-holders.
Marie's research pointed to a large, systemic problem with NXNE, most of it having very little to do with the radius clause.
Here are my thoughts about the panel, and a call to action for the future.
Quashing the Radius Clause: Not a Victory
Let's get this straight: the press release about quashing the radius clause contained more damning information than the clause itself. It finally became clear that the clause had nothing to do with treating artists like "gourmet cheeseburgers" (NXNE's words), and everything to do with CMW moving their dates forward and encroaching on NXNE's territory. What does this mean and why is it so offensive? In response to CMW, NXNE punished the entire local arts community by introducing a 45-day radius clause. NXNE used local artists; NXNE abused their position of power, using artists to leverage themselves against CMW.
If your organization relies entirely on the local arts community for its cumulative impact and success, then you need to think very carefully about your actions. The inability to see how a 45-day radius clause, introduced to protect NXNE from CMW, would negatively affect local and emerging artists is a massive warning sign about the long-term viability of NXNE.
The Panel Title
The panel was titled "Why NXNE Sucks?" Before the panel Marie and I sat down to discuss how we would best represent our community. One of the things she pointed out was:
(paraphrased) NXNE has gathered incredible leaders of the local community, and instead of asking them "how do we make NXNE the best possible music festival?", they've turned the whole thing into a marketing gimmick.
Marie LeBlanc Flanagan
Marie's observation blew my mind. While I didn't like the VICEification of the title, I never considered that NXNE effectively wasted an amazing opportunity: they had Dan Seligmann from Pop Montreal, Aubrey Jax from BlogTO, Paul Lawton of The Ketamines, and myself, together, and instead of us talking about what makes festivals amazing, we were there to complain.
Sure "Make NXNE The Most Rad" is not as catchy, but if your goal is to actually improve NXNE by engaging with the local arts community, then the title is less important.
Who Was On The Panel
Let's talk about who was on this panel. I believe the first person who joined was Aubrey Jax. From there, she recommended that Paul Lawton join, and Paul recommended me. I am grateful to both of them for including me. However, NXNE should have taken a more responsible approach to finding representation: three collaborators, an NXNE insider, and someone from Pop Montreal hardly make a representative panel. I believe both Aubrey and Paul would agree, and while NXNE will fall back on the "panel is too small" excuse, I'd argue that working around that limitation in the name of better representation would have been the more responsible choice.
Even worse, my proposal to have Marie LeBlanc Flanagan on the panel was rejected. Weird Canada is run jointly by Marie and myself (truthfully: mostly by Marie), so to have me and not both of us on that panel seemed strange. I petitioned and was rejected. When I told them I'd rather have Marie on the panel, as most of our ideas come from her, they said they "wanted an artist." When I told them I wasn't an artist, they simply repeated "no."
Marie wrote an incredible FAQ about the sexism of NXNE's panel. And if you looked on that stage every single person was white.
Before you start shouting "PC police," simply ask yourself: why wasn't there better representation?
NXNE Is Accountable To Whom?
Gone are the days when businesses were allowed to project their idioms onto the public; in the age of digital empowerment, the most successful businesses are platforms for humans and communities to create their own identities and take charge of their own lives (unfortunately, often on the platform's terms). Think google, twitter, facebook, tumblr, pinterest, change, airbnb, etsy, groupon, dropbox, etc. At the end of the day these are all products that are defined and tuned (often mathematically) to better represent their users. Startups pivot as desires change, and new products emerge in places previously thought impossible.
Where does this leave a meagre music festival? If they want to take their work to the next level, not just the level that SXSW has achieved, but even further, to a state where they are both a unique reflection of and tool of empowerment for the greater Toronto area, where artists large and small travel from around the world to participate in a unique cultural and economic exchange of ideas; if NXNE wants to think beyond each year, beyond musical and digital programming, and far into a sustainable future of immense growth and respect, they need to immediately address their foundational misunderstanding of the role the local arts community plays. Once they become a tool of empowerment for this community, there is no limit.
Local and emerging arts communities are extremely agile, performant, and strong; they have immense vision, greater flexibility, better representation, and are able to achieve more with fewer resources. They are intimately connected to their communities and able to mobilize them toward massive projects. By becoming a platform for the empowerment of these organizations, there is no limit on the possibilities for NXNE (or any organization acting in this capacity).
Unfortunately, given NXNE's limited vision, structure, and status within the community, I do not think this is possible for them.
So What?
It's time we take charge of our community, of our technology, and our vision. It's time we smash the shackles placed on us by the likes of NXNE, CMW, and public arts funding. It's time we start building tools to empower our community and end the abusive terms demanded by these organizations; organizations that exist to empower the privileged, maintain the status quo, and serve the interest of individuals instead of our community. It's time for us to start building organizations that think years into the future; a future beyond capitalism; a future capable of empowering individuals and communities to bring humanity to the furthest possible heights.
What Next?
(the ideas below are personal and do not necessarily represent those of Weird Canada or Wyrd Arts Initiatives)
I am calling for a massive artist boycott of both NXNE and CMW in 2015.
I am calling on artists and the community to program music and art events during those festivals under a common banner.
I am calling for the overhaul of all public art funding, beginning with the introduction of more resource-based funding.
I am calling on the City of Toronto to establish a permanent all-ages venue, freely bookable on a combination of first-come-first-serve, lottery, and fixed-use basis, focused on usage by under-age bands and promoters.
I am calling on the City of Toronto to fund educational programs in said venue.
I am calling on the City of Toronto to fund and establish an annual music and arts festival in and around said all-ages venue.
I am sick and tired of not thinking big; I am sick and tired of asking for less in an effort to meet the terms of others. This is our life and we get one chance to live it.
Act now.
Want to Fix MySpace? Buy Bandcamp
I was thinking about MySpace yesterday and what I would do if I was given the opportunity to rejuvenate it (as a thought experiment). It's easy to point out the failings in an idea, but nontrivial to actually fix one. So, here is my idea:
Buy Bandcamp. Don't replace it. Continue running it as it is. Now, for every Bandcamp user, MySpace exists as a social layer (with music powered by Bandcamp). You register for Bandcamp, upload your songs, and the Bandcamp experience remains the same. However, you now also have a MySpace page, where you can get in touch with other artists, network, book shows, discover, etc. In the past, this was the only useful part of MySpace from an artist's perspective.
Next, harmonize data between MySpace and Bandcamp and build a very powerful recommendation and discovery engine using Bandcamp's transactional data and MySpace's artist-artist graph. This recommendation engine can be used to power discovery on Bandcamp and MySpace, making the non-artist user experience great on both sites.
This marriage would infuse MySpace with artists while giving Bandcamp artists the feature artists most desperately need: social connectivity.
Bandcamp is a really great service. We don't need or want to replace it. They've got a clean and minimal interface, support payments, and do well for artists and users. However, they don't have a social component and, as of the last time I checked, don't plan on adding one. For those who remember what being in an indie band (or a fan of indie music) was like when MySpace was around, it was amazing. Sure, the UI sucked, but the number of tours and connections made through that platform was marvellous. Powered by the simple Friends, Favourites, and Wall comments, MySpace was a simple-to-use discovery paradise. And now that social cohesion is gone.
Through Weird Canada, which existed during and after MySpace's domination, I experienced the shift in social cohesion first-hand. I've seen the decline in shows and networking as bands have greater difficulty discovering bands in other cities. We desperately need a social network geared towards artists, and sometimes I wonder if the time when MySpace did just that is gone. But then again, Bandcamp adoption is quite high, so perhaps there's hope!
PS - you may ask: why not SoundCloud? I love SoundCloud (both as a user and as an engineer). However, it's a large product with a (unique) social layer that misses some of the key needs of traditional "bands." It could work, but Bandcamp's simplicity and focus on the traditional artist -> album -> track model makes it an easier platform to integrate with (IMO).
PPS - implicit in my idea is that popularity is not driven by high-profile celebrity users, but by "regular" creative human beings. MySpace's current tactic of empowering and promoting celebrities is not likely to win the hearts and minds of the artists and users it needs to actually use the site.
hello
I'm going to try to reboot this for the fifth time. I'm finally homing in on using tumblr as my main authoring tool (with markdown support).
Just testing out the code blocks here:
forkLife :: IO Human -> IO ThreadId
And now for an inline style.
Thanks for your patience.