LXD and Ansible for staging and development

Today I'll discuss a technique I use at my current gig to simulate our production environment using LXD.

LXD is a container hypervisor by Canonical. It's a bit experimental, but feels a lot more attractive than Docker. It's a system based on LXC, which allows users to run unprivileged containers. Unlike Docker, the LXC philosophy is that an entire OS should be able to run in it, including the init system. This is similar to the systemd-nspawn philosophy, which we also evaluated, but because we are on Ubuntu LTS we still use upstart, so that was not an option.

LXC is a bit raw: you simply run rootfs's as containers, and that's about all it gives you. On top of that it creates a bridged network and allows you to assign IPs and domain names to containers with dnsmasq, either via DHCP or with fixed IPs.

Because we wanted a private network for our containers that simulates our production environment, we didn't opt for Docker: we couldn't find a way to configure the network as easily as with LXC, where it's just a matter of writing some dnsmasq config files that most sysadmins are already familiar with.

Because the containers in LXD are just plain Ubuntu cloud instances (with upstart and everything), we can easily provision them with Ansible, which we already use for our production environment. It's a simple case of creating a new inventory file in Ansible and we're all set.

Importing an image

Before we start, we should make sure your user is in the lxd group, and then activate the new group membership:

$ newgrp lxd

Let's start with some basics: how do we create a container? We can download images from the image repository using the lxd-images command, or we can import an existing base image into LXD using lxc image import.

At work we use a predefined base image, which is simply a tarball with a rootfs and some cloud-config template files. The cloud-config template files are used, for example, for setting the hostname of the container.

templates/
├── cloud-init-meta.tpl
├── cloud-init-user.tpl
├── cloud-init-vendor.tpl
└── upstart-override.tpl

rootfs/
├── bin
├── boot
├── dev
├── etc
├── home
├── lib
├── lib64
├── lost+found
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin
├── srv
├── sys
├── tmp
├── usr
└── var

To import a base image we simply do:

 $ lxc image import base.tar.gz --alias=base

Or if you don't have a base image at hand, you can download one:

$ lxd-images import ubuntu --alias=base

Creating a container

Well that's super easy!

$ lxc launch base my-container

And we're in!

$ lxc exec my-container bash

You should have network connectivity now and be able to install packages using apt-get. You can set up users, add ssh keys, or whatever. But of course, we want to automate this; this is where Ansible comes into play. Before we get to that, though, we need to do some network configuration.


Make sure that both dnsmasq and lxc-net services are running:

# service dnsmasq restart
# service lxc-net restart

Now edit /etc/default/lxc-net.

Make sure that the following line is uncommented; the lxc-net daemon will then automatically create a bridged network for your containers.
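On a stock Ubuntu install, the line in question looks like this (check it against your own /etc/default/lxc-net):

```shell
USE_LXC_BRIDGE="true"
```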


Next in the file is the configuration of the private network for your containers. You can leave the values as they are or change the network. We decided to use a dedicated subnet for our containers.
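For reference, the stock Ubuntu defaults for this block look roughly like the following; our actual subnet differs, so treat these values purely as an illustration:

```shell
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
```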


Furthermore make sure that LXC_DOMAIN="lxc" is uncommented. It signals dnsmasq to assign containers <containername>.lxc domain names.

To actually make dnsmasq do this, we'll have to edit the dnsmasq config in /etc/dnsmasq.d/lxc and set the server to whatever you set in LXC_ADDR.
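Assuming the default LXC_ADDR of 10.0.3.1, the file would contain something like this (an illustrative sketch, not our exact config):

```
# /etc/dnsmasq.d/lxc (illustration; use your own LXC_ADDR)
server=/lxc/10.0.3.1
bind-interfaces
except-interface=lxcbr0
```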


Also, whenever you edit the dnsmasq or lxc-net configs, make sure to restart the services to register the changes.

$ service lxc-net restart
$ service dnsmasq restart

Now if we restart our container, we should be able to connect to it!

$ lxc restart my-container
$ ping my-container.lxc

Also, if you added your ssh key to a user on the container, you should be able to ssh into it as well. (ssh is pre-installed on the Ubuntu base image, and the ssh keys of all our developers are already in our base image.)

$ ssh dev@my-container.lxc

Nitpick with launching new containers

Currently, there is a little bug in LXD that causes containers not to register with dnsmasq on first launch. So if you launch a new container, make sure to restart it immediately to make it register a DNS name.

$ lxc launch base new-container && lxc restart new-container

Provisioning with Ansible

Ansible provisioning is really easy now. Create a container for each server you want to run in your development environment:

$ lxc launch base frontend && lxc restart frontend
$ lxc launch base postgres && lxc restart postgres
$ lxc launch base workers  && lxc restart workers

And make a new inventory file, for example named dev.
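A sketch of what such an inventory could look like; the hostnames follow the <containername>.lxc scheme from above, and the group names are illustrative:

```ini
[frontend]
frontend.lxc

[postgres]
postgres.lxc

[workers]
workers.lxc
```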


Now simply run your ansible playbook:

$ ansible-playbook --ask-sudo-pass ./provision.yml -i ./inventory/dev -e development=true

Your containers should be provisioned now!

Staging server

At work we also use this technique to run our staging server. We have a staging server running at staging.internal which has Ansible and LXC installed. We log into it with SSH agent forwarding; since the base image has the public keys of our development machines, with agent forwarding we can provision the containers from the staging server.

ssh -A dev@staging.internal  

Once we're in, we can simply start new containers and provision them with ansible as shown above.


Sometimes LXD can be a bit grumpy (it's not fully stable yet). It might not always succeed in claiming a domain name. In that case I usually first try to restart the container with lxc restart <containername>, and if that doesn't work I restart both dnsmasq and lxc-net just to be sure.

http://arianvp.me/lxd-and-ansible-for-staging-and-development/ (Tue, 26 Jan 2016 09:31:57 GMT)
Why functional programming?

As developers we want to write robust and correct code. This is our ultimate goal.

To reach this goal, we write tests that assure us of the correct behaviour of our code.

To make code testable, we define code in isolated units, that are also tested in isolation.

Dependencies between units should solely be described through dependency injection. A unit declares which units it depends on. If unit C depends on units A and B, and A and B are tested to function correctly, then when we test unit C we can be assured that A and B won't suddenly cause tests to fail, as we know how A and B behave. This way of thinking is what we call unit testing, and it is one of the most important parts of Test Driven Development.

So what if we limited a language to only define units that depend solely on the behaviour of other well-functioning units, so that a unit's output (its behaviour) depends solely on its input?

Well, in mathematics we have a concept called functions. Here's one: $f(x) = 3x$. I'm sure you've used them in high school. The behaviour of a function is fully determined by its input: $f(5)$ will always yield $15$, no matter how many times you call it. If we compose functions, then one function depends on the behaviour of another function. Say we have $g(x) = x+2$; then $f(g(x))$ is $f$ with the behaviour of $g$ as its input. And because we know the behaviour of $g$ is well defined, we know that $f(g(x))$ is well defined as well: $f(g(5))$ will always yield $21$, no matter how many times you call it.
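These equations can be checked with a throwaway Haskell snippet (my own sketch of the $f$ and $g$ from the text):

```haskell
-- f and g from the text: their behaviour depends only on their input.
f :: Int -> Int
f x = 3 * x

g :: Int -> Int
g x = x + 2

main :: IO ()
main = print (f 5, f (g 5))  -- (15,21)
```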

In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output.

So basically a function is exactly the definition of a unit and nothing more. It encapsulates exactly what we expect a unit to be. And this forms the basis of functional programming.

In functional programming, our programs are solely composed of functions, and nothing else. And this has a very powerful consequence. It means our code is unit-testable by default, because functions are exactly what makes a unit a unit.

Well-testable and well-tested software is easier to refactor, extend, and to trace back bugs in. This makes functional languages a great tool for robust software with an easy development cycle.

http://arianvp.me/the-functional-programming-mantra/ (Thu, 23 Jul 2015 18:36:43 GMT)
Joylent - Day 1

So I've decided to not eat anything for a week, and to live solely on Joylent.

Day one was heavy. I had major headaches and felt fatigued, but that might have had more to do with the party I had the day before.

Anyhow, I decided to take the strawberry shake today. The flavour was subtle and nice. The shake tastes like pancake batter with a slight strawberry touch. The texture is soft but thick, and always feels warm, even if you cool it in the fridge, which is really weird... I guess it's a texture your body doesn't associate with 'cold'.

I took a bottle with me and drank it over an hour or two. Don't do this: you just end up being hungry all the time. In the afternoon I just ad-fundumed the next batch, and that was a lot more pleasant.

Around 3 I got a real craving for food, So I ate a slice of bread. It was really good.

In the evening I stole a slice of pizza from my friend. I really love dinner, and it's hard not to have dinner... That was a real struggle.

Later that night I went out, and I noticed I got tipsy very quickly. After 2 drinks I already felt light-headed and had some headaches, so I stopped drinking.

Later in the night I had a great durum doner. I just craved food.

So yeah, day one was okay, but a bit of a failure. I'm not sure I can do without food. But Soylent is great for the morning, as I usually don't have time to make breakfast; it's a quick way to get something into my system to get the engine running.

http://arianvp.me/joylent-day-1/ (Fri, 12 Jun 2015 13:30:01 GMT)
Writing interpreters and compilers - An introduction to folds and algebras

So let's say we are building our own programming language, because we're cool like that.

For now our language will support simple arithmetic.

So let's define an abstract syntax tree for it.

data Expr = Val Int -- an expression is a value
          | Expr :+: Expr -- Or two expressions added together
          | Expr :-: Expr -- Or two expressions subtracted

We can now formulate abstract syntax trees:

expr = Val 3 :+: Val 4
expr2 = expr :-: Val 2
expr3 = Val 3 :+: (Val 2 :-: Val 4)

Now we can create an interpreter that evaluates our abstract syntax trees
using direct recursion:

interpret :: Expr -> Int

-- interpreting a value is simply the value itself
interpret (Val x) = x

-- interpreting addition is interpreting the left- and right-hand sides and
-- adding them together
interpret (x :+: y) = interpret x + interpret y

-- similarly for subtraction
interpret (x :-: y) = interpret x - interpret y


> interpret expr
> interpret expr2
> interpret expr3

Interpreters are cool and all, but I heard compilers are way cooler. Say we have a simple stack machine that supports the following instructions:

data Instr = PUSH Int -- pushes an integer onto the stack
           | ADD      -- pops two integers from the stack, adds them and
                      -- pushes the result
           | SUB      -- similarly, but subtracts

Then a compiler is simply:

compile :: Expr -> [Instr]

compile (Val x)   = [PUSH x]
compile (x :+: y) = compile x ++ compile y ++ [ADD]
compile (x :-: y) = compile x ++ compile y ++ [SUB]


> compile expr
> compile expr2
> compile expr3
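To convince ourselves the compiler agrees with the interpreter, we can give the stack machine a tiny evaluator. The following self-contained sketch pulls together the definitions above (with the [PUSH x] fix applied); run and its stack discipline are my own addition, not from the post:

```haskell
-- Definitions from the post, condensed.
data Expr = Val Int | Expr :+: Expr | Expr :-: Expr

data Instr = PUSH Int | ADD | SUB

interpret :: Expr -> Int
interpret (Val x)   = x
interpret (x :+: y) = interpret x + interpret y
interpret (x :-: y) = interpret x - interpret y

compile :: Expr -> [Instr]
compile (Val x)   = [PUSH x]
compile (x :+: y) = compile x ++ compile y ++ [ADD]
compile (x :-: y) = compile x ++ compile y ++ [SUB]

-- run executes an instruction list against a stack; the top of the
-- final stack is the result.
run :: [Instr] -> Int
run = head . foldl step []
  where
    step stack (PUSH x)  = x : stack
    step (y:x:stack) ADD = (x + y) : stack
    step (y:x:stack) SUB = (x - y) : stack
    step _ _             = error "stack underflow"

main :: IO ()
main =
  let e = Val 3 :+: (Val 2 :-: Val 4)
  in print (interpret e, run (compile e))  -- both evaluate to 1
```

For any expression e, run (compile e) should equal interpret e.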

Anyone who has worked with lists before knows that we tend to avoid explicit
recursion and favour maps and folds, because they let us reason in terms of reusable building blocks.

The question that arises is: if we can abstract recursion on lists, can we abstract recursion on our custom abstract syntax tree?

The answer is yes.

We begin by defining a corresponding algebra for our AST:

data ExprAlgebra e = ExprAlgebra
  { val :: Int -> e
  , add :: e -> e -> e
  , sub :: e -> e -> e
  }

What an algebra does is encapsulate the evaluation strategies of each component of our AST in a datatype.

For example. An interpreter would look like this:

interpreter :: ExprAlgebra Int
interpreter = ExprAlgebra
  { val = id
  , add = (+)
  , sub = (-)
  }

An algebra by itself isn't very useful. But once you recurse over your abstract syntax tree, you can apply the algebra to it: you're recursing over your abstract syntax tree with a provided evaluation strategy.

foldExpr :: ExprAlgebra a -> Expr -> a
foldExpr alg (Val i)     = (val alg) i
foldExpr alg (e1 :+: e2) = (add alg) (foldExpr alg e1) (foldExpr alg e2)
foldExpr alg (e1 :-: e2) = (sub alg) (foldExpr alg e1) (foldExpr alg e2)

The result is that if you feed our algebra (our evaluation strategy) to this recursion scheme, you get back
an interpreter!

interpret' :: Expr -> Int
interpret' = foldExpr interpreter


> interpret' expr
> interpret' expr2
> interpret' expr3

So what did we gain by splitting up our recursive function into
an evaluation strategy and a recursive part?

Well, we can now write a compiler without writing another recursive function! Just define a new evaluation strategy!

compiler :: ExprAlgebra [Instr]
compiler = ExprAlgebra
  { val = \x -> [PUSH x]
  , add = \x y -> x ++ y ++ [ADD]
  , sub = \x y -> x ++ y ++ [SUB]
  }

See how we can define the function compile' by reusing foldExpr?
We have eliminated quite some code duplication!

compile' :: Expr -> [Instr]
compile' = foldExpr compiler


> compile' expr
> compile' expr2
> compile' expr3

Of course we've made a tradeoff. We no longer have duplicated recursion for operations on the same datatype. But say we now want to build a new language with different features, and thus a different abstract syntax tree. Do we have to write a new fold for that? Or can we automatically derive folds for abstract syntax trees?

The answer is: for simple languages, yes! For less-toy-example languages (let's say C#) the answer is (as far as I know) no.

But that's something for next time. I will then introduce F-algebras and catamorphisms to make our lives even easier. They basically formalize the steps we have taken to transform explicit recursion into a fold, allowing us to easily derive folds for any datatype.
They're really interesting, but as stated they have their limitations, which I will also cover.

http://arianvp.me/writing-interpreters-and-compilers-an-introduction-to-folds-and-algebras/ (Mon, 04 May 2015 10:15:57 GMT)
Lenses and prisms for modular clientside apps

Warning: This is just a mind dump. It's not a coherent story, and it's mostly for me to read back on my thoughts. I'm publishing it in the hope that other people can learn from my thought process.

To recap for those who aren't aware: Hom is a (hypothetical) library for writing (hopefully) blazingly fast clientside web applications using GHCJS. It's inspired by ReactJS, which uses DOM diffing to calculate the optimal way to execute view changes. It basically turns your app into State -> Html. A great fit for the functional paradigm. Because I hate JS as much as you guys, I decided to bring this lovely goodness to Haskell. Luite and I made a start with the ghcjs-vdom library, which is a thin wrapper around the js dom diffing algorithm. I was really enthusiastic, but the project somehow ended up on my shelf of "I'll start working on this project again when I feel like it" projects.

Currently Hom is in the "showerthought" phase when every once in a while during a shower I come up with ways to improve my library.

This week I had to quickly create a dashboard for a TV at my university. I decided to use Elm. Development ended up being a joy, and I shipped the project after a day of elm-hacking.

The joy of working with elm-html was awesome. elm-html has similar goals to Hom, though Elm is a much less powerful language than Haskell.

I suggest reading up on Elm architecture tutorial before continuing.

One of the things that annoyed me during the project is that there was a lot of code duplication, especially a lot of the same pattern matches to delegate actions down to sub-components.

Last night I was reading some lens stuff (one of those things I am trying to grok at the moment) and I started reading about prisms. "First class patterns", they're supposed to be. Well, that sounds like something I could use, right? I see that I keep doing the same kind of pattern matches in different scenarios. A 'pattern' seems to emerge (pun intended). Maybe I can abstract it with prisms? This was the shower thought.

So this morning I started type-hacking and I ended up with some awesome stuff.

I'm gonna assume for now that all of Elm's APIs are accessible in my hypothetical Hom library. All the following code is Haskell.

We're gonna build an app that combines two apps into one. Namely it includes a Facebook and a Twitter widget.

The facebook and twitter widgets are defined as follow:

module Facebook where
import Control.Lens
import VirtualDom
import Signal

data Model = Model
 { _likes :: Int
 , _comments :: [String]
 }

data Action = Like
            | Comment String
            deriving Show

update :: Action -> Model -> Model
update Like = ...
update (Comment s) = ...

action :: Signal Action
action =  -- facebookAPISource  + input events etc

view :: Model -> Html
view m = ... code that renders the facebook widget ...

And Twitter:

module Twitter where
import VirtualDom
import Signal

data Model = Model
 { _status :: String
 }

data Action = Tweet String
            | Favorite
            deriving Show

update :: Action -> Model -> Model
update (Tweet s)  = ...
update (Favorite) = ...

action :: Signal Action
action = -- twitter api and input events etc.

view :: Model -> Html
view m = -- code that renders the widget --

Now we want to combine these widgets. So our new model is going to be the union of those two widgets:

data Model = Model
  { _facebook :: Facebook.Model
  , _twitter  :: Twitter.Model
  }

And the actions are going to be the union of the two widgets' actions:

data Action = FBAction Facebook.Action
            | TAction Twitter.Action
            | Close -- our own action for our own component

And our update function would be a pile of lame pattern matches, growing every time we add a new subcomponent...

update :: Action -> Model -> Model
update Close m =  ... change the state so the app is closed ...
update (FBAction a) m = m { _facebook = Facebook.update a (_facebook m) }
update (TAction a) m = m { _twitter = Twitter.update a (_twitter m) }

Render the views...

view :: Model -> Html
view m =
    div []
        [ Facebook.view (m^.facebook)
        , Twitter.view  (m^.twitter)
        ]

And run the app ...

app :: Signal Html
app = Signal.foldp update initialState action

I don't know why, but I wanted to hack this into something nicer with lenses and prisms.

Now we do:

makePrisms ''Action
makeLenses ''Model

and we can start doing magic!

After reading the Prism documentation and stack overflow ( http://stackoverflow.com/questions/20774475/use-a-parameter-as-pattern-in-haskell) I found a way to easily extend your action handler.

say we have:

update' :: Action -> Model -> Model
update' Close =  .. state to close the app ..

and we want to add the facebook widget to that. We end up doing this:

update :: Action -> Model -> Model
update = update' & outside _FBAction .~ over facebook
                    . Facebook.update

If we want to add a twitter widget to that we can just add it to the chain:

update = update' & outside _FBAction .~ over facebook
                    . Facebook.update
                 & outside _TAction  .~ over twitter 
                     . Twitter.update

We can extract this pattern into a utility function:

You should read it as: if we have a prism that, given an action, might give us a local action, apply the local action to the global state by lensing into the part of the global state that is our local state and updating that.

(I simplified the types a lot. Apparently this works for any Profunctor, not just (->). Though I'm not sure what that means. Heck, I don't even know what a Profunctor is. This lens library sure is complex.)

updateWith :: APrism' action localAction
           -> Setting' (->) model localModel
           -> (localAction -> localModel -> localModel)
           -> (action -> model -> model)
           -> (action -> model -> model)
updateWith action lens update =
    outside action .~ over lens . update

So our code becomes:

withWidgets :: (Action -> Model -> Model) -> (Action -> Model -> Model)
withWidgets = updateWith _FBAction facebook Facebook.update
            . updateWith _TAction  twitter  Twitter.update

update = withWidgets update'

So now we can easily add as many widgets as we want using function composition! Nice! Because the local states of the widgets don't overlap, the order in which we compose these updateWiths doesn't matter: widget actions are commutative.
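To make the shape of updateWith concrete without pulling in the lens library, here is a hand-rolled, lens-free sketch of the same idea; all names here are my own inventions for illustration, with a matcher standing in for the prism (outside) and a modifier standing in for the lens (over):

```haskell
data FBAct = Like
data TWAct = Fav
data Action = FB FBAct | TW TWAct | Close

data Model = Model { likes :: Int, favs :: Int, open :: Bool }
  deriving (Eq, Show)

-- updateWith': match plays the role of the prism, modify the role of
-- the lens. When the action matches, run the local update on the local
-- state; otherwise fall through to the existing handler.
updateWith'
  :: (action -> Maybe local)          -- "prism": try to extract a local action
  -> ((lm -> lm) -> model -> model)   -- "lens": modify the local state
  -> (local -> lm -> lm)              -- local update function
  -> (action -> model -> model)      -- handler to extend
  -> (action -> model -> model)
updateWith' match modify local fallback a m =
  case match a of
    Just la -> modify (local la) m
    Nothing -> fallback a m

baseUpdate :: Action -> Model -> Model
baseUpdate Close m = m { open = False }
baseUpdate _     m = m

update :: Action -> Model -> Model
update = updateWith' matchFB overLikes fbUpdate
       . updateWith' matchTW overFavs  twUpdate
       $ baseUpdate
  where
    matchFB (FB a) = Just a
    matchFB _      = Nothing
    matchTW (TW a) = Just a
    matchTW _      = Nothing
    overLikes f m  = m { likes = f (likes m) }
    overFavs  f m  = m { favs  = f (favs m) }
    fbUpdate Like  = (+ 1)
    twUpdate Fav   = (+ 1)

main :: IO ()
main = print (update (FB Like) (Model 0 0 True))
```

The real lens version buys you the match and modify functions for free via makePrisms and makeLenses, but the dispatch logic is exactly this.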

Okay, so we've got state updating covered. How do we merge the subcomponents' action signals into the main component's signal? Let's see how we used to do it in Elm.

action :: Signal Action
action =  FBAction <$> Facebook.action
      <|> TAction <$> Twitter.action

Actually this is quite elegant. But I want to do it with prisms, because heck, why not.

With prisms we end up with the following code:

action :: Signal Action
action =  review _FBAction <$> Facebook.action
      <|> review _TAction  <$> Twitter.action

We can extract a utility function:

mountAction :: Functor f => AReview t a -> f a -> f t
mountAction r = (review r <$>)

action =  mountAction _FBAction Facebook.action
      <|> mountAction _TAction Twitter.action

This type is super general! I am going to dub it liftReview. It's simply review lifted into a functor.

Anyhow, here my type-hacking endeavours stop. It was nice and I learned a lot about lenses. Haskell on the clientside web is awesome. I need to build this hypothetical library. It's gonna be sick.


http://arianvp.me/lenses-and-prisms-for-modular-clientside-apps/ (Sun, 22 Feb 2015 18:51:47 GMT)
Hacking together GHCJS support for Servant

Source Demo

So tonight I wanted to hack something together in 3 hours and I did!

I wanted servant to work in the browser.

Servant allows you to define routes at the typelevel and it automagically creates a whole freakin rest api from that. Yay for haskell.

Anyhow. Having type-safe communication between server and browser just sounded awesome. So I had to hack some prototype together.

Hacking it

Luckily the library maintainers did a lot of work for me already. They split up the servant package into servant and servant-server at my request, with servant-server containing all the server-specific stuff.

servant compiled perfectly under GHCJS. Neat.

Okay, so now we need not only a way to write serverside code; we also need clientside functions. servant-client to the rescue! Let's try to compile that.

Darnit, the http-client dependency can't be compiled by GHCJS, probably due to it being dependent on network. Okay, so let's add a conditional to the Cabal file so that http-client is only depended on when the compiler is not GHCJS:

if !impl(ghcjs)
  build-depends: http-client

Ok nice, that works. Now we get a bunch of errors because HTTP.Client is not in scope. Kinda makes sense. Okay, so let's
just use the C preprocessor to check if the GHCJS compiler is present, and if so, don't import that module anywhere it's used. Do the same for any code that uses the module.
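The guard looks roughly like this; ghcjs_HOST_OS is the CPP macro defined when compiling with GHCJS, and this is an illustrative sketch rather than the actual servant-client patch:

```haskell
{-# LANGUAGE CPP #-}
-- Sketch of the CPP trick described above: one code path for GHCJS,
-- another for plain GHC. Under GHC the http-client branch is kept.

#ifdef ghcjs_HOST_OS
backend :: String
backend = "XMLHttpRequest via the JavaScript FFI"
#else
backend :: String
backend = "http-client"
#endif

main :: IO ()
main = putStrLn backend
```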

So now only one function is complaining that it lacks an accompanying binding. Neat! One function should be doable to implement.

Instead of using http-client, we use the JavaScript FFI to call the XMLHttpRequest API to make HTTP requests. A little bit of hacking and marshalling later, I discover I can't marshal lazy ByteStrings from and to JavaScript... No time left, I need to go to bed! Okay, let's just add another CPP #ifdef and import the strict version if we use GHCJS... That seems to work! Except that some external function expects a lazy ByteString. Okay, let's just convert the strict ByteString to a lazy one for that specific function call. Super hack.

Okay so now everything compiles. It's super hacky. but it compiles...

So I set up a little test environment ... and.... IT WORKS! WOOHOO :)

I've only tested the GET HTTP method, but I don't see why the others wouldn't work. Also, I haven't done any form of exception handling, but that's something for later as I need to go sleep now. I'm glad this works!

Check out the source and the demo:
Source Demo

And the test setup!


data Book = Book { title :: String
                 , author :: String
                 } deriving (Generic, Show)

instance FromJSON Book
instance ToJSON Book

type MyApi = "books" :> Get [Book] :<|> "static" :> Raw


bookApi :: Proxy MyApi
bookApi = Proxy

getBooks :: EitherT (Int, String) IO [Book]
getBooks = return [Book "yo" "yo"]

server = getBooks :<|> serveDirectory "static"

main = Network.Wai.Handler.Warp.run 3000 (serve bookApi server)


getAllBooks :: BaseUrl -> EitherT String IO [Book]
(getAllBooks :<|> raw) = client bookApi

main = runEitherT $ do
  case parseBaseUrl "http://test.arianvp.me" of
    Left s -> liftIO $ print s
    Right u -> do
      books <- getAllBooks u
      liftIO . appendToBody . fromString . show $ books
http://arianvp.me/hacking-together-ghcjs-support-for-servant/ (Tue, 23 Dec 2014 01:00:37 GMT)
Hom - React.JS for Haskell!

Reddit discussion

Github repo

I've recently discovered Om which I think is a beautiful example of why FP matters in modern day computing.

Basically what Om showed is that by using immutable data for state in React.js, one can easily outperform similar apps written in OOP style. The reason it is so much faster is that shouldComponentUpdate only has to check for referential equality instead of deep equality, which is a much more efficient way to trigger updates.

Om uses constructs similar to lenses and zippers to easily modify state (they call them cursors) which is extremely cool.

Haskell has a strong lens implementation (we kinda coined that term, didn't we? ;-)) and that's why I started wondering whether I could port Om (or React) to Haskell.

That way I can use my favorite language to build robust and super fast web applications. A dream come true.

So yeah. Here we go. it's time to build Hom! A haskell version of Om! (I'm kind of excited :-))

I'm considering several options for Haskell->JS compilation. The following pros and cons are constructed from first impressions I got from skimming documentation, so they might be incorrect. Please correct me if one of the pros or cons is wrong!


GHCJS

  • Fully compatible with all GHC packages
  • Straightforward FFI
  • Very large runtime system
  • Packages draw in a lot of dependencies, creating huge generated code


Fay

  • Small runtime system
  • Small generated code
  • Pretty darn fast
  • Great FFI
  • No typeclasses, so no lenses. Kinda defeats the purpose of choosing Haskell in the first place


Haste

  • Compatible with most GHC-compatible packages
  • Smaller runtime system than GHCJS
  • Good concurrency constructs
  • Not sure how good the FFI is. What I've seen so far looked a bit clunky compared to UHCJS, but maybe I need to get a bit more familiar with it


UHC (UHCJS)

  • Very extensive JavaScript FFI, probably the best I've seen
  • Maintained by my university (not really a pro, just exciting)
  • Doesn't support GHC; UHC is a different implementation. I think it supports up to Haskell2010 though

I think I will go for Haste for now, though UHCJS sounds really attractive as well, to be honest. Fay is just a no-go, because I want the sexy lens library!

Next time I'm gonna work out some API design decisions etc. The problem is I really don't have a lot of time to start coding yet because of university work. I really want to start building this library because I want it so bad!

Any feedback is appreciated. Also if someone is already working on something similar to this, let me know.

http://arianvp.me/hom-react-js-for-haskell/ (Tue, 20 May 2014 13:01:13 GMT)
Fun with the halting problem

Let's start by proving the following result.

Result 1

There is no program bool isCodeDead(Code j, Code c) that can check whether a program j contains dead code c.

What we mean by dead code is the following:

if (true) { Console.WriteLine("Hey there"); }
Console.WriteLine("I will never be reached. I am dead code");

Anyhow, back to the proof...

Suppose there does exist a program bool isCodeDead(Code j, Code c) that can check whether a program j contains dead code c.

In other words. Something like this:

bool isCodeDead(Code j, Code c)
    ... // some magic here that tells if  c is dead in j

Now let k be an arbitrary program for which we want to know whether it halts or not. We start by getting rid of all occurrences of c in k, producing k'. We can be sure that c isn't dead code in k', because c is absent from k'.

Now lets consider the following pseudocode:

proc g:
    k'   // run k' to completion
    c    // only reached if k' halts

It should be clear that c will only be reached when k' halts. Since we assumed that we have a program bool isCodeDead(Code j, Code c) that checks whether a program contains dead code, we can tell whether g has dead code. But that also means we can tell whether k' halts. That leads to a contradiction, because the halting problem tells us we cannot tell whether an arbitrary program halts or not.

Thus now we know our assumption was wrong and that the original result must hold by reductio ad absurdum. $\blacksquare$

Now let's use this result to prove an interesting corollary.

Corollary 1

There is no program List<Code> markDeadCode(Code c) that can mark dead code in an arbitrary program.

The following proof is very similar to the previous one. Let's assume the contrary: that there is a program List<Code> markDeadCode(Code c) that can mark dead code in an arbitrary program.

The source code of such a program would be as follows:

// returns a list of codes that are dead in c.
List<Code> markDeadCode(Code c)
    ... // extreme Coding skills that can mark dead code 

Now we can write the following program:

bool isCodeDead(Code j, Code c)
    List<Code> deadCodes = markDeadCode(j);
    foreach (Code deadCode in deadCodes)
        if (deadCode == c)
            return true;
    return false;

Wait, hold on a second!? Hadn't we proven in Result 1 that no program bool isCodeDead(Code j, Code c) could ever exist? Exactly: we have found a contradiction, and thus there is no way a program List<Code> markDeadCode(Code c) could ever exist, by reductio ad absurdum.


I found these proofs rather beautiful and wanted to share them with you :-)

http://arianvp.me/fun-with-the-halting-problem/ (Sun, 19 Jan 2014 23:20:50 GMT)
Every boy loves a girl

Every boy loves a girl can actually mean two things:

  • Every boy has a girl that he loves
  • There is a girl which every boy loves

We can use predicate logic to remove the ambiguity:

  • B is the unary predicate letter that determines whether one is a boy.
  • G is the unary predicate letter that determines whether one is a girl.
  • L is the binary predicate letter that determines whether one loves another.

Now when we want to say "Every boy has a girl that he loves", we say
$$ \forall x (Bx \Rightarrow \exists y (Gy \land Lxy))$$.

And if we want to say "There is a girl which every boy loves", we say
$$ \exists x (Gx \land \forall y (By \Rightarrow Lyx))$$.

By using predicate logic, we have removed every form of ambiguity.
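On a finite model we can even check the two readings mechanically. Here is a small Haskell sketch of my own; the boys, girls, and the loves relation are made-up toy data:

```haskell
-- A toy finite model: two boys, two girls, and a loves relation.
boys, girls :: [String]
boys  = ["bob", "bill"]
girls = ["gwen", "gail"]

loves :: String -> String -> Bool
loves "bob"  "gwen" = True
loves "bill" "gail" = True
loves _      _      = False

-- ∀x (Bx ⇒ ∃y (Gy ∧ Lxy)): every boy has a girl that he loves
everyBoyLovesSomeGirl :: Bool
everyBoyLovesSomeGirl = all (\b -> any (\g -> loves b g) girls) boys

-- ∃x (Gx ∧ ∀y (By ⇒ Lyx)): there is a girl which every boy loves
someGirlEveryBoyLoves :: Bool
someGirlEveryBoyLoves = any (\g -> all (\b -> loves b g) boys) girls
```

In this model the first reading is true and the second is false, which shows the two formulas really do say different things.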

http://arianvp.me/every-boy-loves-a-girl/1be73e74-ff0b-4612-8541-93fb223c7d83Sat, 16 Nov 2013 17:11:22 GMT
<![CDATA[Proofs about infinite series]]>Today we will discuss a proof about infinite series.

Result to prove

The infinite series $$\sum^{\infty}_{k=1}\dfrac{1}{k\left(k+1\right)}$$ converges to 1.

First, we consider the sequence $\left\lbrace s_{n } \right\rbrace$ of partial sums for this series. Since
$\sum^{\infty}_{k=1}\dfrac{1}{k\left(k+1\right)} = \dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3} + \dfrac{1}{3\cdot 4} + \ldots$, it follows that $s_1 = \dfrac{1}{1\cdot 2} = \dfrac{1}{2}, s_2 = \dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3} = \dfrac{1}{2} + \dfrac{1}{6} = \dfrac{2}{3},$ and $s_3 = \dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3} + \dfrac{1}{3\cdot 4} = \dfrac{1}{2} + \dfrac{1}{6} + \dfrac{1}{12} = \dfrac{3}{4}$.

Based on these three terms, it appears that $s_n = \dfrac{n}{n+1}$ for every positive integer $n$. We prove that this is indeed the case.
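Before proving it, we can sanity-check the conjectured pattern numerically. The sketch below is my own illustration, not part of the proof:

```haskell
-- Partial sum s_n of the series sum 1/(k(k+1)), computed term by term.
partialSum :: Int -> Double
partialSum n = sum [ 1 / fromIntegral (k * (k + 1)) | k <- [1 .. n] ]

-- The conjectured closed form n/(n+1).
closedForm :: Int -> Double
closedForm n = fromIntegral n / fromIntegral (n + 1)
```

For example, `partialSum 3` agrees with `closedForm 3` (both $\frac{3}{4}$), and `partialSum 1000` is already within $10^{-3}$ of 1.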


Lemma 1

For every positive integer $n$

$$s_n = \dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3} + \dfrac{1}{3\cdot 4} + \ldots + \dfrac{1}{n\left( n + 1 \right)} = \dfrac{n}{n+1} $$

Proof of Lemma 1

We proceed by induction. For $n=1$ we have $s_1 = \dfrac{1}{1\cdot 2} = \dfrac{1}{1+1}$ and the result holds.

Assume that $s_k = \dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3} + \dfrac{1}{3\cdot 4} + \ldots + \dfrac{1}{k\left( k + 1 \right)} = \dfrac{k}{k+1} $, where $k$ is a positive integer. We show that
$s_{k+1} = \dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3} + \dfrac{1}{3\cdot 4} + \ldots + \dfrac{1}{\left(k+1 \right)\left( k + 2 \right)} = \dfrac{k+1}{k+2} $

Observe that $s_{k+1} = \left[ \dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3} + \dfrac{1}{3\cdot 4} + \ldots + \dfrac{1}{k\left( k + 1 \right)} \right] + \dfrac{1}{\left(k+1 \right)\left( k + 2 \right)}$
$= \dfrac{k}{k+1}+\dfrac{1}{\left(k+1 \right)\left( k + 2 \right)} $ $= \dfrac{k(k+2)+1}{(k+1)(k+2)}=\dfrac{k^2+2k+1}{(k+1)(k+2)}$ $= \dfrac{(k+1)^2}{(k+1)(k+2)} = \dfrac{k+1}{k+2}$

By the Principle of Mathematical Induction, $s_n = \dfrac{n}{n+1}$ for every positive integer $n$.


Lemma 2

$$\lim_{n\rightarrow \infty} \dfrac{n}{n+1} = 1$$

Recall that $\lim_{n\to \infty} s_n = L \equiv (\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall n > N) : \left|s_n - L \right| < \epsilon$

Thus for a given $\epsilon > 0$ we are required to find a positive integer $N$ such that if $n > N$, then $\left|\dfrac{n}{n+1}-1\right| < \epsilon$. Now $\left|\dfrac{n}{n+1}-1\right|=\left|\dfrac{n-n-1}{n+1}\right|=\left|\dfrac{-1}{n+1}\right|=\dfrac{1}{n+1}$.

The inequality $\dfrac{1}{n+1} < \epsilon$ is equivalent to $n+1 > \dfrac{1}{\epsilon}$, which in turn is equivalent to $n > \dfrac{1}{\epsilon} - 1$. If $n > \dfrac{1}{\epsilon}$ then $n > \dfrac{1}{\epsilon} - 1$. We can now present a proof of this lemma.


Proof of Lemma 2

Let $\epsilon > 0$. Choose $N=\left\lceil\dfrac{1}{\epsilon}\right\rceil$ and let $n>N$. Then $n > \dfrac{1}{\epsilon} > \dfrac{1}{\epsilon} - 1$, so $n+1 > \dfrac{1}{\epsilon}$ and $\dfrac{1}{n+1} < \epsilon$. Hence $\left|\dfrac{n}{n+1}-1\right| = \left|\dfrac{-1}{n+1}\right| = \dfrac{1}{n+1} < \epsilon$.
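The choice $N=\left\lceil\dfrac{1}{\epsilon}\right\rceil$ can also be checked mechanically. A small sketch of my own, purely for illustration:

```haskell
-- For a given eps, Lemma 2 picks N = ceiling (1/eps).
bigN :: Double -> Int
bigN eps = ceiling (1 / eps)

-- |n/(n+1) - 1| = 1/(n+1); check that it is below eps for a given n.
withinEps :: Double -> Int -> Bool
withinEps eps n = abs (fromIntegral n / fromIntegral (n + 1) - 1) < eps
```

For $\epsilon = 0.01$ this gives $N = 100$, and every $n > N$ indeed satisfies the bound.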


Recall our original result to prove.

The infinite series $$\sum^{\infty}_{k=1}\dfrac{1}{k\left(k+1\right)}$$ converges to 1.

Proof of the result

The $n\mathrm{th}$ term of the sequence $\left\lbrace{s_n}\right\rbrace$ of partial sums of the series $\sum^{\infty}_{k=1}\dfrac{1}{k\left(k+1\right)}$ is $s_n = \dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3} + \dfrac{1}{3\cdot 4} + \ldots + \dfrac{1}{n(n+1)}$.

By Lemma 1, $s_n = \dfrac{1}{1\cdot 2} + \dfrac{1}{2\cdot 3} + \dfrac{1}{3\cdot 4} + \ldots + \dfrac{1}{n(n+1)} = \dfrac{n}{n+1}$. By Lemma 2, $\lim_{n\to \infty} \dfrac{n}{n+1} = 1$. Hence $\lim_{n\to \infty} s_n = 1$, and it follows that $\sum^{\infty}_{k=1}\dfrac{1}{k\left(k+1\right)} = 1$.


http://arianvp.me/proofs-about-infinite-series/99a16f87-bd08-40da-8e9a-ef0a355e05fbSun, 03 Nov 2013 12:38:48 GMT
<![CDATA[A Journey in learning Haskell]]>Introduction

Hey there. In this ongoing blog series I will document my journey of progressively building a LISP interpreter as a case study for a 'larger' Haskell program. I'm a beginner Haskell programmer looking to learn a bit more. Follow along and you'll (hopefully) see me evolve into a Haskell pro.

I'm not sure what functionality the LISP will have, but I'm going to add more and more as time progresses.

I hope that the issues I run into, and the ways I will solve them, will be educational for you.

Day 1

Cabal sandboxes

Today I'm going to set up the project. Cabal 1.18 has just been released, and it introduces sandboxing. So our project environment will be a cabal sandbox, to escape dependency hell. In the past we'd use cabal-dev for this, but that package has become obsolete with the introduction of sandboxes.

I've never worked with cabal 1.18 sandboxes before, so it will be fun to experiment with it.
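From what I've read, the basic workflow looks roughly like this (setup commands from cabal-install's sandbox feature; check `cabal sandbox --help` for the details on your version):

```shell
# create a sandbox (a private package database) in the project directory
cabal sandbox init

# install the project's dependencies into the sandbox, not the user package db
cabal install --only-dependencies

# build against the sandboxed packages
cabal build
```

Deleting the sandbox later is just a matter of `cabal sandbox delete`, so experiments stay cheap.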

P.S. I will post a link to the GitHub repository once I've got my development environment all set up.

Design ideas

I've got some ideas lying around from previous failed LISP toy projects. I also want to see this project as an opportunity to learn about new technologies that I haven't worked with before. Here follows a list of things that I might use in the project:

  • Lexical scoping and variable binding with the bound library by Edward Kmett.
  • For parsing we'll use parsec. Most grammars of schemes/lisps are given in Backus–Naur Form, which means the grammar of a lisp is context-free. Context-free means we can restrict ourselves to Applicative code and avoid monads.
  • I might want to do something with F-algebras and typed lambda calculus as an intermediate form. But I only half know what I'm even saying, so I will have to read up on that first.
  • Once I'm there, I want to look at tail-call optimisation. But this seems like premature optimisation for now *padum tsss*
  • I want to use monad transformers in some way. I've never really used them before, and I'd like to learn about them on this journey.
  • While we're at it, learn a thing about proper error handling, as Haskell exceptions seem to be a big mess.
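The "Applicative-only parsing" bullet deserves a tiny illustration. Setting parsec itself aside, here is a minimal hand-rolled parser of my own showing that a context-free grammar like a LISP's can be parsed with Functor, Applicative, and Alternative alone — no monad instance in sight:

```haskell
import Control.Applicative (Alternative (..))

-- A bare-bones parser: a function from input to maybe a result and leftovers.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, s')  <- pf s          -- this is Maybe's monad, not a Parser monad
    (a, s'') <- pa s'
    Just (f a, s'')

instance Alternative Parser where
  empty = Parser (const Nothing)
  Parser p <|> Parser q = Parser $ \s -> maybe (q s) Just (p s)

satisfy :: (Char -> Bool) -> Parser Char
satisfy ok = Parser $ \s -> case s of
  (c:cs) | ok c -> Just (c, cs)
  _             -> Nothing

char :: Char -> Parser Char
char c = satisfy (== c)

data SExp = Atom String | List [SExp] deriving (Eq, Show)

-- Grammar: sexp ::= atom | '(' sexp (' ' sexp)* ')'
sexp :: Parser SExp
sexp = list <|> atom
  where
    atom  = Atom <$> some (satisfy (`notElem` "() "))
    list  = List <$> (char '(' *> inner <* char ')')
    inner = ((:) <$> sexp <*> many (char ' ' *> sexp)) <|> pure []
```

`some` and `many` come for free from Alternative's default definitions, which is exactly the kind of payoff the context-free restriction buys you.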

The Dangers

I'm kind of chaotic and impulsive. This means I have a hard time finishing projects and get bored quickly. So this series might turn into a "How to slay a dragon" tutorial halfway through. You've been warned.

The reward

A better insight into building Haskell applications.

http://arianvp.me/haskell-journey/4618a2c6-e295-4040-9885-50ef5cfab560Mon, 21 Oct 2013 17:39:32 GMT
<![CDATA[The Internet: A Source of Knowledge]]>I owe my passion, my hobby, and my future degree programme to the internet. In his article "Het internet maakt ons slimmer dan ooit" ("The internet makes us smarter than ever"), Geert-Jan Bogaerts asks whether the internet makes us dumb. I say no! One day, while I was playing a computer game, I suddenly wondered how computer games are actually made. I looked it up on the internet and discovered the vibrant world of programming. I was enchanted by the idea that you could make a computer do anything you wanted. Through Google I went looking for all kinds of websites about programming, because I wanted to learn it. Four years later, I now program at a high level and enjoy it enormously. So much, in fact, that I have decided to study computer science to immerse myself even further in this field.

But I am no exception. I have met many people on the internet who share the same passions as I do, and they too usually owe their knowledge to the internet. The internet offers opportunities for growth to the curious surfer: once he is fixated on a topic, he has all the information at his disposal. People can learn literally anything on the internet.

Still, the amount of information on the internet is rather intimidating. One quickly loses one's way on the world wide web. How are people supposed to find out what matters and what does not? There is a real chance that people waste much of their time searching for good information, or even reading the wrong information! You have to work in a fairly structured way to handle that much information efficiently, and not everyone can.

I probably know more people who fail to learn to program well than people who succeed, simply because they do not know where to begin. People often have the attitude that they must never assume that what is on the internet is true. On the contrary, one should start from the assumption that everything on the internet is true, because then one will notice that certain claims recur on many websites while others appear on only a single one. You are much better placed to decide what is nonsense if you assume that everything on the internet is correct.

But the internet is not only a medium for information; it also offers communication with other users, and this plays an essential role in acquiring knowledge. The idea is to seek out people who share the same passion as you, but who know more about it. One learns, as it were, from the master. The master knows which information is educational and which is not, what is too hard for you and what is too easy. So you have to find guides of a sort who will lead you through the tangle of the web.

Everything can be found on the internet, and the internet is full of people who can help you learn. Combine these two facts and the internet becomes a true source of knowledge that enables everyone to enrich themselves.

Nearly all information has been made available on the internet. One must learn to handle that huge amount of information before one can learn anything from the internet. But you do not have to do this without help: there are plenty of people on the internet with more knowledge who are happy to help others learn on the web. In short, the internet is a true source of knowledge for anyone who learns how to use it.

http://arianvp.me/bron-kennis/12c3810a-ae23-4ea5-99a7-f0ce3f23341fTue, 15 Oct 2013 10:00:00 GMT