Tuesday, December 30, 2025

Functional programming in JavaScript (6)

Please take a look at other posts about functional programming in JavaScript:

  1. Part 1 - what Functional Programming is about
  2. Part 2 - Functional Pipelines
  3. Part 3 - the Y Combinator
  4. Part 4 - Monads
  5. Part 5 - Trampoline
  6. Part 6 - Lenses
  7. Part 7 - Church encoding (booleans)
  8. Part 8 - Church encoding (arithmetics)
  9. Part 9 - Church encoding (if, recursion)
  10. Part 10 - Church encoding (lists)

State management is one of the fundamental topics. Consider a state:

let state = {
  user: {
    age: 35,
    address: {
      city: 'Warsaw'
    }
  }
};

When the state changes, the usual way of modifying it would be to just overwrite a part of it:

state.user.address.city = 'Prague';

In the functional world, updating an object in place is considered bad practice. Instead, in a functional approach the state is immutable, which means that instead of a local modification, a new state is created. There are good reasons for that:

  • a completely new state means that there is no chance of race condition issues in a concurrent environment
  • a state change history can be tracked, which allows so-called time travel debugging

Because of this, it is common to encourage immutable state, and if you have ever studied or used React, you have certainly seen something like:

function reducer(state, action) {
  switch (action.type) {
    case 'incremented_age':
      return { ...state, user: { ...state.user, age: state.user.age + 1 } };
    case 'changed_city':
      return { ...state, user: { ...state.user, address: { city: action.new_name } } };
    default:
      throw Error('Unknown action: ' + action.type);
  }
}

console.log( JSON.stringify( state ) );
state = reducer(state, { type: 'changed_city', new_name: 'Prague' } );
console.log( JSON.stringify( state ) );

Note how inconvenient it is to create a new state when a deeply nested property is modified. JavaScript's spread operator (...) helps a bit but still, imagine having dozens of such statements in your code where the modified props are deep down in the state. People often call this the Spread Operator Hell.

A functional answer to this problem, where a new state is created from the existing state in an atomic way, is the Lens. A lens gives us a functional getter and setter (for a property of an object, for an index of an array) and focuses on just a tiny portion of possibly huge data. A lens has two operations:

  • view - gets the value the lens focuses on
  • over - applies a given function to the value the lens focuses on

const lens = (getter, setter) => ({
  view: (obj) => getter(obj),
  over: (fn, obj) => setter(fn(getter(obj)), obj)
});

Simple? To view a value, you use the getter. To modify, you use the getter to get the value, apply a function over it (that's why it's called over) and use the setter to set the value back.

We can define our first lens, the Prop lens or Prop-based lens as it takes a property name of an object:

const prop = (key) => lens(
  (obj) => obj[key],                      // The Getter
  (val, obj) => ({ ...obj, [key]: val })  // The Setter (immutable!)
);

Still simple? It should be; JavaScript helps us here, as both getting and setting a property of an object, given the property's name, is supported by the language.
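
For instance, a quick sanity check with a single prop lens could look like this (an illustrative snippet; the names are ad-hoc):

const ageL = prop('age');
const person = { name: 'Jan', age: 35 };

console.log( ageL.view(person) );               // 35
console.log( ageL.over(a => a + 1, person) );   // { name: 'Jan', age: 36 } - a new object, person is untouched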

And that's it. Let's see an example. It composes three property lenses into a new lens that focuses on a property deep down in the state:

const userL    = prop('user');
const addressL = prop('address');
const cityL    = prop('city');

// compose them to focus deep into the state
const userCityLens = {
  view: (obj)     => cityL.view(addressL.view(userL.view(obj))),
  over: (fn, obj) => userL.over(u => addressL.over(a => cityL.over(fn, a), u), obj)
};

const currentCity = userCityLens.view(state); 
console.log( currentCity );
state = userCityLens.over(z => 'Prague', state);
console.log( JSON.stringify( state ) );

Take a close look at how the composition is done and make sure you are comfortable with the order of view/over application in the composed lens.

An interesting feature of the lens is that the function can actually operate on the original value (just like an ordinary property setter; the lens just does it using a function!), e.g.:

state = userCityLens.over(z => z.toUpperCase(), state);

Still, isn't such manual composition kind of disappointing? Let's see what can be improved here - creating a new lens from existing lenses should be automated!

Well, here it is:

const compose2Lenses = (l1, l2) => ({
  // To view: go through l1, then l2
  view: (obj) => l2.view(l1.view(obj)),  
  // To update: l1.over wraps the result of l2.over
  over: (fn, obj) => l1.over(target => l2.over(fn, target), obj)
});

// A variadic version to compose any number of lenses
const composeLenses = (...lenses) => lenses.reduce(compose2Lenses);

Note how we start by combining just two lenses and then build a variadic version that folds an arbitrary number of lenses.

The composed lens can be now defined as:

const userCityLens = composeLenses(userL, addressL, cityL);

Nice! Ready for another step forward? How about replacing the Prop-based lens that takes a property name with another lens, call it the Path-based (or Selector-based) lens, that takes an arrow function pointing to the exact spot in the state object we want to focus on? So we could have:

const userCityLens = selectorLens( s => s.user.address.city );

Compare the two: which one looks better and feels more understandable? I'd prefer the latter. However, to have it, we'd have to somehow parse the given arrow function so that the lens can learn that it has to follow the path: user -> address -> city. Sounds difficult?

Well, it is. There are two approaches. One would be to stringify the function and parse it to retrieve the path. Another one, preferable, would be to use JavaScript's Proxy to record the path by just running the function over an object! I really like the latter approach:

const tracePath = (selector) => {
  const path = [];
  const proxy = new Proxy({}, {
    get(_, prop) {
      path.push(prop);
      return proxy; 
    }
  });
  selector(proxy);
  return path;
};

const setterFromPath = (path, fn, obj) => {
  if (path.length === 0) return fn(obj);
  const [head, ...tail] = path;
  return {
    ...obj,
    [head]: setterFromPath(tail, fn, obj[head])
  };
};

const selectorLens = (selector) => {
  const path = tracePath(selector);
  return {
    view: (obj) => selector(obj),
    over: (fn, obj) => setterFromPath(path, fn, obj)
  };
};

A bit of explanation.

First, tracePath gets a function and executes it over a Proxy that records every property access. Assuming the function is an arrow function like s => s.user.address.city, the getter will be called 3 times and the path ['user', 'address', 'city'] will be recorded.
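
A quick illustrative check:

console.log( tracePath( s => s.user.address.city ) ); // [ 'user', 'address', 'city' ]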

Second, the setterFromPath recursively walks the path, rebuilding each level immutably and applying the function at the very end.

And last, the lens just uses the two auxiliary functions to define view and over.

And how is it used? Take a look:

const userCityLens = selectorLens( s => s.user.address.city );

const currentCity = userCityLens.view(state);
console.log( currentCity );
state = userCityLens.over(z => "Prague", state);
console.log( JSON.stringify( state ) );

And yes, it works! That ends the example.

There are other interesting lenses, like the Prism, which handles the case of possible non-existence of the data, in a way similar to the Maybe monad. It's similar to the optional chaining operator built into the language:

const city = state?.user?.address?.city;

but the interesting thing about the Prism is that it handles both the getter and the setter, while the built-in ?. operator cannot be used on the left side of an assignment.
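
A minimal sketch of such a prism-like lens, reusing the tracePath and setterFromPath helpers from above (prismPath is an ad-hoc name, not a standard API):

const prismPath = (selector) => {
  const path = tracePath(selector);
  // walks the path, stopping at the first null/undefined
  const safeView = (obj) => path.reduce((acc, key) => acc == null ? undefined : acc[key], obj);
  return {
    view: safeView,
    // when the focus does not exist, return the state unchanged instead of throwing
    over: (fn, obj) => safeView(obj) === undefined ? obj : setterFromPath(path, fn, obj)
  };
};

const zipL = prismPath( s => s.user.address.zipCode );
console.log( zipL.view(state) );                        // undefined - no exception
console.log( zipL.over(z => z + 1, state) === state );  // true - nothing to update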

Friday, December 19, 2025

Functional programming in JavaScript (5)

Please take a look at other posts about functional programming in JavaScript:

  1. Part 1 - what Functional Programming is about
  2. Part 2 - Functional Pipelines
  3. Part 3 - the Y Combinator
  4. Part 4 - Monads
  5. Part 5 - Trampoline
  6. Part 6 - Lenses
  7. Part 7 - Church encoding (booleans)
  8. Part 8 - Church encoding (arithmetics)
  9. Part 9 - Church encoding (if, recursion)
  10. Part 10 - Church encoding (lists)

Another interesting technique commonly used in functional programming is the Trampoline. It is a hybrid way of avoiding the stack overflows that can occur during deep recursive computations.

Functional environments that support Tail Call Optimization can go arbitrarily deep into recursion. But in V8's JavaScript, stack depth is limited by memory. Each stack frame consumes a portion of the stack memory, whose limit is around 1 MB, so depending on how many arguments your calls push onto the stack, you can execute roughly 10000 simple recursive calls. In the case of heavy calls (with multiple arguments), this number goes down quickly.
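
You can get a rough feeling for the limit yourself with a snippet like this (illustrative; the exact number varies with the engine, its version and the frame size):

function depth(n) {
  try { return depth(n + 1); } catch { return n; }
}
console.log( depth(0) ); // e.g. somewhere around 10000+ on a typical V8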

Consider this example:

function naiveFac(n) {
  return n <= 1 ? 1 : n * naiveFac(n - 1);
}
console.log( naiveFac(5) );
console.log( naiveFac(10000) );

On my current V8 (Node 24), the former correctly computes 120, while the latter throws:

Uncaught RangeError RangeError: Maximum call stack size exceeded
    at naiveFac (c:\Temp\app.js:2:3)

And this is where the Trampoline can be used.

But first, let's refactor this to the Accumulator Passing Style we've already covered:

function naiveFac2(n) {
   return (function factStep(n, accumulator = 1) {
      return n <= 1
         ? accumulator
         : factStep(n - 1, n * accumulator);
   })(n);
}
console.log( naiveFac2(5) );
console.log( naiveFac2(10000) );

It still fails for 10000, but the important refactoring is already there.

Now, consider the Trampoline:

function trampoline(fn) {
  while (typeof fn === 'function') {
    fn = fn();
  }
  return fn;
}

First, note that it's imperative: it uses while. And yes, it's not pure, not quite functional, but it does the job in the hybrid world of JavaScript. Then, note what it does. It gets a function and executes it; if the return value is another function, it executes that one too. And again, until something other than a function is returned. Do you get the idea? Our recursive function, instead of calling itself, will now return a function that calls it, and the Trampoline will execute this new function. It avoids recursion, but it also requires the specific shape of the code that involves the accumulator.
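
As a side note, the very same trampoline drives any recursion rewritten to return thunks, not just the factorial; here's an illustrative sketch with made-up mutually recursive helpers:

const isEven = n => n === 0 ? true  : () => isOdd(n - 1);
const isOdd  = n => n === 0 ? false : () => isEven(n - 1);

console.log( trampoline( () => isEven(100001) ) ); // false, and no stack overflow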

We can now go back to the original issue and refactor it to use the Trampoline:

function trampolineFac(n) {
   function factStep(n, accumulator = 1n) {
      return n <= 1n
         ? accumulator
         : () => factStep(n - 1n, n * accumulator);
   }
   return trampoline( () => factStep(BigInt(n)) );
}

console.log(trampolineFac(5));  
console.log(trampolineFac(10000)); 

Note that it required some subtle tweaks to support BigInts, as the result is a huge number. But hey, it works! The recursion limit is gone!

Friday, December 12, 2025

Functional programming in JavaScript (4)

Please take a look at other posts about functional programming in JavaScript:

  1. Part 1 - what Functional Programming is about
  2. Part 2 - Functional Pipelines
  3. Part 3 - the Y Combinator
  4. Part 4 - Monads
  5. Part 5 - Trampoline
  6. Part 6 - Lenses
  7. Part 7 - Church encoding (booleans)
  8. Part 8 - Church encoding (arithmetics)
  9. Part 9 - Church encoding (if, recursion)
  10. Part 10 - Church encoding (lists)

The Monad is one of the most interesting patterns in functional programming. In short, it allows us to operate on values that have extra information attached.

There are various monads and, technically, a specific monad is conceptually similar to a class (from object-oriented languages). Instances of the monad wrap values: a monad instance contains the wrapped value plus some extra state.

A generic monad definition says that you need:

  • a type constructor – that creates a monad type M from some value type A. JavaScript doesn't really care as there are no types at compile time.
  • a return function – that packs/wraps a value into the monad: a → (M a)
  • a bind function – that takes a monad instance and a monadic operation on a pure value, executes this operation and returns a new monad instance: (M a) → (a → M b) → (M b)

In some programming languages, monads have their own syntactic sugar. For example, in Haskell, the bind is represented by the >>= operator.

Our JavaScript example is the Maybe monad. Instead of just saying what problem it solves, let's see the problem.

Let's take a function that takes a string. We want to parse the string into a number and then compute the number's reciprocal. There are two possible points of failure: the string may not represent a number (parsing fails) and the number may be 0 (the reciprocal doesn't exist). In a typical imperative/object-oriented language, there are two patterns of handling such failures:

  • nested ifs
  • exception handling

The first approach is the least elegant: each possible point of failure requires an extra condition, which causes the code to form a pyramid:

function process(input) {

    const value = Number(input);

    if ( !isNaN(value) ) {
        if ( value != 0 ) {
            const reciprocal = 1 / value;
            return reciprocal;
        }
    }

    return null;
}

console.log( process("not-a-number") );
console.log( process("0") );
console.log( process("2") );

The other approach - a global exception handler - is much cleaner, as each possible failure is caught by the very same handler. Note that in JavaScript neither Number() nor division by zero actually throws, so in this sketch the failure points raise the exceptions explicitly:

function process(input) {
    try {
        const value = Number(input);
        if ( isNaN(value) ) throw Error('not a number');
        if ( value === 0 ) throw Error('zero has no reciprocal');

        return 1 / value;
    }
    catch {
        return null;
    }
}

console.log( process("not-a-number") );
console.log( process("0") );
console.log( process("2") );

The Maybe monad is a functional alternative to such a global exception handler. At each possible failure point we will have a Maybe instance that contains either a non-empty value or an empty value, and the check for emptiness will be performed not in each consecutive step, but inside bind (which will skip a step in case of an empty value).

If you come from imperative/object-oriented languages that support Nullable wrappers over values (yes, C#, it's about you!) then yes, a Maybe instance is really similar to a Nullable instance. The extra feature here is bind, which will allow us to avoid the Pyramid of Doom.

But, we'll have to put some effort into that first.

Let's start with the Maybe class:

class Maybe {
  constructor(value) {
    this.value = value;
  }

  bind(f) {
    if (this.value === null || this.value === undefined) {
      return Maybe.Nothing();
    }
    return f(this.value);
  }

  static of(value) {
    return new Maybe(value);
  }

  static Nothing() {
    return new Maybe(null);
  }
}

Note how simple it is. The bind is here, the return is here too (it's called of), and we even have an extra constructor for the empty value (Nothing).

We are ready to implement the two monadic operations. Go back to the very start and consult the signature of bind: monadic operations are functions that take pure values and return monad instances:

// parsing
function parse(str) {
  const n = Number(str);
  return isNaN(n) ? Maybe.Nothing() : Maybe.of(n);
}

// reciprocal
function recp(n) {
  return n === 0 ? Maybe.Nothing() : Maybe.of(1 / n);
}

And ... that's it!

Oh, really?

Yes! We can already write monadic code:

function process1( i ) {
  return Maybe.of(i) // wrap input
    .bind(input => 
      parse(input) // parse
        .bind(number => 
          recp(number) // reciprocal
            .bind(reciprocal => 
              Maybe.of(reciprocal) // wrap output
            )
        )
    );
}

console.log( process1( null ) );
console.log( process1( "0" ) );
console.log( process1( "2" ) );

Note that there's not a single if in this code, because the ifs are hidden inside the monadic operations and inside bind!

But wait! It's disappointing! The Pyramid of Doom is still there!

How do other functional languages tackle this? Take a look at Haskell: a raw bind has the same problem, you need to pass a lambda as an argument to bind, and each consecutive monadic operation is a new lambda next to the previous one:

  getLine >>= \name -> putStrLn ("Hello, " ++ name ++ "!")

The first possible step in the right direction would be, since bind returns a monad instance, to flatten the code:

function process1(i) {
  return Maybe.of(i) // wrap input
    .bind(input =>
      parse(input))  // parse
    .bind(number =>
      recp(number))  // reciprocal
    .bind(reciprocal =>
      Maybe.of(reciprocal) // wrap output
    );
}

That looks much better; the pyramid is gone. But it has a serious drawback. In the previous approach, each intermediate result (input, number, reciprocal) was captured by a closure and was available in the very next nested arrow function. All three are available at the end.

In this new, flattened version, this feature is gone. No closures: each bind sits side-by-side with the other binds and sees only its own value. To be able to pass all intermediate values down the pipeline, we'd have to wrap them in arrays and return monadic instances that wrap two or three raw values. That doesn't sound right.

How does Haskell solve this? It has syntactic sugar called the do notation: a special block of code where monadic instances can be unwrapped into temporary variables using the <- operator.

main = do
  name <- getLine
  putStrLn ("Hello, " ++ name ++ "!")

And yes, this is the ultimate syntax. The do notation is the crucial element that makes monads feel natural in the language. It's so important that other functional languages have it too: OCaml has let*, Scala has for { } blocks!

Can we have it in JavaScript? Well, we can't add our own syntactic sugar into the language. But we can have a function!

Let's face it, it's not obvious. Monadic operations return monad instances, and it's bind that unwraps monad instances into raw values. Such a do-like function should allow us to unpack monads and also keep all unpacked values in a single closure-like "scope", so that all of them stay available once obtained.

One of the most beautiful ways to do this involves JavaScript's generators! We will write the do-block as a generator function and we will unpack monad instances using yield! In JavaScript, yield is a two-way operator: it not only returns a value to the caller, but also lets the caller push a value back into the generator, so that yield can be used on the right side of an assignment. Not many languages with yield support this two-way communication (Python's generator.send is one notable exception).

JavaScript's do is surprisingly short then, but very clever. We call it doM, as do is a reserved keyword in the language:

function doM(gen) {
  function step(value) {
    const result = gen.next(value);
    if (result.done) return result.value;
    return result.value.bind(step);
  }
  return step();
}

A side note: this function relies on the feature of JavaScript generators that allows us to call gen.next(value) to push values into the generator. You can study this contrived example:

// generator returns 1,2,3 to the client
function* gen() {
    var a = yield 1;
    console.log( `a = ${a}` );
    var b = yield 2;
    console.log( `b = ${b}` );
    var c = yield 3;
    console.log( `c = ${c}` );
}

var it = gen();
var i = 0;
var a = [null, 17, 18, 19];
// the client pushes 17, 18, 19 back into the generator
while (true) {
  const { value, done } = it.next(a[i++]);
  if (done) break;
  console.log(value);
}

Back from the side note: with doM, monadic code can be refactored into clean, linear code with no bind at all. Monadic operations appear on the right side of the yield operator, and yield hands their unpacked raw values to the left side of the assignments. And since a generator is a function, all raw values remain available in it after they are assigned!

function process2( i ) {
    return doM(
        function* () {
            const input      = yield Maybe.of(i); 
            const number     = yield parse(input);
            const reciprocal = yield recp(number);

            return Maybe.of(reciprocal);
        }()
    );
}

console.log( process2( null ) );
console.log( process2( "0" ) );
console.log( process2( "2" ) );

Holy moly, that's nice! There's no try-catch, but it feels like one: whenever an operation fails, a monad with an empty value is passed down the pipeline and bind just skips over it!

There's one almost-monad built into JavaScript: the Promise, the one that handles async operations. It has its own syntax, no do block and no generator with yield, but there's async as a function decorator and await for unpacking raw values.

// parseAsync and recpAsync stand for assumed async variants of parse and recp,
// returning Promises instead of Maybes
async function process3( i ) {
   const input      = await Promise.resolve(i);
   const number     = await parseAsync(input);
   const reciprocal = await recpAsync(number);

   return Promise.resolve(reciprocal);
}

Note that async/await is not about exception handling, it's about handling async operations. If you just thought that there could be other interesting monads that solve other issues but also benefit from this kind of do-notation syntax, then yes: functional languages like OCaml and Scala have other monads, and Haskell features so-called monad transformers where you combine different monads in one do block! The table below maps the Maybe machinery onto its Promise counterparts:

Monad                       Promise
--------------------------  --------------------------
Maybe.of                    Promise.resolve
bind                        then
the doM function            the async decorator
yield inside the doM block  await in an async function

What we just demonstrated, though, is that we can build a similar feature in a purely functional way. And yes, async/await is internally implemented very similarly to yield generators. Before the async/await syntactic sugar was introduced, people were coding their own async monads using the pattern we've just demonstrated: yield plus an auxiliary do-like function. These early attempts can still be found; take a look at the co library, for example.
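
To show that doM is not tied to Maybe, here's an illustrative sketch with an ad-hoc Either-like Result monad (the Result type and the parseR/recpR helpers are made up for this example), driven by the very same doM:

class Result {
  constructor(value, error) { this.value = value; this.error = error; }
  static ok(value)  { return new Result(value, null); }
  static err(error) { return new Result(null, error); }
  // an error short-circuits the pipeline, a value flows on
  bind(f) { return this.error ? this : f(this.value); }
}

const parseR = str => isNaN(Number(str)) ? Result.err('not a number') : Result.ok(Number(str));
const recpR  = n   => n === 0 ? Result.err('zero has no reciprocal') : Result.ok(1 / n);

console.log( doM(function* () {
  const number     = yield parseR("2");
  const reciprocal = yield recpR(number);
  return Result.ok(reciprocal);
}()) ); // Result { value: 0.5, error: null }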

Happy coding!

Functional programming in JavaScript (3)

Please take a look at other posts about functional programming in JavaScript:

  1. Part 1 - what Functional Programming is about
  2. Part 2 - Functional Pipelines
  3. Part 3 - the Y Combinator
  4. Part 4 - Monads
  5. Part 5 - Trampoline
  6. Part 6 - Lenses
  7. Part 7 - Church encoding (booleans)
  8. Part 8 - Church encoding (arithmetics)
  9. Part 9 - Church encoding (if, recursion)
  10. Part 10 - Church encoding (lists)

Functional programming is also about recursion. Take a simple factorial:

const fac = n => n > 1 ? n * fac(n-1) : 1

console.log( fac(5) );

That looks super easy but what if you want to write a recursive function without using a recursive function name in its definition?

In the above example, this would mean you can bind a function to a variable fac but you can't use the name fac in the function definition.

Not possible?

Consider this, then:

const fac = 
  (f => n => n > 1 ? n * f(f)(n-1) : 1)
  (f => n => n > 1 ? n * f(f)(n-1) : 1);

console.log( fac(5) );

which technically means that you don't even need to bind it to a separate name (fac):

console.log(
  (f => n => n > 1 ? n * f(f)(n-1) : 1)
  (f => n => n > 1 ? n * f(f)(n-1) : 1)
  (5)
)

Hold on, most of us need some time to comprehend this. Let's examine it.

The core definition of the function:

n => n > 1 ? n * fac(n-1) : 1

just got a new level of abstraction:

f => n => n > 1 ? n * f(n-1) : 1

but to actually be able to piggyback the function onto itself, we need to pass it to itself as f and, at the same time, pass the passed function along in every call, f(f):

  (f => n => n > 1 ? n * f(f)(n-1) : 1)
  (f => n => n > 1 ? n * f(f)(n-1) : 1)

This is clever and one could ask: what if I don't want to repeat the definition twice so that I can pass it to itself?

This is doable, but with an auxiliary helper called the Y combinator.

Consider this:

const Y = f => (x => f(y => x(x)(y)))(x => f(y => x(x)(y)));

const fac = Y(f => n => n > 1 ? n * f(n-1) : 1);

This auxiliary helper function, called Y, basically wraps this passing-a-function-to-itself. Note that its core body

x => f(y => x(x)(y))

is indeed repeated twice. This higher order function, Y, is now a factory of functions and can make any function recursive; the recursive call is resolved not by name but by binding. In our example, Y is applied to:

f => n => n > 1 ? n * f(n-1) : 1

and note that f binds the supposed recursive function name here.

And, because this core body is repeated twice, Y can be refactored to:

const Y = f => (x => x(x))(x => f(y => x(x)(y)));

const fac = Y(f => n => n > 1 ? n * f(n-1) : 1);

And that's it. The Y combinator is undeniably an impressive idea.
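
And it is general; the same Y makes any self-referential definition recursive. A quick illustrative check with Fibonacci:

const fib = Y(f => n => n < 2 ? n : f(n-1) + f(n-2));

console.log( fib(10) ); // 55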

Thursday, December 11, 2025

Functional programming in JavaScript (2)

Please take a look at other posts about functional programming in JavaScript:

  1. Part 1 - what Functional Programming is about
  2. Part 2 - Functional Pipelines
  3. Part 3 - the Y Combinator
  4. Part 4 - Monads
  5. Part 5 - Trampoline
  6. Part 6 - Lenses
  7. Part 7 - Church encoding (booleans)
  8. Part 8 - Church encoding (arithmetics)
  9. Part 9 - Church encoding (if, recursion)
  10. Part 10 - Church encoding (lists)

The pipeline operator is one of the signature features of many functional languages. Take a look at this F# example:

[1; 2; 3; 4;]
|> List.filter (fun x -> x % 2 <> 0)
|> List.map (fun x -> x * x + 1)

What problem this syntax solves?

Think of a typical app. The user inputs some data and the app processes it. There's validation, then some processing, then mapping the input to an output.

The usual flow in an imperative/object-oriented language is a sequence of loops (for/while). Each loop does some filtering, ordering, mapping, grouping etc.

var list = [1,2,3,4];
for ( var i=0; i < list.length; i++ )
{
    // validation
    if ( ... ) { ... }
}
for ( var i=0; i < list.length; i++ )
{
    // processing
}
// ...

It's usually hard to follow such code, and it's also not easy to maintain. Loops can have side effects, the list is modified or maybe it's not, and loop indexes have to be carefully inspected to find possible subtle mistakes. When a loop groups data lists into dictionaries, things get even more complicated.

But if a list has its own programming interface, the code can be refactored into something much cleaner:

var result = 
    [1,2,3,4]
        .filter( x => x < 2 ) // validation
        .map( x => x * 2 )    // processing
        ...

This clean syntax is only possible because these specific methods, filter or map, exist. Each takes a list and returns another list, which becomes the receiver of the very next method call. The code flow is very clear, and so are the arguments.

But what if you want to call a method that is not built into the array interface?

var result = 
    [1,2,3,4]
        .filter( x => x < 2 ) // validation
        .saveToDatabase()     // ?? missing, there's no such method
        ...

In the functional approach, such a pipeline can be implemented using functions that have exactly one parameter (a list) and one return value (a list).

function filterEvens( xs ) {
    return xs.filter( x => x % 2 == 0 );
}
function double( xs ) {
    return xs.map( x => x*2);
}

console.log(
    double(
        filterEvens(
            [1,2,3,4,5]
        )
    )
)

This is a step in the right direction. It no longer relies on built-in methods; instead, custom functions can easily be implemented. The saveToDatabase would be just another function that takes a list and returns a list (so that it can be chained further).

The problem here is that the syntax is unfortunate. The actual argument (the data) is in the middle of a possibly long sequence of calls, and the functions applied to the data are written bottom-to-top instead of top-to-bottom. In the above example, what we see is double followed by filterEvens, while in fact it's filterEvens that is applied first, followed by double. Such code, where the argument is in the very center (or at the top, if you just tilt your head to the right), is called a Pyramid of Doom, and it's a problem not only in JavaScript.

Let's try to fix that by introducing the pipeline operator, just like F#'s |>: the operator that makes the code clean, with functions applied in the very same order they appear in the code.

In JavaScript, we can't define our own operator; you just can't have the |>. But we can have a function that does the same:

function pipe(xs, ...[f, ...fs]) {
    return f ? pipe( f(xs), ...fs ) : xs;
}

This pipe function is really simple. It takes a list (xs) and a list of functions (collected with the rest parameter). It uses recursion to apply the next function to the return value of the previous one. Calling:

pipe( xs, f1, f2, f3 )

would yield:

f3( f2( f1( xs )))

Make sure it really does so! Using this simple function, we can rewrite the previous pipeline to:

console.log( 
    pipe( 
        [1,2,3,4,5],
            filterEvens,
            double
    )
);

Another step in the right direction. No, we still don't have the |>, but we have the pipe function that does the same.

But do we really need an extra function in the pipeline for every possible operation we would like to perform? In the example, to filter even numbers we have a dedicated function. Does it mean that any other filtering, like odd numbers, persons older than 18 years, or orders ready to be shipped, would require its own extra function?

No, we can generalize that!

How do we generalize a function, like a filtering function, so that it takes just one parameter (a list) but, at the same time, accepts another parameter, the filtering predicate?

Just make a function with two parameters?

Nope, we need exactly one!

Well, technically yes and no. We can have a higher order function. A function that takes parameters but returns another function with its own parameters. A function that creates functions. A function factory.

function map( f ) {
    return function( xs ) {
        return xs.map( f );
    }
}

function filter( predicate ) {
    return function( xs ) {
        return xs.filter( predicate );
    }
}

function sort( compareFn ) {
    return function( xs ) {
        // copy first: Array.prototype.sort mutates, and we want the pipeline to stay pure
        return [...xs].sort( compareFn );
    }
}

function reduce( foldf ) {
    return function( xs ) {
        return xs.reduce( foldf );
    }
}

function flat() {
    return function( xs ) {
        return xs.flat();
    }
}

function take(n) {
    return function( xs ) {
        return xs.slice(0, n);
    }
}

console.log( 
    pipe( 
        [1,2,3,4,5],
            map( n => n + 1 ),
            filter( x => x < 4 ),
            map( n => n.toString() ),
            map( n => [n, n] ),
            flat(),
            sort( (n,m) => n-m ),
            map( n => +n ),
            reduce( (prev, curr) => prev + curr )
    )
);

With this approach, creating new pipeline operators is easy. And, compare this syntax to the F# example from the very beginning of this post.

To make it even more fun, all these pipeline operators, which actually just wrap existing array methods, can also be generalized using yet another level of abstraction:

function curryArrayMethod(method) {
  return function curried(func) {
    return function(array) {
      return method.call(array, func);
    };
  };
}

const curriedFilter = curryArrayMethod(Array.prototype.filter);
const curriedMap    = curryArrayMethod(Array.prototype.map);

console.log( 
    pipe( 
        [1,2,3,4,5],
        curriedFilter( x => x < 4 ),
        curriedMap( x => x * 2 )
    )
);

Have fun with functional pipelines in JavaScript.

As an exercise, you can add your own operator (like a group operator) or modify the sort operator so that instead of a comparer:

  ...
  sort( (n,m) => n-m )

it takes a function that extracts the sort key from a single value and internally uses this function to compare two values, so that you can call sort like:

...
  sort( person => person.name )
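
One possible sketch of that key-based sort operator (an illustrative solution, assuming the keys are plainly comparable, like numbers or strings):

function sort( keyFn ) {
    return function( xs ) {
        return [...xs].sort( (a, b) => {
            const ka = keyFn(a), kb = keyFn(b);
            return ka < kb ? -1 : ka > kb ? 1 : 0;
        });
    }
}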

Saturday, December 6, 2025

Functional programming in JavaScript (1)

Please take a look at other posts about functional programming in JavaScript:

  1. Part 1 - what Functional Programming is about
  2. Part 2 - Functional Pipelines
  3. Part 3 - the Y Combinator
  4. Part 4 - Monads
  5. Part 5 - Trampoline
  6. Part 6 - Lenses
  7. Part 7 - Church encoding (booleans)
  8. Part 8 - Church encoding (arithmetics)
  9. Part 9 - Church encoding (if, recursion)
  10. Part 10 - Church encoding (lists)

This post starts a short series where we discuss functional programming in JavaScript. We'll discuss what the functional approach is, and what functional pipelines and monads are.

Functional programming is about functions. In an ideal world, functions are pure, which means they have no side effects. Also, variables should be immutable, which means that once a variable's value is set, it's never modified. This effectively means simple loops (for, while, do) are replaced with recursion.

Why? Well, often, the code is easier to read and reason about.

Before we dig into more complicated examples, let's start with discussing the difference between imperative and functional code. Suppose we need to map an array through a function.

A naive imperative approach would be:

function map( xs, f ) {
    var ret = [];
    for ( var i=0; i<xs.length; i++ ) {
        ret.push( f(xs[i]) );
    }
    return ret;
}

var a = [1,2,3,4,5];
console.log( map( a, _ => _**2 ) )

There's nothing wrong with this code. Note, however, some subtle drawbacks. First, the loop index must be carefully controlled. Also, both the index and the return value accumulator are mutable. Under usual circumstances this is safe; however, in complicated, multithreaded code, mutable variables can be a pain if used without care.

A functional version will still have the same two parameters, but there should be no loop, only recursion. In JavaScript, we can hide an auxiliary recursive function inside the original one.

We can also show a few possible approaches.

Let's start with a common technique, the Accumulator Passing Style. This approach is often the easiest one for imperative programmers to understand. It consists of passing an extra parameter between recursive calls, a parameter that accumulates the return value across the calls. In our map function, we start with an empty array and append a newly mapped value in each recursive call:

function map( xs, f ) {
    return (function rec(i, acc) {
        if ( i<xs.length ) {
            return rec(i+1, acc.concat(f(xs[i])));    
        } else {
            return acc;
        }
    })(0, []);
}

Easy. The index is still there, the accumulator is the extra parameter, the recursion seems rather straightforward.

The next approach demonstrates a common technique in which there's no explicit accumulator. Instead, the return value of the inner function is used to accumulate the return value of the outer function:

function map( xs, f ) {
    return (function rec(i) {
        if ( i<xs.length ) {
            return [f(xs[i])].concat( rec(i+1) );    
        } else {
            return [];
        }
    })(0);
}

Please take time to carefully study the difference. Debug it if you need to.

The next approach is a step forward into the functional world. Instead of passing an index in recursive calls (starting from 0), we will pass the array as a list, and in each iteration we'll split the array (the list) into the head and the tail. JavaScript's array mimics a list: the head (the first element) is just a[0] and the tail is just a.slice(1):

function map( xs, f ) {
    return (function rec(ys) {
        if ( ys.length >= 1 ) {
            return [f(ys[0])].concat( rec(ys.slice(1)) );    
        } else {
            return [];
        }
    })(xs);
}

We are almost there. The next refactoring splits the array (the list) into the head and the tail in an explicit way, using the possibility to destructure function parameters. Note also how the rest element (...) is used to obtain the tail:

function map( xs, f ) {
    return (function rec([head, ...tail]) {
        if ( head ) {
            return [f(head)].concat( rec(tail) );    
        } else {
            return [];
        }
    })(xs);
}

Technically of course, the if ( head ) is wrong because of falsy coercion (a 0 or an empty string in the array would end the recursion early); correct it to if ( head !== undefined ) if you need to.

It can be further refactored to be slightly less verbose:

const map = ( xs, f ) =>
    (function rec([head, ...tail]) {
        return head 
        ? [f(head)].concat( rec(tail) )
        : []
    })(xs);

Let's stop there. Go back to the very top and compare both functions, the first imperative one and the last functional one. It's still the same language but two different programming styles.

That's why we say such languages are hybrid: both paradigms, the imperative (or even object-oriented) and the functional style, are possible and feel natural in the language.

Monday, November 10, 2025

Time to move away from classic captchas

Starting from January 2026, reCAPTCHA changes its rules. Site keys are migrated to Google Cloud and you have to provide billing information; otherwise the captcha basically doesn't work after 10k monthly assessments (technically it works, but it always succeeds, meaning you are vulnerable).

Since reCAPTCHA is used on millions of large and small websites, I can't even imagine what this means. Many of these websites are poorly maintained and their owners won't even notice.

We did some research some time ago, looking at possible alternatives. One of the important factors is a captcha's accessibility compliance. Classic captchas (including reCAPTCHA) provide two interfaces, with additional audio-based challenges that are supposed to be accessible. I have always believed this is the wrong approach, because it gives two completely different vectors of possible misuse: depending on which interface is easier to bypass, an attacker can focus on one or the other.

Also, reCAPTCHA doesn't provide the audio interface in other languages; try Polish and you'll find that it speaks English.

What we ultimately decided on is a custom version of a Proof-of-Work captcha. Instead of going with existing solutions, we came up with our own. This gives us 100% control over how difficult the client-side computation is. Note that there were some critical changes in how SHA-256 is computed with crypto.subtle: despite being async, it no longer yields back to the event loop each time you await it. The UI is not updated, but the performance is much higher. You just adapt to this new behavior by raising the difficulty of the client-side work to be done.
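
For illustration only, the client-side part of such a Proof-of-Work captcha can be sketched like this (challenge and difficulty are hypothetical parameters a server would supply; this is not our production code):

// find a nonce such that SHA-256(challenge + nonce) starts with `difficulty` zero hex digits
async function solvePoW(challenge, difficulty) {
    const prefix = '0'.repeat(difficulty);
    const encoder = new TextEncoder();
    for (let nonce = 0; ; nonce++) {
        const digest = await crypto.subtle.digest('SHA-256', encoder.encode(challenge + nonce));
        const hex = [...new Uint8Array(digest)].map(b => b.toString(16).padStart(2, '0')).join('');
        if (hex.startsWith(prefix)) return nonce; // the server re-verifies the returned nonce
    }
}

solvePoW('server-challenge', 4).then(nonce => console.log(nonce));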

With about six weeks remaining, take your time to inspect all your reCAPTCHA websites and decide whether to stick with it or move away. But do not let yourself wake up in January and find out that things changed without your awareness.

Thursday, October 16, 2025

KB5066835 went terribly wrong

Yesterday, on 15.10.2025, KB5066835 was released for Windows 11. And guess what, it causes IIS and IIS Express to reject client connections.

This effectively not only stops people who develop using VS + IIS Express; it seems that some production websites are affected as well:

kb5066835 breaks IIS Express - Developer Community

KB5066835 update causing IIS Service to not work - Microsoft Q&A

Localhost not working anymore after 2025-10 cumulative update Windows 11 - Microsoft Q&A

Localhost applications failing after installing "2025-10 Cumulative Update for Windows 11 Version 24H2 for x64-based Systems (KB5066835) (26100.6899)" - Stack Overflow

windows - Http 2 Protocol error connecting to localhost web sites - Server Fault

Visual Studio - IIS Express - Site won't run / connect / SSL error - Umbraco community forum

Most people suggest uninstalling the KB; however, if your Windows refuses to do so, try the workaround from the last link above:

  1. In the registry, navigate to: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters
  2. Under the Parameters folder, right-click in the right-hand pane and select New > DWORD (32-bit) Value.
  3. Name the value EnableHttp2Tls and set its data to 0 (zero).
  4. Repeat the process to add another DWORD (32-bit) Value named EnableHttp2Cleartext and set its data to 0.
  5. Restart the machine.
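
If you prefer the command line, the equivalent from an elevated prompt should be (the same two values as in the steps above; double-check the key path on your system):

reg add HKLM\System\CurrentControlSet\Services\HTTP\Parameters /v EnableHttp2Tls /t REG_DWORD /d 0 /f
reg add HKLM\System\CurrentControlSet\Services\HTTP\Parameters /v EnableHttp2Cleartext /t REG_DWORD /d 0 /f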

Edit: it seems that this has been patched in a seemingly unrelated Security Intelligence Update for Microsoft Defender Antivirus - KB2267602 (Version 1.439.210.0).

Thursday, May 15, 2025

OldMusicBox.ePUAP.Client.Core 1.25.5

I am proud to announce that the .NET 8 port of the OldMusicBox.ePUAP.Client is published on GitHub and NuGet.
Please find more details on the package home page.

Wednesday, March 26, 2025

Tests in the very same console app - compilation error in .NET Core

Having unit tests in the very same console app causes a compilation error:

Program has more than one entry point defined. Compile with /main to specify the type that contains the entry point.

Kind of unexpected, there's definitely a single Main.

The problem seems to have been there for all these years, despite being first described back in 2017.

The solution is already provided in the link above, just add

<PropertyGroup>
  <GenerateProgramFile>false</GenerateProgramFile>
</PropertyGroup>

to the *.csproj.

A brave little cookie-footer

One of our apps contains a page and the page has a div. The div's class name is cookie-footer.

The div itself has nothing to do with actual cookies; it contains content that the user is supposed to see.

And what? It turns out Brave doesn't show that div. It just adds:

// user agent stylesheet
.cookie-footer {
    display: none !important;
}

What's bizarre, Brave doesn't add this always. We have the app on multiple domains; the user agent style is added on most of the domains, but not on all of them!

Tried other variants:

  .cookie-footer  - blocked
  .cookiefooter   - blocked
  .cookie--footer - works
  .cookiee-footer - works
  .coookie-footer - works

Great times. It's not only legal regulations that can block your content, it's also your browser's heuristics.

Monday, March 17, 2025

A fairy tale of misusing the C# typesystem

Once upon a time in a kingdom far far away someone wrote a code that required two string arguments:

    public class Worker
    {
        public void DoWork( string name, string surname )
        {
            Console.WriteLine( $"{name} {surname}" );
        }
    }

All the people used the code for years without any issues:

    new Worker().DoWork( "john", "doe" );

Then, someone in a hurry did something bad which should never happen. Arguments were swapped in a call:

    new Worker().DoWork( "doe", "john" );

Consequences were severe.

The culprit was tried and expelled from the kingdom. The king called for his best wizards and asked them to do something so that it never ever happens in the future.

One of the wizards suggested that introducing types would make the real intent of the arguments clear:

    public class Name
    {
        public Name( string value )
        {
            this.Value = value;
        }

        public string Value { get; set; }

        public override string ToString()
        {
            return this.Value;
        }
    }

    public class Surname
    {
        public Surname( string value )
        {
            this.Value = value;
        }

        public string Value { get; set; }
        public override string ToString()
        {
            return this.Value;
        }
    }


    public class Worker
    {
        public void DoWork( Name name, Surname surname )
        {
            Console.WriteLine( $"{name} {surname}" );
        }
    }

Initially people complained a bit but then started to get used to the new calling convention:

    new Worker().DoWork( new Name( "john" ), new Surname( "doe" ) );

The problems were gone. Everyone was happy.

Years passed, some wizards were gone, new wizards came to the kingdom. One of new wizards reviewed the code and came up with an idea.

- Why is this convention so awkward, why wrap strings in auxiliary types? - thought the wizard.

And he came up with an idea to add implicit conversion operators:

    public class Name
    {
        public Name( string value )
        {
            this.Value = value;
        }

        public string Value { get; set; }

        public override string ToString()
        {
            return this.Value;
        }

        public static implicit operator Name( string value )
        {
            return new Name( value );
        }
    }

    public class Surname
    {
        public Surname( string value )
        {
            this.Value = value;
        }

        public string Value { get; set; }
        public override string ToString()
        {
            return this.Value;
        }

        public static implicit operator Surname( string value )
        {
            return new Surname( value );
        }
    }
 

The new wizard was very proud of himself. He barely told anyone about his conversion operators, so everyone else was still using the well-established convention:

   new Worker().DoWork( new Name( "john" ), new Surname( "doe" ) );

But, since the conversion was now implicit, the wizard was able to make his own code shorter:

   new Worker().DoWork( "john", "doe" );

Years passed, new people arrived and then, someone in a hurry did something bad which should never happen. Arguments were swapped in a call:

    new Worker().DoWork( "doe", "john" );

Consequences were severe.

Was the culprit tried and expelled from the kingdom, same as last time?

Not really, the Wizard Council blamed the new wizard, the one who introduced both implicit conversion operators.

He was tried and expelled from the kingdom. His changes were reverted forever and everyone lived happily ever after.


This is based on an (almost) true story.

But can it run DOOM?

I rarely repost news from elsewhere, but this time it's extremely impressive.

It was announced that Dimitri Mitropoulos from the TypeScript team was able to build a WebAssembly interpreter in the TypeScript type system and then make it run DOOM, still inside the type system. Well, not quite "run": it just renders the first frame, which took 12 days.

Anyway, it's inspiring to hear such news. Congratulations to the team!

Sunday, March 16, 2025

XNADash ported to .NET8/Monogame

Years ago, in 2011, I blogged about an old DOS game I had ported to XNA. This weekend I found the code and spent a while making sure it still works.

The new version targets .NET 8/MonoGame and is available on my GitHub.

Monday, March 10, 2025

OldMusicBox.ePUAP.Client 1.25.3

Bumped the OldMusicBox.ePUAP.Client to 1.25.3.

For some unknown reason that doesn't seem to have been announced, TpSigning5::GetSignedDocument changed the format of the natural person's data (given name, surname, personal identification number).

From the very beginning of the old TpSigning service, the document contained user signatures in the PodpisZP node. Later on, when TpSigning5 was introduced, they changed the signature to the EPSignature node (a different node with a different model).

And guess what: starting somewhere between 07-03-2025 and 10-03-2025, the new service (TpSigning5) returns the natural person's data in the old format (back to PodpisZP). Be aware of this and update your code accordingly.