hcoelho.com

my blog

User Affinity Tool: grouping and finding patterns for users

One of the last steps in our project is building the user affinity tool. In this post I will explain what it is going to do and how it works; there are some details I cannot disclose, but I will try to describe the general idea behind this tool.

Let's suppose we have some users in our database, and we are recording a history of what they read in our website:

users: [
    {
        name: "John",
        articlesVisited: [
            {
                tag: "Engineering",
                title: "Nanomaterials"
            }, {
                tag: "Engineering",
                title: "Nanoscale Sensors"
            }, {
                tag: "Engineering",
                title: "Challenges of Nanotechnology"
            }, {
                tag: "Arts",
                title: "Origins of Music"
            }
        ]
    }, {
        name: "Mark",
        job: "Artist",
        articlesVisited: [
            {
                tag: "Arts",
                title: "Syncing Music to Video"
            }, {
                tag: "Arts",
                title: "Music Lessons"
            }
        ]
    }, {
        name: "Lynda",
        job: "Engineer",
        articlesVisited: [
            {
                tag: "Engineering",
                title: "Nanomaterials"
            }, {
                tag: "Engineering",
                title: "Nanoscale Sensors"
            }, {
                tag: "Engineering",
                title: "Challenges of Nanotechnology"
            }, {
                tag: "Engineering",
                title: "Milling Processes"
            }, {
                tag: "Engineering",
                title: "3D Printing for Manufacturing"
            }, {
                tag: "Engineering",
                title: "The Automotive Industry"
            }, {
                tag: "Arts",
                title: "Music Lessons"
            }, {
                tag: "Arts",
                title: "Origins of Music"
            }
        ]
    }, {
        name: "Mary",
        job: "Artist",
        articlesVisited: [
            {
                tag: "Engineering",
                title: "The Automotive Industry"
            }, {
                tag: "Arts",
                title: "Music Lessons"
            }, {
                tag: "Arts",
                title: "Origins of Music"
            }
        ]
    }
]

Let's also assume that we have an anonymous user visiting our website - we don't have any information about him, but we are tracking his browsing history via cookies. This is what his history looks like:

{
    articlesVisited: [
        {
            tag: "Arts",
            title: "Music Lessons"
        }, {
            tag: "Arts",
            title: "Origins of Music"
        }
    ]
}

Here is the challenge of working with big data: recording and keeping a bunch of information is easy, the hard part is turning it into something useful. What can we do with information like this? How can we turn this into something beneficial for the users and for the website?

There are two routes we took with our user affinity tool to work with this kind of data:

1- Generate recommendations based on the user history: if we know what the user is interested in and what the user looks like, we can recommend content accordingly

2- Make unknown users known: based on browsing patterns, we can assume characteristics for users even when they haven't provided them

The question now is: how are we going to categorize the users? We have several possibilities depending on what we are recording, and we don't have to pick only one. We could categorize them based on their author preference, the tags of the articles, what sections they visit, among other characteristics. Since I am only using tags in this example, I'll use these tags to form clusters of users.

To group the users based on their tag preference, I will calculate, for every user, the percentage of visits each tag received in relation to the user's total number of visits. For example: if the user visited 10 articles, where 8 were about engineering and 2 were about arts, the engineering tag will receive 80% and the arts tag will receive 20%.

User name          Engineering   Arts
John               75%           25%
Lynda              75%           25%
Mary               33.3%         66.6%
Mark               0%            100%
-anonymous user-   0%            100%
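The percentages above can be computed with a small helper. This is just an illustrative sketch - `tagPercentages` is a hypothetical function name, not part of the actual tool, and the data shape follows the example users above:

```javascript
// Illustrative sketch: computing the percentage each tag represents
// in a user's browsing history. "tagPercentages" is a hypothetical
// helper, not the actual tool's implementation.
function tagPercentages(user) {
  const counts = {};
  for (const article of user.articlesVisited) {
    counts[article.tag] = (counts[article.tag] || 0) + 1;
  }
  const total = user.articlesVisited.length;
  const percentages = {};
  for (const tag of Object.keys(counts)) {
    percentages[tag] = (counts[tag] / total) * 100;
  }
  return percentages;
}

const john = {
  name: "John",
  articlesVisited: [
    { tag: "Engineering", title: "Nanomaterials" },
    { tag: "Engineering", title: "Nanoscale Sensors" },
    { tag: "Engineering", title: "Challenges of Nanotechnology" },
    { tag: "Arts", title: "Origins of Music" },
  ],
};

tagPercentages(john); // → { Engineering: 75, Arts: 25 }
```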

We are starting to see some patterns here, aren't we? Notice that:

1- John and Lynda have very similar browsing patterns

2- Mark and the anonymous user have very similar browsing patterns

3- Mark and the anonymous user are more similar to Mary than John and Lynda

Based on these characteristics, we could generate scores to rank users based on their similarities (where 10 means "identical" and 0 means "completely different"). Let's suppose these are the scores the users got:

                   John   Lynda   Mary   Mark   -anonymous user-
John               -      10      4      2      2
Lynda              10     -       4      2      2
Mary               4      4       -      6      6
Mark               2      2       6      -      10
-anonymous user-   2      2       6      10     -
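One simple way to produce such a score is to turn the distance between two users' tag percentages into a 0-10 scale. This formula is an assumption for illustration only - it will not reproduce the table above exactly, since the real scoring method is one of the details I cannot disclose:

```javascript
// Hypothetical similarity score: 10 means identical tag profiles,
// 0 means completely different. This simple distance-based formula
// is an assumption for illustration, not the actual tool's method.
function similarity(pctA, pctB) {
  const tags = new Set([...Object.keys(pctA), ...Object.keys(pctB)]);
  let diff = 0;
  for (const tag of tags) {
    diff += Math.abs((pctA[tag] || 0) - (pctB[tag] || 0));
  }
  // "diff" ranges from 0 (identical) to 200 (no overlap at all)
  return 10 * (1 - diff / 200);
}

similarity({ Engineering: 75, Arts: 25 }, { Engineering: 75, Arts: 25 }); // → 10
similarity({ Engineering: 75, Arts: 25 }, { Arts: 100 });                 // → 2.5
```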

Making unknown users known

With that information, maybe now we can start assuming characteristics for the users:

John and the anonymous user did not specify their jobs, but based on the scores we got, we can assume that:

1- John is probably an engineer (he is very similar to Lynda, who is an engineer), but there is a fairly small chance that he is actually an artist (he is moderately similar to Mary, but very different from Mark, and they are both artists).

2- The anonymous user is probably an artist: his browsing history is very similar to Mark's and moderately similar to Mary's, and both of them are artists; there is only a very small chance that he is an engineer, since Lynda (who is an engineer) has very little similarity to him.

Making recommendations

And we can also start recommending content to users based on their browsing history: by looking at what articles other people who are similar to them visited, we can take the articles that they read but our user didn't, and recommend them.

For example, with the user John: John visited the articles Nanomaterials, Nanoscale Sensors, Challenges of Nanotechnology, and Origins of Music.

1- John is very similar to Lynda, who visited the articles Nanomaterials, Nanoscale Sensors, Challenges of Nanotechnology, Milling Processes, 3D Printing for Manufacturing, The Automotive Industry, Music Lessons, and Origins of Music. Notice that Lynda visited some articles that John didn't: Milling Processes, 3D Printing for Manufacturing, The Automotive Industry, and Music Lessons. Since John and Lynda are so similar, we can recommend these articles to John with a very high priority and assume he will be interested in them.

2- John and Mary are moderately similar, and Mary visited an article that John did not visit: Music Lessons. Although John is less similar to Mary when compared to Lynda, he has some interest in arts, and we can recommend this article to him with a lower priority.

3- John is very different from Mark and the anonymous user, but we can recommend some articles from them too, just with a much lower priority.
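The steps above can be sketched as a simple set difference weighted by the similarity scores. The function and field names here are assumptions for illustration, not the actual tool's API:

```javascript
// Sketch of the recommendation idea: take the articles a similar user
// read that our user did not, and weight them by the similarity score.
function recommend(user, others, scores) {
  const seen = new Set(user.articlesVisited.map(a => a.title));
  const recommendations = [];
  for (const other of others) {
    for (const article of other.articlesVisited) {
      if (!seen.has(article.title)) {
        seen.add(article.title); // avoid recommending the same article twice
        recommendations.push({ title: article.title, priority: scores[other.name] });
      }
    }
  }
  // Highest-priority suggestions first
  return recommendations.sort((a, b) => b.priority - a.priority);
}

const john  = { name: "John",  articlesVisited: [{ tag: "Arts", title: "Origins of Music" }] };
const lynda = { name: "Lynda", articlesVisited: [
  { tag: "Arts", title: "Music Lessons" },
  { tag: "Arts", title: "Origins of Music" },
] };

recommend(john, [lynda], { Lynda: 10 });
// → [{ title: "Music Lessons", priority: 10 }]
```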

Identity dilemma: what the users say they are, and what they look like

Suppose we have another user called Jane with this profile:

{
    name: "Jane",
    job: "Artist",
    articlesVisited: [
        {
            tag: "Engineering",
            title: "Nanomaterials"
        }, {
            tag: "Engineering",
            title: "Nanoscale Sensors"
        }, {
            tag: "Engineering",
            title: "Challenges of Nanotechnology"
        }, {
            tag: "Engineering",
            title: "Milling Processes"
        }, {
            tag: "Engineering",
            title: "3D Printing for Manufacturing"
        }
    ]
}

As you can see, we are in trouble: Jane says she is an artist, but her browsing history says she is an engineer. Do we recommend articles that artists like Mark visited, or do we recommend articles that engineers like Lynda visited?

We could separate these two identities of the user into "what the user says they are" and "what the user actually looks like" - I am going to call the first one a persona and the second one a profile.

There is no right answer to this, but there are some ways out of this problem. I think the easiest ones are:

1- What degree of certainty do we have when we say that Jane looks like an engineer? Just because she read one article about engineering doesn't mean she actually looks like an engineer; but if she read 100 articles, all of them about engineering, then it's much safer to say she looks like an engineer.

2- We could reserve a percentage of the articles recommended to the persona, and another to the profile.
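Option 2 could be as simple as splitting the recommendation slots between the two identities. This is a hypothetical sketch - the function name and the 70/30 default split are arbitrary assumptions:

```javascript
// Hypothetical sketch: reserve a share of the recommended articles for the
// persona ("what the user says they are") and fill the rest from the
// profile ("what the user looks like"). The 70/30 split is arbitrary.
function blendRecommendations(personaRecs, profileRecs, total, personaShare = 0.7) {
  const fromPersona = Math.round(total * personaShare);
  return [
    ...personaRecs.slice(0, fromPersona),
    ...profileRecs.slice(0, total - fromPersona),
  ];
}

// For Jane: artist articles for her persona, engineering for her profile
blendRecommendations(["Origins of Music", "Music Lessons"],
                     ["Milling Processes", "The Automotive Industry"], 3);
// → ["Origins of Music", "Music Lessons", "Milling Processes"]
```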



In conclusion, grouping users into clusters in order to assume their characteristics and recommend articles is not necessarily hard, but the algorithms that make the calculations must be finely tuned. Just because one clustering method works for a website doesn't mean it will work for another one; it is important to make it easy for clients to change their algorithms as needed, as well as to provide ways to test the performance of these methods (with some A/B testing, for instance).

cdot algorithms machine learning 

Concurrent functions with Go using channels

I've always been a big fan of new technologies and languages: there is always something new and interesting in them. For the past weeks, I've been experimenting with Go, a free and open-source programming language made by Google. It is imperative, strongly typed, and compiled, with a syntax that reminds me of C; just like C, it has pointers, but unlike C, it also has a garbage collector.

One thing that really caught my attention was how concurrent (asynchronous) functions can be made and synchronized: they are called goroutines, described as "light-weight threads of execution", and they can be synchronized using channels - First In, First Out queues. In this post, I want to show one example of how goroutines and channels work.

This example is a little application that simulates a pizzeria: we will have a line that makes the sauce, a line that makes the dough, and a line that prepares the toppings; after all three lines have their ingredients ready, the pizza is assembled and baked, and then a receipt is printed.

In normal synchronous programming, we would make the sauce first, then the dough, then prepare the toppings, then assemble and bake, and finally print the receipt. In asynchronous programming, however, we can fire the functions that make the sauce, the dough, and the toppings all at the same time; once all three steps are done, we can assemble the pizza and bake it. In asynchronous JavaScript, we would start the three first functions and, when the last one finished, execute a callback to assemble and bake the pizza, and then another callback to print the receipt. In Go, we can use goroutines and channels for this task.

To keep things simple, let's suppose that every line (the line that makes the dough, for example) can work on several orders at the same time. For example: they can make the pizza dough for 3 clients at the same time.

First, I'll declare the name of my package and make the imports for the modules I need:

package main

import (
    "fmt"
    "math/rand"
    "time"
    "sync"
)

Now I will make a struct for a Pizza (as far as I know, there are no classes in Go, only structs; but you can attach methods to them and get something similar to classes). A pizza will have a client (the name of the client, a string), some details about it (how the dough was made, how the sauce was made, etc. - they will be channels of strings; I will show you why later), some boolean values that indicate which steps were completed (also channels), and a function that can be called when everything is ready and the pizza is finished.

type Pizza struct {
    client  string
    details struct {
        dough     chan string
        sauce     chan string
        toppings  chan string
        assembled chan string
    }
    completed struct {
        dough     chan bool
        sauce     chan bool
        toppings  chan bool
        assembled chan bool
    }
    Done func()
}

I also made a little function that will give me a random integer so every step will take a different amount of time to be completed:

func randomTime() time.Duration {
    r := time.Duration(rand.Int31n(9))
    return time.Second * r
}

Now, the three functions that will be run at the same time: makeDough, makeSauce and prepareToppings. They are just normal functions; the difference is how they get executed. This is what makeDough looks like:

// This function receives the name of the client, a string channel for it
// to record a message (details), and a bool channel for it to record when the dough
// is ready
func makeDough(client string, message chan<- string, completed chan<- bool) {
    fmt.Print("Starting making pizza dough for #", client, "\n")

    // We take a random amount of time for the function to be completed
    time.Sleep(randomTime())

    fmt.Print("Finished pizza dough for #", client, "\n")

    // Recording the message and "true" in the channels
    // You can imagine the channel as being "cout" from C++
    // and the <- operator being "<<": you are recording
    // something into the channel
    message <- "Pizza Dough"
    completed <- true
}

And here are the other functions:

func makeSauce(client string, message chan<- string, completed chan<- bool) {
    fmt.Print("Starting making pizza sauce for #", client, "\n")

    time.Sleep(randomTime())

    fmt.Print("Finished pizza sauce for #", client, "\n")

    message <- "Pizza Sauce"
    completed <- true
}

func prepareToppings(client string, message chan<- string, completed chan<- bool) {
    fmt.Print("Starting preparing pizza toppings for #", client, "\n")

    time.Sleep(randomTime())

    fmt.Print("Finished preparing pizza toppings for #", client, "\n")

    message <- "Pizza Toppings"
    completed <- true
}

Simple enough, right? Channels are like queues: you push data in, and you can pop it later. But here is the catch: a channel will block execution until the other "side" is ready. In other words, if you push something into a channel, the function blocks until somebody pops it - and it works the opposite way too: if you try to pop something from an empty channel, the function blocks until there is something there to be popped. This can be used to pause/unpause goroutines.

Now, if you go back to the functions that I described, you can imagine what is going to happen in this case:

func prepareToppings(client string, message chan<- string, completed chan<- bool) {
    fmt.Print("Starting preparing pizza toppings for #", client, "\n")

    time.Sleep(randomTime())

    fmt.Print("Finished preparing pizza toppings for #", client, "\n")

    // The following line will be executed and then the goroutine will stop: it will
    // only continue when we remove the string from the channel
    message <- "Pizza Toppings"

    // This line will only be executed when the message "Pizza toppings" is
    // removed from the channel above
    completed <- true
}

So, to make sure we don't reach a deadlock, we must make sure that the channels are properly emptied and closed: I will show how to extract the data from a channel and how to close them in this next function. This function will listen to the "completed" channels to make sure the sauce, the dough, and the toppings are prepared - we can only assemble and bake the pizza if we have these three parts ready:

func assembleAndBake(pizza Pizza) {

    // Here I am extracting the boolean from the "dough" channel. Since
    // we don't care about the values, we just discard them
    // Notice that the execution will be blocked here until there is a "completed" value for
    // dough that we can pop; in other words: the function will not execute past this
    // until we get a boolean from the "makeDough" function
    <- pizza.completed.dough

    // After we got the message, we can close the channel to prevent any more writing into it
    close(pizza.completed.dough)

    <- pizza.completed.sauce
    close(pizza.completed.sauce)

    <- pizza.completed.toppings
    close(pizza.completed.toppings)

    fmt.Print("Starting assembling and baking pizza for #", pizza.client, "\n")

    time.Sleep(randomTime())

    fmt.Print("Finished assembling and baking pizza for #", pizza.client, "\n")

    // If we reached here, it means that the pizza is now assembled and baked: we
    // record a message and a boolean for this event in the channels
    pizza.details.assembled <- "Assembling and baking"
    pizza.completed.assembled <- true
}

Now I am going to receive the details (messages) in my function to print the receipt - I want to print the messages in the receipts. This is what my function looks like:

// This function receives the Pizza object
func printReceipt(pizza Pizza) {

    // "defer" tells the function to execute this line only when the function finishes: it will
    // tell the program that this pizza is done and the "chain" is over for this client.
    // I will explain what this part does in more details later - I need to show you the 
    // rest of my script first. For now, just ignore it.
    defer pizza.Done()

    // Here I am taking whatever message we have in the details for the dough and
    // recording it in a variable called 'msg1'.
    msg1 := <- pizza.details.dough
    close(pizza.details.dough)

    msg2 := <- pizza.details.sauce
    close(pizza.details.sauce)

    msg3 := <- pizza.details.toppings
    close(pizza.details.toppings)

    msg4 := <- pizza.details.assembled
    close(pizza.details.assembled)

    // Here I am popping the boolean value from the "completed" field of the pizza. Since
    // I don't really care what the value is, I am not saving it anywhere
    <- pizza.completed.assembled
    close(pizza.completed.assembled)

    // If we reached here, it means that the pizza was assembled and baked - we can now
    // print the receipt
    fmt.Print("--------------------------------------------------\n" +
              "Receipt for #", pizza.client, ":\n" +
              ". ", msg1, "\n" +
              ". ", msg2, "\n" +
              ". ", msg3, "\n" +
              ". ", msg4, "\n" +
              "--------------------------------------------------\n")
}

Alright, these are the functions we need to assemble the pizza, now we just need the main function.

The main function will be responsible for launching the goroutines for three different clients: John, Alan and Paul. However, it also needs to wait for their orders to finish before the process exits - how can we ensure this will happen?

To make sure our process will not exit before everything is done, we can use a WaitGroup: imagine it as a counter of groups you want to wait for (in this case, three: one per client); every time a group is completed, it calls waitGroup.Done() - when all of them have called it, the WaitGroup is finished.

This is what my main function looks like:

func main() {
    // Seeding a random time
    rand.Seed(time.Now().UTC().UnixNano())

    // Creating a wait group
    var wg sync.WaitGroup

    // Making a list of clients
    clients := []string {
        "John",
        "Alan",
        "Paul",
    }

    // Getting the number of clients
    clientsNo := len(clients)

    // Looping through every client
    for i := 0; i < clientsNo; i++ {

        // For every client, we add one more group in the WaitGroup
        wg.Add(1)

        // Instantiating a new Pizza for the client
        pizza := Pizza{}
        pizza.client = clients[i]
        pizza.details.dough     = make(chan string)
        pizza.details.sauce     = make(chan string)
        pizza.details.toppings  = make(chan string)
        pizza.details.assembled = make(chan string)
        pizza.completed.dough     = make(chan bool)
        pizza.completed.sauce     = make(chan bool)
        pizza.completed.toppings  = make(chan bool)
        pizza.completed.assembled = make(chan bool)

        // This part is important: remember that line that I "deferred" a method call for
        // Done()? This is where it comes from: when the pizza is done, it tells the
        // WaitGroup that there is one less group to wait for
        pizza.Done = wg.Done

        // Here we are launching the asynchronous functions: the "go" prefix specifies
        // that these are not ordinary functions, but goroutines. To these routines, I am
        // passing the channels and other data they need
        go makeDough(pizza.client,       pizza.details.dough,    pizza.completed.dough)
        go makeSauce(pizza.client,       pizza.details.sauce,    pizza.completed.sauce)
        go prepareToppings(pizza.client, pizza.details.toppings, pizza.completed.toppings)
        go assembleAndBake(pizza)
        go printReceipt(pizza)

    }

    // Here we are telling the WaitGroup to wait until all the groups are done
    wg.Wait()

}

These are the outputs:

For only one client (Paul)

Starting preparing pizza toppings for #Paul
Starting making pizza sauce for #Paul
Starting making pizza dough for #Paul
Finished pizza dough for #Paul
Finished pizza sauce for #Paul
Finished preparing pizza toppings for #Paul
Starting assembling and baking pizza for #Paul
Finished assembling and baking pizza for #Paul
--------------------------------------------------
Receipt for #Paul:
. Pizza Dough
. Pizza Sauce
. Pizza Toppings
. Assembling and baking
--------------------------------------------------

For all three clients

Starting making pizza dough for #Alan
Starting preparing pizza toppings for #John
Finished preparing pizza toppings for #John
Starting preparing pizza toppings for #Alan
Starting making pizza sauce for #Alan
Starting making pizza sauce for #Paul
Starting making pizza dough for #Paul
Starting preparing pizza toppings for #Paul
Starting making pizza sauce for #John
Starting making pizza dough for #John
Finished pizza dough for #Alan
Finished pizza dough for #Paul
Finished preparing pizza toppings for #Alan
Finished pizza dough for #John
Finished pizza sauce for #Paul
Finished preparing pizza toppings for #Paul
Starting assembling and baking pizza for #Paul
Finished pizza sauce for #Alan
Starting assembling and baking pizza for #Alan
Finished pizza sauce for #John
Starting assembling and baking pizza for #John
Finished assembling and baking pizza for #Alan
--------------------------------------------------
Receipt for #Alan:
. Pizza Dough
. Pizza Sauce
. Pizza Toppings
. Assembling and baking
--------------------------------------------------
Finished assembling and baking pizza for #John
--------------------------------------------------
Receipt for #John:
. Pizza Dough
. Pizza Sauce
. Pizza Toppings
. Assembling and baking
--------------------------------------------------
Finished assembling and baking pizza for #Paul
--------------------------------------------------
Receipt for #Paul:
. Pizza Dough
. Pizza Sauce
. Pizza Toppings
. Assembling and baking
--------------------------------------------------

cdot go 

A problem with Redux: how to prevent the state from growing forever

In my last blog post I explained how we used Redux to organize the data flow in an application; however, Redux has a rare problem that doesn't seem to have a simple solution (by simple, I mean not having to install another 26 libraries): as we create new states, the old states get archived, and this can mean several megabytes of data stored in the client.

Now, there are good reasons why this should not be a problem for 99% of the applications:

  1. When we make a new state, we create a shallow copy of the previous one, not a deep copy. This means that the references will still point to the same data, except for the ones that changed. In other words, if your old state took 200kb and your new state created another 1kb, the total amount will be 201kb, and not 401kb.
  2. Most websites don't store that much data, so even if you use the same single-page app for days, you'll likely not even reach 1MB
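Point 1 is easy to verify with the object spread operator (a plain JavaScript sketch, not tied to Redux itself):

```javascript
// Demonstrating the shallow copy: the unchanged "user" branch is shared
// between the old and the new state, so memory is not duplicated.
const oldState = {
  user: { name: "John" }, // imagine this branch is large
  counter: 0,
};

const newState = { ...oldState, counter: 1 };

newState.user === oldState.user; // → true (same reference, shared data)
newState === oldState;           // → false (the state object itself is new)
```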

Despite being rare, it is a problem. So how can we solve it?

I'll explain it with an example: an application in which you can turn a lightbulb on and off, and also select its colour (red, green, and blue). It also has a little side effect: if the lamp is off and you change its colour, it turns on.

I will first make an application using only React + Redux, and then, I will use Flux (a paradigm similar to Redux, but that only stores one state instead of the whole archive) to solve the problem.

This is how we could build this application with React + Redux:

Observations:

I will make this application in a single file, so if you simply copy and paste the code below in order, it should work.

This is what my imports look like:

import React from 'react';
import ReactDom from 'react-dom';
import { Provider, connect } from 'react-redux';
import { createStore, combineReducers } from 'redux';

#1 Planning the state

We should also try to imagine what the state would look like in order to plan our reducers. Since we need to store the colour and power of the lightbulb, we could model our state this way:

// This is not actual code, just a representation of what the state would look like
// You do not need this in the file
{
  isOn: false,
  colour: '#FF0000'
}

This means that we will have 2 reducers: isOn and colour.

#2 Making the actions to toggle the light on/off and also the colour

// Action types
const TOGGLE_LIGHT = 'TOGGLE_LIGHT';
const CHANGE_COLOUR = 'CHANGE_COLOUR';


// Actions
const actions = {

  // Receives nothing
  toggleLight() {
    return {
      type: TOGGLE_LIGHT,
    };
  },

  // Receives a value for the new colour
  changeColour(value) {
    return {
      type: CHANGE_COLOUR,
      payload: value,
    };
  },

};

#3 Now we create the reducers

// Reducers
const reducers = {

  // Reducer for the 'isOn' attribute
  isOn(state = false, action) {
    const type = action.type;

    switch(type) {
      case TOGGLE_LIGHT:
        return !state;

      // When the user changes a colour, we turn
      // on the lights
      case CHANGE_COLOUR:
        return true;

      default:
        return state;
    }
  },

  // Reducer for the 'colour' attribute. The default
  // colour will be red
  colour(state = '#FF0000', action) {
    const type = action.type;
    const payload = action.payload;

    switch(type) {
      case CHANGE_COLOUR:
        return payload;

      default:
        return state;
    }
  },

};

#4 Combine all the reducers into a root reducer

Now that we have all the reducers, we must put them together into a single one:

// The root reducer groups all the other reducers together
const rootReducer = combineReducers({
  isOn: reducers.isOn,
  colour: reducers.colour,
});

#5 Create the store

Now we create the store and pass the root reducer:

// Store
// The resulting state that we get from the reducers
// would look like this, if the light was turned on
// and the colour was green:
// { isOn: true, colour: '#00FF00' }
const store = createStore(rootReducer);

#6 Create the React component

In this case, I am using a shorthand for creating React components: it receives the props isOn, colour, toggle (function), and changeColour (function):

// React component for the lightbulb
const Lightbulb = ({
  isOn,
  colour,
  toggle,
  changeColour,
}) => (
  <div>
    {isOn ? (
      <span style={{ color: colour }}>ON</span>
    ) : (
      <span>OFF</span>
    )}
    <br />
    <button onClick={toggle}>Turn {isOn ? 'off' : 'on'}!</button>
    <button onClick={() => changeColour('#0000FF')}>Blue light</button>
    <button onClick={() => changeColour('#00FF00')}>Green light</button>
    <button onClick={() => changeColour('#FF0000')}>Red light</button>
  </div>
);

#7 Bind the React component to Redux

Here I am using the connect function provided by Redux to connect the component to our state and dispatcher:

// Element to be rendered (Lightbulb connected to Redux)
const LightbulbElement = (() => {

  const mapStateToProps = (state) => ({
    isOn: state.isOn,
    colour: state.colour,
  });

  const mapDispatchToProps = (dispatch) => ({
    toggle() {
      dispatch(actions.toggleLight());
    },

    changeColour(colour) {
      dispatch(actions.changeColour(colour));
    },
  });

  return connect(
    mapStateToProps,
    mapDispatchToProps,
  )(Lightbulb);

})();

#8 Make the Application component

The Application component will be the main component: we will use the Provider component from react-redux in order to bind the store:

// Application (the element with redux bound to the store)
const Application = (
  <Provider store={store}>
    <LightbulbElement />
  </Provider>
);

#9 Rendering

Now we can render the Application component in the dom:

// Rendering the app in the #app div
ReactDom.render(Application, document.getElementById("app"));

Done!

Ok, here is the problem: what if "colour" was actually a string of 20MB? If you don't care about the old versions, you probably should not be archiving them. To solve this problem, we could implement our own separate store only for the colour; this store would be responsible for keeping only the newest version of the string and notifying the components when it gets changed.

This is very similar to what Flux does (another pattern, like Redux), so I am going to use it in my solution. Ok, I know I said I did not want to "use 26 more libraries", and I do recommend building your own methods for this; in this case, however, I am going to use Flux and its libraries because 1- this is just a quick explanation, 2- it's fun, 3- I feel like doing it. Sorry.

Observations

Again, this application will be in a single file, so you can just copy and paste the code.

My imports:

import React from 'react';
import ReactDom from 'react-dom';
import { Provider, connect } from 'react-redux';
import { createStore, combineReducers } from 'redux';

// Two new imports:
import { Dispatcher } from 'flux';
import { EventEmitter } from 'events';

In this case, the state will be different: we will no longer be holding the colour, only the isOn attribute:

// This is not actual code, just a representation of what the state would look like
// You do not need this in the file
{
  isOn: false
}

#1 Creating the Flux dispatcher

For Flux, we need to instantiate our own dispatcher:

// Flux dispatcher
const dispatcher = new Dispatcher();

#2 Making the actions to toggle the light on/off and also the colour

The actions are going to be almost identical, with one exception: the action for changing the colour will be returned and dispatched to Flux:

// Action types
const TOGGLE_LIGHT = 'TOGGLE_LIGHT';
const CHANGE_COLOUR = 'CHANGE_COLOUR';

// Actions
const actions = {

  toggleLight() {
    return {
      type: TOGGLE_LIGHT,
    };
  },

  changeColour(colour) {

    // Action to be returned and dispatched
    const act = {
      type: CHANGE_COLOUR,
      payload: colour,
    };

    // Flux dispatch
    dispatcher.dispatch(act);

    return act;
  },

};

#3 Now we create the reducers

Since we are not storing the colour in the Redux state anymore, we will only have one reducer: isOn. We will listen to the CHANGE_COLOUR action, but only to toggle the lights on if we change them - the new colour will be ignored.

// Reducers
const reducers = {

  // Reducer for the 'isOn' attribute
  isOn(state = false, action) {
    const type = action.type;

    switch(type) {
      case TOGGLE_LIGHT:
        return !state;

      // When the user changes a colour, we turn
      // the lights on
      case CHANGE_COLOUR:
        return true;

      default:
        return state;
    }
  },

};
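Because reducers are pure functions, we can sanity-check this behaviour without Redux or a store at all. The sketch below re-declares the isOn reducer locally so it runs standalone:

```javascript
// Action types (same strings as above, re-declared so this runs standalone)
const TOGGLE_LIGHT = 'TOGGLE_LIGHT';
const CHANGE_COLOUR = 'CHANGE_COLOUR';

// The 'isOn' reducer, as a plain function
function isOn(state = false, action) {
  switch (action.type) {
    case TOGGLE_LIGHT: return !state;
    case CHANGE_COLOUR: return true; // picking a colour turns the light on
    default: return state;
  }
}

let state = isOn(undefined, { type: '@@INIT' });                  // initial state: false
state = isOn(state, { type: TOGGLE_LIGHT });                      // toggled on: true
state = isOn(state, { type: CHANGE_COLOUR, payload: '#00FF00' }); // stays on: true
state = isOn(state, { type: TOGGLE_LIGHT });                      // toggled off: false
state = isOn(state, { type: CHANGE_COLOUR, payload: '#0000FF' }); // back on: true
console.log(state); // true
```

Notice that CHANGE_COLOUR always returns true rather than toggling: picking a colour while the light is already on leaves it on.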

#4 Creating the root reducer and the Redux store

These steps are almost the same, but now I only have one reducer:

// The root reducer groups all the other reducers together
const rootReducer = combineReducers({
  isOn: reducers.isOn,
});


// Redux Store
// The resulting state that we get from the reducers
// now only tracks whether the light is on; for example,
// with the light turned on:
// { isOn: true }
const store = createStore(rootReducer);

#5 Creating the Flux store to hold the colour

This part is new: this is where we will store the colour of the lightbulb. The store also provides a method for components to listen for changes (using an event emitter) and a method to set a new value.

// Flux store for the colour: the store can emit events, so we
// inherit methods from the EventEmitter
const colourStore = (() => {
  let cache = '#FF0000';

  return Object.assign({}, EventEmitter.prototype, {

    // Getters and setters
    getColour() { return cache; },
    _setColour(v) { cache = v; },

  });
})();


// Registering the Flux colour store in the dispatcher: when we
// dispatch an action, we'll check if it is of the right type, and
// then we'll set the colour in the store
dispatcher.register((action) => {
  switch(action.type) {

    case CHANGE_COLOUR:
      colourStore._setColour(action.payload);

      // When the store changes, we emit an event to notify
      // the components that are subscribed
      colourStore.emit('change');
      break;

  }
});

#6 Creating the React component

This React component will not be as simple as the previous one: it has local state, and that state carries the colour of the lightbulb. When the component is created, we get the initial colour from the store (colourStore.getColour()) and subscribe to it (colourStore.on('change', () => { ... })): whenever the store changes, we read the new colour and update the state with this.setState.

// React component for the lightbulb
class Lightbulb extends React.Component {

  constructor(props) {
    super(props);

    // Getting the initial state
    this.state = { colour: colourStore.getColour() };

    // Keeping a reference to the listener so we can remove it later
    this.onStoreChange = () => {
      this.setState({ colour: colourStore.getColour() });
    };
  }

  // Listening for changes in the store: we update the
  // state whenever it changes
  componentDidMount() {
    colourStore.on('change', this.onStoreChange);
  }

  // Removing the listener on unmount, so we don't leak
  // subscriptions or set state on an unmounted component
  componentWillUnmount() {
    colourStore.removeListener('change', this.onStoreChange);
  }

  render() {

    // We are not getting the colour from the props anymore
    const {
      isOn,
      toggle,
      changeColour,
    } = this.props;

    return (
      <div>
        {isOn ? (
          <span style={{ color: this.state.colour }}>ON</span>
        ) : (
          <span>OFF</span>
        )}
        <br />
        <button onClick={toggle}>Turn {isOn ? 'off' : 'on'}!</button>
        <button onClick={() => changeColour('#0000FF')}>Blue light</button>
        <button onClick={() => changeColour('#00FF00')}>Green light</button>
        <button onClick={() => changeColour('#FF0000')}>Red light</button>
      </div>
    );
  }
}

#7 Binding the React component to Redux, making the Application component, and Rendering

Everything is the same as before, except that we no longer pass the colour as a prop:

// Element to be rendered (Lightbulb connected to Redux)
const LightbulbElement = (() => {

  const mapStateToProps = (state) => ({
    isOn: state.isOn,
  });

  const mapDispatchToProps = (dispatch) => ({
    toggle() {
      dispatch(actions.toggleLight());
    },

    changeColour(colour) {
      dispatch(actions.changeColour(colour));
    },
  });

  return connect(
    mapStateToProps,
    mapDispatchToProps,
  )(Lightbulb);

})();


// Application (the element with redux bound to the store)
const Application = (
  <Provider store={store}>
    <LightbulbElement />
  </Provider>
);


// Rendering the app in the #app div
ReactDom.render(Application, document.getElementById("app"));

Done! The isOn property of the lightbulb lives in the Redux store, so its changes flow through Redux as usual, but the colour is kept in the Flux store and will not be part of the Redux state history.

cdot redux javascript react flux