Category Archives: Uncategorized

PCF8574 Arduino Keypad With Interrupt

Interfacing with a keypad takes a lot of I/O, doesn’t it? There isn’t much room left to do anything else on an Arduino or ESP8266 after 7 or 8 pins have been taken for a keypad. Luckily there’s a cheap solution: an I2C I/O expander, in this case the PCF8574AP, the ‘A’ being the alternate-address version.

The first step, if it isn’t supplied with your keypad, is to figure out which pins connect to which keys. This is nearly always done in a grid configuration of rows and columns. To read this configuration you set all the pins high (input pull-up) except for a single column, drive that column low, and see which row is pulled low.

To figure out my keypad layout I attached a multimeter in continuity (buzzer) mode to pairs of pins on the keypad and noted which pins corresponded to which button. Then it’s just a matter of finding the common theme and drawing up your grid as below.

Keypad pin out calculator

Most keypads don’t tend to have such a complicated pin-out and will often group the columns together.

Next step is to wire it all together, for this you will need:

  1. 2x 4.7kOhm Resistors
  2. Arduino Uno/Due
  3. Keypad
  4. PCF8574
  5. Misc cables
  6. (Optional) Breadboard

Then wire it up to the PCF8574 as follows:

PCF8574 Pin out

PCF8574 Pin   Arduino Pin                         Keypad Pin
SDA/15        A4 (and 5V through 4.7k resistor)   -
SCL/14        A5 (and 5V through 4.7k resistor)   -
INT/13        2                                   -
VDD/16        5V                                  -
P[0..6]       -                                   [0..6]

After that it should look something like the following:

PCF8574, keypad, and Arduino connected together

Then the last step, the code:

#include <Arduino.h>
#include <Wire.h>

// This is the address for the 'A' variant with all address pins not connected (high).
// Check the datasheet for other addresses.
#define PCF_ADDR (0x27)
#define INTERRUPT_PIN (2)
#define DEBOUNCE_DELAY (200) // milliseconds

// Shared with the interrupt handler, so they must be volatile.
volatile bool keyPressed = false;
volatile uint32_t lastPress = 0;

// Interrupt called when the PCF interrupt is fired.
void keypress() {
  if (millis() - lastPress > DEBOUNCE_DELAY) {
    keyPressed = true;
  }
}

// The PCF pin bit for each column and row of the matrix below, P7 down to P0.
uint8_t kColumns[] = { B00000100, B00000001, B00010000 };
uint8_t kRows[] = { B00000010, B01000000, B00100000, B00001000 };
char kMap[][3] = {
  { '1', '2', '3' },
  { '4', '5', '6' },
  { '7', '8', '9' },
  { '*', '0', '#' }
};
// This is the value which is sent to the chip to listen for all
// key presses without checking each column.
uint8_t intMask = 0;

// The PCF chip only has one I2C address, writing a byte to it sets the state of the
// outputs. If 1 is written then they're pulled up with a weak resistor, if 0
// is written then a transistor strongly pulls it down. This means if connecting an LED
// it's best to have the chip as the ground terminal.
// If passing a byte value to this, eg B11111111 the left most bit is P7, the right
// most P0.
void write8(uint8_t val) {
  int error = 0;
  // This will block indefinitely until it works. This would be better done with a
  // timeout, but this is simpler. If the I2C bus isn't noisy then this isn't necessary.
  do {
    Wire.beginTransmission(PCF_ADDR);
    Wire.write(val);
    error = Wire.endTransmission();
  } while (error);
}

uint8_t read8() {
  uint8_t count = 0;
  // Keep requesting until a whole byte arrives.
  do {
    count = Wire.requestFrom(PCF_ADDR, 1);
  } while (count != 1);
  return Wire.read();
}

void setup() {
  Serial.begin(9600);
  Wire.begin();

  // When a pin state changes on the PCF it pulls its interrupt pin low.
  // It's an open drain so an input pullup is necessary.
  pinMode(INTERRUPT_PIN, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(INTERRUPT_PIN), keypress, FALLING);

  // Calculate the interrupt mask described above: all columns low, everything
  // else pulled high, so any key press pulls a row low and fires the interrupt.
  for (uint8_t c = 0; c < sizeof(kColumns) / sizeof(uint8_t); c++) {
    intMask |= kColumns[c];
  }
  intMask = ~intMask;
  write8(intMask);
}

// This goes through each column and checks to see if a row is pulled low.
char getKey() {
  for (uint8_t c = 0; c < sizeof(kColumns) / sizeof(uint8_t); c++) {
    // Write everything high except the current column, pull that low.
    write8(~kColumns[c]);
    uint8_t val = read8();
    // Check if any of the rows have been pulled low.
    for (uint8_t r = 0; r < sizeof(kRows) / sizeof(uint8_t); r++) {
      if (~val & kRows[r]) {
        return kMap[r][c];
      }
    }
  }
  return '\0';
}

void loop() {
  if (keyPressed) {
    char key = getKey();
    // The key may not be read correctly if it was released mid-scan, so check the value.
    if (key != '\0') {
      Serial.println(key);
      lastPress = millis();
    }
    keyPressed = false;
    // After getKey this needs to be called to reset the pins for the next interrupt.
    write8(intMask);
  }
}

Once uploaded and everything is wired together, the sketch will print each pressed key to the serial console.

Video Streaming With Zebkit JS UI

Following on from my previous post: what if I now want to make a nice client-side, but web-based, application for playing videos? We can avoid using plain HTML for this because we don’t need any search-engine indexing, which means we get to use an HTML5 canvas and a UI framework to speed up development.

Zebkit happens to be my favourite Javascript canvas UI framework; it’s pretty quick to develop with and has most of the features of a standard desktop UI framework. What I found, though, is that the video streaming support is a bit lacklustre. Sure, the support is technically in there, but it made me long for the wonderful on-screen controls of the HTML5 video tag.

As far as I know we can’t use the video tag in full within a canvas. We could have a UI canvas element and a separate video element, but that’s just messy. And although the video panel included with Zebkit doesn’t support drawing overlays, luckily it has a very extensible API. So what we can do is extend the existing video panel, override the draw method, and throw in some simple controls, such as seeking and pausing. The result is below…
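The full plugin is on Github, but the core of the overlay is just geometry: mapping playback progress to the width of a seek bar, and mapping a click on the bar back to a seek time. A minimal sketch of that maths, with illustrative function names (not the plugin’s actual API):

```javascript
// Given the panel width and the video's current time, compute how much of a
// seek bar should be drawn filled. The overridden draw method would use this
// to paint the progress rectangle.
function seekBarState(panelWidth, currentTime, duration) {
  const progress = duration > 0 ? currentTime / duration : 0;
  return {
    progress: progress,
    filledWidth: Math.round(panelWidth * progress)
  };
}

// Map the x coordinate of a click on the bar back to a time to seek to,
// clamping clicks that land outside the bar.
function clickToTime(x, panelWidth, duration) {
  const clamped = Math.min(Math.max(x, 0), panelWidth);
  return (clamped / panelWidth) * duration;
}

// A 640px-wide bar, 30s into a 120s video: the bar is a quarter filled,
// and a click dead centre seeks to the 60 second mark.
console.log(seekBarState(640, 30, 120).filledWidth); // 160
console.log(clickToTime(320, 640, 120));             // 60
```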


The source for this Zebkit plugin can be found on Github, here.

HTML5 Split Video Streaming

Recently I’ve wanted to look into how video streaming of local content can be done with NodeJS and HTML5’s video tag. I’ve only found one example to base it on, here. That demo didn’t really cut it for me, though: for one, it grabbed the file, split it in client-side Javascript, and then appended it to the video, when ideally this would be done server side. It also has another bug where the split file isn’t appended at a time offset, and instead overwrites the original buffer.

I’ve created a demo that addresses these issues: a NodeJS server scans a videos directory, then transcodes and splits the videos on the fly with ffmpeg, using mse_webm_remuxer to fix the format of the resulting files in some magical way that makes them compatible with HTML5’s MediaSource.
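As a rough sketch of the transcode step, the server might build an ffmpeg invocation like the one below for each chunk: seek to the chunk’s offset and transcode 30 seconds to WebM. The exact flags the demo server uses aren’t shown here, so treat this argument list as an assumption rather than the demo’s real command:

```javascript
// Build the ffmpeg argument list for one chunk of a video: seek to the
// chunk's start, transcode chunkSeconds' worth to WebM (VP8 + Vorbis).
// This array would then be handed to child_process.spawn('ffmpeg', args).
function ffmpegChunkArgs(srcPath, outPath, chunkIndex, chunkSeconds) {
  const offset = chunkIndex * chunkSeconds;
  return [
    '-ss', String(offset),       // seek to the chunk start
    '-i', srcPath,               // source video
    '-t', String(chunkSeconds),  // transcode one chunk's worth
    '-c:v', 'libvpx',            // VP8 video for WebM
    '-c:a', 'libvorbis',         // Vorbis audio for WebM
    '-f', 'webm',
    outPath
  ];
}

console.log(ffmpegChunkArgs('videos/cat.mp4', 'cache/cat-2.webm', 2, 30).join(' '));
// -ss 60 -i videos/cat.mp4 -t 30 -c:v libvpx -c:a libvorbis -f webm cache/cat-2.webm
```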

The demo below shows the client side code, it can be found here. This is a client-side only demo of sorts, in that it mimics the NodeJS server statically because the server (being a demo) is incredibly insecure.

So, how the whole process works:

  1. The client makes a request to video/ to see what videos are available.
  2. This populates a list in the client; the user then clicks a video.
  3. Upon clicking a video, the client requests the video’s metadata, returning data such as duration, chunk size and information on the streams.
  4. The client then sets up the video, mostly the duration, and requests the first chunk.
  5. The server then gets the source video, transcodes the first 30 seconds to a webm format, runs mse_webm_remuxer on it, and sends the URL of this new file to the client.
  6. The client then buffers this chunk at the correct location in the video.
  7. As the client gets to different points new chunks are transcoded and buffered, allowing jumping around the video and transcoding the necessary sections on the fly.
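The client-side bookkeeping in the steps above boils down to a few small calculations: how many chunks exist for a given duration, which chunk covers the current playback time, and what time offset a chunk should be buffered at (for example, by setting the SourceBuffer’s timestampOffset before appending). A sketch with illustrative names, not the demo’s actual functions:

```javascript
// Total number of chunks for a video, given the chunk length from the metadata.
function chunkCount(durationSeconds, chunkSeconds) {
  return Math.ceil(durationSeconds / chunkSeconds);
}

// Which chunk covers playback time t.
function chunkForTime(t, chunkSeconds) {
  return Math.floor(t / chunkSeconds);
}

// The time offset chunk n should be buffered at, i.e. what the client would
// set as the SourceBuffer's timestampOffset before appending the chunk.
function chunkOffset(n, chunkSeconds) {
  return n * chunkSeconds;
}

// A 95s video in 30s chunks needs 4 chunks; t = 61s falls in chunk 2,
// which gets buffered starting at the 60 second mark.
console.log(chunkCount(95, 30));   // 4
console.log(chunkForTime(61, 30)); // 2
console.log(chunkOffset(2, 30));   // 60
```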

Video chunks? But why?

There are a few benefits to using video chunks, as YouTube does. The primary advantage for YouTube, I suspect, is bandwidth: by only buffering sections of the video they can control how much data is sent to the client in advance, which saves a heck of a lot of data compared to buffering the whole thing for viewers who quit partway through.

The reason for chunks in this demo, though, is timing. Transcoding a 320×240, 30-second length of video takes about 15 seconds, which means the user can start viewing fairly quickly rather than waiting for a full transcode to complete. By measuring the time it took to receive a chunk, the client can work out when to request the next one to ensure smooth playback.
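That timing heuristic can be sketched as a single check: if transcoding a chunk takes roughly some measured number of seconds, request the next chunk once less than that much buffered video (plus a safety margin) remains ahead of the playhead. The margin and names here are assumptions, not the demo’s exact logic:

```javascript
// Decide whether it's time to request the next chunk. bufferedEnd is the end
// of the buffered range the playhead is in (from video.buffered), and
// transcodeSeconds is the measured time the last chunk took to arrive.
function shouldRequestNext(currentTime, bufferedEnd, transcodeSeconds, margin) {
  return bufferedEnd - currentTime < transcodeSeconds + margin;
}

// 15s to transcode a chunk, with a 5s safety margin: with only 18s of video
// buffered ahead it's time to ask for the next chunk; with 30s it can wait.
console.log(shouldRequestNext(10, 28, 15, 5)); // true
console.log(shouldRequestNext(10, 40, 15, 5)); // false
```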

Source Code

The source code for this demo has been put up on my Github, here.