Author Archives: James Keppel

After playing with a few animations, I wanted to make some edits to them, and try converting some existing gifs into boot animations.

I found that when editing the series of images, if I pulled a few out, edited them, and put them back in, those particular frames did not play. And if I replaced all of the images, the phone wouldn't even boot.

After incorrectly assuming the culprit was the image format, size, or bit depth, the following column in 7-Zip tipped me off.


Ensuring the files were not compressed when adding them back into the zip worked. The zip only functions as a container for the images; any compression should be done at the image level.
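As a sanity check of the container idea, here is a minimal Python sketch (the directory layout and filenames are illustrative, not from the original animation) that rebuilds a bootanimation.zip with every entry stored rather than deflated:

```python
import os
import zipfile

def build_bootanimation(src_dir, out_zip):
    """Pack src_dir into out_zip with no compression (stored entries only).

    The zip acts purely as a container; the image frames inside are
    already compressed at the image level.
    """
    with zipfile.ZipFile(out_zip, "w", compression=zipfile.ZIP_STORED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in sorted(files):
                path = os.path.join(root, name)
                # Keep paths relative, e.g. part0/0001.png
                zf.write(path, arcname=os.path.relpath(path, src_dir))
```

In 7-Zip the equivalent is setting the compression level to Store when adding the files back.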

At about the same time a family member needed a new phone, and some old units at work were getting thrown out. So I thought I would grab them, and also go through my collection to see if I could update them all to the latest CyanogenMod or equivalent.


From Top Left

  • Stock Android 2.3 on a Galaxy SII.
  • Stock Android 4.1 on a Galaxy SII.
  • Modded AOSP 4.1 ROM on a My Touch 4G Slide, aka Doubleshot.
  • Stock Android 4.1 on a Galaxy Note (N7000).
  • Stock Android Desire Z (Vision).
  • Broken CM ROM on a Galaxy SII (was given to me in the pictured looping boot state).

The photo was taken after I had already loaded CyanogenMod on the Note & Doubleshot. While the Doubleshot was very straightforward, as I already had root & ClockworkMod on it, the Note was one of the nastier devices I've encountered.

Galaxy Note N7000

The CyanogenMod instructions require a bit of USB driver juggling via Zadig, which I have used before on USB SDR sticks & USB missile launchers alike. This allowed Heimdall, which let me avoid using Odin for this particular Samsung phone, to talk to the Note and inject the recovery image into the device.

Or at least it did eventually...

C:\>heimdall flash --kernel zImage --no-reboot

Heimdall v1.4.0

Copyright (c) 2010-2013, Benjamin Dobell, Glass Echidna

This software is provided free of charge. Copying and redistribution is

If you appreciate this software and you would like to support future
development please consider donating:

Initialising connection...
Detecting device...
Claiming interface...
Setting up interface...

Initialising protocol...
Protocol initialisation successful.

Beginning session...

Some devices may take up to 2 minutes to respond.
Please be patient!

Session begun.

Downloading device's PIT file...
PIT file download successful.

ERROR: Partition "kernel" does not exist in the specified PIT.
Ending session...
Releasing device interface...

This Note was bought running Android 2.3, and memories of the 3 or 4 tense reboots needed when I suggested its owner run the OTA Jelly Bean update popped into my head.

The CyanogenMod forums mentioned stating KERNEL in uppercase. I gave that a try and was told:

Initialising connection...
Detecting device...
Claiming interface...
Setting up interface...

Initialising protocol...
ERROR: Protocol initialisation failed!

Releasing device interface...

As this seemed to error before even connecting, let alone looking for the partition, I assumed this was the downloader-getting-stuck issue in the How to Guide's tips & tricks section.

Persistence paid off: a reboot of both the PC and the device got the flash through.

However, while Cyanogenmod booted without a hitch, from the depths of ROM hell spawned my next adversary...

Yellow Triangle on Boot

The yellow triangle with a black exclamation mark pops up on the kernel boot screen when the device detects a modification to the kernel. It is obviously a feature to flag warranty repairs on rooted devices, but while this phone was long out of warranty, it was just plain unsightly, so I wanted to get rid of it.

Luckily for me Chainfire, one of the most respected developers in the community, has released a tool to fix this: Triangle Away.

While there is no shortage of warnings, are-you-sure's, confirmation dialogs, and statements telling you in no uncertain terms that it can brick your phone, one caught my eye: advice that SuperSU, Chainfire's superuser app, is recommended. While SuperSU is a bit more feature rich, CM's open source implementation does the job, and allows it to ship with the ROM.

However, I was not going to risk a brick this far in. Just installing SuperSU did not meet the criteria, and Triangle Away still used the CM root app when launching. After flashing SuperSU as a zip via recovery, however, the dialog no longer appeared.

After a few tense pauses during the process, my Triangle was, in fact, away.




After a friend told me about their own SDR setup, I thought I would make the sub-$10 investment and have a tinker myself.

While I have never been into the HAM radio scene, at this low cost, having the ability to scan a wide range of frequencies via my desktop was enough to get my attention.

The two most common entry-level USB SDRs, commonly sold as cheap USB DVB-T dongles, both sport an RTL2832U chipset. Older references on the web point to the Elonics E4000 tuner, now rare enough to command an asking price 4 times what it was 24 months ago at the places that still have stock, while the more common and newer option is the R820T tuner.

While it would be some time until I would be at a level where I would notice a difference between one tuner and the other, outside of the slightly limited frequency range of the E4000 (52MHz - 1100MHz & 1250MHz - 2200MHz) compared to the R820T (24MHz - 1770MHz), I wanted to try for an E4000, as my friend had the R820T.

Selling R820T's as E4000's

Alibaba has a ton of these, or at least claims to, of which I got two of the variety below. I'm not sure of the exact sellers, as I gave a few links to my wife for birthday ideas a few months ago, though I know she got them from 2 different vendors.

Some are boxed in the Digital Energy Branded Mini Digital TV Stick, sporting the text DVB-T+DAB+FM.


The hardware ID PID 2838 matched the E4000 according to this SDR wiki.


However, closer investigation shows this was in fact a R820T.

Here is the PCB, case & antenna of my sticks.



And here is one I found on Superkuh's blog, which he stated he purchased in early 2012.

Notice the PCB Layout Differences.

Close up of the two Tuner Chips.

E4000 on left, R820T on right


It seems this enclosure used to hold an E4000 years ago, and Alibaba vendors are selling the newer sticks as E4000's, which are more desirable for those wanting to utilise the higher frequencies.

Not a big deal; I kind of thought it was too good to be true to get an E4000 via Alibaba at an R820T price, and given the current going price of an E4000 ($65 at the time of this post), I wasn't out any more than I would have been sourcing a pair of R820T's.

There is an MCX plug on the side, which resulted in the purchase of an MCX to coaxial adapter that will be arriving soon, though some USB models have an RF coaxial connector instead.


After the un-boxing, it was time to try out some software.

Windows SDR Software

There are plenty of blogs and pages that go into the details of installing the USB drivers, setting up SDR# (SDR Sharp) and the add-ons needed for the RTL2832U, and a ton of other plugins, as well as setup for another common entry-level SDR package, HDSDR.

While I am lucky to get an FM signal at all thanks to the mini antenna, I do manage to get the 3 main local FM stations in SDR#.


Similar results on HDSDR.


Not all too impressive quite yet.

Once the MCX to coax adapter arrives I'll be able to plug it into the TV antenna, get a much better signal, and start scanning more frequencies.




With a new server on the way, and before we move the Fusion-io drives over to it on top of a Hyper-V layer, where the file system will be stored inside VHD files, I thought I would do a few benchmarks as a baseline to compare against on the VM.

We have an IO Drive (Gen 1) 160GB SLC and an IO Drive 2 600GB SLC.

Both are at 80% factory capacity (which uses the extra 20% for extra write performance), a.k.a. High Performance Mode. Swap support is also enabled.

Running in an IBM HS23 blade, with 2x 8-core 2.8GHz X5560 Xeons & 64GB RAM.

First the newer, and much larger capacity Fusion IO 2 600GB.



And the earlier Fusion IO 160GB.




Compare this to the 6x 900GB NL-SAS RAID 10 array in the 10GbE-attached IBM V7000.

Not a great comparison, but just what happened to be attached to the server.




As well as the very unimpressive 2x RAID 1 10K 146GB 2.5" SAS, the server's C drive.





I've had the Fusions for a few years now, and can never go back to anything other than PCIe SSDs now that I've been spoilt with performance like this.


AMD's Clawhammer, AKA the Athlon 64, was the CPU architecture to have in the early 2000's. I still had mine, won from Epic's Make Something Unreal competition, where our Star Wars themed total conversion mod, UT2004 Troopers, landed a runner-up place.

The great thing about old hardware: I landed a great Gigabyte Neocooler 8 Pro cooler for the long-forgotten Socket 754 for $5.50. The guys down at EYO must have had it sitting on a shelf for the best part of a decade. As of the date of this post they even had a few more in stock.



Lord Vader's LED lightsaber illuminating the installation of the CPU Cooler.

The extra-long 7800GS, one of the last high-end cards made for AGP, powered up fine, though I do remember I overclocked her to squeeze a bit more juice out, so she may well be on her last legs.

A few quirks came to light with the old girl. I always thought my days of loading disk drivers prior to a Windows install were long behind me. Not the case with the Gigabyte GA-K8NPro and Windows 7: after one failed attempt, my second try with the 64-bit Silicon Image drivers managed to get a drive appearing in the Windows 7 installation. The zip filename is mentioned for those coming here looking for it.

Next, once the OS was installed, there were constant lockups and freezes, which I remember plagued her towards the end of her role as my primary PC. I had always chalked it up to Windows rot, though the fresh Win 7 install was showing the same symptoms.

It turned out that disabling the onboard RAID on the Gigabyte GA-K8NPro was the cause. Every time I left the SATA RAID enabled and simply did not set up an array, as opposed to disabling RAID (setting it to BASE in the BIOS), the lockup issues disappeared. The old Silicon Image RAID under the nForce 150 chipset seemed to be much more content in RAID mode, even when powering a single drive.

This beast of yesteryear will now live on at a relative's place until being replaced by something an order of magnitude faster, likely at a cost lower than what I paid for the video card alone.

I knocked over this long-standing member of my reading list, The Mythical Man-Month, this week. It is a staple of the programming, if not the entire comp-sci, industry's reading list, and I can see why.

If you were putting off reading this like I was, I suggest diving in. It is much more relevant than you would think a book written in the 70's about coding in the 60's (as the author puts it) would actually be.

I also recommend getting the anniversary edition, published in 1995, as it has a great retrospective on the entire 1975 edition, including many of the predictions in the No Silver Bullet essay, and the author responds to many criticisms of statements he made in the 1975 edition.

While this is sure to be just another in a long series of posts on this book over the last 40 years, I will only go into two points: what stuck with me, and of course my 2 cents on the No Silver Bullet theory.

The statement that moving a program from a 'garage app' to a commercial program would increase dev time by a factor of three struck home with me, along with the sister theory that moving from a standalone app to an app that integrates into a system also requires a factor of three in effort. The two multiply, culminating in a consumer-facing, system-integrating app equating to an increase of a factor of 9, which matched my experience writing my very simple console app for Slack.

A dirty app to raise a REST message via a console would be maybe 2 hours (less if you have raised the scaffolding a few times before). Integrating it into both Jenkins, by reading the /api/json responses (append /api/json or /api/xml to almost any Jenkins URL; very impressive), and into VisualSVN, by reading the environment variables in the shell that calls my console app and passing them into svnlook to fill out the JSON responses, quite easily propelled development time towards the 6-hour mark.
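For the Jenkins half, the integration work largely boils down to parsing that /api/json response. A rough Python sketch (the fields below are standard Jenkins build JSON, but the job URL and exactly which fields my app used are assumptions on my part):

```python
import json

def build_summary(api_json_text):
    """Reduce a Jenkins build /api/json response to the bits worth posting."""
    data = json.loads(api_json_text)
    return {
        "name": data.get("fullDisplayName"),
        "number": data.get("number"),
        "result": data.get("result"),  # e.g. SUCCESS / FAILURE, or None while running
    }

# Typically fed from something like (hypothetical host and job):
#   urllib.request.urlopen("http://jenkins.local/job/MyJob/lastBuild/api/json").read()
```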

After all the pieces were getting along, my desire to look into best practice for console apps pointed me towards the Apache Commons CLI and the .Net CLI Implementation.

Then into Costura/Fody, and packaging it into a single exe.

After that, there was fair bit of refactoring, which I admit is due to my wanting to move it from a one trick pony console app, to something I wouldn't be ashamed of should my colleagues wish to dig around under the hood and extend it. As a result, should I have planned to it to be extensible initially, the refactor wouldn't have been required.

Coming back to the factor-of-9 rule, I soon realised I had spent closer to 2 working days on this little console app, and it is still geared towards internal use in our team rather than something I would want to distribute with the intention to maintain. My motivation was best practices, and a chance to delve considerably deeper than I typically do with my console apps.

There have been a number of internal software projects I have come across where staff bring up that we could package the tool and turn it into a revenue stream. I think in some cases a factor of 3 to take that internal app you use in house to something you would be comfortable putting on the App Store or Play Store under your name is pretty conservative.

As for my thoughts on No Silver Bullet, the idea that no single development in a 10-year period would increase productivity by an order of magnitude (a factor of 10) is something I agree with. That being said, I believe development package management, such as NuGet, npm and Bower, may warrant a brief mention, though more likely a footnote, should there ever be a 2015 anniversary edition.

Package management alone does not meet the silver bullet criteria, of course; it is only a slight automation over the previous common practice of keeping your personal repository of packages, or adding the most recent library into your own projects. Developers in the 60's, if not the 40's, also knew all too well, perhaps more than their modern counterparts, not to re-invent the wheel. The change I see is in the overall accessibility of packages in general, along with one-click integration: we already take for granted how easily we can increase our output by standing on the shoulders of giants and turn an empty project file into a highly extended framework and customised stack in literal seconds.

I am sure when I revisit The Mythical Man-Month later in my career, when it is pushing past half a century since initial publication, much more than not will still hold true.


After Integrating Visual SVN & Jira with Slack, I decided to replace the existing bat file calling a python script with something a bit more extensible.

I also wanted to change Slack's SVN integration to a custom one that would point to the revision on our Fisheye server, which would show the changes made and link to Jira when we added the ticket <ProjectName-Ticket#> in the commit notes, which Fisheye does out of the box.

.Net REST Console App

I decided to build a console app that calls the Slack incoming webhooks API. It needed to:

  • Accept a channel name, title & API token
  • Accept an SVN project name & revision number to call svnlook & get author & log details (when fired by the SVN server)
  • Accept a Jenkins job name to hit up the Jenkins JSON API for build details
  • Parse success/fail messages and convert them to the Slack notification colour names (good, warning & danger)
  • Create a JSON object
  • Post the JSON object to the Slack API
  • Offer a verbose option for debugging
  • Allow manually entered message text and author, to integrate with other apps down the line.
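The core of those requirements — mapping outcomes to Slack colour names, building the JSON object and posting it — can be sketched as follows. This is an illustrative Python version rather than the actual .Net code, and the webhook URL and channel name are placeholders:

```python
import json
from urllib.request import Request, urlopen

# Map build outcomes onto Slack's named attachment colours.
COLOURS = {"success": "good", "unstable": "warning", "failure": "danger"}

def make_payload(channel, title, text, outcome):
    """Build the JSON object the Slack incoming webhook expects."""
    return {
        "channel": channel,
        "attachments": [{
            "title": title,
            "text": text,
            "color": COLOURS.get(outcome.lower(), "warning"),
        }],
    }

def post_to_slack(webhook_url, payload):
    """POST the JSON payload to the incoming webhook."""
    req = Request(webhook_url,
                  data=json.dumps(payload).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    return urlopen(req)

# post_to_slack("https://hooks.slack.com/services/XXX/YYY/ZZZ",
#               make_payload("#builds", "Build 42", "All tests passed", "success"))
```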

.Net Apache Common CLI

As there is quite a large set of parameters, I made use of the .Net port of the Apache Commons CLI libraries by Akutz. This handles all aspects of console arguments, while adhering to best practices and existing expectations when passing arguments to a console application.

An example of the init & usage syntax is below.

options.AddOption("a", "apiToken", true, "API token.");
  if (_apiToken == null && Globals.CMD.HasOption('a'))
     _apiToken = Globals.CMD.GetOptionValue('a');
  return _apiToken;

This is a huge help in argument management, and also handled the help messages.

There was very little documentation on .Net CLI, though using the Apache usage documentation was fine; just remember to capitalise the method names in the .Net version, for example option.AddOption instead of option.addOption, option.HasOption instead of option.hasOption, and so on.

And one last catch, the Apache DefaultParser was called the BasicParser in the .Net port.

BasicParser parser = new BasicParser();
CommandLine commandLine = parser.Parse(options, args);


RestSharp likely needs no introduction as a library for simple REST messaging.

I found it very easy to use, apart from one hitch that had me stumped for much longer than I would like to admit.

The following code (I thought) added the query string with the API token for Slack to the URL.

var request = new RestRequest(resource, Method.POST);
request.AddParameter("token", "CfkRAp1041vYQVb");

However, doing so and then trying to add any details to the body via request.AddBody resulted in the body not being added, nor any error being raised.

request.RequestFormat = DataFormat.Json;
request.AddBody(json); //Ignored if AddParameter was previously called

Opening up Wireshark showed that the query string was being added to the body.

After a bit of head scratching, I found that AddParameter takes a third argument:

request.AddParameter("token", "CfkRAp1041vYQVb", ParameterType.QueryString);
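The lesson generalises beyond RestSharp: query-string parameters and the request body are separate channels, and the client has to be told which one a value belongs to. A small Python illustration of keeping the two apart (the URL is a placeholder; the token is the same throwaway value as above):

```python
import json
from urllib.parse import urlencode, urlsplit, urlunsplit

def with_query(url, params):
    """Append params to the URL's query string, leaving the body untouched."""
    parts = urlsplit(url)
    query = parts.query + ("&" if parts.query else "") + urlencode(params)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

url = with_query("https://example.com/api/notify", {"token": "CfkRAp1041vYQVb"})
body = json.dumps({"text": "hello"}).encode("utf-8")  # the body stays JSON-only
```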

SVN versioning

As I planned to use this exe on various production servers, automated SVN versioning was the next logical step.

I utilised Avi Turner's SVN versioning script from Stack Overflow to update the $WCREV$ tag I inserted into the AssemblyFileVersion in AssemblyInfo.cs, with rev.subwcrev-template used to track the currently checked out & built version.


Finally, I thought I would try my hand at weaving assemblies into the .exe for a single file deployment.

I settled on Costura.Fody as many had said that it Just Works™

After adding both Fody and Costura.Fody via the VS2013 package manager, I assumed the various build XML files would need to be tweaked manually. Though when I hit build, I realised that I didn't need to touch them at all, and ended up with an exe where every assembly set to Copy Local, in this case RestSharp, was embedded into the exe.

Finding a Suitable Collaborative Chat Tool

After trying Campfire, Bitrix24 and HipChat, I finally settled on Slack as my preference for a collaboration tool for our team.

Campfire was very bare bones, and the various window client options just didn't do it for me.

HipChat had huge potential, though when I got into it, most of the features I wanted were not in the software, but on their suggestions page, some with hundreds of votes but no reply.

Bitrix was close, but it is a whole intranet in the cloud, so it was a bit of overkill. I only needed the chat, and it does that OK, though it didn't have the IRC-like room setup I was after.


Slack was everything I was after, though I was a bit sceptical when the instructions for the Windows client consisted of how to turn the webpage into a Chrome application shortcut. I originally wanted something that would give the taskbar a notify glow (not a blink, as some clients did) so the devs knew there was a conversation going on, without being annoying enough to break them out of the zone if they were not participating.

A quick search confirmed that there was neither a script nor an extension that would glow the taskbar, though the use of notifications would likely do the job, if a bit more distracting than I would like.

The chat, room and search features were everything I wanted, but the real power came via the integration tools.

In a few clicks a Jira channel was showing new & closed tickets from my on-premises install. I was sure to vote for Confluence integration in their integration feedback survey, as that was not available out of the box, though it is likely not difficult to craft my own.

SVN Integration

Next was SVN. This was a bit trickier, as the instructions for integration were a Perl script on GitHub.

The next challenge was that I was not using SVN on a Linux box, but VisualSVN Server on Windows. This meant VisualSVN called a batch file, passing in the repository name and revision as arguments, which then needed to fire the Perl script.

First of all, I needed Perl, and grabbed a download of ActiveState Perl.

Next, triggering the script via Visual SVN.

Right-clicking on the repository in VisualSVN, then Properties, then the Hooks tab, brought up a window where I could edit the Post-commit hook.

Here I entered the DOS command to fire my batch file.

cd\Program Files (x86)\VisualSVN Server\bin\
hook %1 %2

Next I created hook.bat and placed it in the folder above. This would take the 2 args and pass them to the Perl script.

"C:\Perl\bin\perl.exe" "C:\Program Files (x86)\VisualSVN Server\bin\" %1 %2

With the plumbing to the Perl script out of the way, I modified the script to run on Windows, which involved changing the Linux-friendly...

my $log = `/usr/bin/svnlook log -r $ARGV[1] $ARGV[0]`;
my $who = `/usr/bin/svnlook author -r $ARGV[1] $ARGV[0]`;

To the more windows command line friendly...

my $log = `svnlook log -r $ARGV[1] $ARGV[0]`;
my $who = `svnlook author -r $ARGV[1] $ARGV[0]`;


After testing by firing my bat file manually, and a few SVN commits, I had all my SVN updates posted to their own channel in Slack.

Confluence is next, so that updates and new pages can go into their own channel too. 


Thought I would give an assembly Hello World a go, and get into some low-level programming.

After reading up on MASM, NASM & FASM, I decided on MASM, and soon came across a great blog detailing how to set up VS2013 to work with MASM32.

After setting up the environment and running the hello world app below, I noticed that this use of the MASM32 libraries seemed to vary greatly from the assembly code I had previously seen, which typically utilises a series of 3 and 4 letter instructions mixed with memory addresses.

.model flat, stdcall
.stack 4096
option casemap : none

include macros.asm

includelib masm32.lib
includelib user32.lib
includelib kernel32.lib

message   db "Hello world!", "$"

main PROC
	print "Hello World!"
	invoke ExitProcess, eax
main ENDP
END main

In my travels, the assembly I have glanced upon seemed much more like the example below, which I bumped into while setting up Visual Studio.

.model small
message   db "Hello world", "$"
main    proc
mov   ax, seg message
mov   ds, ax
mov   ah, 09
lea   dx, message
int   21h

mov   ax, 4c00h
int   21h
main    endp
end main

I naively assumed that this was what MASM was like when you didn't utilise the MASM32 library references in the first example. That was, until trying to compile the above code hit me with this...

1>  Assembling source.asm...
1>source.asm(7): error A2004: symbol type conflict
1>source.asm(16): warning A4023: with /coff switch, leading underscore required for start address : main

It seems this is a common mistake; replies on the masm32 forums and Stack Overflow pointed out the difference between 16-bit MASM and MASM32.

I still wanted to push forward with 16-bit MASM, but with Win7 x64 not supporting 16-bit code, I figured I may have to use DOSBox.

I now knew I needed a 16-bit linker, and a bit of digging showed there was one in my MASM32 install. I tried looking in the project configuration, such as the Microsoft Macro Assembler settings, to see if I could find a place to point to link16.exe, with no luck.

I then came across this very detailed article on both 16 and 32 bit set up in VS2012 by Kip Irvine.

With his directions, I went down the path of a batch file triggered by Visual Studio External Tools. However, I wanted to dig a bit deeper and make my own batch file.

After finding a GitHub reference to the make16.bat in his tutorial, it seemed that he utilised the ml.exe that comes with Visual Studio, not the MASM32 downloads. Running a modified version gave me the following error.

MASM : warning A4018: invalid command-line option : -omf

The MSDN ML command-line reference advised me that this was due to my 64-bit install:

Generates object module file format (OMF) type of object module. /omf implies /c; ML.exe does not support linking OMF objects.
Not available in ml64.exe.

I decided to go with the ML.EXE installed with the MASM32 libraries, along with the commands I came across on Stack Overflow, and modified the bat to utilise the args passed from VS External Tools.


ML.EXE /DMASM /DDOS /Zm /c /nologo /I"c:\masm32\Include" "%1.asm"
link16.exe /NOLOGO "%1.obj" ;

The semicolon I added at the end of the link16.exe args makes it use default settings, so it does not prompt for input. Perfect if you want the build result in the VS output window instead of a DOS window.


Now that I had my MASM16 hello world assembled, I just needed a 16-bit platform to run it.

I went with DOSBox as it has the command line arguments I was hoping for, so I could integrate it with VS External Tools.

I created the following batch file, accepting the filename from External Tools as %1.

"C:\Program Files (x86)\DOSBox-0.74\dosbox.exe" C:\Dev\MASM\Masm32\%1.exe

Though it seems that External Tools wraps the argument in quotes, resulting in the path of the newly assembled exe for DOSBox to run not being valid.

C:\Dev\MASM\Masm32>"C:\Program Files (x86)\DOSBox-0.74\dosbox.exe" C:\Dev\MASM\M

The build scripts seemed to be OK with this, as it also occurred there. However, the following command trimmed the double quotes and allowed the exe to be passed into DOSBox.

"C:\Program Files (x86)\DOSBox-0.74\dosbox.exe" C:\Dev\MASM\Masm32\%FILE%.exe

And success.


Getting into the Angular phonecat demo on Windows, I hit a snag early on.

This one came from trying to update Karma, a TDD framework for Angular powered by Node.js.

Seemed pretty straightforward from what I have learnt of Node so far.

npm install karma

Nothing like error messages from an unfamiliar environment to make you sit up in your chair.

npm ERR! peerinvalid The package karma-requirejs does not satisfy its siblings'
peerDependencies requirements!
npm ERR! peerinvalid Peer karma@0.10.10 wants karma-requirejs@~0.2.0

npm ERR! System Windows_NT 6.1.7601
npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nod
ejs\\node_modules\\npm\\bin\\npm-cli.js" "install"
npm ERR! cwd C:\Dev\angularTutorial\angular-phonecat\scripts
npm ERR! node -v v0.10.26
npm ERR! npm -v 1.4.3
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR!     C:\Dev\angularTutorial\angular-phonecat\scripts\npm-debug.log
npm ERR! not ok code 0

After a quick Google search turned up nothing, I realised I had either done something really weird that caused an error the likes of which no one had ever seen, or the solution was staring me in the face.

It was the latter: karma@0.10.10 wants karma-requirejs@~0.2.0, so...

npm install karma-requirejs

Karma then installed.

However, the test.bat that is part of the Angular tutorial, which runs the karma command in DOS, then returned this old favourite.

'karma' is not recognized as an internal or external command,
operable program or batch file.

Before clogging up my PATH variable, the -g (--global) flag of npm install came to mind.

However this did not work, even after first uninstalling the package.

Though this time, a Stack Overflow answer at the top of Google let me know that at least this time I was not so alone:

npm install -g karma-cli