Author Archives: aenikata

Challenges of studying while working

Having spent many years working in IT, I’ve done more than enough programming to be competent across a range of languages, frameworks and tools, and to be productive doing web development, desktop development and even some mobile development. However, the range of program types, the number of languages and the sheer pace of progress in the field mean that there’s essentially no chance of being an expert in all areas. As a result I was a novice (at best) in fields outside those I was working in.
Having left university without finishing for a number of reasons (primarily financial), when I decided I wanted to round out my knowledge the first thing I did was join the BCS and take their undergraduate modules. In the space of just over a year I took and passed all of the modules for the Professional Graduate Diploma (exams level), mainly taking that long because some of the courses only ran once a year. This entailed studying a stack of around 20 books and numerous past papers. Much of it was already familiar, but if your learning is primarily on the job then you’ll have gaps in what you’ve needed to learn. This part wasn’t particularly difficult, despite the compressed timescale: it mainly involved reading at every opportunity and, near the exams, doing past papers to identify weak spots. Taking exams after a decade away from education was daunting, but in some ways easier than before, as I felt less pressure having already settled into a career. Cost-wise the books weren’t that expensive, so between the books and the exam fees this came to around £1k, less than many 2-3 day courses that teach you much less. The challenge was mainly keeping up the focus, but since the level of learning required wasn’t that high compared to the demands of professional work, it was only hard at the points where you needed a break and things weren’t settling into your memory so well.
After this I looked for more advanced topics in areas which were less familiar. Fortunately at this time Stanford decided to run its online courses in Machine Learning and Artificial Intelligence, so I had an opportunity to join some surprisingly high quality courses without spending any additional money. This was one of the easiest bits of studying while working, because it was fresh, new, interesting (if occasionally challenging me on statistics-related maths I hadn’t needed until then), and with the online format providing high quality videos, a good feel of your progress from exercises, and good community discussion amongst students, it was pretty easy to stick with this.
Subsequent to these two courses, I’ve taken many more through Udacity, Coursera and edX, from Control of Mobile Robots to Contract Law to Computational Finance and many more besides. The range of modules available means you can learn a language, a science, programming, management techniques and more. There are a few duff courses on them, but many are run by the top universities in the world and represent some of the best education available, with or without cost. If you’re interested in learning something new, I can’t recommend these sites enough. However, partly because you don’t have a financial commitment and you’re remote from your ‘classmates’, you have to rely on yourself for motivation, without the fear of throwing away a significant investment. It’s easy to stick with a course for a few weeks and then drift off onto other things and not finish. Equally, because you can sign up for whatever you like, you can easily over-commit and find yourself trying to do hours of studying a day alongside work, so I’ve learned to pace myself and not sign up for several courses unless (like Udacity’s) they’re self-paced rather than running to a fixed schedule, allowing you to dip in and out where you have time.
The other disadvantage of these courses, aside from the ease of dropping them on a whim, is that they don’t carry academic credit. This is changing: many now have a paid option to get credit or other formal recognition of completion rather than a simple generated certificate, but you’re then incurring substantial fees (though less than traditional university costs). I decided instead to join De Montfort University’s Intelligent Systems MSc, as the cost was comparable and I preferred a UK awarding institution.
Doing the De Montfort distance learning route has in some ways been a step backwards from the Coursera modules, because they simply don’t have the funding of the prestigious universities behind those courses, and the materials are somewhat less refined. However, the financial commitment (this time on the order of several thousand pounds spread over a few years) is a much stronger incentive to keep working towards completion. The challenges aren’t really so different from taking a course on Coursera, in that you have to commit several hours a week (at least) to your studies or projects. It’s easy for a while, but working by distance learning you may not have other people with similar interests to discuss your ideas with. You have to be very motivated, particularly near deadlines when you’re putting in dozens of hours on top of full-time work and feeling burnt out. With distance learning you also miss out on Freshers’ Week and similar events. However, when the time comes to graduate, I’ll be looking to attend the university in person at least once for that.
Significant ongoing studying towards certifications and academic qualifications can be tiring and expensive. It can also be difficult to avoid conflict between work commitments and academic ones, particularly near deadlines. Family commitments can be challenging, too. At the moment I work away from home during the week, so I naturally have time to focus on things like studying, but that hasn’t always been the case, and then it can be difficult to balance the emotional demands of family (particularly small children) with a combined work and academic workload that can take up 80+ hours a week at times. If you enjoy learning, though, you’ll ‘get the bug’ and find yourself signing up for another course days after you’d sworn to take a nice long break (of ‘just working full time’ ;) ). Seriously studying while working full time is demanding and tiring, but satisfying. In short: try it, find something you’d love to learn more about, and dive in. It’s worth it.

Why I’m so excited about autonomous vehicles

One of the areas I’ve been interested in throughout my recent years working with Machine Learning is autonomous vehicles. Even individual elements like traction control and crash avoidance/mitigation systems are able to save lives, while human error when driving accounts for hundreds of thousands of road deaths a year worldwide, giving a massive pool of people who could potentially be saved by improved technology. Many areas of medicine are clearly worthy but wouldn’t be able to save as many lives, and that’s before you start considering the effects of life-changing injuries, which are significantly more common. Very few wars have had a casualty rate to compare with traffic accidents.

I am a technologist. I believe that technology can build us a brighter, better and happier future – for the masses, not just the few. The car has been one of the miracles of technological development, but it’s not perfect, and there are still ways we can improve it. One of those is the control element. Humans can be pretty decent drivers, but time and again it has been demonstrated how catastrophic a momentary loss of attention can be when in control of a machine weighing tons and traveling at speed. Worse still, a proportion of drivers fail to recognise when they are demonstrably incapable of driving safely, and continue to drive when intoxicated, excessively tired or distracted. Think about that for a moment. Before we start talking about autonomous vehicles that can do all the driving, what about one that can take over when you’re not fit to drive, so you’re not left stranded? I think many people are looking for either a perfect AI driver or one that is better than any human driver. I’m starting at a somewhat lower bar – one that is better than our worst driving – and that is, while still technically very complex, eminently achievable now.

You see, there are multiple levels of autonomy. When we talk about self-driving cars, we’re mostly picturing the top end – Level 5 – where the car can drive itself in all conditions through all stages of a trip. And that’s great. I want one of those too. But for me the step below that, Level 4, is where I’m excited, because we’re about ready for that today – not a few years from now, maybe, but today. Vehicles are on the roads successfully running in a rapidly increasing range of scenarios and demonstrating that they can run shuttle routes while integrating with human-driven traffic. They may not be able to replace all your trips, but they can replace parts of many. This level of capability may handle at least most of the daily commute for many people – a dull chore where the car mainly needs to handle one particular route, day in, day out. These systems may also be set up as always-on monitoring systems, mitigating accidents and saving lives even before they’re given full control. Just as many drivers can’t imagine how they managed without rear parking sensors to avoid minor scrapes, in the not too distant future I’m sure cars – and certainly large vehicles like lorries – will be mandated to have AI-based safety features, and people will wonder why it took so long for these to be a requirement.

Another thought – each year there are several articles about the tragedy of another cyclist killed on the streets by a lorry turning across them or crushing them while turning. The comprehensive sensor arrays of autonomous systems would allow a lorry to detect that near-side cyclist and stop before a fatal injury occurs. It couldn’t stop a cyclist carelessly riding into the side of it, but it certainly could detect the cyclists the driver can’t easily see and take action to avoid crushing them. A driver may be required for certain unplanned actions, but introduce a truck that can (safely and legally) take over on a long motorway trip so the driver can have downtime while still traveling further in a day, and which can prevent a significant proportion of the serious accidents that lorries are normally involved in, and you’ve got a serious social, ethical and financial incentive to take Level 4 systems forward sooner rather than later – lives are at stake. As an added bonus, you wouldn’t see as many of those five-mile overtaking exercises where one lorry passes another with a speed differential of around 1mph, which cause their own problems.

Yes, there will be some losers – as Level 5 systems come in, taxi drivers, bus drivers, lorry drivers, even train and tube drivers and couriers will see the number of jobs decrease. However, there will also be reduced delivery costs for any goods that need to be transported (i.e. most goods), reduced pressure on A&E departments, and increased mobility for vulnerable groups like the disabled, and I’m confident that the benefits massively outweigh the negatives. Even the smallest shop will be able to offer delivery services cost-effectively, helping them compete with larger rivals, although the likes of Amazon are trialling robot- and drone-based deliveries to offer faster, cheaper, more convenient deliveries, too.

Let me come back to one particular group just mentioned – disabled people. There are schemes to provide adapted vehicles for those with limited mobility, but while they can give someone without the use of their legs hand controls, they don’t grant mobility to someone with insufficient motor skills to safely operate a manual vehicle. Learning difficulties which result in poor decision-making, or an inability to demonstrate the necessary level of awareness to drive safely, also significantly limit individual mobility. Those who are blind or nearly blind also have limited transport options and significant obstacles to personal, private mobility. Each of these groups may struggle with public transport because of the ‘last mile’ issue between their house and the nearest point where public transport is available. Many stations are also not especially friendly environments for such users, with crowded areas, stairs, a requirement to interpret various signs, and the potential difficulty of boarding several vehicles to reach a destination.

The current alternative is to provide taxi services and the like, or to rely on assistance from more able-bodied helpers. This is both expensive and limiting – with limited funding their mobility is less than they would like, and at certain times (such as Christmas) it may be difficult to make arrangements. Here, Level 4 systems that can operate on particular routes may provide a significant increase in their ability to engage in the activities they want or need to do.

Parents can also feel like they’re being treated as a free taxi service, struggling to juggle demands for transport along with their many other commitments. Even a limited set of routes, such as between the house, the school and the workplace, would allow the luxury and freedom currently only available to those with the means to hire their own driver, as the car could be programmed to pick up the kids from school (and only return to the house, nowhere else) as well as pick you up from work and drive you home.

All of these have the potential to transform society by making the most vulnerable amongst us more able to join in, and by giving the masses a convenience they could previously only dream of, all while saving hundreds of thousands of lives. If that’s not exciting, then I don’t know what is.

I’ll finish up with a few final, random thoughts. Volvo aims to launch a Level 4 vehicle within a year – it may be closer than you think. Services may be transformed if they take advantage of Level 4 and 5 automation effectively. Charities could more readily solicit donations of clothes and other items, as such vehicles could cost-effectively pick items up or deliver them to the charity. Car ownership could be reduced by making car-sharing schemes more practical: instead of walking to your nearest pool parking spot some distance away, you book a car and it comes to your house at the allotted time, and it’s easier to release it for another user and get another when you need it. Drink driving could be all but eliminated (drink drivers could have a mandatory system that only allows autonomous operation if a breath test is failed). Buses could operate more cheaply and run better services on holidays (for those that don’t enter a hire scheme). Reduced accident rates, consistent speeds and rule-following behaviour could optimise the operation of road networks, reducing congestion and improving journey times through cities. Holiday homes in coastal and rural locations may see significant increases in value, too, because a longer commute is bearable when it happens without your attention, in a self-contained, comfortable pod where you can watch a film, read, game or sleep. The legal hurdles are real, as is the scaremongering, but the potential is just too great for this not to move forwards and become the norm.

Book Review: FinTech Innovation (Sironi)

The same few ideas expressed repeatedly make this already-short book very light on real detail. There is an explanation of modern portfolio theory, better explained elsewhere, along with the author’s own portfolio optimisation strategy, which roughly outlines a Monte Carlo method for simulating performance (so far, so standard). You can understand his approach well enough, although it’s light on certain details, such as the estimation of return distributions for individual investments, and it skips the flaws that are generally acknowledged in most current optimisation work elsewhere.
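To make the standard technique concrete – this is a generic sketch, not the author’s specific method, which the book doesn’t specify in enough detail to reproduce – a one-period Monte Carlo portfolio simulation can be as simple as drawing returns per asset and combining them by weight. The weights, expected returns and volatilities below are purely hypothetical, and the usual simplifying assumption of independent, normally distributed returns is exactly one of the flaws glossed over:

```python
import random
import statistics

def simulate_portfolio(weights, means, stdevs, n_sims=10000, seed=42):
    """Monte Carlo simulation of one-period portfolio returns.

    Assumes independent, normally distributed asset returns -- a common
    simplification, and a known weakness of naive optimisation.
    Returns the simulated mean return and its volatility.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_sims):
        # Draw one return per asset and combine by portfolio weight
        r = sum(w * rng.gauss(m, s) for w, m, s in zip(weights, means, stdevs))
        results.append(r)
    return statistics.mean(results), statistics.stdev(results)

# Illustrative 60/40 split with assumed (not observed) annual figures
mean_r, vol = simulate_portfolio(
    weights=[0.6, 0.4],
    means=[0.07, 0.03],
    stdevs=[0.15, 0.05])
```

Estimating the per-asset means and standard deviations is precisely the part the book leaves vague, and it is where most of the difficulty lies in practice.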

Even these definitions come very late; the book refers to the approach dozens of times before it bothers to explain it fully.

Really, this book is promoting the author’s Goal Based Investing approach, but it offers little validation that the approach doesn’t result in a sub-optimal investment that increases investor risk – I simply don’t see how making a few separate investments (for retirement, fun and accumulation, for example) really promotes the same level of diversification to minimise volatility that modern portfolio theory promotes. It may lead to a somewhat diversified portfolio, which will go some way, but not all the way.

Similarly, some of the technical discussion is a little way off. The book refers to gamification throughout, but then only belatedly offers an outline concept, without examples of how it can be applied – gamification has proved a very useful technique for fitness apps, diet apps, saving apps and so on, but I see little in the book that would generally be called gamification, although the description isn’t completely off. Similarly, it refers to the Internet of Things, although where wifi-enabled fridges, sensors and the like fit into portfolio management I’m not sure; it’s as if a buzzword has been picked up and thrown in without any real relevance. Big Data is referred to a few times as well, but what’s outlined doesn’t really say how it changes things (e.g. confidence levels, how you approach using the data), and doesn’t consider that, say, Twitter and news feeds pushed into sentiment analysis might be a decent example of where it could fit into investment decision-making.

The descriptions of the financial marketplace seem reasonable. There’s talk about behavioural economics theories and so on which makes some sense. There’s some overview of areas where fintech startups are eating into the traditional banking market (such as peer-to-peer loans), but there are better books about the range of companies starting up and the areas they’re disrupting. The descriptions of the different kinds of investor, some of the different players in the markets, and a range of instruments are all pretty accurate. Aside from noting that many smaller investors would probably be better off investing in ETFs than trying to build a similarly diverse portfolio or investing with an actively managed fund (because fees are lower and returns of late have been comparable), it doesn’t really say much about why you’d want to invest in bonds, equities, derivatives or funds. It doesn’t say how you’d implement a robo-advisor. It doesn’t really outline much about machine learning at all, despite saying it’s relevant (and it is very relevant) – in fact the description is off, because it describes only supervised machine learning, missing that unsupervised learning is also relevant (indeed, some of the most sophisticated investment strategies use both).

The book may give a bit of an overview of how the marketplace is changing, and it gives the author’s idea of how you might create a platform that’s more appealing to younger investors in fairly general terms, but it rather skips over the regulatory details it mentions, gives little detail on case studies or experimental proof of the ideas, and gives little detail on how you’d implement anything, instead ramming the same outline theories at you time and again. It’s a quick read, but that’s because, aside from a few divergences into a little maths, there’s not a huge amount to take in.

All in all, this is a pretty disappointing book, because this is an area where a lot of exciting work is happening – it’s quite possible that the next tech unicorn ($1bn+ new company) will come from this sector – but the book does a poor job of outlining who is doing what and how it’s changing things. There are some facts, but it’s as if the author made notes in some meetings and put those random bits and pieces together rather than working against a fully coherent plan. They almost say as much at the end, acknowledging that it was written following meetings with various people, and it shows: this wasn’t a book that was researched and written, just one created on the back of other work. It’s the writing of a professional in the field, I’m sure, but not a professional piece of writing.

The importance of integrity as a business

I’ve recently seen a company lose its way and treat a long-standing member of staff (not me) in a way which, I think, is alien both to how a company should (and must) conduct itself and to how the directors would previously have acted. Unfortunately it was a situation where, having come some way towards an amicable resolution, lawyers got involved, hard lines were drawn, and an amicable resolution moved further away, not closer. After those talks failed, instead of trying to bridge the gap, the company started a disciplinary process pointing to things that would not have been treated as misconduct previously.

What the company was trying to do was avoid spending maybe £10-20k more than it needed to while ending the contract of one of its longest-standing employees. Spread over the time they worked there, it was a question of £1-2k per year of service, when the employee could easily have got £20k+/year more elsewhere if they hadn’t been loyal to the company.

As a small company you know that if times are hard, you’re going to have to let people go. Also, if someone isn’t a fit for where you want to take the team, then sometimes it’s time to find a resolution to that. But most of us are building something which aims to be better than most. Companies like Equal Experts and Thoughtworks put significant investment into the idea of social and corporate responsibility – of not just making money, but doing it in a way that is ethical and beneficial to society as a whole. Whether or not they are fully successful in these goals, companies like these show that it is possible to build a successful company while promoting ethics and good citizenship, and they should be applauded.

The lesson that they have to teach is that losing a little from your profits to treat your employees well, to give a bit back to the community, to encourage development of both employees and your community isn’t a waste. You get employees who are more motivated, more loyal, and better informed. You get people wanting to do business with you, because they like what you represent as a company, and all other things being equal, you’ll win the business because of that.

As I look at where I may need to take on others, I’m keenly aware that everything I pay out comes directly out of the profit I make. But if someone else is putting in most of the work, they should get most of the profit. I may take a little for administrative costs, for getting the opportunity in the first place, and so on, but I have no right to expect half of the money for someone else’s work. I’ve been in a position where I was paid just over £100/day as an employee while my work was being invoiced out at £500-1000/day. The sales people put in a lot of work to bring in that business, so there were management and sales overheads, but the account manager drove a fancy sports car that I certainly couldn’t afford. Now that I’m running a business for myself, I need to remember not to over-value my own contribution, and to share as much as possible with those creating value for the company. I want to be better than any employer I’ve worked for, and better than at least most companies I know about; otherwise I’m not fulfilling my own expectations. It’s not all about money – it’s about being able to look at yourself in the mirror and say you’re the kind of person you want to be.

In my own work it’s also about putting in a decent effort for what I invoice – ensuring that I create value for my clients. Like most IT contractors, I know I’m an expensive resource. I also know that I put in more time than most furthering my professional skills and knowledge to be a better developer: I’ve completed dozens of courses on programming, machine learning, business, contracts and more. However, I know there are areas where I can improve (I’m not a natural-born salesperson, for example). If I take a lot of time out of work to deal with something external, I’ll either not invoice for that time or make it up, because that’s only fair. I also won’t get someone cheaper to do my work for me and bill it out as my time (I’ve heard of that at various companies, where an assistant’s work is billed as an accountant’s or lawyer’s time). If in doubt, transparency is the important thing: it’s unfair to hide from employees the value they create (although the other overheads may need highlighting too), and the client should know whether the people directly working on their projects are being fairly compensated – poorly compensated staff are more likely to feel undervalued and be less productive.

For a small business, some of the compensation may not be in terms of salary. It’s important to build a team, to share knowledge, and to grow as a company; employees may lose out in monthly salary but gain in potential rewards if the company takes off. Large companies like Barclays and Reuters put a lot of effort into knowledge sharing, creating social opportunities (events, running a canteen, etc.) and planning for career development. Small companies don’t have the hierarchy to rise through or the space for a canteen, but they may be able to offer an allocation of time for side projects, an early Friday finish for drinks, fairly regular social lunches, and so on. Even a weekly delivery of pastries or fruit can make a difference, and if you make a difference for your employees, they’re more likely to make a difference for you.

It only takes one unhappy employee to do significant damage to a small company’s reputation. When many small businesses live or die on key clients, integrity becomes all the more important. Losing that one client could be the death knell for a business, and sitting at home as the director of a company with only possible projects in the pipeline and nothing through the door is a stressful position to be in. Having people who actively want to work with you again, helping to fill in the gaps (even if only with small pieces of work), can make the difference between lost nights’ sleep for you and your employees, and an expanding volume of business to support growth.

Completion of the basic elements for a budget telepresence robot

It’s taken a bit longer than expected, but the software and hardware elements are now all present for the telepresence robot. The basic concept is a tablet with a mount, attached to a base that has an ESP8266 wifi microcontroller and a motor board that controls two motors. There are various options for powering the tablet and the base, but a good USB battery pack can run them for a reasonable time.

The initial plan was to use a wifi car as the base, which provides the microcontroller, motor board and motors with a mount. If you want to try mounting a phone for a mini version then that base could be fine, but it turned out it wasn’t large enough to mount a tablet at a reasonable height off the floor – it wasn’t stable. Additionally, tests with the included motors found that they struggled to move the combined weight of the components.

As a result, I evaluated other low-cost motors to see which looked most suitable for the base. I settled on N20 micro gear motors, which provide a suitable level of torque to cope with the weight of the components. Cost-wise these are comparable with the motors in the Doit car, but combined with the smaller included wheels they give slower, more reliable movement.

The tablet just attaches to the tablet mount on the tripod, which is trivial. To provide a base for the motors and to hold the motor control components, I used a 48cm round plant saucer from Wilko. The tripod legs needed flexing slightly to fit inside at the height I wanted, but the height and width are somewhat adjustable. I decided to have the tripod legs at the standard spread for stability, and this seems to be sufficient in use so far. While it’s not hard for someone to knock over a tablet mounted like this, the addition of a base with the extra weight of a battery pack helps make it more stable.

To mount the legs in the saucer, I just drilled two holes next to each leg and cable-tied the legs to the saucer. The tripod has clips for adjusting the leg extension, so once the cable ties go over these they hold the tripod in place reasonably securely. Naturally, this could be hot-glued in place for a more permanent solution, or a mounting block could perhaps be 3D-printed, but cable ties provide a quick, simple solution that works well enough for the initial build.

Mounting the wheels and motors was trickier. Ideally you probably want to mark out rectangles for the wheels and then use a rotary power tool to cut out the sections. I didn’t have one to hand, so I resorted to drilling enough holes and then cutting out the remaining plastic – it looks a bit rough, but as long as the wheels have enough space to pass through and move freely, it’s fine. With the Doit wheels I was thinking of mounting the motors on top and having the wheels pass through to minimise the ground clearance, but with the N20 motors the smaller wheels wouldn’t give enough ground clearance, so these seem best mounted on the bottom. Again, as an initial solution, I drilled four holes around each motor and cable-tied the motors to the saucer (note: the motors will probably need wires soldered on before you mount them, as it’s fiddly otherwise). You’ll need a castor wheel at the back; with four spare screws I drilled some holes and bolted one onto the saucer, although in a pinch hot glue or more cable ties could probably suffice.

Wiring up is easy – there’s a positive and negative for each motor. If you’re using just USB power, you don’t need to wire in VM or GND on the motor board. If you want to provide more power to the motors for more speed, wire a battery pack into VM and GND – the push-switch on the motor board then provides on-off control. If you’re powering the board from a separate battery pack, set the jumper accordingly – a 7.4V battery pack can run the microcontroller as well, although you may want that pack to power just the motors.

The USB battery pack is used to power the microcontroller and the tablet, and optionally the motors – they’ll run slower at 5V, but it should still work. Depending on the drain from the tablet, the battery pack may provide a substantial runtime, although driving the motors from the pack will quickly reduce this. The optional component here is Qi wireless charging. A Qi receiver would be plugged into the micro-USB charging socket on the battery pack and mounted at the back of the unit – most likely at the edge of the saucer. The Qi charger would need to be mounted somewhere suitable for the two to come into contact – at a similar height on a wall. The idea is that if you carefully back into your designated location, the receiver can get close enough to charge the battery and ensure that when you do move off it has a full charge. With minimal movement, it should be possible for the battery pack to run the robot for a week without charging.
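As a rough sanity check on that runtime claim, the arithmetic can be sketched in a few lines. The capacity, current draw and converter efficiency below are illustrative assumptions, not measurements from this build:

```python
def runtime_hours(capacity_mah, draw_ma, efficiency=0.85):
    """Rough runtime estimate for a USB battery pack.

    capacity_mah and draw_ma are figures you'd measure for your own
    build; efficiency allows for boost-converter losses.  All values
    here are assumptions for illustration.
    """
    return capacity_mah * efficiency / draw_ma

# Hypothetical numbers: a 20000mAh pack with ~100mA average drain
# (idle ESP8266 plus occasional tablet top-up) lasts about a week...
idle_days = runtime_hours(20000, 100) / 24

# ...while driving the motors (say an extra 500mA) cuts that sharply.
driving_hours = runtime_hours(20000, 600)
```

With those assumed figures the idle case comes out at roughly seven days, which is consistent with the week-without-charging estimate above for a mostly stationary robot.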

That basically completes the hardware build. As for the software, the tablet just needs to run Skype. I opted for a Kindle Fire HD because the camera is reasonable for the price. I discovered that Fire OS isn’t happy to leave wifi on while on standby, so I ended up installing CyanogenMod on it in order to control that – this is essential, as otherwise you won’t be able to connect when you try to dial into the tablet. There have still been some issues – it has sometimes gone on standby or crashed – so if you can get a native Android tablet of similar quality, like a Samsung Tab, it may be a more stress-free option. The only thing that needs to be installed is Skype. Create a new account for the robot and set Skype to auto-answer. Make sure that only known contacts are allowed to call (you don’t want just anyone to call and have it auto-answer) and add your existing Skype account as a contact so you can phone in.

I did consider open source alternatives such as Linphone, and it will probably be interesting to look at an integrated solution that provides a telepresence interface with both video and motor controls in the one window, but that’s a project for later. The stock Linphone build doesn’t have auto-answer as an option, so it’s not really suitable as-is, although it’s apparently available as an option if you’re doing a custom build. 

For the motor base, the default software for the NodeMCU dedicated board allows you to install the Doit wifi car app and control the car by connecting to its access point. This would allow you to control the robot within a moderate range – up to 100m or so, depending on interference, etc. However, that’s not really what you want for a telepresence robot – we want to control it over the internet. There is supposed to be a remote version which connects to a server and allows the board to be controlled over the internet, but the documentation is so bad that it’s entirely unclear what you’re supposed to do in order to control the car. As a result, I took the DoitCarControl.lua script from the Doit site, removed the code that sets up an access point, and renamed it DoItCarControlSTA.lua; modified the sta.lua script to connect to my local wifi network; and uploaded the remote (STA) init.lua script, the modified sta.lua script and the new DoItCarControlSTA.lua script onto the board using ESPlorer. I found I had to use the NodeMCU flasher to reinstall the Lua interpreter, needed to locate a driver so the board showed up as a COM port when plugged into the PC, and sometimes the scripts didn’t upload properly – occasionally I got an error about being out of memory and the board needed resetting using its Reset button.

Having put these scripts on the board, it connects to the local network and picks up an IP address. Going onto my router I opened up port 9003 and set it to forward to this IP address – you’ll ideally want to configure your DHCP so it assigns the same IP address each time, or pick a fixed IP address that’s outside of the DHCP range and set this in the sta.lua script. Now it should be possible to connect to the telepresence robot from anywhere.

The remaining question is how to control it. The Doit app is no happier controlling this version than it is the stock one. Instead, go on Google Play and search for Wifi TCP/UDP Controller. This provides a configurable page of buttons which can send TCP or UDP messages. If you open up the DoItCarControlSTA.lua script you’ve made, you’ll see the values that correspond to forwards, backwards, left, right, stop, faster/slower left motor and faster/slower right motor. Set up the buttons in the app so that they send the appropriate values. Enter your IP address or domain name as the host to connect to, and make sure the port is set to 9003 (unless you changed this in the Lua script).
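If you’d rather test from a PC than the phone app, the same commands can be sent with a few lines of socket code. A minimal Python sketch – note that the host name and the command values here are placeholders, and the real values must be taken from your own DoItCarControlSTA.lua script:

```python
import socket

# Placeholder command values – take the real ones from your
# DoItCarControlSTA.lua script; these are for illustration only.
COMMANDS = {
    "forward": "1",
    "backward": "2",
    "left": "3",
    "right": "4",
    "stop": "5",
}

def send_command(host, command, port=9003):
    """Open a TCP connection to the robot and send a single command value."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(COMMANDS[command].encode())

# e.g. send_command("my-home-ip.example.com", "forward")
```

The same approach would let you build your own control page later, rather than relying on the generic button app.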

Having assembled the hardware, installed the modified software on the NodeMCU board, Skype on the tablet, Wifi TCP/UDP Controller on your phone and Skype on another device to talk from, and configured the router to pass through requests to the given port, it should be possible to call into the tablet, connect to the microcontroller, and control the robot over the internet. So how does it work? There are still some refinements to make, and potentially some mounting brackets to make things more solid, but after various bits of trial and error, it provides a basic telepresence solution, and the basic build cost is under £90, while some interesting additional options may bring the build up to around £100. That’s far, far cheaper than pretty much any alternative around, and it’s been an interesting and fun thing to build, even if some of the problems (such as Skype crashes or trying to find documentation on the wifi car app and scripts) have been rather frustrating. The camera on a tablet doesn’t have the ideal field of view for something like this, so it can be hard to see a table that you’re near – it may be worth considering a fish-eye lens attachment if this proves a problem. As a first serious hardware project, it’s been interesting to see what works. My total build cost has probably been around £40 higher than listed here because of some parts that didn’t work out, but most of those are now either available for other projects or already being used (e.g. a sheet music stand that didn’t prove effective as a mount was quickly claimed by my daughters).

As promised previously, a list of the parts and suggested sources are below. If someone wants step-by-step instructions on the build, modified scripts, Wifi TCP/UDP Controller config file, etc, then by all means ask – it might be interesting to write this up as an Instructable at some point, although perhaps having refined the idea and scripts a little further first. In addition to the components, you’ll need a soldering iron, a drill and ideally a rotary tool.

Component List:

Optional additional components:

  • Lithium-polymer battery pack (7.4v or 11.1v) and battery charger (for extra speed when moving) – £10-15
  • Qi charging pad and receiver (to set up wireless charging) – £7 upwards

Some further progress towards the telepresence robot

My previous effort to build the base literally stalled as the motors proved a little weak for driving the unit (which comes in at around 2kg). In theory the motors should be able to drive this weight, but reality suggested otherwise. As a result, I investigated cheap motors which have a reputation for having a higher torque value and a power option that would provide a higher voltage if necessary.
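Before ordering replacement motors it’s worth doing a very rough torque estimate from the robot’s weight, wheel radius and an assumed rolling-resistance coefficient. All the figures below are assumptions for illustration – carpet, starting friction and gearbox losses will push the real requirement up, which may be why theory and reality disagreed:

```python
# Very rough estimate of torque needed to keep the ~2 kg robot rolling.
# The coefficient and wheel radius are assumed values, not measured ones.
m = 2.0      # robot mass, kg
g = 9.81     # gravitational acceleration, m/s^2
mu = 0.05    # assumed rolling-resistance coefficient on a hard floor
r = 0.03     # assumed wheel radius, m (30 mm wheels)

force = mu * m * g                  # force in N needed to keep it rolling
torque_per_motor = force * r / 2    # N*m, split across two driven wheels

# Convert to the gf*cm units motor listings often quote: 1 N*m ~ 10197 gf*cm
print(f"{torque_per_motor * 10197:.0f} gf*cm per motor")
```

At these figures it comes out around 150 gf·cm per motor – a useful lower bound to compare against the torque quoted for candidate motors, with a healthy safety margin on top.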

After another frustrating delay (ordering from China is sometimes surprisingly quick and sometimes ponderously slow), I have some more parts – some GA12-N20 motors with wheels, which are geared to a slower speed (good for this project) and apparently provide more torque than the motors on the Wifi car. Since I’d ordered a spare microcontroller and motor control board to play with, I figured I’d leave the Wifi car wired up and wire up the spare instead. Below shows the test setup, with helping hands holding the motors, the control board in the background, and a Li-Po battery providing a couple of extra volts and as much current as I need.

Connecting to this via the usual Wi-Fi control app showed the wheels running at a decent speed under load, and grabbing a wheel and the motor I had to use a fair amount of force to stop it turning. Testing with just the USB input didn’t provide nearly the same torque, so at the moment I’m not sure if these motors will work without the added power from the Li-Po – I need to get them mounted on something and add a couple of kilos of weight to test.

My impression is that I now have all the parts I need, but I need to work out whether I’m mounting 2 of these with a dolly wheel and, if so, whether the USB battery pack alone is sufficient or whether I need the Li-Po as well (which would be a bit more of a pain, as I’d want to wire it properly using the larger connector on the battery so I can put a low-voltage alarm on the 3-pin one for safety, and it would mean separate charging for the motors when that runs down). Alternatively, I got a couple more motors of the original type, so I could change to a 4 wheel setup and see how that works, or even see whether using the Li-Po with the WiFi car kit as-is gives it enough oomph.

Either way, I’ll have a few bits and pieces to tinker with after finishing assembly, but the total cost of the unit itself shouldn’t change much – at most by a few quid – it should still be possible to build a unit for under £100, so it’s on target still.

Plan B for telepresence robot hardware

After the previous effort at planning a base, where I realized a sheet music stand did NOT make a good support, I had a look at tablet mounts and decided that it would be better to get a lightweight camera tripod and a tablet mount – both together can be had for around £15. This should resolve the issues with how well the tablet is held in the unit, while providing an adjustable height and angle for the display. One order placed, a short wait, and assembly begins again. 

And… now this looks more promising. Some experiments showed that there needed to be a reasonable spacing for the legs to give stability, so the base may need to be wider than I’d originally planned (I’m wondering if this’ll start looking like a mutant Dalek by the time I’ve finished…). With 2 legs extended, however, it’s a reasonable height for seated eye level and a slight upward angle may make it usable for talking to people who are standing, too, so long as I’m not too close. It’s the right height for my younger daughter and my son when they’re standing, anyway. And I can always adjust the height later. The base may require some more work, but I think this’ll do for a stand. The tablet holder also leaves a gap in the middle, so I can plug in a USB cable just fine.

The Wifi car base has arrived, and I’ve got that with me in London to get it fully assembled. I’m reckoning I’ll have to work something out to space the wheels further apart, since the base is significantly narrower than the stable width of the tripod legs. I don’t really want the robot tipping itself backwards when it moves, or for my dog to knock it over too easily.

The instructions for the wifi car were great. At least, if by great you mean covering only part of the assembly and pointing you to a completely broken version of the application so that nothing works. As too often happens with kits like this ordered from the Far East, it’s time to use your Google-fu to find niceties such as a) how to wire up the microcontroller, b) a version of the app that works (and is in English), and c) downloads for updating the controller to work via a remote connection.

Armed with this, assembly was actually pretty easy. The installation instructions showed how to attach the motors and battery pack to the base (the bit I’m most likely to abandon to create a wider base and integrated power supply). After that, I had a packet with a few wires, some random nuts and bolts and spacers, and the microcontroller with motor control board. The first discovery is that the holes on the base don’t really line up with the holes in the motor board – you can get it kind of attached using 2 spacers, but it’s left balancing above the base, and the wires you’re provided with aren’t really long enough to go from that mounting position to the motors – you need the control board positioned somewhere there are no matching holes. My recommendation would probably be double-sided foam tape, but for now I’ve made a double-sided sticky pad using duct tape, which is holding it in place OK. Looking online, you have 2 motor control sets with positive and negative – one pair to each motor, so not too difficult. The default is to share the power supply wires between the motor control board and the microcontroller – unless you want to use more powerful motors, that’s what you want, so you then need to wire up the ground and Vin to the power supply. Since there’s an on-off switch on the motor board, the kit rocker switch is kind of optional, but it’s a little neater, so I wired that in.

Note that no tools are supplied here. You’ll need a small screwdriver and a soldering iron, which I’m not including in the budget as many will have these. Also some solder, the aforementioned foam tape or other sticking option (hot glue would work for a permanent attachment), and plenty of patience to work with the poor documentation.

Now, it’s time to test out the base. Did it turn on? No. Remember I mentioned the control board has a switch? Well, the ‘instructions’ helpfully didn’t mention that. It was off, so the unit wasn’t working. Once that was on, I’m in business. I see a hotspot to connect to, install the app it suggested, connect to the hotspot, run the app, and… nothing. This is where you need that supply of patience. The APK they recommended? It didn’t work – searching on Google found another version of the app which did, and which gives the option of connecting via a remote connection or local hotspot. The latter is the default. We’ll want the remote connection later, but for testing everything is working, use the local first.

With a better version of the app, it connects, I hit forwards, and it goes forwards and keeps going. I may need to adjust this – you don’t want a telepresence robot to keep going without you, so I may need to tweak the (thankfully open source) microcontroller script to automatically stop after a short time without input. However, it’s moving, both the motors are working, my soldering is better than I’d feared, and we’re in business. Press left and … it turns right. OK, that’s simple, at least – the motors are wired up opposite to what’s needed for the app, so quickly unscrew the motor wire pairs and swap them over, power on again, and it’s moving as you’d expect.
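The auto-stop logic I have in mind is simple enough to sketch. The real version would live in the NodeMCU Lua script using its timer support; here is the idea in Python, with the half-second timeout picked arbitrarily:

```python
import time

class MotorWatchdog:
    """Stop the motors if no command arrives within `timeout` seconds.

    A sketch of the auto-stop idea described above; the timeout value
    is an arbitrary placeholder, not a tested figure.
    """

    def __init__(self, timeout=0.5):
        self.timeout = timeout
        self.last_command = None   # timestamp of the most recent command
        self.moving = False

    def on_command(self, now=None):
        """Record that a movement command was just received."""
        self.last_command = time.monotonic() if now is None else now
        self.moving = True

    def tick(self, now=None):
        """Called periodically; returns True when the motors should stop."""
        now = time.monotonic() if now is None else now
        if self.moving and now - self.last_command > self.timeout:
            self.moving = False   # this is where a stop command would be sent
            return True
        return False
```

In practice the controller app would then send repeated commands while a button is held, and the robot would coast to a stop as soon as the stream of commands is interrupted – by releasing the button or by losing the connection.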

I’m inordinately happy with that – it’s only a crude app-controlled car at the moment, but all the parts work. I’d been wanting to have a USB battery pack which could power both the tablet and the base – I was thinking any twin-socket battery pack would do. I happen to have a nice Duracell one lying around, so I plugged a Micro-USB cable into that, plugged it into the ESP8266 NodeMCU board, and tested the app again. Sure enough, the car moved just as happily powered by this as when powered by the 4xAA batteries. This means that I can definitely have one battery pack power both the tablet and the base. With enough of a battery capacity, this could stay on standby for a week. A twin socket battery pack can be had for around £15, but a large-capacity one would be more.

However, I have another cunning plan. My phone didn’t have wireless charging, so I picked up a Qi pickup that slots into the MicroUSB port and a Qi charger base. The total cost for this is around £11. It can provide 800mA, which should be more than the drain when on standby. My tests suggest that the Qi pickup should be able to charge my battery pack faster than the idle base and tablet drain it, so hopefully I can set up a Qi wireless charging base station whereby I back the robot into the base and so long as it’s close enough it’ll charge. I’m thinking it could be done with the Qi base either on the floor or the wall, but that the wall is likely to be the easier one to get working.

Now, there’s some playing around with the software, and probably a need to assemble a wider base (I could try hot-gluing the motors and dolly wheel to the tripod, but a more solid base would be better to mount the battery pack and microcontroller on and ensure a low centre of gravity). But what’s the cost so far?

Tablet – £39
Tripod – £7
Tablet Mount – £7
Battery Pack – est. £15 (the Duracell one is much more expensive, but I had it anyway)
WiFi Car – £15
Qi Charger – £6
Qi Pickup – £5
Total – £94

So, it’s coming in under budget (particularly since I already had the battery pack). It will clearly be possible to build some kind of telepresence robot for around £100, including wireless charging. For a complete bare-bones option, you could drop the Qi parts and battery pack and have something sufficient for a meeting or other use where someone can turn it on for you for around £70.

My plan is to work out a full parts list with links to appropriate parts, plus full instructions, once I’ve got things fully assembled and any software and hardware issues resolved. Hopefully others will be encouraged to build their own budget telepresence robots. It’s been an interesting experience so far, planning hardware, doing some soldering, investigating microcontroller software, etc. I’m sure once it’s all working I’ll have a temptation to look at extensions – the most interesting would be some kind of autonomous system which maps out its environment using Simultaneous Localisation and Mapping (SLAM) or the like, but a good start might be some homing beacon idea along with obstacle avoidance. First things first, though – I need to finish the basic build and deal with any issues like stability, auto-stopping, etc.

Working on a telepresence robot

Since I’m in London for work during the week, and only home with my family at the weekend, I want whatever opportunities I can to be more engaged with my family during the week. I’m also keenly interested in technology – tinkering with code and ideas during my free time.

The natural conclusion for this is to explore the areas of video calling and telepresence. If I can’t be there in person, then having a more complete means of engaging is of interest. To that end, I’m looking at building a budget telepresence setup – complete with video calling and a mobile base. I want good video call quality, I want to be understood reasonably easily, I want a stable base that I can control remotely, and I want to do it for under £100.

There were a few options considered for the screen and camera elements. I have a Microsoft LifeCam HD webcam which has a pretty reasonable feed, and combined with a Raspberry Pi 3 this would have provided a reasonable amount of processing power, full control over the operating system and good quality video. There are screens available reasonably cheaply for the Pi, but most of them are small, and once you look at 7 inch screens and above the price looks unlikely to fit in the budget. Additionally, while it seemed desirable to have a full open source stack using something like Linphone or another SIP-based video calling application, in practice I didn’t have much luck getting the Raspberry Pi to handle video calling reliably. I’m open to revisiting this approach, but with the budget constraints other alternatives were in order.

The main alternative is a tablet. With the budget an iPad is obviously out. There are some Chinese tablets around for as little as £20, but the battery life was liable to be poor, and the camera and speakers equally so – not promising details for a telepresence base. Doing some research into what tablets had a decent camera for under £50 there were few options – for around £100 there are a few more options, but that’s above the budget. Some (but not all) of the Kindle Fire HD tablets have cameras, and I remembered Amazon making a big thing about the video calling for its support features, so they seemed like a possible candidate. While too expensive new, there are plenty of refurbished models around, so ultimately I decided on the 2012 Amazon Fire HD 7 Inch. These can be picked up for under £40. (N.B. Don’t get the 2013 Fire HD 7 model, as it didn’t have a front camera, so it’s no good for this project.)

Testing one out, they also advertised Skype support. Great – I’ve used Skype lots of times, and with auto-answer surely that’s a solution? If only things were that simple. With Skype installed, a new account created, and appropriate auto-answer and volume settings, I tried a few test calls with the tablet on, and everything looked good. However, when I let the tablet go into standby, I found Skype was no longer listening – the fixed Fire HD configuration seems to turn off the WiFi to conserve power when in standby. Since I couldn’t leave the screen on all the time, this is a deal-breaker. I tried Linphone, and it had the same problem. I tried an app that claims to configure this setting, installed from the Amazon App Store, and having paid my dollar, I found it didn’t do a thing. That’s frustrating – I mean, the app had ONE function, and it couldn’t do it. So, one completely negative review later (such empowerment – at least hopefully it’ll deter others from wasting a few cents as well), it’s time for a more drastic solution – Cyanogenmod.

The Fire HD is a pretty reasonable tablet on its own, but if you hit a limitation you don’t like, a number of people have worked hard to get a more open version of Android running on these tablets. For a few quid more you can buy a refurbished tablet with Cyanogenmod already installed, and I’d recommend doing so unless you particularly like to tinker, but I already had my tablet, so I had to do the rooting, recovery package installation and Cyanogenmod installation myself. That’s a tutorial in itself, and there are guides online. For now, I’ll recommend just getting a pre-flashed tablet. With Cyanogenmod and Google Apps installed, I could install a current version of Skype and set the tablet to leave the wireless connection on in standby. Job done.

The next part of the build was some kind of support for the tablet. I thought it would be clever to get a sheet music stand – figuring it was a suitable weight, narrow at the top for a low centre of gravity, and with a suitable support for the tablet. In short, I was wrong. It served me OK for making a video call to the kids while working on a Plan B, but it’s just not good enough. Firstly, there’s no lip at the bottom and no clip for the tablet, so it’s not secure. Secondly, unless you’ve decided to mount a 14+ inch tablet, the size of the top support is going to be way too big. They’re not really designed to minimise size when you have something smaller, so while the sheet support folded up, it didn’t do so in a way that I could use it, and the tablet was even less secure. I also couldn’t charge the tablet as the charging port is on the bottom and there’s metal from the stand in the way.

I discovered the ESP8266-based WiFi car kits online, and figured one of those would make a reasonable base. At the point of trying out the sheet music stand, this was still awaiting delivery, but at £15 for a motorized base which can in theory be controlled by a remote app over the internet, it’s a possible solution to the movement question that will require minimal investment and new development.

So, at the end of the first experiments, my daughter has a new sheet music stand for when playing her recorder, and I had a possible base on the way, a need for a better support plan, and a working tablet. Definitely progress, but a way still to go… tune in again for part 2.

Fingers dancing across the keyboard – professional standards and equipment

One of the frustrations I’ve often had in a workplace is a sub-standard development machine. The net result is lengthy delays while programs switch, code builds, database queries run and so on. While you naturally try to minimise the disruption by thinking about the next steps and doing what you can while things are unresponsive, it can be a serious productivity killer. This invariably strikes me as an entirely ridiculous situation given the impact on the business. For the sake of a few hundred extra in hardware per year, a developer whose time is costing many tens of thousands a year is losing a non-trivial proportion of their productivity just waiting for their machine to catch up. The economics only require 1% of that developer’s time to be eaten up for it to be better to get a decent machine, and if you’re noticing the delays in various stages of your development efforts, you can absolutely guarantee that a lot more than 1% of productivity is being lost.
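To put numbers on that break-even claim – the salary and hardware figures here are illustrative assumptions, not anyone’s actual costs:

```python
# Back-of-envelope break-even: how much developer time must a better
# machine save to pay for itself? All figures are assumed examples.
developer_cost_per_year = 60000   # fully-loaded cost in pounds
extra_hardware_per_year = 300     # a few hundred extra for a decent machine

break_even = extra_hardware_per_year / developer_cost_per_year
print(f"Break-even at {break_even:.1%} of the developer's time")
```

At those figures the break-even point is 0.5% of the developer’s time – so if even 1% of the year is lost to waiting, the better machine has already paid for itself twice over.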

It gets worse still for the company, though. By being cheap on the equipment that the developer gets to use, the company fosters frustrations in the developer. Are the management idiots who can’t see the self-evident value of decent equipment? Do they just not like me enough to provide me with decent equipment? Or worse, are my prospects being negatively affected by this reduced productivity, as I look like a more expensive resource for a given output, through no fault of my own? These doubts and frustrations are hardly conducive to productivity in what can sometimes be a stressful job. When you’re thinking about things other than the code, like how to tweak your settings to improve the build speed, or what you can kill to free up a little bit more memory, that’s mental space taken away from the complex models that you need to keep in mind. And if you have to go and make a coffee while things build, that may be nice for catching up with your colleagues, but it doesn’t get any more code written either.

So, the first point of this post is that the equipment developers have access to should be commensurate with the simple fact that they are undoubtedly a very expensive resource. I’m a professional developer, and I know I cost my clients a lot of money. I aim to provide more than enough to justify that cost, but I’m keen to make myself as valuable as possible to my clients. This involves not just providing input on the best approaches to development, recommendations on the viability of options and, of course, the development itself, but suggestions for peripheral improvements to the whole process. When it comes to development machines, £1000 spent on a high-spec machine should be returned several times over in improved productivity compared to a machine costing half that. Naturally, there’s a point of diminishing returns, as the absolute top-end machines can be astonishingly expensive, but aside from pandering to someone’s ego by making them feel special, it’s going to be rare that you need the absolute best machine going.

This should include other things such as a good keyboard. I have an OK keyboard at work. It has reasonable key spacing and keys haven’t yet got stuck, but it’s a keyboard worth about £10, which seems about standard for most work machines. I’m going to be spending large parts of my working day at the keyboard, whether it’s writing code or documentation or relevant articles about the company or related technology. The difference is less noticeable, just an occasional ache in the wrist from the less than ideal setup, but again, for the sake of a few quid there’s a slight impact on the performance of an expensive resource.

I’d compare it to hiring a consultant surgeon and then providing them with cheap or antiquated equipment to perform their procedures. The risks in a developer’s case are, thankfully, rarely a matter of life and death, but both will see an impact on their performance given sub-standard equipment, and both are specialists who cost significant amounts to employ – in some cases developers may be more expensive than surgeons (I’m not trying to suggest we’re necessarily worth that; it’s a simple product of supply-and-demand market forces, given that most areas of IT are not considered vocational, game development excepted).

There is a corresponding obligation on developers. Not only should they be maintaining their development skills, but they should be ensuring that they are as capable as possible of maximising the time that they spend working out code rather than writing it. Since there is a substantial amount of typing involved, some time should almost invariably be spent learning to touch type. Whether or not you stick fully with the process after the initial training, the difference in the speed at which you can write words is quite marked. In some ways the position isn’t entirely ideal for a developer, as the use of less common symbols may require a less than comfortable contortion of the hands to stay generally in a touch typist’s hand position, but the muscle memory and starting point it develops mean that you can write code and documentation faster than before. For the sake of a few hours or so adjusting your naturally developed typing process to one that is able to develop further, you’ll reap benefits in terms of typing speed. A slow developer should be able to type at least 50-60wpm just from their natural typing practice; a practised developer who has spent some time improving their typing speed should be looking at 100+wpm, comparable with many typists. You’ll also develop more of a feel for when you’re typing things incorrectly, reducing the number of times that you mistype a variable name (saving time when you build – remember that we don’t benefit from a spelling checker in most of what we write). The aim is to develop your typing to a point where it not only reduces the time you spend writing the code, but reduces the time you think about the process of typing, and allows those moments of flow when your logic is clearest to be expressed as completely as possible, maximising productivity. If your typing is slowing down your ability to express the code that you’re imagining in your head, then it’s a limiting factor, and those are areas to improve.

More specific to developers are editors and keyboard controls. Most IDEs provide a range of keyboard shortcuts which allow many things to be done more efficiently from the keyboard than with the mouse. The process of selecting menus or buttons is relatively slow compared to a single keyboard combination, not least because you have to take one hand away from its ideal typing position and then position a cursor in a small part of the screen. If you’re not using a reasonable selection of keyboard shortcuts then you are not maximising your productivity and should take a step back to learn them; the time they save, and the avoidance of distraction from a task that can’t rely mainly on muscle memory, make them something that any professional should look to embrace. Editors like Vim or Emacs can seem like a waste of time to learn, with their steep learning curve and lack of full IDE functionality, but the sophistication with which you can interact with the text in a file and their lightweight nature can make them a very efficient way of modifying code. It’s interesting to see Microsoft move towards supporting lighter tooling with Visual Studio Code alongside the full (and very different) Visual Studio, as it supports a range of plugins that can improve productivity and support a workflow that minimises delays while background tasks freeze up your editor.

There is naturally some downside to getting developers to type much faster. It’s not that time is taken up doing that rather than learning about new languages or frameworks, because the amount of time involved is small and will be quickly repaid by the time saved typing code and documentation faster. The downside is the likelihood of taking up more of other people’s time by writing long essays (like this one). Where a short note might previously have sufficed, the ability to type volumes at speed means that half an hour typing up some notes from a meeting can become 6 pages of comprehensive documentation. I’ve had complaints that my emails are excessively long. I’ve rarely spent a long time writing them; I’ve just let something flow. The final step, perhaps, is to get the same developers to spend some time editing what they write for conciseness and form. While this blog is more of a stream of consciousness and not intended to be minutely edited, when I’m writing emails and other communications I try to trim them down to size. I’ve become quite fond of Twitter as a result. It enforces a very small size limit. My approach ends up being to write what I want to say, delete at least half of it, and then review what I have left and trim it down to size for a Tweet. It’s a good exercise, trying to express a concept in so few letters, although I (as demonstrated here) have yet to consistently apply it to everything I write.

TL;DR: I hope that the general message has been clear. Provide good equipment to developers, as even a few percent of time spent waiting for check-ins, builds, debuggers, etc, can cost a company much more over a single year than the additional cost of a well-specced machine. Developers should also optimise their use of equipment – for example, learning to touch type and which keyboard shortcuts are most useful. If both are done, the developer is able to maximise their focus on writing code that matches the complex model in their head, improving productivity, staff morale and hopefully also the quality of the code (by reducing delay-related distractions).

Bugbears about proprietary lock-in on hardware

We’ve come to expect printers to require their own ink cartridges, even to the point of having microchips to identify the cartridge. This makes ink cartridges relatively complicated when they really shouldn’t be. This is an area where at least you’re paying extra for the knowledge that it’s the right ink for the printer, so it shouldn’t dry up or clog the printer. The printers may also be relatively subsidised as the manufacturers aim to make their profits from the ink rather than the printer, giving you very competitive prices if you don’t print much.

This is nothing new. Razor blades have been the classic place for vendor lock-in – making no money on the initial purchase but charging a huge premium for branded cartridges compared to the cost of plain razor blades. Again, you buy into the system knowing that there’s a lock-in and you’re paying more for the disposable part and less up front.

What’s much worse is ignoring common standards to provide proprietary interfaces for hardware like hard drives and cameras. By all means implement a driver or app that provides value-added features to improve the experience. But don’t hide away the open standards so people can’t do as much with the hardware.

An example. The Buffalo Ministation Air. You’d buy it if you want 1/2 TB of storage wirelessly. If you don’t care about wireless, then you’d use a wired portable drive instead. So why, then, does it not publicly provide a means to access the drive wirelessly from a Windows device? Why are we locked into a poorly maintained application that limits which files you can see on Android, and which sometimes fails to list even supported files? Why can’t we have Windows shares/Samba, NFS, etc, perhaps DLNA for sharing media? Those would support other applications. Why do the decision-makers assume that all their customers only want to use their devices in the ways they tell them to? I don’t. I want to be able to access the files directly from another app. I want to play files wirelessly from my Windows tablet. I might even want to push the boat out and try to think of it as properly integrated storage alongside my other portable devices.

It’s rather disappointing, then, to have the manual effectively tell you ‘this hardware may be sleek and pretty, but it has behavioural problems and doesn’t play well with other kids’. Fortunately, they lie. Of course they’re not going to implement their own alternative to Windows shares. They just keep them hidden away and don’t bother to publish them. That way they can write noddy apps to control the whole process that use that secret wisdom, while we uninformed (presumably unwashed) masses have to satisfy ourselves with the meagre offerings passed on. Or, rise up in glorious revolution, proclaim loudly the secret knowledge, and help others to free themselves from the shackles of app lock-in. admin/admin. The truth is free.

IP cameras are another example. There are established protocols, well supported by drivers, that allow various systems to stream video data from disparate sources and do with it what they will. Limited only by creativity, intelligence, technology and (frequently) patience, it’s possible to create a network of open devices monitored all at once, or analysed for movement, faces or Funny Cat Fails. In the name of simplicity (and selling a few premium services at ‘only’ the cost of their brand reputation), many IP cameras hide their open protocols behind applications that require registration or subscription to a monitoring service which can (for a fee) detect movement and host the footage itself. If you want to be really stung, you can store a few GB of video, too. And when you want to watch it, you can watch from your Android app, one feed at a time, with motion detection that soon becomes reminiscent of water torture: the incessant reminders that your family are still in and moving around lead you to turn off any such motion detection, unencumbered as it is by any ability to set Appropriate Times Of Day to operate.
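To show how little is needed once the raw frames are accessible, here is a minimal frame-difference motion detector, the kind of analysis an open video feed makes trivial. It uses numpy only, treating frames as 2D grayscale arrays (as you might decode from a camera stream); the threshold is an arbitrary assumption you would tune per camera.

```python
import numpy as np

def motion_score(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute pixel difference between two consecutive frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean())

def detect_motion(prev_frame: np.ndarray, frame: np.ndarray,
                  threshold: float = 10.0) -> bool:
    """Flag motion when the average change exceeds a tunable threshold."""
    return motion_score(prev_frame, frame) > threshold

# Synthetic demo: a static scene versus one where a bright object appears.
static = np.zeros((120, 160), dtype=np.uint8)
moved = static.copy()
moved[40:80, 60:100] = 255
```

With open protocols, a loop feeding consecutive frames from any of the four cameras into `detect_motion` would be the whole of a DIY motion monitor, with whatever scheduling rules you like bolted on.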

I have 4 IP cameras. One of them is a lump of Chinese cheapness, with all the instruction and build quality that I’ve come to expect from that. For all that, perhaps because of that, it doesn’t forgo open protocols, it embraces them, and as a result it’s the most useful of the set. The pair of Philips IP cameras and my BT IP camera all feel much more solid. They look nicer. The video is clearer. But I can’t hook them up to a custom system to try to identify who’s at the door because they’re all LOCKED DOWN. Yes, in the name of simplicity I can only access them from an app, and if I wanted to watch all 4 cameras at once I’d have to switch between 3 pieces of software to view them. What could make things easier? I think you’ve guessed by now: it would be much easier FOR ME if the manual told me how to access the underlying open protocols, because then I could (almost trivially) integrate them with the software I want to write. I’m pretty sure the open protocols are there (for some I’ve seen a web page with a login running on the camera), but without the required access information, I’m stuck in Toy Land.
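‘Almost trivially’ is not an exaggeration. For cameras that expose standard RTSP under the covers, a generic client needs nothing more than an address, credentials and a stream path. A hypothetical sketch (the port and path below are common defaults, not values documented for these particular cameras):

```python
def rtsp_url(host: str, user: str, password: str,
             port: int = 554, path: str = "stream1") -> str:
    """Build a standard RTSP URL: rtsp://user:pass@host:port/path.

    554 is the conventional RTSP port and "stream1" a typical default
    path; real cameras vary, which is exactly the information the
    manuals decline to publish.
    """
    return f"rtsp://{user}:{password}@{host}:{port}/{path}"

# With the URL known, a generic viewer could pull all four feeds, e.g.:
# import cv2
# cap = cv2.VideoCapture(rtsp_url("192.168.1.20", "admin", "admin"))
# ok, frame = cap.read()
```

One function and any off-the-shelf RTSP client, and the three separate vendor apps become unnecessary, which is presumably why the access details stay unpublished.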

These Android apps certainly make simple use cases easier. You don’t need to worry about firewalls and routing; you set up a single account and that’s almost the job done. Some even have nice touches like picking up a QR code from the webcam stream to quickly configure the IP camera against your account. But when you want to do something, anything, out of the ordinary, these become like training wheels on a professional cyclist’s bike: severely restricting what would be possible without them. Like training wheels, users should be able to take off the proprietary apps and use the hardware to its full capacity.

So please, any people working for hardware vendors reading this (I can but hope), when you’re looking at a custom app or open protocols, remember. It’s not an either/or proposition. It’s OK to have a ‘my first IP Camera/Hard Drive/Automated Nose Picker’ interface, but unless you’re actively advertising the lock-down, let the underlying protocols be known and accessed so that those of us who want to do more than 1 thing with a device have a reasonable chance to do so. We might even start spouting about the virtues of your open software rather than ranting at length about lock-in.