What Heartbleed means for you

On the 8th April a group of security researchers published information about a newly discovered exploit in a popular encryption library. With some marketing panache, they called this exploit “Heartbleed”.

A huge number of Internet services were vulnerable to this exploit, and although many of them have now been patched, many remain vulnerable. In particular, because this is a widely used Open Source library, many of the very largest and most popular sites were directly affected.

Attention has so far focused on the possible use of the exploit to obtain “key material” from affected sites, but there are some more immediate ramifications, and you need to act to protect yourself.

Unfortunately the attack will also reveal other random bits of the webserver’s memory, which can include usernames, passwords and cookies. Obtaining this information will allow attackers to log into these services as you, and then conduct the usual fraud and identity theft.

Once the dust has settled (so later today on the 9th, or tomorrow on the 10th) you should go and change every single one of your passwords. Start with the passwords you’ve used recently and high value services.

It’s probably a good idea to clear all your cookies too once you’ve done this, to force you to re-login to every service with your new password.

You should also log out of every single service on your phone, and then log back in, to get new session cookies. If you are particularly paranoid, wipe your phone and reinstall. Mobile app session cookies are likely to be a very popular vector for this attack.

This is an enormous amount of work, but you can use it as an opportunity to set some decent random passwords for every service and adopt a tool like LastPass, 1Password or KeePass while you are at it.

Most people are hugely vulnerable to password disclosure because they share passwords between accounts, and the entire world of black-hats is out there right now slurping passwords off every webserver they can get them from. There is going to be a huge spike in fraud and identity theft soon, and you want to make sure you are not a victim of it.

The Man-In-The-Middle Attack

In simple terms, stolen key material would allow an attacker to impersonate an affected site, presenting its SSL certificate and showing the padlock icon that indicates a secure connection, even though the attacker controls your connection.

They can only do this if they also manage to somehow make your browser connect to their computers for the request. This can normally only be done either by controlling part of your connection directly (hacking your router, maybe), or by “poisoning” your access to the Domain Name System (DNS), through which your browser finds out how to reach a site (there are many ways to do this, but none of them are trivial).

You can expect Internet security types to be fretting about this one for a long time to come, and there are likely to be some horrific exploits against some high-profile sites executed by some of the world’s most skilled hackers. If they do it well enough, we may never hear of it.

This exploit is going to have huge ramifications for server operators and system designers, but in practical terms there is very little that most people can do to mitigate the risk for their own browsing.

About us: Isotoma is a bespoke software development company based in York and London specialising in web apps, mobile apps and product design. If you’d like to know more you can review our work or get in touch.

Reviewing Django REST Framework

Recently, we used Django REST Framework to build the backend for an API-first web application. Here I’ll attempt to explain why we chose REST Framework and how successfully it helped us build our software.

Why Use Django REST Framework?

RFC-compliant HTTP Response Codes

Clients (javascript and rich desktop/mobile/tablet applications) will more than likely expect your REST service endpoint to return status codes as specified in the HTTP/1.1 spec. Returning a 200 response containing {'status': 'error'} goes against the principles of HTTP and you’ll find that HTTP-compliant javascript libraries will get their knickers in a twist. In our backend code, we ideally want to raise native exceptions and return native objects; status codes and content should be inferred and serialised as required.

If authentication fails, REST Framework serves a 401 response. Raise a PermissionDenied and you automatically get a 403 response. Raise a ValidationError when examining the submitted data and you get a 400 response. POST successfully and get a 201, PATCH and get a 200. And so on.
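The principle is easy to sketch in plain Python. This is illustrative only, a toy mapping rather than REST Framework's actual exception handler; the exception names mirror DRF's but are defined locally here:

```python
# Toy exception-to-status-code mapping, illustrating the behaviour
# described above (not REST Framework's real implementation).
class NotAuthenticated(Exception): pass
class PermissionDenied(Exception): pass
class ValidationError(Exception): pass

STATUS_FOR_EXCEPTION = {
    NotAuthenticated: 401,
    PermissionDenied: 403,
    ValidationError: 400,
}

def status_for(exc, default=500):
    """Return the HTTP status code implied by a raised exception."""
    return STATUS_FOR_EXCEPTION.get(type(exc), default)
```

In your view code you simply raise the exception; the framework layer consults a mapping like this when building the response.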


You could PATCH an existing user profile with just the field that was changed in your UI, DELETE a comment, PUT a new shopping basket, and so on. HTTP methods exist so that you don’t have to encode the nature of your request within the body of your request. REST Framework has support for these methods natively in its base ViewSet class which is used to build each of your endpoints; verbs are mapped to methods on your view class which, by default, are implemented to do everything you’d expect (create, update, delete).
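The verb-to-method convention can be sketched as follows. This is a toy stand-in, not REST Framework's actual Router; CommentViewSet and its return values are hypothetical, though the action names (create, partial_update, destroy) follow DRF's convention:

```python
# Sketch of the verb-to-action mapping a ViewSet-style router performs.
ACTION_FOR_METHOD = {
    'POST': 'create',
    'PUT': 'update',
    'PATCH': 'partial_update',
    'DELETE': 'destroy',
}

class CommentViewSet:  # hypothetical stand-in for a DRF ViewSet
    def create(self):
        return 201
    def partial_update(self):
        return 200
    def destroy(self):
        return 204

def dispatch(view, method):
    # Look up the handler named by the HTTP verb and call it.
    handler = getattr(view, ACTION_FOR_METHOD[method])
    return handler()
```

The request body never needs to say "please delete this"; the verb alone selects the handler.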


The base ViewSet class looks for the Accept header and encodes the response accordingly. You need only specify which formats you wish to support in your settings.py.

Serializers are not Forms

Django Forms do not provide a sufficient abstraction to handle object PATCHing (only PUT) and cannot encode more complex, nested data structures. The latter limitation lies with HTTP, not with Django Forms; HTTP forms cannot natively encode nested data structures (both application/x-www-form-urlencoded and multipart/form-data rely on flat key-value formats). Therefore, if you want to declaratively define a schema for the data submitted by your users, you’ll find life a lot easier if you discard Django Forms and use REST Framework’s Serializer class instead.

If the consumers of your API wish to use PATCH rather than PUT, and chances are they will, you’ll need to account for that in your validation. The REST Framework ModelSerializer class adds fields that map automatically to Model Field types, in much the same way that Django’s ModelForm does. Serializers also allow nesting of other Serializers for representing fields from related resources, providing an alternative to referencing them with a unique identifier or hyperlink.
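The PUT-versus-PATCH distinction can be sketched with a toy validator. This is not Serializer internals, just the principle: partial validation (PATCH) skips absent fields instead of reporting them as missing:

```python
# Toy validator illustrating full (PUT) vs partial (PATCH) validation.
# `schema` maps field names to validator callables; both are hypothetical.
def validate(data, schema, partial=False):
    errors = {}
    for field, validator in schema.items():
        if field not in data:
            if not partial:
                errors[field] = 'This field is required.'
        elif not validator(data[field]):
            errors[field] = 'Invalid value.'
    return errors
```

A PUT of {'name': 'Bob'} against a schema requiring name and email fails; the same body as a PATCH validates cleanly.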


Should you choose to go beyond an AJAX-enabled site and implement a fully-documented, public API then best practice and an RFC or two suggest that you make your API discoverable by allowing OPTIONS requests. REST Framework allows an OPTIONS request to be made on every endpoint, for which it examines request.user and returns the HTTP methods available to that user, and the schema required for making requests with each one.


Support for OAuth 1 and 2 is available out of the box and OAuth permissions, should you choose to use them, can be configured as a permissions backend.


REST framework provides a browsable HTTP interface that presents your API as a series of forms that you can submit to. We found it incredibly useful for development but found it a bit too rough around the edges to offer as an aid for third parties wishing to explore the API. We therefore used the following snippet in our settings.py file to make the browsable API available only when DEBUG is set to True:
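A minimal sketch of such a snippet, assuming REST Framework's DEFAULT_RENDERER_CLASSES setting (DEBUG is assigned here only to keep the fragment self-contained; in a real settings.py it is defined earlier):

```python
# settings.py sketch: expose the browsable API only in development.
DEBUG = True  # normally set elsewhere in settings.py

REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': [
        'rest_framework.renderers.JSONRenderer',
    ],
}

if DEBUG:
    REST_FRAMEWORK['DEFAULT_RENDERER_CLASSES'].append(
        'rest_framework.renderers.BrowsableAPIRenderer'
    )
```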



REST Framework gives you an APITestCase class which comes with a modified test client. You give this client a dictionary and encoding and it will serialise the request and deserialise the response. You only ever deal in python dictionaries and your tests will never need to contain a single instance of json.loads.


The documentation is of a high quality. Because it copies the Django project’s three-pronged approach to documentation – tutorial, topics, and API reference – Django buffs will find it familiar and easy to parse. The tutorial quickly gives readers a feeling of accomplishment, the high-level topic-driven core of the documentation allows readers to quickly get a solid understanding of how the framework should be used, and the method-by-method API documentation is very detailed, frequently offering examples of how to override existing functionality.

Project Status

At the time of writing the project remains under active development. The roadmap is fairly clear and the chap in charge has a solid grasp of the state of affairs. Test coverage is good. There’s promising evidence in the issue history that creators of useful but non-essential components are encouraged to publish their work as new, separate projects, which are then linked to from the REST Framework documentation.



We found that writing permissions was messy and we had to work hard to avoid breaking DRY. An example is required. Let’s define a ViewSet representing both a resource collection and any document from that collection:


class JobViewSet(ViewSet):
    """Handles both URLs: the collection and individual items."""
    serializer_class = JobSerializer
    permission_classes = (IsAuthenticated, JobPermission)

    def get_queryset(self):
        if self.request.user.is_superuser:
            return Job.objects.all()
        return Job.objects.filter(
            Q(applications__user=self.request.user)
            # | Q(...)  -- the second clause is truncated in the original
        )

If the Job collection is requested, the queryset from get_queryset() will be run through the serializer_class and returned as an HttpResponse in the requested encoding.

If a Job item is requested and it is in the queryset from get_queryset(), it is run through the serializer_class and served. If a Job item is requested and is not in the queryset, the view returns a 404 status code. But we want a 403.

So if we define that JobPermission class, we can fail the object permission test, resulting in a 403 status code:


class JobPermission(BasePermission):
    def has_object_permission(self, request, view, obj):
        if obj in Job.objects.filter(
            Q(applications__user=request.user)
            # | Q(...)  -- the same truncated clause as in get_queryset()
        ):
            return True
        return False

So we either duplicate the logic from the view method get_queryset() (we could admittedly reuse view.get_queryset(), but the method and its underlying query would still be executed twice), or we don’t duplicate it and the client is sent a completely misleading response code.

The neatest way to solve this issue seems to be to use the DjangoObjectPermissionsFilter together with the django-guardian package. Not only will this allow you to define object permissions independently of your views, it’ll also allow you filter querysets using the same logic. Disclaimer: I’ve not tried this solution, so it might be a terrible thing to do.

Nested Resources

REST Framework is not built to support nested resources of the form /baskets/15/items. It requires that you keep your API flat, of the form /baskets/15 and /items/?basket=15.

We did eventually choose to implement some parts of our API using nested URLs; however, it was hard work and we had to alter public method signatures and the data types of public attributes within our subclasses. We required heavily modified Router, Serializer, and ViewSet classes. It is worth noting that REST Framework deserves praise for making each of these components so pluggable.

Very specifically, the biggest issue preventing us pushing our nested resources components upstream was REST Framework’s decision to make lookup_field on the HyperlinkedIdentityField and HyperlinkedRelatedField a single string value (e.g. “baskets”). To support any number of parent collections, we had to create a NestedHyperlinkedIdentityField with a new lookup_fields list attribute, e.g. ["baskets", "items"].


REST Framework is great. It has flaws but continues to mature as an increasingly popular open source project. I’d whole-heartedly recommend that you use it for creating full, public APIs, and also for creating a handful of endpoints for the bits of your site that need to be AJAX-enabled. It’s as lightweight as you need it to be and most of what it does, it does extremely well.


Ballet Phase 3 PPR

A strange one, this. The project was delivered under budget and there were no significant quibbles from the customer about how quickly we turned it around. However, by the end of it, Jo and I in particular felt once again like actually sacking them as a customer. Our main point of contact is still Digital Manager David Watson (lol), who has a mercurial nature: capable of being extremely charming and reasonable, then suddenly changing everything, demanding the moon on a stick and threatening to get rid of us as an agency. Not the best.

We were also a bit twitchy going into this one due to the very unsuccessful nature of phase 2 that we delivered in September 2013.

So phase 1 was stressful and odd – but we delivered in spades. Phase 2 went very badly wrong (and was where we learned of David’s tendency to be a tough customer). For phase 3 we actually learned from our mistakes in the previous releases:

1) David needed to understand why the site is the way it is. The timeline for the original build was *ridiculous*, and decisions that were practical at the time seem odd if you don’t have the context of the original build.

2) David has a *very specific* image of what he wants that he is often unsuccessful at communicating. This is made to seem as if it’s our fault. Budgets need to be set accordingly. Contingency must be added; scope must be policed.

Although David did manage to extract a couple of change requests from us for free at the end of the project, we were largely successful in this – so nice work Tom and Antony.

I think the single most important step in this was reviewing the requirements in detail with him with a developer on the call. We could describe what we were intending to do and he could say, no, I want it covered in diamonds instead. We headed off at least 3 bad assumptions that would have killed us this way.


Requirements are badly captured in designs – this customer particularly needs a written scope and needs to be grabbed by the shoulders and made to think about consequences.

We invoiced £4,720 and timesheeted £3,520 — so our achieved day rate was an excellent £697.27. However it’s worth noting that the actual cost of the project was 4-5 days higher with the couple of CRs he managed to wangle. This was captured in support.

Django Class-Based Generic Views: tips for beginners (or things I wish I’d known when I was starting out)

Django is renowned for being a powerful web framework with a relatively shallow learning curve, making it easy to get into as a beginner and hard to put down as an expert. However, when class-based generic views arrived on the scene, they were met with a lukewarm reception from the community: some said they were too difficult, while others bemoaned a lack of decent documentation. But if you can power through the steep learning curve, you will see they are also incredibly powerful and produce clean, reusable code with minimal boilerplate in your views.py.

So to help you on your journey with CBVs, here are some handy tips I wish I had known when I first started learning all about them. This isn’t a tutorial, but more a set of side notes to refer to as you are learning; information which isn’t necessarily available or obvious in the official docs.

Starting out

If you are just getting to grips with CBVs, the only view you need to worry about is TemplateView. Don’t try anything else until you can make a ‘hello world’ template and view it on your dev instance. This is covered in the docs. Once you can handle that, keep reading the docs and make sure you understand how to subclass a ListView and DetailView to render model data into a template.

OK, now we’re ready for the tricky stuff!

Customising CBVs

Once you have the basics down, you will find that most of your work revolves around subclassing the built-in class-based generic views and overriding one or two methods. At the start of your journey, it is not very obvious what to override to achieve your goals, so remember:

  • If you need to get some extra variables into a template, use get_context_data()
  • If it is a low-level permissions check on the user, you probably want dispatch()
  • If you need to do a complicated database query on a DetailView, ListView etc, try get_queryset()
  • If you need to pass some extra parameters to a form when constructing it via a FormView, UpdateView etc, try get_form() or get_form_kwargs()
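As a quick illustration of the first bullet, here is the get_context_data() override pattern. The TemplateView below is a minimal local stand-in so the sketch runs on its own; in a real project you would subclass django.views.generic.TemplateView, and BookListView is a hypothetical view:

```python
# Stand-in for django.views.generic.TemplateView, just enough to
# demonstrate the override pattern without Django installed.
class TemplateView:
    def get_context_data(self, **kwargs):
        return dict(kwargs)

class BookListView(TemplateView):  # hypothetical view
    def get_context_data(self, **kwargs):
        # Always call super() first, then add your extra variables.
        context = super(BookListView, self).get_context_data(**kwargs)
        context['favourite_colour'] = 'blue'  # extra template variable
        return context
```

The template then simply refers to {{ favourite_colour }} alongside whatever the parent class provided.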


If you haven’t heard of ccbv.co.uk, go there and bookmark it now. It is possibly the most useful reference out there for working with class-based generic views. When you are subclassing views and trying to work out which methods to override, and the official docs just don’t seem to cut it, ccbv.co.uk has your back. If it wasn’t for that site, I think we would all be that little bit grumpier about using CBVs.


CBVs cut a LOT of boilerplate code out of the process of writing forms. You should already be using ModelForms wherever you can to save effort, and there are generic class-based views available (CreateView/UpdateView) that allow you to plug in your ModelForms and reduce your boilerplate code even further. Always use this approach if you can. If your form does not map to a particular model in the database, use FormView.


If you want to put some guards on your view e.g. check if the user is logged in, check they have a certain permission etc, you will usually want to do it on the dispatch() method of the view. This is the very first method that is called in your view, so if a user shouldn’t have access then this is the place to intercept them:

from django.core.exceptions import PermissionDenied
from django.views.generic import TemplateView

class NoJimsView(TemplateView):
    template_name = 'secret.html'

    def dispatch(self, request, *args, **kwargs):
        if request.user.username == 'jim':
            raise PermissionDenied  # HTTP 403
        return super(NoJimsView, self).dispatch(request, *args, **kwargs)

Note: If you just want to restrict access to logged-in users, you can wrap the dispatch() method with Django’s login_required decorator (applied via method_decorator). This is covered in the docs, and it may be sufficient for your purposes, but I usually end up having to modify it to handle AJAX requests nicely as well.

Multiple inheritance

Once you start subclassing and overriding generic views, you will probably find yourself needing multiple inheritance. For example, perhaps you want to extend your “No Jims” policy (see above) to several other views. The best way to achieve this is to write a small Mixin and inherit from it along with the generic view. For example:

from django.core.exceptions import PermissionDenied
from django.views.generic import TemplateView

class NoJimsMixin(object):
    def dispatch(self, request, *args, **kwargs):
        if request.user.username == 'jim':
            raise PermissionDenied  # HTTP 403
        return super(NoJimsMixin, self).dispatch(request, *args, **kwargs)

class NoJimsView(NoJimsMixin, TemplateView):
    template_name = 'secret.html'

class OtherNoJimsView(NoJimsMixin, TemplateView):
    template_name = 'other_secret.html'

Now you have entered the world of python’s multiple inheritance and Method Resolution Order. Long story short: order is important. If you inherit from two classes that both define a foo() method, your new class will use the one from the parent class that was first in the list. So in the above example, in your NoJimsView class, if you listed TemplateView before NoJimsMixin, django would use TemplateView’s dispatch() method instead of NoJimsMixin’s. But in the above example, not only will your NoJimsMixin’s dispatch() get called first, but when you call super(NoJimsMixin, self).dispatch(), it will call TemplateView’s dispatch() method. How I wish I had known this when I was learning about CBVs!
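The behaviour is easy to demonstrate with plain Python classes (no Django required); the View and NoJimsMixin below are bare stand-ins, and the calls list records the order in which the two dispatch() methods run:

```python
calls = []

class View(object):
    def dispatch(self):
        calls.append('View')
        return 'response'

class NoJimsMixin(object):
    def dispatch(self):
        calls.append('NoJimsMixin')
        # super() follows the MRO, so this reaches View.dispatch()
        # when the mixin is listed first in the bases.
        return super(NoJimsMixin, self).dispatch()

class NoJimsView(NoJimsMixin, View):
    pass

NoJimsView().dispatch()
```

Swap the base classes to `(View, NoJimsMixin)` and the mixin's dispatch() is never called at all, because View.dispatch() wins the lookup and never calls super().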


As you browse around the docs, code and ccbv.co.uk, you will see references to Views, BaseViews and Mixins. They are largely a naming convention in the django code: a BaseView is like a View except it doesn’t have a render_to_response() method, so it won’t render a template. Almost all Views inherit from a corresponding BaseView and add a render_to_response() method, e.g. DetailView/BaseDetailView, UpdateView/BaseUpdateView etc. This is useful if you are subclassing from two Views, because it means you can choose which one renders the final output. It is also useful if you want to render to JSON, say in an AJAX response, and don’t need HTML rendering at all (in this case you’d need to provide your own render_to_response() method that returns an HttpResponse).

Mixin classes provide a few helper methods, but can’t be used on their own, as they are not full Views.

So in short, if you are just subclassing one thing, you will usually subclass a View. If you want to manually render a non-HTML response, you probably need a BaseView. If you are inheriting from multiple classes, you will need a combination of some or all of View, BaseView and Mixin.

A final note on AJAX

Django is not particularly good at serving AJAX requests out of the box, and once you start trying to use CBVs to do AJAX form submissions, things get quite complicated.

The docs offer some help with this in the form of a Mixin you can copy and paste into your code, which gives you JSON responses instead of HTML. You will also need to pass CSRF tokens in your POST requests, and again there is an example of how to do this in the docs.

This should be enough to get you started, but I often find myself having to write some extra Mixins, and that is before even considering the javascript code on the front end to send requests and parse responses, complete with handling of validation and transport errors. Here at Isotoma, we are working on some tools to address this, which we hope to open-source in the near future. So watch this space!


In case you hadn’t worked it out, we at Isotoma are fans of Django’s class-based generic views. They are definitely not straightforward for newcomers, but hopefully with the help of this article and other resources (did I mention ccbv.co.uk?), it’ll be plain sailing before you know it. And once you get what they’re all about, you won’t look back.


Shortlister PPR

So after this blogpost about what Shortlister is, I thought I’d put together some stats on the project from the PPR. We’re still finding a format for these, so while this is mostly as per Fig’s AuthorDirect post the other day, I’ve also felt free to freestyle. Booya!


Profitability and the Achieved Day Rate (ADR)

For the initial phase of the project we ended up invoicing £70,160. The value of the actual time that we recorded on the project was £88,266. This gave us an achieved day rate of £490. The sweet spot for us from a profitability point of view is £500.

Sadly, over the next month or so, we ended up timesheeting significantly more on fiddling, faffing, documenting and supporting – not to mention post-QA bugs – and so the final timesheeted amount was a wee bit higher.

Error Rates and QA

240 tickets were opened by Alex, Ben and Francois during the build. A total of 61 QA defects were raised on these – most of them in the final month of the project.

What did the customer think?

David swings between being delighted with the product (it is undeniably of high quality) and being frustrated by the process of software development. In a review meeting I said that the success of the initial phase of the project has put the bar very high for him. This was his first experience of the process and, in a way, his expectations of the smooth running of the process have been set weirdly high. What he conveniently forgets is the 6 months of wireframing, meetings and arguing that we went through at the start of the process.

What else did we learn?

Some thoughts from Alex, Ben, Francois, Jo and myself:


The project was a bit vaguely defined at the start, and FJ was doing a lot of guesswork. But we had to make do with what we got from David. Although this frustrated FJ, it actually worked out pretty well from my (and I think Andy’s) point of view, because we could steer the project where David was too inexperienced to give us an absolute brief.

FJ often felt like he was guessing at how things should work, but I found that it was better to show David something that he could react to rather than endlessly talk about what-ifs.

Annoying customers

“David also had an annoying habit of contacting me directly via email, skype or phone”

This came up for Ben and Francois. A nice problem to have in some ways but a distraction for Ben particularly. Our learning here is to limit access to devs – not only because it’s a timesink but also because it can lead to pretty severe mission creep.

From the point of view of developers, it’s always useful to speak up about this kind of thing. We have some pretty robust ways to stop it – but in a few instances early on in the project, it was actually both hard for me to spot that it was happening and for Ben to actually say ‘this dude is a problem’.

So, yes, “Ben is nice” is actually a learning.

The case for prototyping and user testing

Francois: “I really feel that a serious missing ingredient was user testing with an early prototype. I’m sure it would’ve improved the product greatly. I really still don’t know how well this application will perform in practice.”

I think this is a valid concern, but the budget was not there and, by the time it was, there was no time. My learning from this was to make sure that the client understands at all times where our responsibilities as an agency end. We underlined with David that we were delivering what was in the stories/wireframes, not how users would respond to it.

Webfonts – how do they work?

Francois: “We clearly still have a lot to learn about web fonts. We discovered unexpected problems and platform inconsistencies, had to do lots of last-minute research, and I’m still not sure whether we arrived at the best possible result.”

(My notes from this part of the meeting just say “Fucking web fonts”. Does that count as a learning?)

Wireframing on the hoof

“I feel I still have more to learn about putting video containers in web pages. I didn’t have dummy content I could test with, and didn’t know what the final markup was going to be. Bit of a black box.”

This vagueness in the wireframes actually helped the devs because there was no definite method of implementation until the last minute – though Jo points out that this makes things a bit of a nightmare for QA.

The main thing to keep in mind for this is that it’s the communication between IA, Dev and QA that makes it work – not the individual documents themselves.

Bootstrap = Not worse than Hitler

Adding final design as a “skin” on top of a wireframe-based design + Bootstrap base worked surprisingly well. We were able to skin the entire thing in just a few days, following high-level brand guidelines.

Bootstrap worked very well on this project, basically because we only had a wireframe aesthetic to follow. Design could then be added via a CSS skin, affecting surface appearance only, not layout.


…and that’s about it. Happy to discuss more in the comments.


Thinking about wireframes

Last week Des Traynor provoked a lot of conversation by saying that “some things can’t be wireframed”.

Many people reacted defensively. I suspect most of us in UX roles still spend a significant amount of our time wireframing.

A couple of things are worth bearing in mind: Des works in-house at a product design company. This differs from the agency model in many ways – they are their own client, for one, and design is a continuous, ongoing process rather than a time-boxed engagement. There is also a world of difference between product design and web design, and the weaknesses of wireframes are far more apparent with the former.

Problems with wireframes

But yes: wireframes can be limiting. Des’s main point is that they “[discourage] emotive design, eschewing it for hierarchy, structure, and logic”. I often feel they risk the “local maximum” problem, where “logical” improvements don’t necessarily get you to somewhere radically better. And I completely agree that wireframe tools and templates drastically limit the possibility space, at far too early a stage.

The other problem is of course where interaction is concerned. I’ve long stopped attempting to wireframe or otherwise document all “interesting moments” in an application. While wireframing, you often don’t know exactly how something will work, or whether it will “feel” right. Often you just have to prototype it (with the help of jQuery and some plugins), and refine it in the browser. Sometimes this process changes the interface from what you originally had in mind. I would also mention responsiveness and scrolling under this topic – wireframes do a poor job of conveying the experience of different screen sizes, or long scrolling pages. Again, early prototyping will often inform the designs.

Emotive design – careful

Some of the examples in the article made me a bit uncomfortable. I remember what it’s like to work with visual designers whose no. 1 technique on every project was to slap a big beautiful stock image behind the page. It may impress some clients, but often it betrays the designer’s lack of understanding of the page content, user goals and interaction, or a fundamental disrespect for text-based information. That’s the kind of mindset that seeks to sweep unsightly navigation menus under a hamburger icon, or use low-contrast grey body text. And I’ve been in loads of user tests where people expressed irritation at irrelevant mood imagery while they were looking for the information relevant to them. Emotive design is not necessarily audiovisual. I understand that’s not the point Des was making, but glancing at the screenshots it’s easy to misconstrue “emotive design” as “big background images and zero navigation”.


Here are some of the things I (indirectly) took away from the article for mitigating the weaknesses of wireframes:

  • Spend more time sketching, before reaching for the pattern libraries and templates.
  • Involve visual designers and developers in idea generation and generally, collaborate more. Too often they are involved too late to fundamentally influence the design direction.
  • Never use Lorem Ipsum filler text in wireframes. How a site communicates, what it says, and in how many words – that should all be considered at the wireframing stage.
  • Stop pretending wireframes are wholly un-aesthetic. Many visual ideas come up during wireframing, from the use of imagery to the information design. Tabular information doesn’t have to look like a table. A percentage doesn’t have to be a number. If you have a certain style of photography in mind, include examples. Don’t rely on all the “magic” happening at the visual design stage. (Des offers some very important advice on this point in another article on wireframing.)
  • Discourage the mindset that a wireframed specification is set in stone. Sometimes things change during visual design and implementation. In fact, depending on the project, sometimes it’s OK for wireframes to remain unfinished, as a stepping stone towards a design that is refined further in Photoshop or in the browser.

Ultimately, those of us at digital agencies can’t wholly get away from wireframes, even for product/application design. Within a fixed amount of time, we need to produce an artifact that gives a sufficiently complete overview of a product for client acceptance, and that allows developers to make a realistic cost estimate. Wireframes remain the best tool for the job in the great majority of our cases.

About us: Isotoma is a bespoke software development company based in York and London specialising in web apps, mobile apps and product design. If you’d like to know more you can review our work or get in touch.

API First

Recently, we were faced with the task of writing an API-first web application in order to support future mobile platform development. Here’s a summary of the project from the point of view of one of the developers.

Agile API

For the first couple of iterations, we had problems demonstrating the project’s progress to the customer at the end-of-iteration meetings. The customer on this project was extremely understanding and reasonably tech-savvy, but despite that, he remained uninterested in the progress of the API and became quite concerned by the lack of UI progress. Although we were busy writing and testing the API code sitting just beneath the surface, letting the customer watch our test suite run would have achieved nothing. It was frustrating to find that, when there was nothing for the customer to click around on, we couldn’t get the level of engagement and collaboration we would typically achieve. In the end, we had to rely on the wireframes from the design process, which the customer had signed off on, to inform our technical decisions. To allay the customer’s fears, we ended up throwing together some user interfaces which lacked any functionality, purely to give the illusion of progress.

On the plus side, once we had written enough of our API to know that it was fit for purpose, development on the front-end began and progressed very rapidly; most of the back-end validation was already in place, end-points were well defined, and the comprehensive integration tests we’d written served as a decent how-to-use manual for our API.

Extra Work

Developing the application API-first took more work and more lines of code than it would have required if implemented as a typical post-back website.

Each interface had to be judged by its general usefulness rather than by its suitability for one particular bit of functionality alluded to by our wireframes or specification. Any view that called upon a complex or esoteric query instead had to be implemented using querystring filters or a peculiar non-generic endpoint.

In a typical postback project with private, application-specific endpoints, we’d be able to pick and choose the HTTP verbs relevant to the template we’re implementing. Our generic API, however, required considerably more thought: for each resource and collection, we had to think carefully about the permissions structure for each HTTP method, and the various circumstances in which the endpoint might be used.

We wrote around 4,000 lines of integration test code just to pin down the huge combination of HTTP methods and user permissions, though I sincerely doubt that all of those combinations are required by the web application. Had we not put in the extra effort, however, we’d have risked making our API too restrictive for future potential consumers.
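As a rough illustration of why the test matrix balloons (the endpoint, method and role names here are invented, not from the real project), consider enumerating every combination an integration suite has to cover:

```javascript
// Enumerate every (endpoint, method, role) combination a generic API
// test suite has to consider. All names are hypothetical.
var methods = ['GET', 'POST', 'PUT', 'PATCH', 'DELETE'];
var roles = ['anonymous', 'member', 'admin'];
var endpoints = ['/invitations', '/invitations/3', '/applicants'];

var cases = [];
endpoints.forEach(function (endpoint) {
    methods.forEach(function (method) {
        roles.forEach(function (role) {
            cases.push({ endpoint: endpoint, method: method, role: role });
        });
    });
});

// 3 endpoints x 5 methods x 3 roles = 45 cases, and that's only three
// endpoints; a real API multiplies this across every resource.
console.log(cases.length); // 45
```

Even this toy matrix shows how a handful of resources quickly demands thousands of lines of integration tests once the assertions for each case are written out.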

In terms of future maintainability, I’d say that each new generic endpoint will require a comparable amount of otherwise-unnecessary consideration and testing of permissions and HTTP methods.


Having such an explicitly documented split between the front end and back end was actually very beneficial. Both were developed and tested against the API we’d designed and documented. For over a month, I worked solely on the back end and my colleague worked solely on the front end, and we found this division of labour an incredibly efficient way to work. By adhering to the HTTP 1.1 specification (using the full range of available HTTP verbs and response codes) and to our endpoint specification, we required far less interpersonal coordination than would typically be the case.

Beyond CRUD

The two major issues we found with generic CRUD endpoints were (1) when we needed to perform a complex data query, and (2) when we needed to update multiple resources in a single transaction.

To a certain extent we managed to solve the first problem using querystrings, with keys representing fields on the resource. For all other cases, and also to solve the second problem, we used an underused yet still perfectly valid REST resource archetype: the controller, used to model a procedural concept.
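A minimal sketch of the querystring-filtering idea, with hypothetical resource and field names (this is illustrative, not the project’s actual code):

```javascript
// Generic filtering: each querystring key is matched against the
// corresponding field on every resource in the collection.
function filterCollection(collection, query) {
    return collection.filter(function (item) {
        return Object.keys(query).every(function (key) {
            return String(item[key]) === String(query[key]);
        });
    });
}

var applicants = [
    { name: 'Ada', status: 'active' },
    { name: 'Grace', status: 'archived' }
];

// e.g. a request like GET /applicants?status=active
console.log(filterCollection(applicants, { status: 'active' }).length); // 1
```

Because the filter works against any field of any resource, the same endpoint code serves queries we never anticipated, which is the generality being paid for in the sections above.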

We used controller endpoints on a number of occasions to accommodate things like /invitations/3/accept (“accept” represents the controller) which would update the invitation instance and other related user instances, as well as sending email notifications.
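Sketched very loosely, a controller action like this bundles several resource updates plus a side effect into one endpoint; everything below (field names, the email helper) is invented for illustration:

```javascript
// A controller endpoint such as POST /invitations/3/accept: one
// procedural action that touches multiple resources and fires a
// notification. Names and helpers are hypothetical.
function acceptInvitation(invitation, users, sendEmail) {
    invitation.status = 'accepted';
    users.forEach(function (user) {
        if (user.id === invitation.inviterId) {
            user.acceptedInvitations = (user.acceptedInvitations || 0) + 1;
        }
    });
    sendEmail(invitation.inviterId, 'Your invitation was accepted');
    return invitation;
}
```

The point is that none of this maps onto a plain CRUD verb against a single resource, which is exactly why the controller archetype earns its keep.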

Where we needed to support searching, we added procedures to collections, of the form /applicants/search, which returned members of the collection (in this example, “applicants”) that passed a case-insensitive containment test on the given key.
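The case-insensitive containment test could look something like this sketch (again, the data and field names are made up):

```javascript
// Return members of a collection whose given field contains the search
// term, ignoring case - roughly the behaviour described for a request
// like GET /applicants/search?name=smi above.
function searchCollection(collection, key, term) {
    var needle = String(term).toLowerCase();
    return collection.filter(function (item) {
        return String(item[key]).toLowerCase().indexOf(needle) !== -1;
    });
}

var applicants = [{ name: 'Alice Smith' }, { name: 'Bob Jones' }];
console.log(searchCollection(applicants, 'name', 'SMI').length); // 1
```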


API-first required extra implementation effort and a carefully considered design. Even so, we found it far easier and more efficient to implement the back end as a generic, decoupled component than to follow the typical creation process (model -> unit test -> url -> view -> template -> integration test), with the front end being created completely independently.

In the end, we wrote more lines of code and far more lines of integration tests. The need to stringently adhere to the HTTP specification for our public API really drove home the benefits of using HTTP methods and status codes properly.

In case you’re curious, we used Marionette to build the front-end, and Django REST Framework to build the back end.


Hold the hamburger

I’ve noticed a worrying trend in web navigation lately. More and more websites are hiding their navigation – at desktop resolutions – under a single button, often the 3-bar “hamburger” icon.

They are doing this because it makes the website look “clean” – simple and uncluttered. Who wouldn’t want that? Or perhaps they are following the lead of some powerful role models, such as Google, or Medium. Or they are influenced by design for mobile devices, where small screens often require navigation to be hidden initially, and the hamburger icon has become ubiquitous. But they are usually wrong.


Hyperisland, XOXO festival and Squarespace are just 3 examples of sites that hide their navigation under an icon even at desktop resolutions.

Just a quick recap of the purposes (1) of navigation menus on websites:

  1. It tells you what’s here and what you can do
  2. It gets you to where you want to go
  3. It tells you where you are

Hiding the navigation under an icon does a slightly worse job at no.2 (one extra click), but a terrible job at nos.1 and 3. And a clean-looking design does not compensate for this loss, for most websites at least.

So when is it OK to hide the navigation under an icon?

Well, I’ve already mentioned devices with small screens, where there simply is no room to spare for a menu. Responsive web design (RWD) is often used to transform the navigation menu into an icon at small screen sizes, as the popular Bootstrap framework does by default. This is an ergonomic, not an aesthetic, decision.

The other case where hiding the navigation is understandable is on sites where random browsing is the dominant navigation pattern. This can describe journalism sites such as Medium, Upworthy, blogs in general, or social networks like Google+, Pinterest, Instagram, etc. These are sites where you typically don’t start at the homepage, and you typically navigate via related content. They may have navigation behind the scenes (such as content categories or account tools) but these are not needed in the vast majority of user journeys.

For most other websites and web applications, where users need to be guided to the information or tool they need with as little fuss as possible, visible navigation menus or toolbars are necessary (2).

Yes, it’s easier for a designer to make a site without navigation menus look attractive, at first glance. But as any UX expert knows, visual simplicity does not necessarily equal ease of use. The best website designs are those that look beautiful while also providing the information and tools most users need. You do not solve a design problem by sweeping it under the carpet.

Hold the mystery meat, too

Which brings me to another form of the same problem – sweeping “surplus” navigation underneath a cryptic icon like the hamburger or “…”. Software developers have known for decades that menu labels like “Other”, “Misc” or “More” are design failures – yet somehow giving them a trendy icon has given this form of mystery meat navigation new respectability. Google is a prime offender. Submenus are OK when the label clearly suggests what’s inside, such as the now-ubiquitous Account menu (or just avatar) at the top right. If not, it may as well be labeled “Stuff”.


Google has become a chief offender in making invisible navigation seem respectable again. Even on wide screens with plenty of real estate, Gmail hides commonly-used functions under cryptic menus. (1) I curse every time I have to click here to go to Contacts. (2) Without looking, I challenge you to guess what’s in the “More” menu. (3) What would you find in here? (4) Or here?

Flickr’s May 2013 redesign swept most of the user-related navigation under the obscure ellipsis icon, which may seem neater to anyone who doesn’t actually use the site, but is a major, on-going frustration to regular users.


[Update 10 Feb: Killing Off the Global Navigation: One Trend to Avoid by the Nielsen Norman Group makes much the same argument, but provides more background, examples and suggestions. Their article correctly targets any single menu item hiding the global navigation inside a drop-down menu, rather than hiding it under an icon as I focused on. They point to online retailers starting the trend, possibly copying Amazon. They suggest using click tracking, observation and analytics to decide whether it makes sense to hide your global navigation, and what impact it’s having.]

(1) Those who’ve read Steve Krug’s 2001 classic Don’t Make Me Think may recall his slightly different list of the purposes of navigation:

  • It gives us something to hold on to.
  • It tells us what’s here.
  • It tells us how to use the site.
  • It gives us confidence in the people who built it.

(2) Search can help, but most usability studies show that Search is typically only used after navigation has already failed and should not be considered a replacement for navigation. Search on the vast majority of websites falls far, far short of Google’s magic.


Shortlister is live – GIF-fest

Shortlister is live at app.shortlister.com – the explanatory website, which we did not build, is at www.shortlister.com.

Shipping! Kaboom, right? The temptation to talk up these things is always a little alarming, because we all know how it can go:

Having said that though, the Shortlister project has actually gone really well. The project is, as we say, Very Isotoma: it’s business-critical, it’s weird, it’s hard and it’s low on marketing puffery.

They’re an upstart start-up who came to us nearly two years ago with some fairly disruptive ideas for the recruitment world.

At that time we talked. A lot. About all aspects of the application that they felt they wanted. We helped them define what it was they wanted and, by Christmas 2012, we’d designed the dream application in wireframe. All whistles and bells. We all agreed:

But it was expensive!

And they were like can you make it cheaper? And we were like

So then they were like

To which we were like: I know! We’ll do an MVP.

In early 2013 we defined what it was that was fundamental to the application, stripped back the functionality to something lean and deliverable that would work and went for it. Agile-style.

So. Agile:

We managed to maintain an unusually high level of Actual Proper Agile Development on this project for a few reasons, but I think chief amongst them is that our customer had a really high level of buy-in to the process. He understood that there was no flexibility in his budget, and so there had to be flexibility in the dates or the scope (or both).

So we took Francois’ excellent wireframes and we cut them back and back and back to something that could be delivered within the budget. It was painful, but we got there.

We then built it. And from my point of view, that was probably the most uneventful and relaxing part of any build I’ve been on. Ben, Alex and Francois rocked the shit out of it. I popped in once or twice a week to say

And Ben and Alex were all

Joking aside though, there is some *cutting edge* crap that this site does for video that’s worth getting into in the comments. Also it’s API first. We should probably talk about that.

I’m just going to stop now because oh god.

Backbone history and IE9

This bit me the other day, so I thought I’d share the pain.

IE9 doesn’t support pushState, as you probably know, which meant everything was routing to root (as it were).

The following snippet checks and resorts to hash based routing if it can’t cut the mustard:

app.on('initialize:after', function() {
    if (Backbone.history && !Backbone.History.started) {
        if (!(window.history && window.history.pushState)) {
            // No pushState support (e.g. IE9): fall back to hash-based
            // routing, then re-route the current path so deep links work.
            Backbone.history.start({ pushState: false, silent: true });
            // Assumes the app is served from the site root.
            var fragment = window.location.pathname.substr(1);
            Backbone.history.navigate(fragment, { trigger: true });
        } else {
            Backbone.history.start({ pushState: true });
        }
    }
});
Add it wherever you would initialise Backbone history – often at the entry point of the app. Mine, for instance, has an app.js that is initialised by main.js.
