What Heartbleed means for you

On the 8th of April a group of security researchers published details of a newly discovered vulnerability in OpenSSL, a popular encryption library. With some marketing panache, they called this vulnerability “Heartbleed”.

A huge number of Internet services were vulnerable to this exploit, and although many of them have now been patched, many remain exposed. In particular, because the affected code is an Open Source library in extremely wide use, many of the very largest and most popular sites were directly affected.

Attention has so far focused on using the exploit to obtain “key material” from affected sites, but there are some more immediate ramifications, and you need to act to protect yourself.

Unfortunately the attack also reveals random chunks of the webserver’s memory, which can include usernames, passwords and cookies. Obtaining this information allows attackers to log into these services as you, and then conduct the usual fraud and identity theft.

Once the dust has settled (so later today on the 9th, or tomorrow on the 10th) you should go and change every single one of your passwords. Start with the passwords you’ve used recently and high value services.

It’s probably a good idea to clear all your cookies too once you’ve done this, to force you to re-login to every service with your new password.

You should also log out of every single service on your phone, and then log back in, to get new session cookies. If you are particularly paranoid, wipe your phone and reinstall. Mobile app session cookies are likely to be a very popular vector for this attack.

This is an enormous amount of work, but you can use it as an opportunity to set some decent random passwords for every service and adopt a tool like LastPass, 1Password or KeePass while you are at it.

Most people are hugely vulnerable to password disclosure because they share passwords between accounts, and the entire world of black-hats is out there right now slurping passwords off every webserver they can reach. There is going to be a huge spike in fraud and identity theft soon, and you want to make sure you are not a victim of it.

The Man-In-The-Middle Attack

In simple terms, stolen key material would allow an attacker to impersonate an affected site: they can present its SSL certificate and show the padlock icon that indicates a secure connection, even though they control your connection.

They can only do this if they also manage to somehow make your browser connect to their computers for the request. This can normally only be done by either controlling part of your connection directly (hacking your router, perhaps), or by “poisoning” your access to the Domain Name System (DNS), which your browser uses to find out how to reach a site (there are many ways to do this, but none of them are trivial).

You can expect Internet security types to be fretting about this one for a long time to come, and there are likely to be some horrific exploits against some high-profile sites executed by some of the world’s most skilled hackers. If they do it well enough, we may never hear of it.

The impact of this exploit is going to have huge ramifications for server operators and system designers, but there is very little that most people can do, in practical terms, to mitigate this risk for their own browsing.


Reviewing Django REST Framework

Recently, we used Django REST Framework to build the backend for an API-first web application. Here I’ll attempt to explain why we chose REST Framework and how successfully it helped us build our software.

Why Use Django REST Framework?

RFC-compliant HTTP Response Codes

Clients (javascript and rich desktop/mobile/tablet applications) will more than likely expect your REST service endpoint to return status codes as specified in the HTTP/1.1 spec. Returning a 200 response containing {'status': 'error'} goes against the principles of HTTP and you’ll find that HTTP-compliant javascript libraries will get their knickers in a twist. In our backend code, we ideally want to raise native exceptions and return native objects; status codes and content should be inferred and serialised as required.

If authentication fails, REST Framework serves a 401 response. Raise a PermissionDenied and you automatically get a 403 response. Raise a ValidationError when examining the submitted data and you get a 400 response. POST successfully and get a 201, PATCH and get a 200. And so on.
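In simplified form, that translation works something like the following sketch — plain Python standing in for REST Framework’s internals, which are of course far more involved:

```python
# A simplified sketch of the exception-to-status-code translation that
# REST Framework performs for you. This is an illustration, not the
# framework's actual implementation.
class PermissionDenied(Exception):
    status_code = 403

class ValidationError(Exception):
    status_code = 400

def handle(view):
    """Call a view and translate native exceptions into (status, body)."""
    try:
        return 200, view()
    except (PermissionDenied, ValidationError) as exc:
        return exc.status_code, None

def forbidden_view():
    raise PermissionDenied('not yours')

status, body = handle(forbidden_view)  # status == 403, body is None
```

The view code only ever raises native exceptions; the status code is inferred at the boundary.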

Methods

You could PATCH an existing user profile with just the field that was changed in your UI, DELETE a comment, PUT a new shopping basket, and so on. HTTP methods exist so that you don’t have to encode the nature of your request within the body of your request. REST Framework has support for these methods natively in its base ViewSet class which is used to build each of your endpoints; verbs are mapped to methods on your view class which, by default, are implemented to do everything you’d expect (create, update, delete).
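The verb-to-action mapping can be illustrated with a toy dispatcher — this mimics what REST Framework’s routers wire up for you (all names here are invented, and the real framework does far more: content negotiation, permissions, URL reversal, and so on):

```python
# A toy illustration of the verb-to-action mapping behind ViewSets.
ACTION_MAP = {
    'get': 'list',
    'post': 'create',
    'put': 'update',
    'patch': 'partial_update',
    'delete': 'destroy',
}

class TinyViewSet(object):
    """Implements just two of the actions, for brevity."""
    def __init__(self):
        self.store = {}

    def create(self, data):
        self.store[data['id']] = dict(data)
        return 201, self.store[data['id']]

    def partial_update(self, data):
        # PATCH semantics: update only the fields that were sent.
        self.store[data['id']].update(data)
        return 200, self.store[data['id']]

    def dispatch(self, method, data):
        handler = getattr(self, ACTION_MAP[method.lower()])
        return handler(data)

viewset = TinyViewSet()
status, body = viewset.dispatch('POST', {'id': 1, 'name': 'basket'})  # status == 201
```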

Accept

The base ViewSet class looks for the Accept header and encodes the response accordingly. You need only specify which formats you wish to support in your settings.py.
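In settings.py that looks something like the following (the renderer and parser choices below are just an example):

```python
# settings.py -- declare the formats your API should support; the
# view then uses the Accept header to pick a renderer.
REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': [
        'rest_framework.renderers.JSONRenderer',
    ],
    'DEFAULT_PARSER_CLASSES': [
        'rest_framework.parsers.JSONParser',
    ],
}
```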

Serializers are not Forms

Django Forms do not provide a sufficient abstraction to handle object PATCHing (only PUT) and cannot encode more complex, nested data structures. The latter limitation lies with HTTP, not with Django Forms; HTTP forms cannot natively encode nested data structures (both application/x-www-form-urlencoded and multipart/form-data rely on flat key-value formats). Therefore, if you want to declaratively define a schema for the data submitted by your users, you’ll find life a lot easier if you discard Django Forms and use REST Framework’s Serializer class instead.

If the consumers of your API wish to use PATCH rather than PUT, and chances are they will, you’ll need to account for that in your validation. The REST Framework ModelSerializer class adds fields that map automatically to Model Field types, in much the same way that Django’s ModelForm does. Serializers also allow nesting of other Serializers for representing fields from related resources, providing an alternative to referencing them with a unique identifier or hyperlink.

More OPTIONS

Should you choose to go beyond an AJAX-enabled site and implement a fully-documented, public API then best practice and an RFC or two suggest that you make your API discoverable by allowing OPTIONS requests. REST Framework allows an OPTIONS request to be made on every endpoint, for which it examines request.user and returns the HTTP methods available to that user, and the schema required for making requests with each one.

OAuth2

Support for OAuth 1 and 2 is available out of the box and OAuth permissions, should you choose to use them, can be configured as a permissions backend.

Browsable

REST framework provides a browsable HTTP interface that presents your API as a series of forms that you can submit to. We found it incredibly useful for development but found it a bit too rough around the edges to offer as an aid for third parties wishing to explore the API. We therefore used the following snippet in our settings.py file to make the browsable API available only when DEBUG is set to True:

if DEBUG:
    REST_FRAMEWORK['DEFAULT_RENDERER_CLASSES'].append(
        'rest_framework.renderers.BrowsableAPIRenderer'
    )

Testing

REST Framework gives you an APITestCase class which comes with a modified test client. You give this client a dictionary and encoding and it will serialise the request and deserialise the response. You only ever deal in python dictionaries and your tests will never need to contain a single instance of json.loads.

Documentation

The documentation is of a high quality. It copies the Django project’s three-pronged approach to documentation – tutorial, topics, and API structure – so Django buffs will find it familiar and easy to parse. The tutorial quickly gives readers the feeling of accomplishment, the high-level topic-driven core of the documentation allows readers to quickly get a solid understanding of how the framework should be used, and the method-by-method API documentation is very detailed, frequently offering examples of how to override existing functionality.

Project Status

At the time of writing the project remains under active development. The roadmap is fairly clear and the chap in charge has a solid grasp of the state of affairs. Test coverage is good. There’s promising evidence in the issue history that creators of useful but non-essential components are encouraged to publish their work as new, separate projects, which are then linked to from the REST Framework documentation.

Criticisms

Permissions

We found that writing permissions was messy and we had to work hard to avoid breaking DRY. An example is required. Let’s define a ViewSet representing both a resource collection and any document from that collection:

views.py:

from django.db.models import Q
from rest_framework.permissions import IsAuthenticated
from rest_framework.viewsets import ViewSet

# Job, JobSerializer and JobPermission are defined elsewhere in the app.
class JobViewSet(ViewSet):
    """
    Handles both URLS:
    /jobs
    /jobs/(?P<id>\d+)/$
    """
    serializer_class = JobSerializer
    permission_classes = (IsAuthenticated, JobPermission)

    def get_queryset(self):
        if self.request.user.is_superuser:
            return Job.objects.all()

        return Job.objects.filter(
            Q(applications__user=self.request.user) |
            Q(reviewers__user=self.request.user)
        )

If the Job collection is requested, the queryset from get_queryset() will be run through the serializer_class and returned as an HttpResponse with the requested encoding.

If a Job item is requested and it is in the queryset from get_queryset(), it is run through the serializer_class and served. If a Job item is requested and is not in the queryset, the view returns a 404 status code. But we want a 403.

So if we define that JobPermission class, we can fail the object permission test, resulting in a 403 status code:

permissions.py:

from django.db.models import Q
from rest_framework.permissions import BasePermission

class JobPermission(BasePermission):
    def has_object_permission(self, request, view, obj):
        if obj in Job.objects.filter(
                Q(applications__user=request.user) |
                Q(reviewers__user=request.user)):
            return True
        return False

We have duplicated the logic from the view’s get_queryset() method (we could admittedly reuse view.get_queryset() here, but the underlying query would still be executed twice), and if we don’t duplicate it, the client is sent a completely misleading response code.

The neatest way to solve this issue seems to be to use the DjangoObjectPermissionsFilter together with the django-guardian package. Not only will this allow you to define object permissions independently of your views, it’ll also allow you to filter querysets using the same logic. Disclaimer: I’ve not tried this solution, so it might be a terrible thing to do.

Nested Resources

REST Framework is not built to support nested resources of the form /baskets/15/items. It requires that you keep your API flat, of the form /baskets/15 and /items/?basket=15.

We did eventually choose to implement some parts of our API using nested URLs; however, it was hard work, and we had to alter public method signatures and the data types of public attributes within our subclasses. We required heavily modified Router, Serializer, and ViewSet classes. It is worth noting that REST Framework deserves praise for making each of these components so pluggable.

Very specifically, the biggest issue preventing us pushing our nested resources components upstream was REST Framework’s decision to make lookup_field on the HyperlinkedIdentityField and HyperlinkedRelatedField a single string value (e.g. “baskets”). To support any number of parent collections, we had to create a NestedHyperlinkedIdentityField with a new lookup_fields list attribute, e.g. ["baskets", "items"].

Conclusions

REST Framework is great. It has flaws but continues to mature as an increasingly popular open source project. I’d whole-heartedly recommend that you use it for creating full, public APIs, and also for creating a handful of endpoints for the bits of your site that need to be AJAX-enabled. It’s as lightweight as you need it to be and most of what it does, it does extremely well.

Django Class-Based Generic Views: tips for beginners (or things I wish I’d known when I was starting out)

Django is renowned for being a powerful web framework with a relatively shallow learning curve, making it easy to get into as a beginner and hard to put down as an expert. However, when class-based generic views arrived on the scene, they were met with a lukewarm reception from the community: some said they were too difficult, while others bemoaned a lack of decent documentation. But if you can power through the steep learning curve, you will see they are also incredibly powerful and produce clean, reusable code with minimal boilerplate in your views.py.

So to help you on your journey with CBVs, here are some handy tips I wish I had known when I first started learning all about them. This isn’t a tutorial, but more a set of side notes to refer to as you are learning; information which isn’t necessarily available or obvious in the official docs.

Starting out

If you are just getting to grips with CBVs, the only view you need to worry about is TemplateView. Don’t try anything else until you can make a ‘hello world’ template and view it on your dev instance. This is covered in the docs. Once you can handle that, keep reading the docs and make sure you understand how to subclass a ListView and DetailView to render model data into a template.

OK, now we’re ready for the tricky stuff!

Customising CBVs

Once you have the basics down, you will find that most of your work revolves around subclassing the built-in class-based generic views and overriding one or two methods. At the start of your journey, it is not very obvious what to override to achieve your goals, so remember:

  • If you need to get some extra variables into a template, use get_context_data()
  • If it is a low-level permissions check on the user, you probably want dispatch()
  • If you need to do a complicated database query on a DetailView, ListView etc, try get_queryset()
  • If you need to pass some extra parameters to a form when constructing it via a FormView, UpdateView etc, try get_form() or get_form_kwargs()

ccbv.co.uk

If you haven’t heard of ccbv.co.uk, go there and bookmark it now. It is possibly the most useful reference out there for working with class-based generic views. When you are subclassing views and trying to work out which methods to override, and the official docs just don’t seem to cut it, ccbv.co.uk has your back. If it wasn’t for that site, I think we would all be that little bit grumpier about using CBVs.

Forms

CBVs cut a LOT of boilerplate code out of the process of writing forms. You should already be using ModelForms wherever you can to save effort, and there are generic class-based views available (CreateView/UpdateView) that allow you to plug in your ModelForms and reduce your boilerplate code even further. Always use this approach if you can. If your form does not map to a particular model in the database, use FormView.

Permissions

If you want to put some guards on your view e.g. check if the user is logged in, check they have a certain permission etc, you will usually want to do it on the dispatch() method of the view. This is the very first method that is called in your view, so if a user shouldn’t have access then this is the place to intercept them:

from django.core.exceptions import PermissionDenied
from django.views.generic import TemplateView

class NoJimsView(TemplateView):
    template_name = 'secret.html'

    def dispatch(self, request, *args, **kwargs):
        if request.user.username == 'jim':
            raise PermissionDenied # HTTP 403
        return super(NoJimsView, self).dispatch(request, *args, **kwargs)

Note: If you just want to restrict access to logged-in users, you can wrap the dispatch() method with Django’s login_required decorator. This is covered in the docs, and it may be sufficient for your purposes, but I usually end up having to modify it to handle AJAX requests nicely as well.

Multiple inheritance

Once you start subclassing and overriding generic views, you will probably find yourself needing multiple inheritance. For example, perhaps you want to extend your “No Jims” policy (see above) to several other views. The best way to achieve this is to write a small Mixin and inherit from it along with the generic view. For example:

class NoJimsMixin(object):
    def dispatch(self, request, *args, **kwargs):
        if request.user.username == 'jim':
            raise PermissionDenied # HTTP 403
        return super(NoJimsMixin, self).dispatch(request, *args, **kwargs)

class NoJimsView(NoJimsMixin, TemplateView):
    template_name = 'secret.html'

class OtherNoJimsView(NoJimsMixin, TemplateView):
    template_name = 'other_secret.html'

Now you have entered the world of python’s multiple inheritance and Method Resolution Order. Long story short: order is important. If you inherit from two classes that both define a foo() method, your new class will use the one from the parent class that was first in the list. So in the above example, in your NoJimsView class, if you listed TemplateView before NoJimsMixin, django would use TemplateView’s dispatch() method instead of NoJimsMixin’s. But in the above example, not only will your NoJimsMixin’s dispatch() get called first, but when you call super(NoJimsMixin, self).dispatch(), it will call TemplateView’s dispatch() method. How I wish I had known this when I was learning about CBVs!
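Here is the same situation with plain Python classes standing in for the views, so you can see the MRO at work:

```python
# Plain Python classes standing in for django's views, to show the MRO.
class Greeter(object):
    def greet(self):
        return ['Greeter']

class PoliteMixin(object):
    def greet(self):
        # super() follows the MRO of the instance's class, so this call
        # reaches Greeter even though PoliteMixin doesn't inherit from it.
        return ['PoliteMixin'] + super(PoliteMixin, self).greet()

class PoliteGreeter(PoliteMixin, Greeter):
    pass

class GreeterFirst(Greeter, PoliteMixin):
    pass

PoliteGreeter().greet()  # ['PoliteMixin', 'Greeter'] -- mixin runs first
GreeterFirst().greet()   # ['Greeter'] -- the mixin is never called
```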

View/BaseView/Mixin

As you browse around the docs, code and ccbv.co.uk, you will see references to Views, BaseViews and Mixins. They are largely a naming convention in the django code: a BaseView is like a View except it doesn’t have a render_to_response() method so it won’t render a template. Almost all Views inherit from a corresponding BaseView and add a render_to_response() method e.g. DetailView/BaseDetailView, UpdateView/BaseUpdateView etc. This is useful if you are subclassing from two Views, because it means you can choose which one renders the final output. It is also useful if you want to render to JSON, say in an AJAX response, and don’t need HTML rendering at all (in this case you’d need to provide your own render_to_response() method that returns an HttpResponse).

Mixin classes provide a few helper methods, but can’t be used on their own, as they are not full Views.

So in short, if you are just subclassing one thing, you will usually subclass a View. If you want to manually render a non-HTML response, you probably need a BaseView. If you are inheriting from multiple classes, you will need a combination of some or all of View, BaseView and Mixin.

A final note on AJAX

Django is not particularly good at serving AJAX requests out of the box, and once you start trying to use CBVs to do AJAX form submissions, things get quite complicated.

The docs offer some help with this in the form of a Mixin you can copy and paste into your code, which gives you JSON responses instead of HTML. You will also need to pass CSRF tokens in your POST requests, and again there is an example of how to do this in the docs.

This should be enough to get you started, but I often find myself having to write some extra Mixins, and that is before even considering the javascript code on the front end to send requests and parse responses, complete with handling of validation and transport errors. Here at Isotoma, we are working on some tools to address this, which we hope to open-source in the near future. So watch this space!

Conclusion

In case you hadn’t worked it out, we at Isotoma are fans of Django’s class-based generic views. They are definitely not straightforward for newcomers, but hopefully with the help of this article and other resources (did I mention ccbv.co.uk?), it’ll be plain sailing before you know it. And once you get what they’re all about, you won’t look back.

Thinking about wireframes

Last week Des Traynor provoked a lot of conversation by saying “Some things can’t be wireframed”.

Many people reacted defensively. I suspect most of us in UX roles still spend a significant amount of our time wireframing.

A couple of things are worth bearing in mind. Des works in-house at a product design company, which differs from the agency model in many ways – they are their own client, for one, and design is a continuous, ongoing process rather than a time-boxed engagement. There is also a world of difference between product design and web design, and the weaknesses of wireframes are far more apparent with the former.

Problems with wireframes

But yes: wireframes can be limiting. Des’s main point is that they “[discourage] emotive design, eschewing it for hierarchy, structure, and logic”. I often feel they risk the “local maximum” problem, where “logical” improvements don’t necessarily get you to somewhere radically better. And I completely agree that wireframe tools and templates drastically limit the possibility space, at far too early a stage.

The other problem is of course where interaction is concerned. I’ve long stopped attempting to wireframe or otherwise document all “interesting moments” in an application. While wireframing, you often don’t know exactly how something will work, or whether it will “feel” right. Often you just have to prototype it (with the help of jQuery and some plugins), and refine it in the browser. Sometimes this process changes the interface from what you originally had in mind. I would also mention responsiveness and scrolling under this topic – wireframes do a poor job of conveying the experience of different screen sizes, or long scrolling pages. Again, early prototyping will often inform the designs.

Emotive design – careful

Some of the examples in the article made me a bit uncomfortable. I remember what it’s like to work with visual designers whose no.1 technique on every project was to slap a big beautiful stock image behind the page. It may impress some clients, but often it betrays the designer’s lack of understanding of the page content, user goals, and interaction, or a fundamental disrespect for text-based information. That’s the kind of mindset that seeks to sweep unsightly navigation menus under a hamburger icon, or use low-contrast grey body text. And I’ve been in loads of user tests where people expressed irritation at irrelevant mood imagery while they were looking for the information relevant to them. Emotive design is not necessarily audiovisual. I understand that’s not the point Des was making, but glancing at the screenshots it’s easy to misconstrue “emotive design” as “big background images and zero navigation”.

Lessons

Here are some of the things I (indirectly) took away from the article for mitigating the weaknesses of wireframes:

  • Spend more time sketching, before reaching for the pattern libraries and templates.
  • Involve visual designers and developers in idea generation and generally, collaborate more. Too often they are involved too late to fundamentally influence the design direction.
  • Never use Lorem Ipsum filler text in wireframes. How a site communicates, what it says, and in how many words – that should all be considered at the wireframing stage.
  • Stop pretending wireframes are wholly un-aesthetic. Many visual ideas come up during wireframing, from the use of imagery to the information design. Tabular information doesn’t have to look like a table. A percentage doesn’t have to be a number. If you have a certain style of photography in mind, include examples. Don’t rely on all the “magic” happening at the visual design stage. (Des offers some very important advice on this point in another article on wireframing.)
  • Discourage the mindset that a wireframed specification is set in stone. Sometimes things change during visual design and implementation. In fact, depending on the project, sometimes it’s OK for wireframes to remain unfinished, as a stepping stone towards a design that is refined further in Photoshop or in the browser.

Ultimately, those of us at digital agencies can’t wholly get away from wireframes, even for product / application design. Within a fixed amount of time, we need to produce an artifact that gives a sufficiently complete overview of a product for client acceptance, and that allows developers to make a realistic cost estimation. Wireframes remain the best tool for the job in the great majority of our cases.

API First

Recently, we were faced with the task of writing an API-first web application in order to support future mobile platform development. Here’s a summary of the project from the point of view of one of the developers.

Agile API

For the first couple of iterations, we had problems demonstrating the project progress to the customer at the end of iteration meetings. The customer on this project was extremely understanding and reasonably tech-savvy but despite that, he remained uninterested in the progress of the API and became quite concerned by the lack of UI progress. Although we were busy writing and testing the API code sitting just beneath the surface, letting the customer watch our test suite run would have achieved nothing. It was frustrating to find that, when there was nothing for the customer to click around on, we couldn’t get the level of engagement and collaboration we would typically achieve. In the end, we had to rely on the wireframes from the design process which the customer had signed off on to inform our technical decisions and, to allay the customer’s fears, we ended up throwing together some user interfaces which lacked any functionality purely to give the illusion of progress.

On the plus side, once we had written enough of our API to know that it was fit for purpose, development on the front-end began and progressed very rapidly; most of the back-end validation was already in place, end-points were well defined, and the comprehensive integration tests we’d written served as a decent how-to-use manual for our API.

Extra Work

Developing the application API-first took more work and more lines of code than it would have required if implemented as a typical post-back website.

Each interface had to be judged by its general usefulness rather than by its suitability for one particular bit of functionality alluded to by our wireframes or specification. Any view that called upon a complex or esoteric query had to instead be implemented using querystring filters or a peculiar non-generic endpoint.
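The querystring-filter approach looks roughly like this sketch, with a plain list standing in for a queryset (the field and parameter names are invented):

```python
# A sketch of the querystring-filter approach. Only whitelisted keys
# are honoured, so clients can't filter on arbitrary fields.
ALLOWED_FILTERS = {'status', 'owner'}

def filter_collection(collection, params):
    """Apply only whitelisted querystring keys as equality filters."""
    active = dict((k, v) for k, v in params.items() if k in ALLOWED_FILTERS)
    return [item for item in collection
            if all(item.get(k) == v for k, v in active.items())]

jobs = [{'id': 1, 'status': 'open'}, {'id': 2, 'status': 'closed'}]
filter_collection(jobs, {'status': 'open', 'ignored': 'x'})
# -> [{'id': 1, 'status': 'open'}]
```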

In a typical postback project with private, application-specific endpoints, we’d be able to pick and choose the HTTP verbs relevant to the template we’re implementing; our generic API, however, required considerably more thought. For each resource and collection, we had to carefully think about the permissions structure for each HTTP method, and the various circumstances in which the endpoint might be used.

We wrote around 4000 lines of integration test code just to pin down the huge combination of HTTP methods and user permissions, yet I sincerely doubt that all of those combinations are required by the web application. Had we not put in the extra effort, however, we’d have risked making our API too restrictive to future potential consumers.

In terms of future maintainability, I’d say that each new generic endpoint will require a comparable amount of otherwise-unnecessary consideration and testing of permissions and HTTP methods.

Decoupling

Having such an explicitly documented split between the front and back end was actually very beneficial. The front end and back-end were developed and tested based on the API we’d designed and documented. For over a month, I worked solely on the back-end and my colleague worked solely on the front and we found this division of labour was an incredibly efficient way to work. By adhering to the HTTP 1.1 specification, using the full range of available HTTP verbs and response codes, and to our endpoint specification, we required far less interpersonal coordination than would typically be the case.

Beyond CRUD

The two major issues we found with generic CRUD endpoints were (1) performing a complex data query, and (2) updating multiple resources in a single transaction.

To a certain extent we managed to solve the first problem using querystrings, with keys representing fields on the resource. For all other cases, and also to solve the second problem, we used an underused yet still perfectly valid REST resource archetype: the controller, used to model a procedural concept.

We used controller endpoints on a number of occasions to accommodate things like /invitations/3/accept (“accept” represents the controller) which would update the invitation instance and other related user instances, as well as sending email notifications.

Where we needed to support searching, we added procedures to collections, of the form /applicants/search, to which we returned members of the collection (in this example “applicants”) which passed a case-insensitive containment test based on the given key.
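The containment test itself is simple; here it is sketched with plain lists standing in for a queryset (all names invented):

```python
# The containment test behind a hypothetical /applicants/search endpoint.
def search_collection(collection, field, term):
    """Return members whose `field` contains `term`, case-insensitively
    (the equivalent of django's `field__icontains` lookup)."""
    term = term.lower()
    return [item for item in collection if term in item[field].lower()]

applicants = [
    {'name': 'Ada Lovelace'},
    {'name': 'Alan Turing'},
    {'name': 'Grace Hopper'},
]
search_collection(applicants, 'name', 'ALA')  # -> [{'name': 'Alan Turing'}]
```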

Conclusion

API-first required extra implementation effort and a carefully-considered design. We found it was far easier and more efficient to implement as a generic, decoupled back-end component than in the typical creation process (model -> unit test -> url -> view -> template -> integration test), with the front-end being created completely independently.

In the end, we wrote more lines of code and far more lines of integration tests. The need to stringently adhere to the HTTP specification for our public API really drove home the benefits of using its methods and status codes properly.

In case you’re curious, we used Marionette to build the front-end, and Django REST Framework to build the back end.

Hold the hamburger

I’ve noticed a worrying trend in web navigation lately. More and more websites are hiding their navigation – at desktop resolutions – under a single button, often the 3-bar “hamburger” icon.

They are doing this because it makes the website look “clean” – simple and uncluttered. Who wouldn’t want that? Or perhaps they are following the lead of some powerful role models, such as Google, or Medium. Or they are influenced by design for mobile devices, where small screens often require navigation to be hidden initially, and the hamburger icon has become ubiquitous. But they are usually wrong.

Hyperisland, XOXO festival and Squarespace are just 3 examples of sites that hide their navigation under an icon even at desktop resolutions.

Just a quick recap of the purposes of navigation menus on websites:

  1. It tells you what’s here and what you can do
  2. It gets you to where you want to go
  3. It tells you where you are

Hiding the navigation under an icon does a slightly worse job at no.2 (one extra click), but a terrible job at nos.1 and 3. And a clean-looking design does not compensate for this loss, for most websites at least.

So when is it OK to hide the navigation under an icon?

Well, I’ve already mentioned devices with small screens, where there simply is no room to spare for a menu. Responsive web design (RWD) often transforms the navigation menu into an icon at small screen sizes, as the popular Bootstrap framework does. This is an ergonomic, not aesthetic, decision.

The other case where hiding the navigation is understandable is on sites where random browsing is the dominant navigation pattern. This can describe journalism sites such as Medium, Upworthy, blogs in general, or social networks like Google+, Pinterest, Instagram, etc. These are sites where you typically don’t start at the homepage, and you typically navigate via related content. They may have navigation behind the scenes (such as content categories or account tools) but these are not needed in the vast majority of user journeys.

For most other websites and web applications, where users need to be guided to the information or tool they need with as little fuss as possible, visible navigation menus or toolbars are necessary(2).

Yes, it’s easier for a designer to make a site without navigation menus look attractive, at first glance. But as any UX expert knows, visual simplicity does not necessarily equal ease of use. The best website designs are those that look beautiful while also providing the information and tools most users need. You do not solve a design problem by sweeping it under the carpet.

Hold the mystery meat, too

Which brings me to another form of the same problem – sweeping “surplus” navigation underneath a cryptic icon like the hamburger or “…”. Software developers have known for decades that menu labels like “Other”, “Misc” or “More” are design failures – yet somehow giving them a trendy icon has lent this form of mystery meat navigation new respectability. Google is a prime offender. Submenus are OK when the label clearly suggests what’s inside, such as the now-ubiquitous Account menu (or just an avatar) at the top right. If not, it may as well be labeled “Stuff”.

Google has become a chief offender in making invisible navigation seem respectable again. Even on wide screens with plenty of real estate, Gmail hides commonly-used functions under cryptic menus. (1) I curse every time I have to click here to go to Contacts. (2) Without looking, I challenge you to guess what’s in the “More” menu. (3) What would you find in here? (4) Or here?

Flickr’s May 2013 redesign (bottom) swept most of the user-related navigation under the obscure ellipsis icon, which may seem neater to anyone who doesn’t actually use the site, but is a major, on-going frustration to regular users.

[Update 10 Feb: Killing Off the Global Navigation: One Trend to Avoid by the Nielsen Norman Group makes much the same argument, but provides more background, examples and suggestions. Their article correctly targets any single menu item hiding the global navigation inside a drop-down menu, rather than hiding it under an icon as I focused on. They point to online retailers starting the trend, possibly copying Amazon. They suggest using click tracking, observation and analytics to decide whether it makes sense to hide your global navigation, and what impact it's having.]


(1) Those who’ve read Steve Krug’s 2001 classic Don’t Make Me Think may recall his slightly different list of the purposes of navigation:

  • It gives us something to hold on to.
  • It tells us what’s here.
  • It tells us how to use the site.
  • It gives us confidence in the people who build it.

(2) Search can help, but most usability studies show that Search is typically only used after navigation has already failed and should not be considered a replacement for navigation. Search on the vast majority of websites falls far, far short of Google’s magic.

Backbone history and IE9

This bit me the other day, so I thought I’d share the pain.

As you probably know, IE9 doesn’t support pushState, which meant every URL was routing to the root (as it were).

The following snippet checks and resorts to hash based routing if it can’t cut the mustard:

app.on('initialize:after', function() {
    // Start Backbone history once, choosing pushState or hash routing.
    if (Backbone.history && !Backbone.History.started) {
        if (!(window.history && history.pushState)) {
            // No pushState support (e.g. IE9): fall back to hash-based routing.
            // Start silently so we can re-route the current URL ourselves.
            Backbone.history.start({ pushState: false, silent: true });
            // Strip the app root from the path and trigger the matching route.
            var fragment = window.location.pathname.substr(
                Backbone.history.options.root.length);
            Backbone.history.navigate(fragment, { trigger: true });
        } else {
            Backbone.history.start({ pushState: true });
        }
    }
});

Add it wherever you would initialize Backbone history – often the entry point of the app. Mine, for instance, has an app.js that is initialised by main.js.

Content types and Django CMS

Screenshot of the new ENB website

The new ENB website

One of our latest projects to go live is a new website for the English National Ballet. Part of a major rebrand, we completely replaced their old PHP site with a new content-managed site powered by Django CMS.

Django CMS is very flexible, largely due to its minimalistic approach. It provides no page templates out of the box, so you can construct your HTML from the ground up. This is great if you want to make a CMS with a really strong design, because there is very little interference from the framework. However, its minimalistic approach also means that you sometimes have to write extra code to tie all the content together.

A good example of this is content types. In Django CMS there is only one content type: Page. It has certain fields associated with it, e.g. title, slug and published. Any other information that appears on a page comes courtesy of plugins. The default Django CMS plugins give you everything you need to add arbitrary text, images and video to a page. But what if you want more fields for your page? Let’s say, for example, you are representing a ballet production and you want category, thumbnail and summary text fields, which don’t appear on the page itself but are needed for listings elsewhere on the site.

We decided to create a special “metadata” plugin to be added to the production pages, that would only be visible to content editors and not end users. This was seen as the best solution that achieved our goal while maintaining a decent user experience for the editors.

The plugin model looks something like this:

from cms.models import CMSPlugin
from django.db import models
from filer.fields.image import FilerImageField

class ProductionDetails(CMSPlugin):
    summary = models.CharField(max_length=200)    # Short summary, shown in listings
    image = FilerImageField()                     # Thumbnail image, shown in listings
    audiences = models.ManyToManyField(Audience)  # Categorisation

Note the use of django-filer for the image field. This is simply the best add-on I have encountered for dealing with image uploads and the inevitable cropping and resizing of said images. You can also use cmsplugin-filer (by the same author) to replace the standard image plugin that comes with Django CMS.

Now querying the database for, say, the first 10 productions for a family audience (audience id 3) is as simple as:

ProductionDetails.objects.filter(audiences=3, placeholder__page__published=True)[:10]

So now we have a plugin model that we can query, and we don’t need a template as we don’t want it to appear on the actual page, right? Wrong. We still want to provide a good user experience for the editors, and this includes looking at a page in edit mode and being able to tell whether the page already has the plugin or not. So we use request.toolbar.edit_mode in the template to decide whether to render the plugin:

{% load thumbnail %}

{% if request.toolbar.edit_mode %}
<div id="production-details">
 <img src="{% thumbnail instance.image 100x100 crop upscale subject_location=instance.image.subject_location %}" />
 <p>Summary: {{ instance.summary }}</p>
 <p>Audiences: {{ instance.audiences.all|join:', ' }}</p>
</div>
{% endif %}

Now this information will only appear if an editor has activated the inline editing mode while looking at the page. If they look at the page and the information is missing, they know they need to add the plugin!

This solution works quite well for us, although it is still fairly easy to create a page and forget to give it any metadata. Ideally it would be mandatory to add a metadata plugin. Perhaps the subject of a future blog post!
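In the meantime, a periodic audit can catch the gap. The sketch below is hypothetical (not from the ENB codebase): the two hard-coded sets stand in for real querysets, e.g. the ids of published pages versus the page ids reachable from ProductionDetails plugins. The check itself is a simple set difference:

```python
# Hypothetical audit: which published production pages still lack the
# metadata plugin? The hard-coded sets stand in for real querysets
# (e.g. page ids from Page.objects and from ProductionDetails placements).
published_pages = {101, 102, 103, 104}   # hypothetical page ids
pages_with_metadata = {101, 103}         # pages that already have the plugin

missing_metadata = sorted(published_pages - pages_with_metadata)
print(missing_metadata)  # -> [102, 104]: pages an editor still needs to fix
```

A scheduled task (or a management command) could run something like this and flag the offending pages to editors.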

Polite user interfaces know when to wait a little

Web page elements that appear or disappear on hover should almost always do so with a slight delay. Why?

  • To prevent distracting elements leaping out at you while your mouse is simply traversing the page.
  • To prevent you from accidentally clicking something that popped into view just as you were moving your cursor towards the target.
  • To prevent elements such as menus from unexpectedly disappearing when you just stray a pixel off, forcing you to re-invoke them.

Building in a small delay (say, 100ms) before elements appear or disappear is a hallmark of polite user interfaces, but is woefully rare. If you do a Google search for JavaScript plugins for menus, dropdowns, etc., you’ll find almost none that do this. This is also the biggest problem I have with using CSS :hover to show or hide elements (and why I think pure CSS dropdown menus are useless).

On pretty much every project with interactive JavaScript elements I’ve worked on, I’ve specified this behaviour, and it has added considerable complexity for the developers – in most cases they had to build a solution from scratch.

So I was very happy to discover Brian Cherne’s hoverIntent jQuery plugin, a lightweight (4KB unminified) script which makes this effortless:

HoverIntent is similar to jQuery’s hover. However, instead of calling onMouseOver and onMouseOut functions immediately, this plugin tracks the user’s mouse onMouseOver and waits until it slows down before calling the onMouseOver function… and it will only call the onMouseOut function after an onMouseOver is called.

Please consider using it on your next project!

Running a Django (or any other) dev instance over HTTPS

Being able to run your dev instance over HTTPS is really useful: you might spot some weird bug that would have bitten you in production, and if you do find one, you can debug it much more easily. Googling for this subject resulted in several different tutorials using stunnel, but all of them broke in some way on my machine running Ubuntu Maverick. So here is how I got stunnel working – perhaps it will help someone else too:

sudo aptitude install stunnel
sudo su -
cd /etc
mkdir stunnel
cd stunnel
# Generate a self-signed certificate and key into a single PEM file
openssl req -new -x509 -days 365 -nodes -out stunnel.pem -keyout stunnel.pem
# Append Diffie-Hellman parameters to the same file
openssl gendh 2048 >> stunnel.pem
# The file contains a private key, so lock down its permissions
chmod 600 stunnel.pem
logout
cd

Now create a file called dev_https with the following text:

pid=
foreground=yes
debug = 7

[https]
accept=8443
connect=8000
TIMEOUTclose=1

Note: this assumes your web server is running on port 8000. If it’s not, change the value of “connect” to the appropriate port.

Finally, run:

sudo stunnel4 dev_https

Now if you go to https://localhost:8443/, you should see your HTTPS-enabled dev instance!

Note: To properly simulate a HTTPS connection in Django, you should also set an environment variable HTTPS=on. Without this, request.is_secure() will return False. You could set it at the same time as starting your dev instance, e.g.:

HTTPS=on python manage.py runserver
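The convention at work here is the old CGI-style HTTPS variable: when it is set to “on”, the request is treated as secure. A simplified, self-contained illustration of that convention (not Django’s actual implementation):

```python
import os

def is_secure(meta):
    # Simplified sketch of the CGI-style convention: treat the request
    # as HTTPS when the HTTPS variable in the request environment is "on".
    return meta.get("HTTPS") == "on"

os.environ["HTTPS"] = "on"
print(is_secure(os.environ))  # -> True once HTTPS=on is exported
```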