Blog

  • JIT Design

    Just-in-time design is about designing something just before the design is needed. Do it any later and it’ll be implemented without any design thought; do it too soon and the design could be misinformed.

    Premature Optimisation

    This is an often-cited sin of software creation, and it’s really just a subcategory of premature design. It may be hard to know whether a certain algorithm needs optimisation until it’s used in production. The same can be said of software design: no matter how well a piece of software is designed, its design can’t be validated until it’s integrated and used. If lots of time and effort can be wasted optimising code that doesn’t need it, how much time is wasted refactoring code that was given no design thought? Even worse is the time wasted refactoring and redesigning code that was designed too soon, with limited information and incorrect assumptions.

    How much to design and when?

    Does this mean we should abandon all up-front design and do it retrospectively? Certainly not: design for what can be seen of the problem’s complexity just before the work is to be started. There are other factors to consider when deciding how much to design. If the problem is well understood and a known design has worked well for similar problems, then it’s predictable that applying that design pattern to the problem at hand and doing the work will be the end of it. If the problem is novel or complex, however, this is not the case. In that situation it’s best to design for what can be seen, what is known. Make some (hopefully educated) guesses about what can’t be seen and, most importantly, acknowledge that those guesses have been made. This means going back and re-evaluating the design after the work is finished. The process of creating the code and putting it into use should reveal most, if not all, of the problem’s complexity that couldn’t be seen before work started.

    Avoid wasting time on a bad implementation by designing for what can be seen of the problem; avoid wasting time on a bad design by not designing for what can’t be seen. Doing the design work just in time means being able to see more of the problem.

    Waterfall Design, Iterative Implementation

    As an industry we’ve acknowledged the problems of the Waterfall method of software creation and largely abandoned it. Unfortunately, even in an iterative software creation process, there is still a tendency to do too much design prematurely. People then stick to that design even as more is learnt about the problem space that could inform a better one. Iterative software development is about iterating on the problem, not on an up-front design for the problem.

    To some extent this can seem counterintuitive, but there are benefits, such as delivering some functionality sooner. The benefit in terms of design is that what’s been learnt in previous iterations can inform the design work in the next. If the problem is to provide a means of transport that’s quicker than walking, a skateboard is a good place to start; what is learnt from using the skateboard will be really helpful in creating something better. Iterative design will generally take a little longer than designing up front, but the end result will be a better solution to the problem.

  • Designing APIs vs Consuming APIs

    Most APIs are described as being RESTful, but many really aren’t. Fortunately, understanding of what RESTful really means is becoming more common. The one area that still seems to hold debate is HATEOAS: should an API be described to its consumers with documentation, or should they discover it through links, the relationships between the requested resource representations? HATEOAS is the final level of the Richardson Maturity Model and one that some APIs ignore completely. HATEOAS, or something very similar, is the future of APIs, but we’re not ready to fully embrace it yet.

    Roy Fielding’s dissertation tells us that a truly RESTful API should be discovered, but details on how that should actually work are thin on the ground. A discoverable API isn’t too hard to create, but consuming one is a very different story, and until the semantic gap between API and client is closed there should still be human-consumable documentation to aid in writing clients. The semantic gap will exist until a client can understand the application-level semantics of an API without having them hard-coded in. The gap is slowly closing. Hypermedia formats such as HAL and Siren are becoming more common, which helps with the underlying protocol semantics. Lists of universal media and link types, such as the ones maintained by IANA, allow greater interoperability of application semantics. These still fall back on human-understandable descriptions though.
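
    As an illustration, here’s a sketch of a HAL response for a hypothetical order resource (the fields and link relations are invented for this example). A client follows the links it finds rather than constructing URLs itself:

    {
      "_links": {
        "self":    { "href": "/orders/123" },
        "payment": { "href": "/orders/123/payment" },
        "items":   { "href": "/orders/123/items" }
      },
      "status": "processing",
      "total": 34.50
    }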

    An API should be designed in terms of the relationships between resources and how they can change, similar to a state machine. If an API is designed around its endpoints, the quality of the API will suffer; it’ll be designed backwards. For this reason Swagger and other similar efforts to document APIs should be avoided. They’re solving the symptoms of a problem and not the problem itself. Tools such as Spring Rest Docs can help here, especially as documenting larger APIs can be difficult. Spring Rest Docs ties the documentation to the API, not the other way round: tests will fail if the docs don’t match the API. It also ties in the human-readable docs that are still needed when writing a client.
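
    As a rough sketch of how that works (the endpoint, link relations and class names here are hypothetical, not from a real project), a Spring MockMvc test might document a resource like this; the build fails if the documented links drift from the actual API:

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.restdocs.AutoConfigureRestDocs;
    import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.hateoas.MediaTypes;
    import org.springframework.test.web.servlet.MockMvc;

    import static org.springframework.restdocs.hypermedia.HypermediaDocumentation.linkWithRel;
    import static org.springframework.restdocs.hypermedia.HypermediaDocumentation.links;
    import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    @SpringBootTest
    @AutoConfigureMockMvc
    @AutoConfigureRestDocs
    class UserDocumentationTest {

        @Autowired
        private MockMvc mockMvc;

        @Test
        void documentGetUser() throws Exception {
            mockMvc.perform(get("/users/42").accept(MediaTypes.HAL_JSON))
                .andExpect(status().isOk())
                // Writes documentation snippets for this request; the test
                // fails if a documented link is missing from the response.
                .andDo(document("user-get", links(
                    linkWithRel("self").description("This user"),
                    linkWithRel("orders").description("This user’s orders"))));
        }
    }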

    Equally, a client shouldn’t be written to consume a set of known endpoints. Once it is coded to understand the specific media types, resources and relationships of an API, the API can be discovered. This also reduces the coupling between client and API, letting the API evolve without breaking the client. Like many areas of technology, this is one that is changing. It’s important to understand where it is heading: while the future can’t be predicted, Roy Fielding’s dissertation describes a future most people are working towards. It’s equally important to understand what is actually possible today. An API should be designed for the future but also be easy to consume today.
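
    As an example of what’s possible today, Spring HATEOAS provides Traverson, which navigates an API by link relation rather than by hard-coded URLs. A minimal sketch, assuming a HAL API with hypothetical “orders” and “latest” relations:

    import java.net.URI;
    import org.springframework.hateoas.MediaTypes;
    import org.springframework.hateoas.client.Traverson;

    public class DiscoveringClient {

        public static void main(String[] args) {
            // Only the entry point and the link relation names are known to
            // the client; every URL is discovered from the responses.
            Traverson traverson = new Traverson(
                    URI.create("https://api.example.com"), MediaTypes.HAL_JSON);

            String status = traverson
                    .follow("orders", "latest")   // hop from rel to rel
                    .toObject("$.status");        // JsonPath into the final resource
            System.out.println(status);
        }
    }

    If the API moves a resource, the client keeps working as long as the link relations stay the same; only the entry point is coupled to a URL.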

  • Long time no post

    Well, my hosting company Anynines went down and I was lazy about relocating my blog to a new host. I’m still a fan of Anynines, but I’m now wary of their dev hosting and a little disappointed they never did a post-mortem of the outage. I considered GitHub Pages, but I want to have comments and I’m not happy using a hosted commenting system. I’ve really enjoyed learning Jekyll, the static site generator behind GitHub Pages, but migrating an entire WordPress site turned out to be a lot of work. I have moved some other, simpler sites to GitHub though.

    DigitalOcean

    I’m now hosting on DigitalOcean; I’ve heard good things, and time will tell. Setup was certainly straightforward, and being able to SSH into the box is nice, though not as nice as just pushing an app to Cloud Foundry.

  • Simple vs Easy

    This blog post is inspired by a wonderful and entertaining video called Simple Made Easy by Rich Hickey. It explains the difference between easy and simple, and why we’re obsessed with the wrong one. It’s a sentiment I’ve known for a long time, but I’ve never seen it put into words before. The rest of this post is just my brief take on the subject; watching the video is highly recommended.

    Simple is the opposite of complex; it implies there won’t be a big cognitive overhead involved in what’s being done. It doesn’t imply that what’s being done will be quick or require only a little effort. Easy is the opposite of hard or difficult; it implies that only a little effort will be needed. It doesn’t imply things are going to be straightforward or understandable. Both terms are subjective, but the difference is very important and they are often used the wrong way round.

    The trade-off is that easy is very quick to get going with: “Hey look, I typed these commands in and now I have an app.” But two months into the project, understanding the app will take more time than it should: “What’s this dependency for? Do I have to use XML for my data?!” The video I recommend above contains a diagram that illustrates the point well. Simple tools, languages, frameworks and so on may be slower to get things done with initially, but in the long run they will be faster, as everything makes sense and the app doesn’t get bogged down in increasing complexity as time goes on.

    As a software engineer I often find myself working with easy but complex tools, languages or frameworks, when I’d much rather use something simple and put in a little hard work to build something that is itself simple. Simple code is easier to debug and maintain, especially for people new to it. I think something can be both simple and easy, but a lot of the time people probably just have so much experience and deep knowledge of something complex that it seems simple to them, when really it has only become easy for them. I’ve often noticed how new software comes out and people love it because it’s simple while still doing something useful. Then they want to make it better: they add features, which is fine, but unfortunately they also try to make it easier to use. In doing so it becomes complicated and eventually requires people to climb a steep learning curve to understand it. The video makes this point (probably better than I do) and other related points, but this one really stuck out for me. Now go and watch the video.

  • Firefox extensions with Travis CI

    Developing an extension for Firefox is no different from developing anything else, and CI is a good idea. It’s actually very easy to use Travis to run your test suite. For those who don’t know, Travis is a great resource for running CI on your open source projects for free.

    Travis will install any version of Firefox you need, and as it has them cached it’s much quicker than downloading it yourself. Specify the version with an addons element. The commands in the before_script section set up a headless display server (Xvfb) and unpack Firefox. The tar command uses a wildcard to match the Firefox version requested, so there’s less to remember when updating the version. The environment variable JPM_FIREFOX_BINARY tells jpm where to find Firefox. This should be all you need to run jpm on any jpm-based extension with Travis.

    language: node_js
    env:
      global:
        - DISPLAY=:99.0
        - JPM_FIREFOX_BINARY=$TRAVIS_BUILD_DIR/firefox/firefox
    addons:
      firefox: "38.0"
    before_script:
      - sh -e /etc/init.d/xvfb start
      - tar xvjf /tmp/firefox-*.tar.bz2 -C $TRAVIS_BUILD_DIR/
      - npm install jpm -g
    script:
      - jpm test
    

    You can see this in action with a little project of mine over at Travis.

  • Sample Apps for Cloud Foundry

    Working on the Java Buildpack means lots of testing with unit tests (over 99% coverage) and integration tests, but we also have a suite of sample applications for complete system-wide testing. They are all tested during our CI builds and might be useful for people who want to play around with different application types on Cloud Foundry. Find them here on GitHub. The project has a good README, so I won’t duplicate it, but we have various apps for Spring Boot, Grails, Groovy, Java Main, Play, Ratpack and Spring MVC. Each app supports a number of URL endpoints that allow exploration of the environment the application is running in, from the classpath and environment variables to bound services. These are also all well documented. Getting the applications running is simple. The cf CLI tool is required to push applications from the command line; install instructions are here.

    git clone https://github.com/cloudfoundry/java-test-applications.git

    cd java-test-applications

    ./gradlew

    cd ***-application

    To give the application a more descriptive name, modify the manifest.yml file.
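
    A minimal sketch of such a manifest (the name and memory values here are just illustrative, not from the project):

    ---
    applications:
    - name: my-java-test-app   # pick something descriptive
      memory: 1G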

    cf push

    That’s it. The console output will show the URL the app is bound to. It’s now possible to start looking at the endpoints documented in the README and exploring the application’s environment and any bound services.
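
    For example, hitting one of the endpoints is just a request away; something like the following, where the route and the endpoint path are placeholders (the real paths are in the README):

    curl https://my-java-test-app.example.com/environment-variables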