Lessons learned designing and redesigning an API

Greenfield projects are the best. These are the projects that start afresh, where developers rejoice in being unencumbered by reams and reams of legacy code. The choice of technology is still open and the code complexity is so low that everything is easy to reason about.

Fast forward 6-8 months and things may start to seem less rosy. By this point the developers have iterated their way to a database, a server and a client-side application all talking to each other. The solution is a big hulking boat by now, with lots of moving parts, and it's become hard to turn around quickly. Developers are busy plugging holes and re-applying hull paint.

A huge ship. Photo credit: Rennet Stowe, licensed under Creative Commons CC-BY.

These past few days I’ve been trying to take a big, hulking “it works”™ kind of project and make it faster. I created a local copy of the production database and fresh git branches of both the server and client code. Then I started hacking away, trying to make everything faster in the least intrusive way possible so that any performance improvements would be easy to port back to the production code.

In short, I’ve learned a lot, but the production code won’t be changing in the near term. What I tried to improve was one of the costliest client-side queries (costly as in time). It runs right after a user logs in, when a large data table of users starts loading. After fetching all the user objects to be displayed in the table, the code follows three more links per user to pull in additional resources such as related comments, forms and so on.
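That fetch pattern is the classic 1 + 3N round-trip shape. Here is a minimal sketch of it in TypeScript; the endpoint paths, field names and `renderRow` are hypothetical stand-ins for the real API, not the project’s actual code.

```typescript
// A minimal sketch of the 1 + 3N fetch pattern described above.
// Endpoint paths, field names and renderRow are hypothetical stand-ins.
interface UserRow {
  id: string;
  name: string;
  commentsUrl: string;
  formsUrl: string;
  activityUrl: string;
}

function renderRow(user: UserRow, ...related: unknown[]): void {
  console.log(user.name, related);
}

async function loadUserTable(): Promise<void> {
  // One request for the table rows...
  const users: UserRow[] = await (await fetch("/api/users")).json();

  // ...then three more requests per row for the related resources.
  // With N users in the table that adds up to 1 + 3N round trips.
  for (const user of users) {
    const [comments, forms, activity] = await Promise.all([
      fetch(user.commentsUrl).then(r => r.json()),
      fetch(user.formsUrl).then(r => r.json()),
      fetch(user.activityUrl).then(r => r.json()),
    ]);
    renderRow(user, comments, forms, activity);
  }
}
```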

I tried tailoring some of the server’s API (URL) endpoints to provide the needed table information in a single call. Initial results showed a dramatic speed increase, but by that point I was already worried that I was introducing too many backend changes to the server, and to the data model in particular. I was also surprised by how much client code was needed to accommodate the new way of fetching data for the tables while preserving the previous way of fetching data for the other pages. The client code changes were quickly getting hard to reason about. By now the development process had devolved into a twisted ping-pong dance where I was jumping back and forth between the code bases, doing poke-and-hope programming before recompiling and refreshing the browser to see if and how things were holding up. So I concluded I had learned a lesson and called it quits. I committed my changes to the experimental git branches and left them be for now.
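For comparison, the tailored approach boiled down to something like the sketch below: a single endpoint that returns the rows with their related resources already embedded. The endpoint name and response shape are assumptions for illustration, not the project’s actual API.

```typescript
// Hypothetical aggregated endpoint: one round trip instead of 1 + 3N.
interface UserTableRow {
  id: string;
  name: string;
  comments: unknown[];
  forms: unknown[];
  activity: unknown[];
}

async function loadUserTableInOneCall(): Promise<UserTableRow[]> {
  const response = await fetch("/api/user-table");
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```

The speed-up comes from collapsing the round trips, but as described above the price was a second, table-specific fetching path in the client.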

Tailored (fast loading but accidentally complex) vs. general (slow loading yet easy to reason about). Can it be fast and simple?

Tailoring a server API to a single client-side application can work, but it can also make life harder for any additional clients that talk to the same server and need the data shaped differently from what the existing client is getting. I imagine it can quickly become a nightmare where two or more frontend teams argue with a backend team about which API endpoints to expose and how to expose them.

Thankfully there has been some progress in this problem space. As so often happens, developers have tried to make their lives easier by building tools that help create stable APIs able to meet the needs of a diverse application ecosystem. Solutions include hypermedia approaches such as HATEOAS and HAL+JSON. Then there are Facebook’s Relay and Netflix’s Falcor, both of which let client-side applications take more control over what data they get from the server. The benefit is not having to tailor the server to any client in particular, beyond configuring it to be Relay or Falcor capable. Alternatively, one might consider moving the data into the client, as PouchDB does through clever syncing. However, storing data client-side is not an option for sensitive user data.
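To give a flavour of the client-specifies-its-data idea behind Relay and Falcor, here is roughly what a query could look like against a GraphQL endpoint (the query language Relay is built on). The schema and the /graphql route are assumptions about how such a server might be set up, not something this project has.

```typescript
// The client states exactly which fields it needs for the table; the server
// exposes one generic endpoint instead of table-specific ones.
// The schema (users, comments, forms) is hypothetical.
const userTableQuery = `
  query UserTable {
    users {
      id
      name
      comments { id body }
      forms { id title }
    }
  }
`;

async function fetchUserTable(): Promise<unknown> {
  const response = await fetch("/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: userTableQuery }),
  });
  return (await response.json()).data;
}
```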

In the end I’ve chosen not to change that lumbering project, at least for now. If and when users start complaining about slowness, we’ll look into paginating the table data; for this project I consider that the least intrusive optimization. For new projects, though, I would absolutely consider the technologies mentioned above.