Better Design Through HATEOAS
Because Everyone Loves Inscrutable Acronyms
HATEOAS is short for "Hypermedia as the Engine of Application State." For the non-developers out there, yes, it's a bad acronym for an inscrutable technical term. It's a child concept of REST (REpresentational State Transfer), which is at least a better acronym for an equally inscrutable technical term.
In as simple a description as possible, REST is a style guide for designing interfaces to web services. A RESTful service is designed to not store client state between requests, to be cacheable, and to have a uniform interface that follows standards so that many different clients can connect to it.
In that context, HATEOAS is the principle that available actions and objects should be represented as links relative to the interface. When I first started drafting the design of the service I'm building, I wrote return types that had several levels of nested sub-objects rather than returning linked values.
As I started building more complex functionality, the need to return sparse references to objects became more apparent. The first reason is that by returning sparse objects I could write simpler functions and only fill in the details as needed, hopefully keeping the application faster and the load on the server lower.
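To make the difference in shape concrete, here's a rough sketch; the Product and Manufacturer names are hypothetical stand-ins for my actual types, with the fields trimmed down to the bare minimum.

import java.net.URI;

// Fully nested: every product response embeds the whole manufacturer.
class NestedProduct {
    long id;
    String name;
    Manufacturer manufacturer;   // full object, serialized inline with the product
}

// Sparse: the product only carries a link the client can resolve (and cache) later.
class SparseProduct {
    long id;
    String name;
    URI manufacturerLink;        // e.g. /manufacturers/42
}

class Manufacturer {
    long id;
    String name;
    String country;
}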
The second, more interesting point is that when I started researching how to build a good object persistence model, one suggestion that came up was leaning heavily on the URL cache mechanism and defining acceptable cache lengths on the server side rather than the client side. For an ephemeral caching system that makes a lot of sense, and it appeals to a principle I've been trying to follow: put as much code as makes sense on the server side, because it has better odds of being reused there.
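As a sketch of what that could look like, assuming a JAX-RS resource and made-up paths and max-age values, the server attaches a Cache-Control header to each response and lets the client's URL cache honor it:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Response;

@Path("/manufacturers")
public class ManufacturerResource {

    @GET
    @Path("/{id}")
    public Response getManufacturer(@PathParam("id") long id) {
        Manufacturer m = findManufacturer(id);   // hypothetical lookup

        // The server decides how long a representation stays fresh;
        // the client's URL cache just follows the header.
        CacheControl cc = new CacheControl();
        cc.setMaxAge(3600);   // one hour, an arbitrary example value

        return Response.ok(m).cacheControl(cc).build();
    }

    private Manufacturer findManufacturer(long id) {
        // placeholder for the real data access code
        return new Manufacturer();
    }
}

class Manufacturer { }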
Implementing it was fairly easy on both the client side and the server side. On the server side I created some @XmlJavaTypeAdapter classes that use the primary keys and server-side URL paths for the relevant classes; on the client side I just had to add properties for the URL path, a flag for whether the object is a sparse reference, and a function to load the data for a sparse reference. In theory, once the URL cache is properly defined, I could load a list of 1,000 products from the same manufacturer, and if the data for the manufacturer has been loaded once, every other product would be handed the cached data rather than reloading it from the server for every object.
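Roughly, the server-side adapter amounts to something like this; the class names and URL path are illustrative stand-ins rather than my actual code:

import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.adapters.XmlAdapter;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

// Writes a Manufacturer out as its server-side URL path instead of
// serializing the whole nested object.
class ManufacturerRefAdapter extends XmlAdapter<String, Manufacturer> {

    @Override
    public String marshal(Manufacturer m) {
        // The primary key becomes part of the resource URL.
        return "/manufacturers/" + m.getId();
    }

    @Override
    public Manufacturer unmarshal(String path) {
        // The server only ever marshals these; a real unmarshal would
        // look the entity up by the id at the end of the path.
        return null;
    }
}

@XmlRootElement
class Product {
    public long id;
    public String name;

    @XmlJavaTypeAdapter(ManufacturerRefAdapter.class)
    public Manufacturer manufacturer;   // serialized as a sparse link
}

class Manufacturer {
    private long id;
    public long getId() { return id; }
}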
It lacks a large number of features that a real persistence framework would have, especially in terms of data relationships, but because the server side is the canonical "truth" for any interaction, that's not nearly as much of an issue as it would be if I wanted to support offline interaction or real synchronization.