Is GraphQL the future of APIs?


We at Jahia believe in making things easier for our customers, in particular when it comes to integrating multiple systems together into a coherent digital front for your business. Part of this mission entails being on the lookout for new technology that might simplify things and, naturally, we got intrigued when we first heard about GraphQL (http://graphql.org).

If you haven’t already heard about GraphQL and are in the business of developing web APIs, you will probably hear about it soon. In the words of Facebook, GraphQL’s creator,

“GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.”

While this might sound quite complex and abstract, it is in fact not that complicated, and the video introduction by our Chief Technical Officer, Serge Huber, should make it clearer by providing a gentle introduction to GraphQL and demonstrating a quick integration into Jahia Digital Experience Manager (DX). We will then dive a little deeper into the concepts behind GraphQL and what it might mean for web APIs.

REST vs. the rest

The evolution of distributed APIs is interesting to follow, to say the least. Not that long ago, CORBA (Common Object Request Broker Architecture) and the diverse flavors of Remote Procedure Call (RPC) technologies were the bee’s knees. But then the “web” took off and, with it, the HTTP protocol, which gradually started to dominate the proceedings, first with the Simple Object Access Protocol (SOAP), then with all the WS-* (Web Services) standards that still tried very hard to pretend they could be deployed over protocols other than HTTP. However, despite the name, there was nothing (or at least, not much) simple about SOAP and its cohorts. People were looking for simpler ways to reason about web services and make them work better with the backbone of the network on which they were running.

This is when the REpresentational State Transfer (REST) architectural style (https://en.wikipedia.org/wiki/Representational_state_transfer), the basis for the 1.1 version of the HTTP protocol, started to take over API developers’ mind share, with good reason. The REST style introduced a clean, if sometimes a little overly dogmatic, approach to web API design: you model your domain using resources uniquely identified by Uniform Resource Identifiers (URIs), with which client software interacts using the strict modality and defined semantics of HTTP methods, each method representing a type of action you can accomplish on the targeted resource. In the REST approach, and simplifying a lot, if resources are your entities, the HTTP methods are the actions you can trigger on them.
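
For instance, a hypothetical user resource (the paths below are ours, purely for illustration) could be manipulated like this:

GET    /users/42    (retrieve the representation of user 42)
PUT    /users/42    (update user 42 with the representation sent in the request body)
DELETE /users/42    (remove user 42)
POST   /users       (create a new user)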

Along with this rise of the RESTful style, we also started to see a shift of balance from the eXtensible Markup Language (XML) to the JavaScript Object Notation (JSON) format for data interchange. This follows the same pattern observed in architectural style: moving from heavily structured, rigorously defined web services to more nimble resources served over HTTP.

The problem with REST

While REST and JSON moved the web services world in the direction of more agility by removing some of the complexity associated with SOAP and WS-* technologies, this is not to say that the conjunction of both technologies is a silver bullet. Nothing ever is anyway.

Static resources

A big restriction with REST resources is that they are static by definition: once published, it is impossible to change how they are organized without breaking the concepts behind REST architectures. This makes it difficult to evolve the data model without some sort of versioning scheme. This, in turn, poses a maintainability question because, in a fast-moving environment where quick iterations are desirable, maintaining an increasing number of API versions can get unwieldy very quickly.

Thus, a proper organization of resources is needed. However, since resources are identified and accessed via URIs (i.e. path-like identifiers), their organization tends to fall into a hierarchical (parent-children, container-containee, 1-n) model. Other types of organization, in particular n-to-m relations, are much more difficult to model. It is therefore often necessary to have resources point to a collection of identifiers of associated resources. While this is fine in itself, it results in clients usually having to perform multiple requests to retrieve all the data they need (the infamous n+1 query problem), not to mention the fact that clients then need to extract and assemble the data they really need from what the server returns.
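
To make this concrete, here is what retrieving a hero and the names of the hero’s friends might look like against a hypothetical REST API (the paths and identifiers below are ours, for illustration only): one request for the hero, then one request per friend.

GET /heroes/2001      (returns the hero’s data and a list of friend identifiers)
GET /characters/1000  (returns the first friend)
GET /characters/1002  (returns the second friend)
GET /characters/1003  (returns the third friend)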

Limited interaction semantics out of the box

Moreover, interactions with resources are limited to the semantics provided by HTTP methods (GET, PUT, DELETE, POST, etc.) which, more or less, map to Create, Read, Update, Delete (CRUD) operations. However, if you need more complex operations, in particular ones processing several types of resources at once, you’re out of luck. The only solution to that issue is to tunnel your operation through POST calls, though this is often frowned upon by REST purists because, in order to leverage the benefits of the REST principles (cacheability, simplicity, etc.), it is crucial to use HTTP methods properly and not hijack them.

Limited discoverability

Finally, resources are supposed to be self-descriptive but, in practice, this is quite hard to achieve since there is no standard way to query metadata about resources, nor a standard representation for such metadata. This results in loosely typed interactions with resources and low discoverability, requiring out-of-band documentation.

In a sense, the REST architectural style skews the interaction towards the server, which provides a rigid structure of its available resources and their organization. It’s then up to clients to somehow extract the data they need, very often requiring lots of fragile boilerplate code just to massage the data into a form that’s actually useful to the specific application.

Introducing GraphQL

GraphQL, on the other hand, restores some control to the client, which is better able to drive the interaction. We believe that, ultimately, this should be what APIs are all about: they should make it easy for clients to work with the data you provide. Let’s see how the balance is reversed with GraphQL.

Overview

GraphQL’s benefits are achieved by defining an indirection layer on top of the data your API provides access to. This indirection layer consists of a schema which details the kinds of data your API provides, along with their associated data types. The schema also defines the relations between your types, which constitute the “graph” part of GraphQL. The “QL” part of GraphQL is a query language to retrieve data conforming to that graph from the server. Note, though, that “graph” should be understood loosely here rather than in the formal sense of graph theory, so no need to feel intimidated!

The GraphQL schema really provides a sort of “capability” map of what the API server is able to provide, rather than describing how that data is organized and persisted. This frees up the server to change the underlying storage without requiring clients to be aware of such changes.
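
To give a rough idea, a minimal schema covering the examples used in the rest of this article could be sketched as follows (the type and field names are ours, purely for illustration):

type Query {
  me: Character
  hero: Character
}

type Character {
  name: String
  friends: [Character]
}

The Query type declares the entry points from which clients can start their queries, while Character describes the shape of the data those entry points return.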

Shape-oriented querying

The really interesting part of the query language is that it allows clients to describe the shape of the data they want back from the server, based on the types defined in the schema. Clients can thus control exactly the data they want back. This also has the distinct advantage of removing lots of fragile boilerplate code on the client that previously was simply dedicated to retrieving and re-assembling the needed data from disparate resources. Clients can therefore focus on the business logic and the use case they’re addressing instead of low-level data access details.

For example, a very simple query could be:

{
  me {
    name
  }
}

This query could return the associated data in JSON format:

{
  "me": {
    "name": "Christophe Laprun"
  }
}

Since the query language is not constrained to a strict by-resource organization, clients can request and retrieve lots of cross-cutting data in a single request, in particular resolving a parent and all its children at once, thus solving the dreaded n+1 query problem.

For example, the following query would return all the names of the hero’s friends in a single request:

{
  hero {
    name
    friends {
      name
    }
  }
}

This query could return something as follows:

{
  "data": {
    "hero": {
      "name": "R2-D2",
      "friends": [
        {
          "name": "Luke Skywalker"
        },
        {
          "name": "Han Solo"
        },
        {
          "name": "Leia Organa"
        }
      ]
    }
  }
}

Another interesting aspect of GraphQL is that fields need not be mapped to static values on the implementing server. Each field is associated by the server with a resolver that the implementer provides, meaning that any processing, data fetching and assembling can take place to produce a single field’s value. The server can also declare any number of arguments on fields, allowing the client to control how said fields are resolved or what specific data should be retrieved, which lets GraphQL fully live up to its “query language for your API” moniker.

It gets particularly powerful once you realize that any field can accept arguments, thus really allowing the client to control what data needs to be returned. Performing the same kind of data “shaping” with a single REST call would get complicated real fast!

For example, the following query asks for the name and height of the human identified by the id 1000, further specifying that the height field should be returned using feet as the unit:

{
  human(id: "1000") {
    name
    height(unit: FOOT)
  }
}
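
On the schema side, such an argument is simply declared on the field; a possible declaration backing the query above (hypothetical, modeled on the example schema from the GraphQL documentation) could look like this:

enum LengthUnit {
  METER
  FOOT
}

type Human {
  name: String
  # the resolver for this field can convert the stored height
  # into the unit requested by the client (defaulting to meters)
  height(unit: LengthUnit = METER): Float
}

The Query type would then declare an entry point such as human(id: ID!): Human to reach this type.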

Types and introspection

The schema is strongly typed: all the object and field definitions that the server supports are associated with a type. This has several benefits. First, the model can be introspected (asked about itself), thus addressing the issues of discoverability and self-documentation often associated with RESTful services. It is even possible to envision dynamically built clients based on schema introspection. Second, queries can be validated on the client, and it is possible to prevent invalid queries from even being sent to the server. This validation process also benefits servers, as they can check a query’s validity before running it: no more string-encoded queries that need to be parsed and then sent to the database, hoping that the syntax is correct. This also allows for powerful client tools such as GraphiQL (https://github.com/graphql/graphiql), an in-browser editor/navigator/query client for GraphQL APIs.
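
As a quick illustration, a client (or a tool like GraphiQL) can use the standard introspection fields to ask the server about its own schema; the type name below refers to the hypothetical Character type sketched earlier:

{
  __type(name: "Character") {
    name
    fields {
      name
      type {
        name
      }
    }
  }
}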

Mutations

Finally, another important aspect of GraphQL we want to mention here is mutations. Remember how we stated earlier that RESTful web services suffer from limited expressivity when it comes to operating on resources, since the semantics of the available HTTP methods more or less boil down to CRUD? Well, GraphQL mutations provide a clean solution to this issue.

If you recall, in RESTful APIs, calling the HTTP GET method on a resource should result in a read-only operation, i.e. the resource should not be modified as a result of the operation. This is a convention, but one that must be followed to leverage the benefits of the web architecture. Similarly, by convention, GraphQL queries are not supposed to alter the underlying data.

However, GraphQL allows schema developers (API providers) to define what they call mutations: named, parameterized operations that can be called on the server to modify the data. No particular semantics are attached to these operations, so API developers are free to provide any mutating operation they want. More importantly, contrary to RESTful APIs where an HTTP method generally operates on a single resource, GraphQL mutations are free to target any data they want. This significantly broadens the scope of operations (in particular, business rules) that can be performed on the data in a meaningful way. Also, since mutations are defined at the schema level, API providers can control and limit the scope of mutating operations to the surface they are comfortable with.
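
For example, a hypothetical mutation (the operation name, arguments and response shape below are ours, purely for illustration) could record a new friendship and return the updated list of friends in a single round trip:

mutation {
  addFriend(heroId: "2001", friendId: "1000") {
    name
    friends {
      name
    }
  }
}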

GraphQL limitations

Hopefully, by now, you should start to see the benefits associated with GraphQL, though we've only scratched the surface of what GraphQL can offer. However, like all technology, it also has limitations. In particular, though GraphQL has been used internally by Facebook for years and should therefore be robust and battle-tested, it was only presented to a wider audience in 2015 and was only moved to “production-ready” status by Facebook in September 2016 (http://graphql.org/blog/production-ready/). As such, the question of maturity around the GraphQL ecosystem is a very valid one, in particular when it comes to tools, best practices and patterns. The question of Facebook’s control over the emerging “standard” is also important: will Facebook allow the community to contribute to the evolution of GraphQL, or will it keep a tight grip on its future?

Another immediate issue, and one that contrasts sharply with RESTful architectures, is that cacheability is not trivial, as queries typically go through the HTTP POST method as opposed to the more readily cacheable GET method. Consequently, caching of data needs to be handled at the application level and/or by a framework mediating between clients and servers. This second solution is the one that Facebook has adopted and is advocating with its Relay (https://facebook.github.io/relay/) framework. The Apollo Client (http://dev.apollodata.com) is also a very interesting GraphQL client implementation with lots of advanced features.

Finally, another limitation is the fact that GraphQL prefers JSON for response serialization; while this is not mandated by the specification, most existing tools and implementations follow this de facto practice. Depending on your use case, this might or might not be a deal-breaker. Considering the prevalence of JSON-only RESTful web services already, though, we’d say that this shouldn’t matter too much.

Conclusion

While it is too early to say definitively that GraphQL is going to supplant REST as the architecture of choice for web APIs, it definitely provides a refreshing and interesting approach to accessing backend data from a multitude of clients. It promises to solve some of the issues associated with REST by giving control back to the client in its interaction with the server, thus allowing more agility and faster development of rich, heterogeneous clients.

We'd love to hear what you think about GraphQL. What do you think of its approach to solving some of the current issues observed with web APIs? Have you used it already, or do you intend to? Let us know in the comments!

Christophe Laprun