From Redux to Apollo Cache: What to Consider When Making the Switch

At In-Q-Tel, we built ArQ to help our employees create specialized hierarchical structures called Architectures from a large taxonomy called “ArQ tags”. These architectures are a hierarchical representation of a large problem or technology area that shows how technology capabilities fit together. (Read more about ArQ here.) We also built several features to enable collaboration and provide context to our users. There’s an in-app architecture editor that includes features such as undo and redo. There are graphical visualizations of architectures. There are ways to collaborate with other users on architectures, submit architectures for review, and ultimately publish them for all users to see. Suffice it to say, the app is relatively complex.

The ArQ client is built with React, and in the early days of development, our team, like many others, chose Redux as our state management system. Redux would normalize and cache data fetched from the API and keep track of UI state, as well as handle more complex tasks like managing the undo stack in the architecture editor.

We had some frustrations with Redux as the app grew. As many developers have noted, Redux requires a lot of boilerplate code, across several files, for even very simple features, such as fetching and caching a list of items. We also found that maintaining a fully normalized store, free of redundant or inconsistent data, would probably require introducing Normalizr and making major changes to the structure of our store. We weren’t looking forward to that. In addition, we had encountered a few frustrating bugs caused by code that accidentally mutated data in our store directly. We wondered, should we add Immutable.js as well and refactor accordingly?

With the app already built around Redux, none of these issues were problematic enough to send us looking for a replacement. Besides, we had features to build! That changed when we decided to introduce a GraphQL API. When we settled on Apollo as our GraphQL client, we found it could also work as a potential store replacement. Apollo comes with its own built-in cache, which can be extended to manage local state as well as remote data. As we researched the Apollo cache, it looked as though it addressed all of our frustrations.

The first advantage of the Apollo cache was its promise to keep our cached remote data well organized with very minimal code. Just by setting up Apollo, we would have a client cache up and running that could store data fetched from our GraphQL API and ensure, through automatic normalization, that any nested objects were indexed by a unique identifier. We wouldn’t have to write a new slice of our store for the new page we were building next, or write actions and reducers to manage it. We could get away with zero new state management files where before we needed three.

Another attractive feature of the Apollo cache was the option to enforce a data store that was immutable. With an immutable store, we could better ensure that cached data was only changed deliberately, and that those changes were reflected in all the relevant parts of the app. Apollo could do this without the introduction of a library like Immutable.js.

With the promise that it could also manage local state, we decided to build a few new features using the Apollo Client cache. We wanted to test it out, understand it better, and assess how it compared to Redux in practice. Here’s what we learned in the process.

The Apollo Client Cache Makes a Good First Impression

The Apollo Client is very easy to set up. The documentation is, by and large, quite good, and sending queries is a snap. With one line of code in your client configuration, you also have the built-in cache up and running. And with that done, you’re already caching remote data that you’ve received from your API.

import { InMemoryCache } from 'apollo-cache-inmemory';
import { HttpLink } from 'apollo-link-http';
import { ApolloClient } from 'apollo-client';

const client = new ApolloClient({
  link: new HttpLink(),
  cache: new InMemoryCache()
});

Note: We were using version 2 of Apollo initially. As of this writing, version 3 is the most recent.

From there, it’s very easy to specify which data source your queries should refer to. They can return data from the remote GraphQL API, from the cache, or from both. By using Apollo hooks, we could set a fetchPolicy, for example, cache-only or cache-and-network, on a particular query. Individual objects in the cache can be accessed with a unique identifier that’s created from the object’s id and GraphQL __typename, a meta field that distinguishes, for example, architectures from users.
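The identifier logic is simple enough to sketch in plain JavaScript. The function below is a simplified re-implementation, for illustration only, of the default key-generation behavior in apollo-cache-inmemory (its `defaultDataIdFromObject`); it is not Apollo’s actual source:

```javascript
// Simplified sketch of how the Apollo 2 cache builds a unique key
// for each normalized object: "__typename:id". Falls back to an
// `_id` field, and returns null when no identifier is available
// (in which case the cache stores the object inside its parent).
function dataIdFromObject(object) {
  if (object.__typename) {
    if (object.id !== undefined) {
      return `${object.__typename}:${object.id}`;
    }
    if (object._id !== undefined) {
      return `${object.__typename}:${object._id}`;
    }
  }
  return null;
}

dataIdFromObject({ __typename: 'Architecture', id: 42 }); // 'Architecture:42'
```

Apollo also lets you override this behavior by passing your own `dataIdFromObject` function to the `InMemoryCache` constructor, which is useful when your objects don’t expose a plain `id` field.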

Compared to Redux, this is a breath of fresh air. First and foremost, there’s very little boilerplate code. There’s no need to write action creators and reducers and selectors; there’s no need for a tool like Normalizr to organize nested data; and it’s easy to make simple updates to objects in the cache or refresh them with new data from the API.
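For instance, a one-off update to a single cached object can be done with `writeFragment`. The snippet below is an illustrative sketch against Apollo 2, not our production code: the `Company:5` id and the new `name` value are hypothetical, and `client` is assumed to be an already-configured `ApolloClient` instance:

```javascript
import gql from 'graphql-tag';

// Rewrite one field of one normalized object in the cache.
// Any component whose query includes this Company's name re-renders.
client.writeFragment({
  id: 'Company:5', // "__typename:id", as generated by the cache
  fragment: gql`
    fragment CompanyName on Company {
      name
    }
  `,
  data: { __typename: 'Company', name: 'Acme Corp.' },
});
```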

One of our first applications for the Apollo cache in ArQ was a new page that would allow users to view the details of a particular Company object, and to add or remove Taxonomy “tags” on that Company. Using the Apollo cache, the new code for this feature was about half the size it would have otherwise been, thanks to the fact that the basics of remote data storage were handled automatically. When we followed that up with a page to let users view, add, and remove Taxonomy tags on a Problem object, the nested Taxonomies each page fetched were being stored in an efficient and consistent way, thanks to the automatic normalization.

It’s Not All Smooth Sailing

As we used the Apollo cache more, it began to lose some of its luster as a full replacement for Redux.

While working with the cache to store remote data is essentially plug-and-play, there are some subtleties to getting it right. If a GraphQL query requests data using a different key than it was fetched with—architecture instead of architectures, for example—you may have to write code to tell the cache it already has the data you need. This cropped up as an issue for us. There are also many different ways of reading from, and writing to, the cache, each with its own use case (does your write depend on data already in the cache?) and implementation details to master. 
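In Apollo 2, the way to tell the cache it already has that data is a `cacheRedirects` entry in the `InMemoryCache` configuration. The sketch below assumes a hypothetical `Query.architecture(id)` field whose objects were previously fetched through an `architectures` list query; it’s illustrative, not a drop-in for any particular schema:

```javascript
import { InMemoryCache } from 'apollo-cache-inmemory';

// Redirect single-object lookups to objects the cache already holds,
// so querying `architecture(id: 42)` doesn't trigger a network round trip
// when an earlier `architectures` query has already cached Architecture:42.
const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      architecture: (_root, args, { getCacheKey }) =>
        getCacheKey({ __typename: 'Architecture', id: args.id }),
    },
  },
});
```

(In Apollo 3, this same idea is expressed differently, through field read functions in type policies.)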

Using Apollo to manage complex local state is not especially simple either. It involves writing client-only queries and client-side resolvers, which can, at times, feel as verbose as Redux boilerplate. It also requires enough facility with the cache to know how to read data that currently exists, and merge or replace it as appropriate. For us, this was essentially the same as writing Redux reducers, just with unfamiliar tools and a fuzzy consensus in the developer community about best practices.
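As a sketch of what that looks like in Apollo 2.5+, the client below registers a client-side resolver for a hypothetical `toggleSidebar` mutation and seeds the initial flag with `cache.writeData`. The field and mutation names are invented for illustration:

```javascript
import gql from 'graphql-tag';
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';

// Local-only state lives in the same cache as remote data; the
// @client directive keeps this query from ever hitting the network.
const SIDEBAR_QUERY = gql`
  query GetSidebarState {
    sidebarOpen @client
  }
`;

const cache = new InMemoryCache();

const client = new ApolloClient({
  cache,
  resolvers: {
    Mutation: {
      // Read the current value from the cache, then write back its inverse —
      // conceptually the same read/merge/write step as a Redux reducer.
      toggleSidebar: (_root, _args, { cache }) => {
        const { sidebarOpen } = cache.readQuery({ query: SIDEBAR_QUERY });
        cache.writeQuery({
          query: SIDEBAR_QUERY,
          data: { sidebarOpen: !sidebarOpen },
        });
        return null;
      },
    },
  },
});

// Seed the initial local state so the first readQuery doesn't throw.
cache.writeData({ data: { sidebarOpen: false } });
```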

These weren’t insurmountable problems, but it did feel like significant complexities were obscured by the initial, apparent magic of the Apollo cache. From that perspective, the fact that so much of the cache functionality is built in started to feel like a liability. You can’t use the Apollo cache effectively, on a reasonably complex app, without understanding its inner workings.

Compounding that problem, we found that high-quality resources for Apollo developers were harder to find than for Redux. Apollo has a Chrome extension that lets you view the cache, but it feels a little primitive compared to Redux DevTools. It doesn’t let you isolate changes to the cache caused by a particular query or mutation, or time travel through the cache. The documentation for setting up Apollo is great, but it was harder to get clear information when we needed to dig a little deeper or do something slightly non-standard. And while there are plenty of blog posts about Apollo and the Apollo cache, there doesn’t (yet) seem to be a set of time-tested conventions for using it as a general state management solution.

But Which One Is Right for My Project?

For now, we’re using both state management systems in ArQ. Any new features and functionality developed since our migration to GraphQL use the Apollo Client cache. It has sped up development significantly. We’ve migrated several older areas of the app to Apollo as well. But in some areas where our local state management needs are most complex, we’re sticking with Redux. It’s more transparent, and the tools for inspecting, manipulating, and tracking state with Redux are incredibly clear.

Which tool is right for your project? With my CTO hat on, the answer is: It depends. Are you building a small app from scratch? Are you working with a stable team of developers who know—or want to learn—Apollo? Do you need a state management system primarily to cache remote data? Those factors would point me towards Apollo. There are real efficiency advantages to built-in data caching and normalization.

If, on the other hand, you’re working on an app with more complex local state management needs, Redux might be the right tool. It’s easier to understand, and debug, the behavior of a Redux store than the Apollo cache. If you have a shifting team of contract developers, or a large team of developers of different experience levels, I might recommend Redux simply because it’s more widely used with very well established conventions.

But keep an eye on Apollo’s state management system regardless. We’re currently upgrading from Apollo 2 to Apollo 3. The changes to caching in Apollo 3 attest to some of the issues in Apollo 2, but they’re also good changes—and bode well for Apollo 4 and beyond.