Social Login with FaunaDB and Auth0

March 30, 2020 (last updated September 21, 2022)

Introduction

Serverless full-stack is here to stay

Serverless design patterns are a revolution in web app development because they simplify the developer responsibility space. In the zero-sum game of spending time, every second spent in application-adjacent tech (e.g., architecture, platform, etc.) is a second taken away from application tech (e.g., feature development, user experience, etc.). My perfect world would have all responsibility taken away from me except for application development.

A minor diatribe in defense of reduced developer responsibility

After years of advising and consulting on software and software architecture, from Fortune 500s to startups, I can say with mild confidence, "Hey, that tech thing you're managing... you don't need to manage that." The vast majority of web applications aren't "special" enough to warrant fine-grained control over multiple layers of your backend architecture. The defaults are sufficient, and cheaper in both time and money.

For a similar reason, I think technology service providers writ large are generally the right approach for startups. I see this, in a way, as the design principles of YAGNI and KISS applied to software architecture.

The tech stack I'm currently exploring

I knew from the beginning that I wanted to use FaunaDB and Zeit/Nextjs to build a web app. FaunaDB has a revolutionary design, and has a built-in GraphQL layer that is insanely simple to start working with. I surprised myself with how quickly I had a todo app up and running on localhost. I was using the Apollo React Client to query the FaunaDB GraphQL endpoint directly from my frontend. I had no server, no serverless function, no anything. I excitedly began writing a blog post using words like backendless, serverlessless, and my favorite, serverlesslessness, which is the state achieved by having a serverless architecture within which you never need to utilize a serverless function. To be clear, this is actually achievable with FaunaDB, depending on your desired architecture. However, I quickly realized that I wasn't going to be living this dream architecture because of an old enemy we all know too well...

Authentication (in a serverless world)

There are old problems in the new world of serverless architectures that need to be accounted for. For me, the most pressing issue was authentication, and in particular, social login, which 86% of your users prefer over manually creating an account.

While there are tools like Userbase innovating in this space, I knew I wanted to use Auth0 as my IDaaS. FaunaDB supports credentials-based authentication but has no built-in integration with Auth0 yet. This meant I needed to extend FaunaDB's GraphQL layer with my own authentication layer. So, I needed a backend, which in the serverless world means I needed some serverless functions.

Once I started building my backend functions, things got complicated. There were a few different directions I could go with my codebase, but they all felt clunky. After a lot of research, and a lot of great chats on the faunadb-community slack channel, I've found a setup I'm satisfied with. It uses Auth0 to authenticate requests, and on success will proxy those requests through to the FaunaDB GraphQL endpoint, and it does all this with a minimal amount of code. This is what I want to share today.

What is not in this piece

How to build the client-side code with Zeit and Nextjs will not be covered here, nor will getting started with FaunaDB and the FaunaDB GraphQL endpoint; there are good introductions to both elsewhere on the web.

A quick note on the reference architecture

I am building a simple todo app reference architecture. The aim is to have a cloneable codebase to start from, so you can start writing features from day one. It's a work in progress at the moment, but by the time you look at it, it will hopefully have all the boilerplate in place, including multiple environments, a CI/CD pipeline, etc. Also, you can just skip all this and stare at the codebase, if you'd prefer.

Let's get started!

Apollo

The hardest part of this architecture, in my opinion, is intercepting your client-side GraphQL requests on the backend without incidentally setting up a full GraphQL server. The actual logic is straightforward enough; it's just that the documentation on how to do it feels scattered. In the next paragraph, I will cover the theoretically important aspects of this architecture. Then, in the following subsections, I will go into detail on how to set it up.

Importantly, we don’t need to set up a GraphQL server. FaunaDB took care of that for us. We just need to create space for custom functionality in between the GraphQL client in the browser and the (“server-side”) FaunaDB GraphQL service. Your schema definition and data-fetching logic are fully controlled by FaunaDB. Our backend / serverless function will only act as a “middleware layer” to perform authentication and other minor processing tasks. This is quite powerful; our middleware can seamlessly extend the GraphQL schema with custom functionality while delegating the bulk of the work to the FaunaDB GraphQL endpoint. This allows us to set up custom endpoints, fetch data from other sources, or integrate with third-party APIs like Auth0. There is sparse documentation around how to write serverless function "middleware" like this, so, hopefully, I can shed some light on how I chose to go about solving this problem for myself.
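In short, the request flow we are building looks like this (an informal sketch):

browser (GraphQL client) → serverless function (Apollo Server proxy + Auth0 authentication) → FaunaDB GraphQL endpoint → and back again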

Apollo Server

Apollo Server needs to know (1) the schema it's exposing to the client, and (2) how to fetch the data demanded by the schema. The typical way this is done is by defining typeDefs, which define your GraphQL SDL schema, and resolvers, which tell your Apollo Server how to fetch the data defined in your schema.
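As a point of reference, a minimal local setup might look something like the following sketch (the Todo type and its hard-coded resolver are hypothetical, purely for illustration):

import { ApolloServer, gql } from "apollo-server";

// A toy schema definition (SDL).
const typeDefs = gql`
  type Todo {
    title: String
  }

  type Query {
    allTodos: [Todo]
  }
`;

// Resolvers tell Apollo Server how to fetch the data for each field.
const resolvers = {
  Query: {
    allTodos: () => [{ title: "Write a blog post" }],
  },
};

Combining these together, we can then create an Apollo Server instance.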

const server = new ApolloServer({ typeDefs, resolvers });

Remote Apollo Server

Our situation is different from the typical scenario described above. With FaunaDB, the GraphQL endpoint already exists, and the schema does as well. We don't need to create typeDefs or resolvers anymore. In a sense, they both already exist behind the FaunaDB GraphQL endpoint, and FaunaDB manages them for you. All you have to do is ferry the request along. To do this, create an Apollo Http Link with your FaunaDB key and an appropriate fetcher. Apollo Links in the abstract are a bit complicated, so I'm not going to go into depth about them here. The Http Link, though, is less complicated. As is evidenced by the uri and fetch keys below, this Http Link tells our backend that there exists a FaunaDB GraphQL endpoint and defines how to call it securely.

import { HttpLink } from "apollo-link-http";
import fetch from "node-fetch";

const link = new HttpLink({
  uri: "https://graphql.fauna.com/graphql",
  fetch,
  headers: {
    Authorization: `Bearer ${process.env.FAUNADB_KEY}`,
  },
});

Now we can use the link above to fetch the schema from the FaunaDB GraphQL endpoint using introspection, which is a fancy word for "the ability to read the GraphQL schema definition of an endpoint.” Once we have read the remote schema, we can use it to execute queries and mutations against the same remote endpoint. In other words, we can fetch the remote schema and use it to build a local Apollo API that delegates to the remote GraphQL endpoint as if it were hosted locally. Another way to think of it is as a local impersonation of a remote GraphQL endpoint.

We will achieve this functionality with the Apollo graphql-tools library. In particular, we will use two important functions: introspectSchema and makeRemoteExecutableSchema. introspectSchema will use the link created above to read the remote schema, and makeRemoteExecutableSchema will use the same link, plus the newly introspected remote schema, to make a local, executable schema. (It's worth noting that since introspectSchema makes a network request, you need to call it in an async-friendly environment.)

import { introspectSchema, makeRemoteExecutableSchema } from "graphql-tools";

const getSchema = async () => {
  const schema = makeRemoteExecutableSchema({
    schema: await introspectSchema(link),
    link,
  });

  return schema;
};

This schema can now replace both typeDefs and resolvers because (A) the above schema is its own schema definition, which is what typeDefs were used for, and (B) the above schema also knows how to fetch its own data, which is what resolvers were used for. We can now pass it to the Apollo Server.

const server = new ApolloServer({ schema });

Apollo Server proxy, realized

We now have a working Apollo Server that will receive client queries and pass them along to the FaunaDB GraphQL endpoint. Basically, we now have a working proxy between the client and the FaunaDB GraphQL endpoint. This will be the same across all hosting platforms and serverless frameworks. I feel like the documentation around how to do this is sparse, so I hope this has been helpful. Also, shout-out to Paul Paterson for showing me how to do a lot of this when I was lost and sad in Apollo land.

The summation of our efforts thus far is the following:

const link = new HttpLink({
  uri: "https://graphql.fauna.com/graphql",
  fetch,
  headers: {
    Authorization: `Bearer ${process.env.FAUNADB_KEY}`,
  },
});

const getSchema = async () => {
  const schema = makeRemoteExecutableSchema({
    schema: await introspectSchema(link),
    link,
  });

  return schema;
};

const server = new ApolloServer({ schema: await getSchema() });

Now, unfortunately, unless your environment supports top-level await syntax, this code won't work. The question of how to create asynchronous space around your server-side code is framework-dependent. How to do it in Zeit, in particular, is the subject of a later section. Let's now take a look at how to use our newly created Apollo Server inside a Zeit serverless function.

Zeit/Nextjs

To create a serverless function with Zeit/Nextjs, you have to export a default function from a file in (src)/pages/api/, which takes a request object and a response object as parameters.

export default (req, res) => {};

Importantly, you can also export an async function.

export default async (req, res) => {};
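For example, a complete (hypothetical) API route might look like this; the file name and response shape are mine, just for illustration:

// pages/api/health.js
export default (req, res) => {
  // Nextjs decorates res with helpers like status() and json().
  res.status(200).json({ ok: true });
};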

Apollo Server middleware for Zeit

The easy way to work with Apollo Server and your framework of choice is with the Apollo Server middleware integration. They have many integrations, including some that are not listed in their docs (strangely enough). To see the full list, go to the repository README and scroll to the section on integrations.

For Zeit/Nextjs, I recommend using the micro integration. It is made by Apollo for integrating with the Zeit micro library, which is supported within Nextjs.

Using that library, we can export a handler that is compatible with the Zeit/Nextjs requirements.

import { ApolloServer } from "apollo-server-micro";

const server = new ApolloServer({...})
export default server.createHandler()

server.createHandler() returns a function with the following signature: (req, res) => Promise<void>. This is exactly what we need for creating a serverless function.
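Two Nextjs-specific caveats from my own wiring (these may vary with your Nextjs version, so treat them as assumptions to verify): apollo-server-micro matches on the request path, and Nextjs parses request bodies by default, which micro does not expect. Passing the route's path to createHandler and disabling the built-in body parser addresses both:

// Match the API route this file is served from.
export default server.createHandler({ path: "/api/graphql" })

// Let micro consume the raw request stream itself.
export const config = {
  api: {
    bodyParser: false,
  },
};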

An async trick for fetching the remote schema

The above solution is adequate if you have a local schema. But, as mentioned above, our schema is remote, which means we need to fetch it over the network. So, we need to create an async-friendly environment before server creation in order to fetch the schema from the FaunaDB GraphQL endpoint.

We originally default-exported server.createHandler(), which has a signature of (req, res) => Promise<void>. Step one of this trick is realizing that we can achieve an identical signature by executing the handler manually.

const server = new ApolloServer({...})
const serverHandler = server.createHandler()

export default async (req, res) => {
  await serverHandler(req, res)
}

In the above code we “await” the return, which will eventually be void due to the signature of the handler that server.createHandler returns. This means that we are being “promised” a void return with our manual variant. This is identical to the original default-exported handler.

Step two is the easy part. We await anything else we want.

const server = new ApolloServer({...})
const serverHandler = server.createHandler()

export default async (req, res) => {
  await doSomething()
  await serverHandler(req, res)
}

In particular, we can await the fetching of the remote schema.

const link = new HttpLink({
  uri: "https://graphql.fauna.com/graphql",
  fetch,
  headers: {
    Authorization: `Bearer ${process.env.FAUNADB_KEY}`,
  },
});

const getSchema = async () => {
  const schema = makeRemoteExecutableSchema({
    schema: await introspectSchema(link),
    link,
  });

  return schema;
};

export default async (req, res) => {
  const schema = await getSchema();
  const server = new ApolloServer({ schema });
  const serverHandler = server.createHandler();
  await serverHandler(req, res);
};

A caching problem

The above code is untenable because we are refetching the schema on every request, as well as recreating the server and server handler. We only want to do those steps once, ideally before ever even starting the server (top-level await, anyone?). Until Zeit/Nextjs supports top-level await (or until you want to manually handle the Babel plugins), we need a stopgap. The next best alternative here is to run the schema generation once, on the first query to the server, and cache the value for the life of the server. Just to be super clear, I want to reiterate that everything up to and including server handler creation only needs to be done once. We can wrap all of it up into a single function wherein we cache the returned handler.

// Cached for the life of the serverless instance.
let handler;

const getHandler = async () => {
  // Reuse the handler on warm invocations.
  if (handler) return handler;

  const schema = makeRemoteExecutableSchema({
    schema: await introspectSchema(link),
    link,
  });

  const server = new ApolloServer({ schema });
  handler = server.createHandler();
  return handler;
};

The full proxy

Here we are, in all our glory.

import { HttpLink } from "apollo-link-http";
import fetch from "node-fetch";
import { introspectSchema, makeRemoteExecutableSchema } from "graphql-tools";
import { ApolloServer } from "apollo-server-micro";

const link = new HttpLink({
  uri: "https://graphql.fauna.com/graphql",
  fetch,
  headers: {
    Authorization: `Bearer ${process.env.FAUNADB_KEY}`,
  },
});

let handler;

const getHandler = async () => {
  if (handler) return handler;

  const schema = makeRemoteExecutableSchema({
    schema: await introspectSchema(link),
    link,
  });

  const server = new ApolloServer({ schema });
  handler = server.createHandler();
  return handler;
};

export default async (req, res) => {
  const handler = await getHandler();
  await handler(req, res);
};

What we have created thus far is an Apollo Server proxy that doesn't do anything (yet). Once this is deployed on Zeit/Nextjs, we will be able to submit a GraphQL query from the frontend to the appropriate endpoint on our backend. That endpoint will intercept the client-side GraphQL request, do nothing with it (for now), and pass the request along to the FaunaDB GraphQL endpoint, which will fetch the data and send it back, round trip, to the frontend.
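To make the round trip concrete, here is a sketch of a client-side call against the proxy (the /api/graphql route name and the allTodos query are assumptions from my own setup, not requirements):

// Any GraphQL client works; plain fetch keeps the example small.
const fetchTodos = async () => {
  const res = await fetch("/api/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: "{ allTodos { data { title } } }",
    }),
  });

  const { data } = await res.json();
  return data;
};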

Authentication

I don't want to get into the weeds regarding how to write an authentication function that integrates with Auth0, but I do want to conclude this piece by showing you how to utilize that authentication function within the Apollo Server / Zeit/Nextjs architecture. There are a variety of ways to authenticate within the Apollo Server framework. For our purposes, I recommend the approach of putting user data on the context. Apollo Server makes the req object available when building the context. We can use this to extract the JWT that our client-side code added to the request (which it got from the client-side Auth0 library it uses, e.g., the Auth0 React SDK).

import { AuthenticationError } from "apollo-server-micro";

const server = new ApolloServer({
  schema,
  context: async ({ req }) => {
    const [isAuthenticated, user] = await authenticate(req);

    if (!isAuthenticated) {
      // tell 'em NO! (or 401)
      throw new AuthenticationError("invalid or missing token");
    }

    return { user };
  },
});
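The authenticate function above is my own, not a library export. For orientation only, here is a rough sketch of what such a function might look like using Auth0's standard JWKS verification flow via the jsonwebtoken and jwks-rsa packages (the tenant domain and audience values are placeholders):

import jwt from "jsonwebtoken";
import jwksClient from "jwks-rsa";

// Fetches Auth0's public signing keys; the domain is a placeholder.
const client = jwksClient({
  jwksUri: "https://YOUR_TENANT.auth0.com/.well-known/jwks.json",
});

// Looks up the signing key that matches the token's key id.
const getKey = (header, callback) => {
  client.getSigningKey(header.kid, (err, key) => {
    if (err) return callback(err);
    callback(null, key.getPublicKey());
  });
};

// Resolves to the [isAuthenticated, user] tuple used above.
const authenticate = (req) =>
  new Promise((resolve) => {
    const token = (req.headers.authorization || "").replace("Bearer ", "");
    if (!token) return resolve([false, null]);

    jwt.verify(
      token,
      getKey,
      { audience: "YOUR_API_IDENTIFIER", algorithms: ["RS256"] },
      (err, decoded) =>
        err ? resolve([false, null]) : resolve([true, decoded])
    );
  });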

The power of this approach is that if your user is authenticated, it will now be included within the context object for later reference. How you use the Auth0-provided user data is up to you. I use it when creating new users in my FaunaDB. It should be noted that the data returned from Auth0 is dependent upon the social service providing the data, so you might have to adjust your functionality based on that.

Authorization

This post is getting a little long. In a follow-up article I will address how to get Authorization working after you have Authenticated with Auth0.

Conclusion

A friendly reminder that you can check out, clone, fork, whatever, etc., the (hopefully) fully-operational reference architecture on GitHub for more details.

Thanks for reading. I hope this has helped! If you have any questions or comments, please feel free to reach out. Also, if you or your organization needs additional help or advice deciding on a software architecture or tech stack, I am a technical advisor that can help. Feel free to reach out and let's see if we can work together.

Saying some thanks! :D

Thank you to Brecht De Rooms and Summer over at FaunaDB for insights, edits, and good conversations. Also, thank you to Paul Paterson for helping me work out my original confusions.