TD toolkit: GraphQL

TL;DR

Anyone who is interested in, or actively develops, web APIs should consider GraphQL as a viable alternative to REST.

The TL; part starts here

Simple beginnings...

I love that initial wave of pure excitement (and, oftentimes, fear) that comes with learning anything new. Indeed, this was the case when I decided, over a decade ago, to venture into the more technical territories of my chosen industry. The goal was simple: collate all the fragmented knowledge I'd gained in terms of techniques, application proficiency, coding skills, etc., and focus it into a singular pursuit. My chosen medium of focus was that vast, oftentimes generically expressed ideology we call pipeline.

Climb! I tell you, Climb!

Now, I am one to readily admit that I am not a "core" programmer (sometimes my "core" friends do this for me! I love them just the same.). That aspect was not what interested me about visual effects... initially. For the first few years of my foray into the industry I was more than happy learning the art of color and lighting (thanks to Jeremy Vickery, Jeremy Birn, whoever invented chiaroscuro, and Jon Alton, amongst countless other tutors) rather than the science or technology of a raytracer versus a rasterizer. Back then I didn't even know the difference! However, as my self-taught tutelage progressed, I found myself drawn to understanding the inner workings of those fascinating pieces of software called renderers.

It's the Matrix

Venturing deeper into the subject made me aware of the fact that all of it, the pretty pixels on screen, the paragraphs of text that were responsible for said prettiness, the commands that I typed into a MEL (yeah, I'm that old) window, all of it was, at the end of the day, just a bunch of 1s and 0s strung together. Since my learning style was to form associative memories, in my mind I usually equated a thing to an image. In this particular case, the association was very easily made to the streams of numbers shown in The Matrix.

Data (No, not that one. Leave Brent Spiner out of this!)

So what does all this have to do with our subject for the day? Well, simply put, it's all data. A model is just a bunch of vertices strung together, a texture is just a..... yada, yada, blah, blah.... data! Since my chosen subject of study was rendering, I naturally assumed the final render to be the ending point, the reservoir if you will, a "piped" endpoint of all this data.

The more you learn, the less you realize you know

Alright, I now knew my destination, but what was my starting point? And more importantly, which road should I take? The more I traced (slightly clever pun intended?) said source, the more I came to understand that rather than visualizing a pipeline as a single starting point and a single endpoint, a more appropriate association was that of multiple streams eventually leading into the ocean, or a dam.

Said association had the intended result of raising my awareness that the data need not flow linearly. It's not a simple point A to point B, all the way to point Z. Sometimes ABCD might feed into F, and F might branch out into GHI and JKL, and so on.

We are family! Well, at least related for sure.

One might assume that, due to the aforementioned non-linearity, data might break off from the source at some point and perhaps never reach the destination. After all, lakes do exist (I'm no expert in potamology, so please excuse any discrepancies). However, for the sake of my simple-minded clarity, I assumed an ideal world where all streams led to the ocean..... eventually.

This also implied that all streams are related, since they share the same source and the same destination. (A slight aside: I feel the need to clarify my earlier statement. I shared my belief that pipeline is not as simple as a single point A to point B; this belief still holds true here. Single source != same source: a source can be any collection (imagine a group of lights contributing to the same key light source, for instance), while single is.... well, singular.)

Quantify.

With my newfound understanding of the various aspects of my journey, I set out to each checkpoint to identify the path of generation and change. Off I went to the modeling department, where a rather patient lead sat me down and explained the various nuances of his work. He said model.... I thought stream generation! Another equally patient lead in texturing said shader... I again thought stream generation! This particular lead also mentioned UVs and textures; I thought, multiple stream generation! And so on and so forth through every department, trying the patience (and sanity) of my contemporaries and seniors, till I ended up at my own desk with a scene to light: the quantified result of my journey.

Record.

I recently came across a technique known as the Finite Element Method; its overarching basis is to break complex problems down into simpler, more "finite" calculations. Though I knew nothing about FEM, or indeed algorithms, back then, the basis was still the same: take all this often complex data, break it down based on some logic (I was guided to choose a department-wise breakdown at the time), group related data, form connections between the different groups (departments), and persist all this information somewhere.
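That break-down-and-group step can be sketched in a few lines of Python. This is only an illustration; the records, field names, and department names below are invented:

```python
from collections import defaultdict

# Hypothetical flat records, each tagged with the department that produced it.
items = [
    {"name": "heroModel", "department": "modeling"},
    {"name": "heroShader", "department": "texture"},
    {"name": "heroUVs", "department": "texture"},
    {"name": "keyLight", "department": "lighting"},
]

def group_by_department(records):
    """Break the flat stream of data into simpler, department-wise groups."""
    groups = defaultdict(list)
    for record in records:
        groups[record["department"]].append(record["name"])
    return dict(groups)

grouped = group_by_department(items)
# grouped == {"modeling": ["heroModel"],
#             "texture": ["heroShader", "heroUVs"],
#             "lighting": ["keyLight"]}
```

Once the data is grouped like this, forming connections between groups is a matter of recording which group feeds which, which is exactly what we needed to persist.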

For the requirement of persistence, we had XML at that point in time (though we did leverage a fair share of Maya's file references, and later on, reference edits).

Why XML?

Because it was perfect for recording logical blocks of hierarchical data.

Why did we stop using XML?

Because it was not so perfect at representing sibling relationships between separate blocks. (Also, the XML files were getting so huge that we had to write smaller XML files to quantify the larger ones. Let's just say that more than a few TDs' sanity was in question back in the day due to this monolithic complexity.)
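To make that trade-off concrete, here is a hedged Python sketch using the standard library's ElementTree (all names and ids are invented): hierarchical data nests naturally, while a relationship to a sibling block in another file degrades into a loose id reference that nothing validates for us:

```python
import xml.etree.ElementTree as ET

# Hierarchical data nests naturally in XML...
asset = ET.Element("asset", id="456", name="someName")
dept = ET.SubElement(asset, "department", code="texture")
ET.SubElement(dept, "dependency", code="someShaderType", version="1")

# ...but a relationship to a *sibling* block (say, another asset recorded in
# a separate XML file) can only be expressed as a loose id reference, with no
# built-in guarantee that id 789 exists anywhere:
ET.SubElement(asset, "related", ref="789")

print(ET.tostring(asset, encoding="unicode"))
```

Databases, by contrast, give us foreign keys and joins to enforce and traverse exactly these sibling relationships.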

Enter, stage left.... Databases! These massively scalable, secure data storage media have become increasingly important (dare I say indispensable) within the scope of pipeline. They offered everything one might require to record the aforementioned information. And with such an elevated level of importance came the need for different types of web services that could ease the complexity (read: complexity depending on the situation; I do not mean that all databases are complex to interact with) of interacting with said databases.

Enter, stage right..... REST, which was, is, and most likely will remain the de-facto standard for web APIs. And so our first SQL server was installed, our first Django application was built, and we were taught how to interact with such services. (I did not code web services during this time.)

Optimize.

I now had all the information needed to assemble my lighting scene. I was at the ocean now, thoroughly enjoying my journey and the view. I lit the scene to the best of my abilities and was rather proud to see the visual result of this rather vast journey. However, my elation was short-lived. Like many of my contemporary TDs, I couldn't leave well enough alone. I had to find a faster way to quantify, a faster way to record!

What's this? A contender?

As one can imagine, the datasets generated in pipeline tend to be deeply relational. The following is a very simplified view of a pipeline's structure:

A basic pipeline relational model

While I will not elaborate on the mechanisms used to achieve this structure in a database, we will assume that a REST API has been set up that returns any part, or the whole, of this structure when queried.

If, for example, one were to write a front-end UI based on this structure to list all the texture dependencies of an asset for a particular shot, the query order might look something like this:

QUERY SHOTS TABLE                                              # Query 1
QUERY ASSETS TABLE WHERE SHOT IS A RELATION                    # Query 2
QUERY DEPARTMENTS TABLE BY TYPE TEXTURE FOR EACH MATCHED ASSET # Query 3
QUERY DEPENDENCIES TABLE BY MATCHING TEXTURE ID                # Query 4

Expressed as a REST query:

GET /shots                                                         # Query 1
GET /assets?parent_shots__in=<each_shot_id>                        # Query 2
GET /departments?parent_asset=<each_asset_id>&type=texture         # Query 3
GET /dependencies?department=<each_department_id>                  # Query 4
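To get a sense of the request fan-out this chaining implies, here is a hedged Python sketch that merely builds the URLs for the chained calls (the base URL, endpoint names, and ids are illustrative, not a real API):

```python
# Hypothetical base URL; endpoint names mirror the REST calls above.
BASE = "https://pipeline.example.com/api"

def shot_queries(shot_ids, asset_ids, department_ids):
    """Build the chained REST request URLs; note the fan-out per matched id."""
    urls = [f"{BASE}/shots"]                                           # Query 1
    urls += [f"{BASE}/assets?parent_shots__in={s}" for s in shot_ids]  # Query 2
    urls += [f"{BASE}/departments?parent_asset={a}&type=texture"
             for a in asset_ids]                                       # Query 3
    urls += [f"{BASE}/dependencies?department={d}"
             for d in department_ids]                                  # Query 4
    return urls

# A single shot with two assets and one texture department already costs
# 1 + 1 + 2 + 1 = 5 round trips; real scenes multiply this quickly.
urls = shot_queries([123], [456, 457], [789])
```

Each level of the hierarchy adds a multiplicative factor of requests, which is the classic N+1 problem of chained REST queries.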

Which, assuming we asked for JSON, would yield something like:

{
  "shots": [
    {
      "shot": {
        "id": 123,
        "assets": [
          {
            "asset": {
              "id": 456,
              "name": "someName",
              "created": "sometimestamp",
              "edited": "sometimestamp",
              "author": "somePerson",
              "departments": [
                {
                  "id": 789,
                  "name": "Texture",
                  "code": "texture",
                  "users": [
                    {
                      ... list of users
                    }
                  ],
                  "dependencies": [
                    {
                      "shader": {
                        "code": "someShaderType",
                        "version": 1
                      }
                    }
                  ]
                }
              ]
            }
          }
        ]
      }
    }
  ]
}

From the above JSON, we can observe three main things:

  1. Queries always return ALL attributes.
  2. Queries return ALL relations, with id pointers. (This depends on the design of the API; however, it is the normally expected behavior.)
  3. The query mechanism is responsible for "mapping" or "joining" the results after every request, requiring deeply nested loops to form a cohesive JSON of the expected information.
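Point 3 is worth seeing in code. Below is a hedged Python sketch of that client-side joining, using hard-coded stand-ins for the four flat responses (all ids and field names are illustrative):

```python
# Illustrative flat result sets from the four REST queries above.
shots = [{"id": 123}]
assets = [{"id": 456, "parent_shot": 123}]
departments = [{"id": 789, "parent_asset": 456, "type": "texture"}]
dependencies = [
    {"department": 789, "shader": {"code": "someShaderType", "version": 1}},
]

def join_results():
    """Client-side 'mapping' of the four flat responses into nested JSON."""
    out = []
    for shot in shots:
        shot_assets = []
        for asset in (a for a in assets if a["parent_shot"] == shot["id"]):
            asset_depts = []
            for dept in (d for d in departments
                         if d["parent_asset"] == asset["id"]):
                deps = [x["shader"] for x in dependencies
                        if x["department"] == dept["id"]]
                asset_depts.append({"id": dept["id"], "dependencies": deps})
            shot_assets.append({"id": asset["id"], "departments": asset_depts})
        out.append({"id": shot["id"], "assets": shot_assets})
    return out
```

Three levels of nesting for a toy example; every extra relation in the model adds another loop, and all of it lives in the client.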

Now let's look at the same query done in GraphQL:

query {
  shots {
    id
    assets {
      id
      departments(where: {code: {_eq: "texture"}}) {
        id
        dependencies(where: {code: {_eq: "shader"}}) {
          code
          version
        }
      }
    }
  }
}

And the resulting JSON:

{
  "shots": [
    {
      "shot": {
        "id": 123,
        "assets": [
          {
            "asset": {
              "id": 456,
              "departments": [
                {
                  "id": 789,
                  "dependencies": [
                    {
                      "shader": {
                        "code": "someShaderType",
                        "version": 1
                      }
                    }
                  ]
                }
              ]
            }
          }
        ]
      }
    }
  ]
}

This time we got back ONLY the relevant attributes, and as a bonus we were only required to write a single query. (Though GraphQL behind the scenes may make multiple network requests, that is presently beyond the scope of this demonstration.)
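For completeness, here is a hedged Python sketch of the client side of that single query. A GraphQL request is ordinarily just one HTTP POST with a JSON body containing the query string; the endpoint URL below is invented, and actually sending the request (e.g. via requests.post) is left out:

```python
import json

# The single GraphQL query from above, sent as one HTTP POST body.
QUERY = """
query {
  shots {
    id
    assets {
      id
      departments(where: {code: {_eq: "texture"}}) {
        id
        dependencies(where: {code: {_eq: "shader"}}) {
          code
          version
        }
      }
    }
  }
}
"""

def build_request(endpoint="https://pipeline.example.com/graphql"):
    """Build the single POST request a GraphQL client would send."""
    return {
        "url": endpoint,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"query": QUERY}),
    }

# e.g. requests.post(req["url"], data=req["body"], headers=req["headers"])
req = build_request()
```

One request, one response, already shaped like the JSON we wanted; the server does the joining for us.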

Well, that was a (very?) long story.

GraphQL, for me, has been a game changer in my incessant pursuit of newer and better ways to quantify, record, and assemble the information one needs to effectively light and render a shot. As always, I hope at least some of you find this information useful and incorporate it into your dev workflows.

NOTE: The above demonstration was made on the basis of GraphQL's Apollo Spec.
