
Graphs are all around us, in the real world and in our engineered systems. A set of objects, places, or people and the connections between them is generally describable as a graph. More often than not, the data we see in machine learning problems is structured or relational, and thus can also be described with a graph. And while fundamental research on GNNs is perhaps decades old, recent advances in the capabilities of modern GNNs have led to advances in domains as varied as traffic prediction, rumor and fake news detection, modeling disease spread, physics simulations, and understanding why molecules smell.

Figure: Graphs can model the relationships between many different types of data, including web pages (left), social connections (center), or molecules (right).

A graph represents the relations (edges) between a collection of entities (nodes or vertices). We can characterize each node, edge, or the entire graph, and thereby store information in each of these pieces of the graph. Additionally, we can ascribe directionality to edges to describe information or traffic flow, for example.

GNNs can be used to answer questions about multiple characteristics of these graphs. By working at the graph level, we try to predict characteristics of the entire graph. We can identify the presence of certain "shapes," like circles in a graph that might represent sub-molecules or perhaps close social relationships. GNNs can also be used on node-level tasks, to classify the nodes of a graph and to predict partitions and affinity in a graph, similar to image classification or segmentation. Finally, we can use GNNs at the edge level to discover connections between entities, perhaps using GNNs to "prune" edges to identify the state of objects in a scene.

TF-GNN provides building blocks for implementing GNN models in TensorFlow. Beyond the modeling APIs, our library also provides extensive tooling around the difficult task of working with graph data: a Tensor-based graph data structure, a data handling pipeline, and some example models for users to quickly onboard.

Figure: The various components of TF-GNN that make up the workflow.

The initial release of the TF-GNN library contains a number of utilities and features for use by beginners and experienced users alike, including:

- A high-level Keras-style API to create GNN models that can easily be composed with other types of models. GNNs are often used in combination with ranking, deep-retrieval (dual-encoders), or mixed with other types of models (image, text, etc.); see the Keras model sketch below.
- Support for heterogeneous graphs: many of the graph problems we approach at Google and in the real world contain different types of nodes and edges, so we chose to provide an easy way to model this (see the GraphTensor construction sketch below).
- A well-defined schema to declare the topology of a graph, and tools to validate it. This schema describes the shape of the training data and serves to guide other tools.
- A GraphTensor composite tensor type which holds graph data, can be batched, and has graph manipulation routines available.
- A library of operations on the GraphTensor structure:
  - Various efficient broadcast and pooling operations on nodes and edges, and related tools (see the broadcast-and-pooling sketch below).
  - A library of standard baked convolutions that can be easily extended by ML engineers/researchers.
  - A high-level API for product engineers to quickly build GNN models without necessarily worrying about its details.
- An encoding of graph-shaped training data on disk, as well as a library used to parse this data into a data structure from which your model can extract the various features (see the schema-and-encoding sketch below).
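To make the GraphTensor type and the heterogeneous-graph support concrete, here is a minimal sketch of building one small graph in memory with the tensorflow_gnn package (imported as tfgnn). The node-set and edge-set names ("author", "paper", "writes"), the sizes, and the random features are invented for illustration; the constructors shown (GraphTensor.from_pieces, NodeSet.from_fields, EdgeSet.from_fields, Adjacency.from_indices) come from the library, but treat this as a sketch rather than a definitive recipe, since exact signatures can shift between releases.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# One heterogeneous graph: two node types ("author", "paper") and one
# directed edge type ("writes") connecting authors to papers.
# All names, sizes, and feature values are made up for illustration.
graph = tfgnn.GraphTensor.from_pieces(
    node_sets={
        "author": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([3]),  # 3 author nodes
            features={"hidden_state": tf.random.normal([3, 16])}),
        "paper": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([4]),  # 4 paper nodes
            features={"hidden_state": tf.random.normal([4, 16])}),
    },
    edge_sets={
        "writes": tfgnn.EdgeSet.from_fields(
            sizes=tf.constant([5]),  # 5 author->paper edges
            adjacency=tfgnn.Adjacency.from_indices(
                source=("author", tf.constant([0, 0, 1, 2, 2])),
                target=("paper", tf.constant([0, 1, 1, 2, 3])))),
    })

print(graph.node_sets["paper"]["hidden_state"].shape)  # (4, 16)
print(graph.edge_sets["writes"].adjacency.source)      # author index per edge
```

Because node sets and edge sets are addressed by name, the same GraphTensor value can carry any mix of node and edge types, which is what makes the heterogeneous case straightforward.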
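The broadcast and pooling operations in the list above are the low-level building blocks of message passing. The sketch below, again with invented names and values, copies a node feature onto each outgoing edge and then sums the per-edge values back into the receiving nodes; it assumes the broadcast_node_to_edges and pool_edges_to_node functions from tensorflow_gnn, whose signatures may differ slightly between releases.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# A tiny homogeneous graph: 4 "user" nodes in a directed cycle of "follows" edges.
graph = tfgnn.GraphTensor.from_pieces(
    node_sets={
        "user": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([4]),
            features={"score": tf.constant([[1.0], [2.0], [3.0], [4.0]])}),
    },
    edge_sets={
        "follows": tfgnn.EdgeSet.from_fields(
            sizes=tf.constant([4]),
            adjacency=tfgnn.Adjacency.from_indices(
                source=("user", tf.constant([0, 1, 2, 3])),
                target=("user", tf.constant([1, 2, 3, 0])))),
    })

# Broadcast: copy each edge's source-node "score" onto the edge.
edge_values = tfgnn.broadcast_node_to_edges(
    graph, "follows", tfgnn.SOURCE, feature_name="score")

# Pool: sum the per-edge values into the receiving (target) nodes.
incoming_sum = tfgnn.pool_edges_to_node(
    graph, "follows", tfgnn.TARGET, "sum", feature_value=edge_values)

print(edge_values)   # shape [4, 1]: the source user's score on each edge
print(incoming_sum)  # shape [4, 1]: summed scores arriving at each user
```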
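For the high-level Keras-style API, the following sketch runs one round of message passing over a toy graph using GraphUpdate, NodeSetUpdate, SimpleConv, and NextStateFromConcat from tfgnn.keras.layers, then pools the node states into a single graph-level vector. The graph itself ("atoms"/"bonds", the sizes, and the feature widths) is made up, and the layer names reflect one recent TF-GNN release, so treat this as a sketch rather than the canonical recipe.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# A toy molecule-like graph: 5 "atoms" nodes in a chain of 4 "bonds" edges,
# each atom starting with a random 8-dimensional hidden state.
graph = tfgnn.GraphTensor.from_pieces(
    node_sets={
        "atoms": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([5]),
            features={"hidden_state": tf.random.normal([5, 8])}),
    },
    edge_sets={
        "bonds": tfgnn.EdgeSet.from_fields(
            sizes=tf.constant([4]),
            adjacency=tfgnn.Adjacency.from_indices(
                source=("atoms", tf.constant([0, 1, 2, 3])),
                target=("atoms", tf.constant([1, 2, 3, 4])))),
    })

# One message-passing step: each atom sums messages from its incoming "bonds"
# edges and computes a new hidden state from the old state plus the messages.
update = tfgnn.keras.layers.GraphUpdate(
    node_sets={
        "atoms": tfgnn.keras.layers.NodeSetUpdate(
            {"bonds": tfgnn.keras.layers.SimpleConv(
                message_fn=tf.keras.layers.Dense(8, activation="relu"),
                reduce_type="sum",
                receiver_tag=tfgnn.TARGET)},
            tfgnn.keras.layers.NextStateFromConcat(tf.keras.layers.Dense(8))),
    })
updated_graph = update(graph)

# Pool the per-atom states into one vector per graph component, e.g. as input
# to a whole-graph prediction head.
graph_embedding = tfgnn.keras.layers.Pool(
    tfgnn.CONTEXT, "mean", node_set_name="atoms")(updated_graph)
print(graph_embedding.shape)  # (1, 8): one graph component here
```

Because these are ordinary Keras layers operating on a composite tensor, the resulting embeddings can be fed straight into other Keras models, which is what makes it easy to mix GNNs with ranking, retrieval, text, or image towers.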
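Finally, to illustrate the graph schema and the on-disk encoding, here is a sketch that declares a small schema as text, derives a GraphTensor spec from it, and round-trips a randomly generated graph through the tf.train.Example encoding used for training data. The schema contents are hypothetical, and the helpers used (parse_schema, create_graph_spec_from_schema_pb, random_graph_tensor, write_example, parse_single_example) are assumed to be available in the tensorflow_gnn package as of a recent release.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# A hypothetical schema: two node sets, each with a fixed-size float feature,
# and one edge set connecting them. In practice this would live in a .pbtxt file.
schema_pbtxt = """
node_sets {
  key: "author"
  value {
    features {
      key: "embedding"
      value { dtype: DT_FLOAT shape { dim { size: 16 } } }
    }
  }
}
node_sets {
  key: "paper"
  value {
    features {
      key: "embedding"
      value { dtype: DT_FLOAT shape { dim { size: 16 } } }
    }
  }
}
edge_sets {
  key: "writes"
  value { source: "author" target: "paper" }
}
"""
schema = tfgnn.parse_schema(schema_pbtxt)
spec = tfgnn.create_graph_spec_from_schema_pb(schema)

# Round trip: make a random graph that matches the spec, encode it as a
# tf.train.Example (the on-disk format, typically stored in TFRecord files),
# and parse it back into a GraphTensor.
graph = tfgnn.random_graph_tensor(spec)
serialized = tfgnn.write_example(graph).SerializeToString()
parsed = tfgnn.parse_single_example(spec, tf.constant(serialized))

print(parsed.node_sets["paper"]["embedding"].shape)  # [num papers, 16]
```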
