What are Graph Neural Networks and why should you consider using them in your Recommendation System?
Introduction
Graph Neural Networks (GNNs) have been one of the hottest topics in the AI world in recent years, with many potential business applications. They belong to one of the most powerful families of machine learning algorithms: Artificial Neural Networks. Other members of this family include language models, which operate on textual data, and computer vision models, which operate on visual data such as photos or videos. What distinguishes GNNs is the type of data they work on, namely graphs. Graphs are expressive and flexible data structures that efficiently model all kinds of compound interactions and relationships, making them ideal for processing complex and diverse data.
Because of their many desirable properties, GNNs have gained popularity for solving a wide variety of business problems. They are used, among others, in fraud detection, drug discovery and social network analysis. GNNs take advantage of the fact that in many of these cases the data can be very easily represented as graphs, such as the relationships between groups of people in social networks.
However, arguably one of the most promising applications of GNNs is in recommendation systems. By analyzing the relationships between products and users, GNNs can make personalized recommendations based on past behavior and interactions. Due to the effectiveness and business value of such recommendations, more and more big companies are starting to use GNNs in their recommendation systems. For example:
Uber used the GraphSAGE algorithm to suggest the foods that are most likely to appeal to an individual user,
Pinterest developed its own PinSage algorithm to make visual recommendations based on the tastes of its users,
Alibaba used GNNs to support a variety of business scenarios, including product recommendation and personalized search,
and Medium used them for article recommendations.
GNNs are a powerful tool for businesses, offering a wide variety of new opportunities. As the field continues to evolve, businesses should keep an eye on this rapidly growing area of research and its potential applications. For this reason, this blog post will succinctly describe what GNNs are, how they work and how we can model data as graphs. Additionally, we will take a closer look at one of their most promising applications: recommendation systems.
What are Graph Neural Networks?
In order to determine whether choosing GNNs is a good approach to a particular business problem or not, it is useful to know how they work and for which data they can deliver the best results.
How do we represent a problem with a graph?
Let’s start with an introduction to what graphs really are, since they are one of the main concepts on which GNNs are based: data modeled as graph structures serves as the input for these algorithms. Graphs are fairly simple data structures that are extremely flexible and model interactions within data very well. For example, in recommendation data we usually analyze interactions between users and items, which are presented in the diagram below.
Graphs are built mainly from two elements: nodes, which represent entities of some type - in this case users and items - and edges, which here represent the interactions between them. Edges can encode, for example, which items a user has bought or added to the cart.
But that is not all. As mentioned before, graphs are also really flexible and expressive data structures. It is not a problem to additionally incorporate information about interactions between users, for example the follow relationships from Instagram or friendships from Facebook. These types of relationships are presented as blue edges in the extended diagram beneath. We can also easily add the time dimension to our graph representation of the data by assigning timestamps or ordering indexes to our edges, which introduces the order in which, for example, a particular user bought some items (red numbers in the diagram). It is also possible to include different types of edges, indicating, for example, that a given user has added an item to their favorites (green dashed lines), or that two items are in some kind of relationship with each other, like two films of the same genre (brown line). Finally, each node can additionally store classical tabular data representing the features of a given user or item (purple and blue tables).
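To make this more concrete, below is a minimal sketch of how such a heterogeneous graph could be assembled in code, using the HeteroData container from PyTorch Geometric (one of the libraries discussed later in this post). All node counts, feature sizes and edges are made up purely for illustration.

```python
import torch
from torch_geometric.data import HeteroData

# Hypothetical toy graph: 3 users, 4 items, several relation types.
data = HeteroData()

# Node features (the "tabular" attributes of users and items).
data["user"].x = torch.randn(3, 8)    # e.g. age, country, ... encoded as 8 numbers
data["item"].x = torch.randn(4, 16)   # e.g. price, category, ... encoded as 16 numbers

# "bought" edges: user -> item, with an ordering index per edge to keep the purchase order.
data["user", "bought", "item"].edge_index = torch.tensor([[0, 0, 1, 2],
                                                          [1, 3, 0, 2]])
data["user", "bought", "item"].edge_attr = torch.tensor([[1.], [2.], [1.], [1.]])

# A different edge type for "added to favourites".
data["user", "favourited", "item"].edge_index = torch.tensor([[1], [3]])

# Social edges between users (e.g. "follows") and item-item relations (e.g. same genre).
data["user", "follows", "user"].edge_index = torch.tensor([[0, 2], [2, 0]])
data["item", "same_genre", "item"].edge_index = torch.tensor([[1], [2]])

print(data)
```

Each relation ("bought", "favourited", "follows", "same_genre") simply becomes its own edge type, so new kinds of interactions can be added without changing the rest of the structure.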
So we know that with graphs, we can indeed model many complex interactions in a fairly simple way. However, what makes these particular data structures very useful, is how GNNs are able to leverage them.
How do Graph Neural Networks work?
Now that we know what graphs are, we can focus on how GNNs use them to solve problems. Let's start with a very simple example and stick to the topic of recommender systems. Suppose we want to predict which item we should recommend to a user, so as to maximize the chances of that user buying it. Let’s assume that we have already encoded the features of the users and items into vector embeddings for each node, which are represented with the colorful bars in the diagram below. Vector embeddings of features are simply their numerical representations, which can be more easily consumed by many different algorithms, as it is a fairly universal format. In this form we can represent, amongst other things, categorical variables, images or whole texts, which can be, for example, descriptions of our products.
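As a rough illustration, here is a minimal sketch of how raw user attributes could be turned into such an embedding vector; the attribute names, dimensions and the simple encoder architecture are all hypothetical.

```python
import torch
import torch.nn as nn

class UserEncoder(nn.Module):
    """Encodes a user's raw features into one dense embedding vector."""
    def __init__(self, n_countries=50, emb_dim=8, out_dim=32):
        super().__init__()
        self.country_emb = nn.Embedding(n_countries, emb_dim)  # categorical feature
        self.proj = nn.Linear(emb_dim + 2, out_dim)            # +2 numeric features

    def forward(self, country_id, age, n_purchases):
        cat = self.country_emb(country_id)                      # lookup for the categorical value
        num = torch.stack([age, n_purchases], dim=-1).float()   # numeric features kept as-is
        return self.proj(torch.cat([cat, num], dim=-1))         # one embedding vector per user

encoder = UserEncoder()
emb = encoder(torch.tensor([3]), torch.tensor([31.0]), torch.tensor([12.0]))
print(emb.shape)  # torch.Size([1, 32])
```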
As already mentioned, our task will be to predict which item a particular user is most likely to buy next - in other words, to find a new edge between this user and an item. Such a prediction will help us to generate a useful recommendation. The idea is to update the feature embeddings of the nodes in such a way that, by comparing the embedding of a user with the embedding of an item, we can calculate from their similarity the probability that this user will buy this particular item. So how is this done? First, let’s define what the neighbors of a node are in a graph - they are simply the nodes connected to it with an edge. Going further, like every other deep neural network, graph neural networks have layers. In every such layer, for each node we aggregate information from each of its neighbors, and with this aggregated information we update the feature embedding of the target node.
For this example, let's focus on the gray user in the graph beneath. We can see that it has two neighbors: a dark blue item and a yellow item. In the first layer of our GNN, we take the embeddings of these two items, aggregate them and update the gray user's feature embedding with the result. This process is called message propagation in graphs. If we added a second layer to our network, we would first perform the above actions with the gray user's neighbors as target nodes, and only then update the gray user's state. An example scheme is shown in the diagram below. The more layers our network has, the larger the neighborhood considered during a single pass through the network. Each such pass consists of node representation updates performed sequentially, layer by layer, with the outputs of one layer serving as the inputs to the next. Aggregate and update functions can take many different forms and can vary between network architectures: for example, the aggregation function can use an attention mechanism (Graph Attention Networks) and the update function can be a simple concatenation of vectors followed by a forward pass through a deep neural network.
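Below is a minimal sketch of one such aggregate-and-update step in plain PyTorch, assuming mean aggregation and an update that concatenates the node's own embedding with the aggregated messages before a linear layer (just one of the possible variants mentioned above).

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One message-passing layer: aggregate neighbor embeddings, then update the node."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)  # takes [own embedding || aggregated neighbors]

    def forward(self, node_emb, neighbor_embs):
        # node_emb: (dim,) embedding of the target node (e.g. the gray user)
        # neighbor_embs: (n_neighbors, dim) embeddings of its neighbors (e.g. the two items)
        aggregated = neighbor_embs.mean(dim=0)                # aggregate step
        combined = torch.cat([node_emb, aggregated], dim=-1)  # update step: concatenate ...
        return torch.relu(self.update(combined))              # ... and pass through a linear layer

layer = SimpleGNNLayer(dim=32)
user = torch.randn(32)          # current embedding of the gray user
items = torch.randn(2, 32)      # embeddings of its two item neighbors
new_user = layer(user, items)   # updated user embedding after one layer
```

Stacking two such layers would make the gray user's final embedding depend not only on its own neighbors, but also on the neighbors of those neighbors.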
At the end of such a process, we can compare the embedding of the user with the embedding of each item and recommend the one with the highest similarity to the user embedding - for example, the gray user and the red item in the diagram below. Thanks to the message propagation process, the created user embeddings take into account the characteristics of the items that users have previously interacted with, as well as the characteristics of users with similar tastes to our target user. This ensures that the final embeddings capture the interactions between users and items, as well as their characteristics, which allows for accurate and personalized recommendations.
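A short sketch of this final scoring step, assuming a plain dot product as the similarity measure (cosine similarity or a small neural network are also common choices); the embeddings below are random placeholders.

```python
import torch

# Final embeddings produced by the GNN (toy values).
user_emb = torch.randn(32)        # the gray user
item_embs = torch.randn(10, 32)   # 10 candidate items

scores = item_embs @ user_emb                  # dot-product similarity per item
top_items = torch.topk(scores, k=3).indices    # indices of the 3 best-matching items
print(top_items)
```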
Now that we know roughly how GNNs work, let's focus on recommendation systems and how a graph-based approach can help us generate accurate recommendations for a wide variety of problems.
Graph Neural Networks in Recommendation Systems
What are Recommendation Systems?
In a nutshell, recommender systems are algorithms, usually based on machine learning models, that use data about users, products and the interactions between them to provide accurate recommendations. Taking the e-commerce industry as an example, the goal of a typical recommender system is to score the interest that a user may have in a set of items. Based on that score, we can suggest to a customer the items he or she is most likely to buy, which at the end of the day is supposed to help us sell more products and make more money. This is one of the reasons why recommender systems are one of the most valuable applications of machine learning for solving business problems. For a more extensive guide on the topic of recommender systems, we would like to refer you to our White Paper “Guide to Recommendation Systems. Implementation of Machine Learning in Business”.
Why use Graph Neural Networks in Recommendation Systems?
One of the challenges in making accurate recommendations is to effectively learn user and item representations from data about past interactions and other available side information. One of the reasons why GNNs have been widely used in recommendation systems in recent times is that most of the data used in the recommendation field has a graph structure, and GNNs are at the forefront when it comes to graph representation learning. We can easily represent sequential data as graphs, or even as knowledge graphs with many levels of relations and many types of nodes. Also, thanks to the way GNNs work and the possibility of stacking multiple layers, it is feasible to utilize information even from very distant neighbors, which helps to take into account not only the obvious relationships. Additionally, it helps with the cold start problem, which occurs when there is a limited amount of data on past user interactions.
Another thing is that algorithms such as XGBoost or Random Forest, as well as standard deep learning models, use a conventional machine learning approach based on a tabular data format; these approaches aren’t optimized for graph data structures. What is more, graph solutions are currently achieving state-of-the-art results in many recommendation benchmarks (Paperswithcode, BARS) and are gaining a lot of traction because of that. And last but not least, as touched upon earlier, graphs are very expressive and flexible data structures which, combined with the fact that GNNs are versatile algorithms, make it possible to use them to solve a wide variety of recommendation problems. For this reason, we will now focus on explaining how and where we can use them in different ways, depending on exactly which problem we want to solve.
Categories of Graph Neural Network-Based Recommendation
Recommendation tasks can take very different forms. It depends mainly on what exactly we want to predict, what factors influence the actions of our target users and simply what data we have access to. We would approach the problem of recommendations differently if we didn't have access to any data about our customers, just information on their anonymous sessions, and differently if we had access to, for example, information about their accounts or social groups that they belong to. However, thanks to the many properties of graphs that we mentioned earlier, we can easily address many types of recommendation problems using GNNs. We will now briefly introduce several types of approaches which utilize GNNs for various recommendation tasks.
Collaborative filtering - e.g. simple taste similarity
Collaborative filtering recommends items by identifying other users with similar tastes and using their preferences to recommend items to the target user. This approach requires us to store the transaction history of each user, but it is also one of the simplest and most popular techniques and often gives satisfactory results. We showed how to model such a problem using graphs earlier on, in the introduction to GNNs.
Sequential recommendation - e.g. predict the next action in the session by an anonymous user
However, if we are unable to determine the identity of our website visitors, we cannot store their transaction history. We must then base our recommendations only on the activity of a given user in the ongoing browsing session. Such activity may consist of clicks on various items on our site, additions to the shopping cart, or completed transactions. We can present sequences of such actions as a graph, as shown in the diagram above. After analyzing many such sequences of interactions with items, a graph neural network is able to predict the next action that would be of interest to the user and recommend it to them.
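As a simple illustration, here is a sketch of how one anonymous session could be converted into a small directed graph, with one node per distinct item and an edge for every consecutive click; the item IDs are invented.

```python
import torch

# One anonymous session: the sequence of item IDs the visitor clicked on.
session = [105, 72, 105, 318, 72]

# Map items to local node indices (one node per distinct item in the session).
nodes = sorted(set(session))
idx = {item: i for i, item in enumerate(nodes)}

# One directed edge per consecutive click: item_t -> item_{t+1}.
src = [idx[a] for a in session[:-1]]
dst = [idx[b] for b in session[1:]]
edge_index = torch.tensor([src, dst])

print(nodes)       # [72, 105, 318]
print(edge_index)  # session-graph edges, ready to be fed into a GNN
```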
Social recommendation - e.g. taste similarity informed with friends’ preferences
One important factor that often influences users’ purchasing decisions is the behavior of other users in their social networks. With the emergence of online social networks, social recommender systems have been proposed to utilize the preferences of each user’s social group to enhance recommendations. If we have access to information about the social interactions of our users, there is a good chance that it can positively influence the effectiveness of our recommendation system.
Knowledge graph-based recommendation - e.g. improved items/objects representations
As mentioned in the previous example, social networks, which reflect relationships between people, are used to enrich user representations; knowledge graphs, which express relationships between items through their various attributes, are used in the same way to enhance item representations. The rich semantic relationships between items in a knowledge graph can also help to explore the relevance of connections between items and increase the interpretability of the results, by analyzing the links between the knowledge graph, the items a user has interacted with historically and the items recommended to them.
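For illustration, item attributes from a knowledge graph can be added to the same kind of heterogeneous graph shown earlier, by treating attributes (such as genres or directors) as extra node types and relations as extra edge types; the indices below are made up.

```python
import torch
from torch_geometric.data import HeteroData

data = HeteroData()

# Items plus two extra node types coming from the knowledge graph: genres and directors.
# Edge types mirror the knowledge-graph relations (head -> relation -> tail).
# Here, items 0 and 1 share genre 0 ("comedy"), and item 1 was made by director 0.
data["item", "has_genre", "genre"].edge_index = torch.tensor([[0, 1],
                                                              [0, 0]])
data["item", "directed_by", "director"].edge_index = torch.tensor([[1],
                                                                   [0]])
```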
As you can see, we can approach recommendation problems in many different ways. This is not much of a problem for GNNs, however: thanks to the flexibility of graphs, they can address each of these problems as if they were designed for it. If you are curious about which specific Graph Neural Network models can be used to solve these problems, we encourage you to check out the article Graph Neural Networks in Recommender Systems: A Survey. And if you would like to learn how to apply and implement graph solutions, let's move on to the next section.
Tools for implementation of graph solutions
As we have mentioned before, graphs are a unique type of data and require different models and solutions than a standard tabular approach. Fortunately, thanks to the rapidly growing interest in GNNs, a number of high-quality tools have been developed to make this easier. Good examples are PyTorch Geometric and Deep Graph Library, Python packages integrated with, among others, PyTorch, which enable you to build deep graph solutions very easily. In terms of graph databases, Neo4j is one of the leaders and offers fully managed cloud services with many graph solutions available. We can use it for storing and managing data in graph format, which is sometimes more natural, maintains the relationships in the data, and delivers very fast queries and deeper context for analytics. Also, Amazon recently introduced a new AWS service, Amazon Neptune ML, which uses GNNs to enable predictions on graph data. One of Spark's components, GraphX, is also a noteworthy tool, supporting graph-parallel computation.
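To give a flavor of what working with these libraries looks like, here is a minimal sketch of a two-layer GNN in PyTorch Geometric using its GraphSAGE-style SAGEConv layers; the graph, feature sizes and model are placeholders rather than a production recommender.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class RecommenderGNN(torch.nn.Module):
    """Two GraphSAGE-style layers that turn raw node features into embeddings."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.conv2 = SAGEConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))  # aggregates the 1-hop neighborhood
        return self.conv2(x, edge_index)       # second layer extends this to 2 hops

# Placeholder graph: 6 nodes with 16 features each and a few edges.
x = torch.randn(6, 16)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])

model = RecommenderGNN(16, 32, 32)
embeddings = model(x, edge_index)  # one embedding per node, ready for similarity scoring
```

The resulting node embeddings can then be compared with a dot product, exactly as in the scoring step described earlier.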
An example of the use of graph tools can be found in the recommendation use case implemented within QuickStart ML Blueprints, a framework that we are building at GetInData. In just a few words, it is a set of good engineering and machine learning practices for the fast prototyping of machine learning solutions in a structured manner, supported by documentation and real examples. It makes testing promising new trends in data science very easy and is a good framework for evaluating new state-of-the-art solutions on real-world scenarios and datasets.
One example of such a state-of-the-art solution is a GNN implemented for sequential recommendation on the dataset of the e-commerce online shop Otto. The specific network used for this task was Dynamic Graph Neural Networks for Sequential Recommendation (DGSR). This model uses ideas from dynamic graph neural networks to model the dynamic collaborative signals among all interactions in the data, which helps in exploring the interactive behavior of users and items with time and order information. QuickStart ML Blueprints is an open-source project, so if you would like to learn more about it, go ahead and browse the project repository.
Summary
Graph Neural Networks are part of an extremely active and rapidly growing field of research. Considering also how effective and versatile these algorithms are, it is worth keeping a close eye on their evolution. Recommendation data, which usually consists of interactions between users and items, can naturally be modeled as a graph in many different ways depending on the specific task. This further contributes to the traction of GNNs, as recommendation systems are one of the machine learning applications that bring the most business value.
If after reading this article you have become interested in the topic of graph neural networks, stay tuned for the next blog posts in this series. If you want us to show how you can apply the QuickStart ML Blueprints approach in your business, sign up for a demo.