Graph neural networks are among the fastest-growing techniques in machine learning and deep learning. A large body of research demonstrates their success in terms of both results and speed. One of the main reasons for this success is that they model graph data directly, and graph data can encode the structural relationships between entities in a dataset. In this article, we will learn how to build and train a graph neural network by implementing one from scratch. The main points to be covered in this article are listed below.

**Contents**

- What is a Graph Neural Network?
- Understanding the data
  - Downloading the dataset
  - Visualizing the data
  - Creating the graph data
- Graph neural network implementation
  - Graph convolution layer
  - Graph neural node classifier
- Fitting the model
  - Instantiating the GNN model
  - Setting up the training data
  - Training the model
  - Viewing the results

Let’s start by understanding what a graph neural network is.

**What is a Graph Neural Network?**

In one of our articles, we explained that neural networks that can operate on graph data can be considered graph neural networks. To work with graph data, a neural network needs to perform its task using the vertices, or nodes, of the graph. If we perform a classification task with a GNN, the network must classify the graph's nodes. For this, the nodes in the graph data should carry labels so that the network can classify each node according to its label.

Since most datasets exhibit this kind of structural relationship between data entities, we can use graph neural networks instead of other ML algorithms and take advantage of graph data in modeling.

In this article, we will implement a graph convolutional neural network using the Keras and TensorFlow libraries, and use it for a node classification task.

**Understanding the data**

Using a graph neural network requires graph data. In this article, we use the Cora dataset. It consists of 2708 scientific papers, already classified into 7 classes, with 5429 citation links between them. Let's start the modeling by downloading the dataset.

**Downloading the dataset**

```
import os
from tensorflow import keras

zip_file = keras.utils.get_file(
    fname="cora.tgz",
    origin="https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz",
    extract=True,
)
data_dir = os.path.join(os.path.dirname(zip_file), "cora")
```

Output:

The dataset consists of two files:

- cora.cites: the citation records
- cora.content: the paper content records

We can see in the output that both files have been downloaded.

Now we need to load the citation data into a data frame.

```
import pandas as pd

citations_data = pd.read_csv(
    os.path.join(data_dir, "cora.cites"),
    sep="\t",
    header=None,
    names=["target", "source"],
)
```

Let's describe the dataset:

`citations_data.describe()`

Output:

In the description, we can see that the data frame has two variables, target and source, with 5429 values in total. Next, let's load the paper content into a data frame.

```
column_names = ["paper_id"] + [f"term_{idx}" for idx in range(1433)] + ["subject"]
papers_data = pd.read_csv(
    os.path.join(data_dir, "cora.content"),
    sep="\t",
    header=None,
    names=column_names,
)
```

Describing the papers data:

```
print("Papers shape:", papers_data.shape)
papers_data.head()
```

Output:

In the output, we can see that the data contains 2708 rows and 1435 columns, including the subject column. Now we need to label-encode the paper_id and subject columns.

```
class_values = sorted(papers_data["subject"].unique())
class_idc = {name: idx for idx, name in enumerate(class_values)}
paper_idc = {name: idx for idx, name in enumerate(sorted(papers_data["paper_id"].unique()))}

papers_data["paper_id"] = papers_data["paper_id"].apply(lambda name: paper_idc[name])
citations_data["source"] = citations_data["source"].apply(lambda name: paper_idc[name])
citations_data["target"] = citations_data["target"].apply(lambda name: paper_idc[name])
papers_data["subject"] = papers_data["subject"].apply(lambda value: class_idc[value])
```

**Visualizing the data**

Let’s visualize the graph data using the following lines of code.

```
import networkx as nx
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
# plot a 1500-edge sample of the citation graph, colouring nodes by subject
cora_graph = nx.from_pandas_edgelist(citations_data.sample(n=1500))
subjects = list(papers_data[papers_data["paper_id"].isin(list(cora_graph.nodes))]["subject"])
nx.draw_spring(cora_graph, node_size=15, node_color=subjects)
```

Output:

In the output, we can see a drawing of the graph in which the node colours represent the different subjects in the data. As discussed, graph neural networks operate on graph data, so we need to convert these data frames into graph data.

**Creating the graph data**

In this section of the article, we will see a basic approach to converting the data frames into graph data. Basic graph data can consist of the following elements:

- **Node features:** an array of shape [num_nodes, num_features]. In our dataset, the papers are the nodes, and the node features are the binary word-presence vectors of each paper.
- **Edges:** a sparse matrix of links between nodes, of shape [2, num_edges]. In our dataset, the links are the paper citations.
- **Edge weights:** an optional array with one value per edge, quantifying the relationship between the connected nodes.

Let’s see how we can build them.

```
import tensorflow as tf

# use a sorted list (rather than a set) for a stable column order
feature_names = sorted(set(papers_data.columns) - {"paper_id", "subject"})
feature_names
```

Output:

```
edges = citations_data[["source", "target"]].to_numpy().T
print("Edges shape:", edges.shape)
```

Output:

```
node_features = tf.cast(
    papers_data.sort_values("paper_id")[feature_names].to_numpy(), dtype=tf.dtypes.float32
)
print("Nodes shape:", node_features.shape)
```

Output:

```
edge_weights = tf.ones(shape=edges.shape[1])
print("Edges_weights shape:", edge_weights.shape)
```

Output:

We can now create a graph_info tuple consisting of the elements above.

`graph_info = (node_features, edges, edge_weights)`

With these essential elements in place, we are now ready to train a graph neural network on the graph data above.

**Graph neural network implementation**

In this section, we will build a network that can work with the graph data created above. For this, we need to create a layer that can operate on graph data.

**Graph convolution layer**

In this section of the article, we discuss the tasks that a basic graph convolution layer should perform. Since the code is large, we will not reproduce it here but instead discuss the layer's functionality; the full implementation can be found here. Let's start with the first task.

- The first task is preparing the input node representations, which we implement with a feed-forward neural network. This network produces a message from each input node representation. The shape of the messages is [num_nodes, representation_dim].
- The next task aggregates the messages sent to each node by its neighbouring nodes, using the edge weights. Here we use a permutation-invariant pooling operation (such as sum or mean), which creates a single aggregated message for each node. The shape of the aggregated messages is [num_nodes, representation_dim].
- The last task produces a new state for the node representations by combining each node representation with its aggregated message. For example, if the combination is of GRU type, the node representations and aggregated messages can be stacked to create a sequence and processed by a GRU layer.

To perform these tasks, we create a graph convolution layer as a custom Keras layer consisting of prepare, aggregate, and update functions.
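The three tasks above can be sketched in plain NumPy (a minimal illustration, not the actual Keras implementation; the function names, the sum-pooling choice, and the additive update are assumptions):

```
import numpy as np

def prepare(node_repr, W):
    # task 1: a feed-forward transform turning node representations into messages
    return np.maximum(node_repr @ W, 0.0)

def aggregate(targets, messages, num_nodes, edge_weights=None):
    # task 2: permutation-invariant (sum) pooling of weighted messages per node
    if edge_weights is not None:
        messages = messages * edge_weights[:, None]
    pooled = np.zeros((num_nodes, messages.shape[1]))
    np.add.at(pooled, targets, messages)
    return pooled

def update(node_repr, pooled):
    # task 3: combine old representations with aggregated messages (additive here)
    return node_repr + pooled

# toy graph: 3 nodes, 2 edges (0 -> 1 and 2 -> 1)
node_repr = np.eye(3)               # [num_nodes, representation_dim]
edges = np.array([[0, 2], [1, 1]])  # row 0: sources, row 1: targets
messages = prepare(node_repr[edges[0]], np.eye(3))  # gather source-node features
out = update(node_repr, aggregate(edges[1], messages, num_nodes=3))
print(out.shape)  # (3, 3)
```

Node 1 receives messages from nodes 0 and 2, so its updated representation combines its own state with both neighbours' messages, while nodes 0 and 2 keep their original state.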

**Graph neural node classifier**

After creating the layer, we need to create a graph neural node classifier. This classifier follows these steps:

- Pre-processing the node features to generate initial node representations.
- Applying the graph convolution layers.
- Post-processing the node representations to generate the final node representations.
- Feeding the final node representations into a softmax layer to produce the predictions.

Since the code in this section is also large, we do not reproduce it here; the implementation can be found here. In the code, we apply two graph convolution layers to model the graph data.
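The four steps above can likewise be sketched as a rough NumPy illustration of the forward pass (the helper names, weight shapes, and residual update are assumptions, not the actual implementation):

```
import numpy as np

def ffn(x, W):
    # hypothetical feed-forward block: linear transform + ReLU
    return np.maximum(x @ W, 0.0)

def graph_conv(x, edges, W):
    # simplified convolution: prepare messages, sum-aggregate, residual update
    messages = ffn(x[edges[0]], W)
    pooled = np.zeros((x.shape[0], W.shape[1]))
    np.add.at(pooled, edges[1], messages)
    return x + pooled

def node_classifier(features, edges, W_pre, W_conv1, W_conv2, W_post, W_out):
    x = ffn(features, W_pre)           # 1. preprocess node features
    x = graph_conv(x, edges, W_conv1)  # 2. first graph convolution layer
    x = graph_conv(x, edges, W_conv2)  #    second graph convolution layer
    x = ffn(x, W_post)                 # 3. postprocess node representations
    return x @ W_out                   # 4. logits; softmax is applied in the loss

rng = np.random.default_rng(0)
n, d, h, c = 5, 8, 4, 7                # nodes, features, hidden dim, classes
edges = np.array([[0, 1, 2], [1, 2, 3]])
logits = node_classifier(
    rng.random((n, d)), edges,
    rng.random((d, h)), rng.random((h, h)),
    rng.random((h, h)), rng.random((h, h)), rng.random((h, c)),
)
print(logits.shape)  # (5, 7)
```

Returning raw logits matches the compile step later in the article, where the loss is constructed with from_logits=True.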

**Fitting the model**

Now let’s fit the graph neural network.

**Instantiating the GNN Model**

```
num_classes = len(class_idc)  # the 7 subject classes
hidden_units = [32, 32]
learning_rate = 0.01
dropout_rate = 0.5
num_epochs = 300
batch_size = 256

gnn_model = GNNNodeClassifier(
    graph_info=graph_info,
    num_classes=num_classes,
    hidden_units=hidden_units,
    dropout_rate=dropout_rate,
    name="gnn_model",
)
gnn_model.summary()
```

Output:

Here we have instantiated the model.

**Setting up the training data**

```
x_train = train_data.paper_id.to_numpy()
y_train = train_data["subject"]
```
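Note that the train_data frame used above is not built earlier in the article. As a minimal sketch (assuming a roughly 50/50 random split stratified by subject, a common choice for Cora), it could be created like this:

```
import numpy as np
import pandas as pd

def stratified_split(papers, frac=0.5, seed=42):
    """Split a papers frame roughly frac/(1 - frac) within each subject class."""
    rng = np.random.default_rng(seed)
    train_parts, test_parts = [], []
    for _, group in papers.groupby("subject"):
        mask = rng.random(len(group)) <= frac
        train_parts.append(group[mask])
        test_parts.append(group[~mask])
    # shuffle the concatenated splits
    train = pd.concat(train_parts).sample(frac=1, random_state=seed)
    test = pd.concat(test_parts).sample(frac=1, random_state=seed)
    return train, test

# usage, assuming papers_data is the frame built earlier:
# train_data, test_data = stratified_split(papers_data)
```

Splitting within each subject class keeps the class proportions similar in both splits.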

**Defining a function to compile and fit the model**

```
def run_experiment(model, x_train, y_train):
    # Compile the model.
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate),
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")],
    )
    # Create an early stopping callback.
    early_stopping = keras.callbacks.EarlyStopping(
        monitor="val_acc", patience=50, restore_best_weights=True
    )
    # Fit the model.
    history = model.fit(
        x=x_train,
        y=y_train,
        epochs=num_epochs,
        batch_size=batch_size,
        validation_split=0.15,
        callbacks=[early_stopping],
    )
    return history
```

**Training the model**

`history = run_experiment(gnn_model, x_train, y_train)`

Output:

**Viewing the results**

Loss

```
fig, ax1 = plt.subplots(1, figsize=(15, 5))
ax1.plot(history.history["loss"])
ax1.plot(history.history["val_loss"])
ax1.legend(["train", "validation"], loc="upper right")
ax1.set_xlabel("Epochs")
ax1.set_ylabel("Loss")
```

Output:

Accuracy

```
fig, ax2 = plt.subplots(1, figsize=(15, 5))
ax2.plot(history.history["acc"])
ax2.plot(history.history["val_acc"])
ax2.legend(["train", "validation"], loc="upper right")
ax2.set_xlabel("Epochs")
ax2.set_ylabel("Accuracy")
plt.show()
```

Output:

In the above output, we can see that the model trained well. From the accuracy plot, we can see that the model achieves about 90% accuracy on the training data and about 80% on the validation data.

**Final words**

In this article, we have seen how to represent data as graph data and how to implement a graph neural network to work with it. Specifically, we implemented a graph convolutional neural network and applied it to node classification on the Cora citation graph.
