From a373623c16ac6c026388e0a33c70b85e648961f5 Mon Sep 17 00:00:00 2001
From: William L Hamilton
Date: Sat, 16 Sep 2017 11:43:32 -0700
Subject: [PATCH] Update README.md

---
 README.md | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 4ccf602..9cc74dd 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-## GraphSAGE: Inductive Representation Learning on Large Graphs
+## GraphSage: Inductive Representation Learning on Large Graphs
 
 #### Authors: [William L. Hamilton](http://stanford.edu/~wleif) (wleif@stanford.edu), [Rex Ying](http://joy-of-thinking.weebly.com/) (rexying@stanford.edu)
 #### [Project Website](http://snap.stanford.edu/graphsage/)
@@ -6,14 +6,15 @@
 ### Overview
 
-This directory contains code necessary to run the GraphSAGE algorithm.
+This directory contains code necessary to run the GraphSage algorithm.
+GraphSage can be viewed as a stochastic generalization of graph convolutions, and it is especially useful for massive, dynamic graphs that contain rich feature information.
 See our [paper](https://arxiv.org/pdf/1706.02216.pdf) for details on the algorithm.
 
 The example_data subdirectory contains a small example of the protein-protein interaction data,
 which includes 3 training graphs + one validation graph and one test graph.
 The full Reddit and PPI datasets (described in the paper) are available on the [project website](http://snap.stanford.edu/graphsage/).
 
-If you make use of this code or the GraphSAGE algorithm in your work, please cite the following paper:
+If you make use of this code or the GraphSage algorithm in your work, please cite the following paper:
 
     @article{hamilton2017inductive,
      author = {Hamilton, William L. and Ying, Rex and Leskovec, Jure},
      title = {Inductive Representation Learning on Large Graphs},
@@ -67,7 +68,7 @@ Note that the full log outputs and stored embeddings can be 5-10Gb in size (on t
 
 #### Using the output of the unsupervised models
 
-The unsupervised variants of GraphSAGE will output embeddings to the logging directory as described above.
+The unsupervised variants of GraphSage will output embeddings to the logging directory as described above.
 These embeddings can then be used in downstream machine learning applications.
 The `eval_scripts` directory contains examples of feeding the embeddings into simple logistic classifiers.
 
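
The sentence this patch adds describes GraphSage as a stochastic generalization of graph convolutions. As a rough illustration only, not the repository's actual TensorFlow implementation, one mean-aggregator step with fixed-size neighbor sampling can be sketched in plain numpy; the function name `mean_aggregate` and its parameters are hypothetical:

```python
import numpy as np

def mean_aggregate(features, adj_lists, sample_size, rng):
    """One GraphSage-style aggregation step (illustrative sketch only).

    For each node, draw a fixed-size random sample of its neighbors,
    average their feature vectors, and concatenate the result with the
    node's own features. The released code adds learned weight matrices
    and nonlinearities on top of this.
    """
    n, d = features.shape
    out = np.zeros((n, 2 * d))
    for v in range(n):
        # sample with replacement so every node sees exactly
        # `sample_size` neighbors, regardless of its true degree
        idx = rng.choice(adj_lists[v], size=sample_size, replace=True)
        neigh_mean = features[idx].mean(axis=0)
        out[v] = np.concatenate([features[v], neigh_mean])
    # L2-normalize each output embedding
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-12)
```

Stacking K such steps gives each node a receptive field of its K-hop sampled neighborhood, which is what makes the approach inductive: new nodes only need features and neighbor lists, not retraining.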