Graph representation learning / William L. Hamilton.
Material type: Text
Series: Synthesis digital library of engineering and computer science | Synthesis lectures on artificial intelligence and machine learning ; #46
Publisher: San Rafael, California (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool Publishers, [2020]
Description: 1 PDF (xvii, 141 pages) : illustrations (some color)
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9781681739649
Subject(s): Machine learning | Neural networks (Computer science) | Graph theory -- Data processing | graph neural networks | graph embeddings | node embeddings | deep learning | relational data | knowledge graphs | social networks | network analysis | graph signal processing | graph convolutions | spectral graph theory | geometric deep learning
Genre/Form: Electronic books.
Additional physical formats: Print version: No title
DDC classification: 006.3/1
LOC classification: Q325.5 .H364 2020eb
Online resources: Abstract with links to resource | Abstract with links to full text
Also available in print.
|Item type|Current library|Call number|Status|Date due|Barcode|
|Ebooks|Indian Institute of Technology Delhi - Central Library| |Available| | |
Mode of access: World Wide Web.
System requirements: Adobe Acrobat Reader.
Part of: Synthesis digital library of engineering and computer science.
Includes bibliographical references (pages 131-140).
1. Introduction -- 1.1. What is a graph? -- 1.2. Machine learning on graphs
2. Background and traditional approaches -- 2.1. Graph statistics and kernel methods -- 2.2. Neighborhood overlap detection -- 2.3. Graph Laplacians and spectral methods -- 2.4. Toward learned representations
part I. Node embeddings. 3. Neighborhood reconstruction methods -- 3.1. An encoder-decoder perspective -- 3.2. Factorization-based approaches -- 3.3. Random walk embeddings -- 3.4. Limitations of shallow embeddings
4. Multi-relational data and knowledge graphs -- 4.1. Reconstructing multi-relational data -- 4.2. Loss functions -- 4.3. Multi-relational decoders
part II. Graph neural networks. 5. The graph neural network model -- 5.1. Neural message passing -- 5.2. Generalized neighborhood aggregation -- 5.3. Generalized update methods -- 5.4. Edge features and multi-relational GNNs -- 5.5. Graph pooling -- 5.6. Generalized message passing
6. Graph neural networks in practice -- 6.1. Applications and loss functions -- 6.2. Efficiency concerns and node sampling -- 6.3. Parameter sharing and regularization
7. Theoretical motivations -- 7.1. GNNs and graph convolutions -- 7.2. GNNs and probabilistic graphical models -- 7.3. GNNs and graph isomorphism
part III. Generative graph models. 8. Traditional graph generation approaches -- 8.1. Overview of traditional approaches -- 8.2. Erdős-Rényi model -- 8.3. Stochastic block models -- 8.4. Preferential attachment -- 8.5. Traditional applications
9. Deep generative models -- 9.1. Variational autoencoder approaches -- 9.2. Adversarial approaches -- 9.3. Autoregressive methods -- 9.4. Evaluating graph generation -- 9.5. Molecule generation.
Abstract freely available; full-text restricted to subscribers or individual document purchasers.
Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis. This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs--a nascent but quickly growing subset of graph representation learning.
Title from PDF title page (viewed on October 12, 2020).