Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
This is a page not in the main menu.
Published:
In the last few years, Graph Transformer models have become a promising direction for graph learning, owing to the success of Transformers in other fields such as NLP and CV. This note reviews recent works that adapt the Transformer to graph data for a more scalable and expressive model, including improvements to positional encodings, attention mechanisms, etc.
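
As a taste of the positional-encoding line of work the note reviews, here is a minimal sketch of Laplacian positional encodings, one common choice in the graph Transformer literature. The function name, shapes, and use of NumPy are illustrative assumptions, not anything taken from the note itself.

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k smallest non-trivial eigenvectors of the normalized
    Laplacian, typically concatenated with node features before attention."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)   # columns sorted by ascending eigenvalue
    return eigvecs[:, 1 : k + 1]       # skip the trivial constant eigenvector

# Tiny usage example on a 4-node graph
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, k=2)            # shape (4, 2)
```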
Published:
Some probability concepts that I think we will use frequently.
Published:
A snail climbs $x$ feet during the day and slips back $y$ feet at night. If the well is $h$ feet deep, how many days will it take the snail to climb out?
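
A worked solution, assuming $x > y$ and that the snail escapes the moment its daytime climb reaches the top: each full day-night cycle nets $x - y$ feet, and the snail starts day $d$ at height $(d-1)(x-y)$, so it escapes on the first day with $(d-1)(x-y) + x \ge h$, i.e. after $\lceil (h - x)/(x - y) \rceil + 1$ days. A small Python check of that formula:

```python
import math

def snail_days(h: float, x: float, y: float) -> int:
    """Days for a snail to escape a well of depth h, climbing x by day
    and slipping y by night (requires x > y)."""
    if x >= h:
        return 1  # escapes on the very first day
    # After d-1 full cycles the snail starts day d at (d-1)*(x - y);
    # it escapes once (d-1)*(x - y) + x >= h.
    return math.ceil((h - x) / (x - y)) + 1

assert snail_days(10, 3, 2) == 8  # classic version: 3 up, 2 down, 10-ft well
```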
Short description of portfolio item number 1
Short description of portfolio item number 2 
Published in 13th International Conference on Knowledge and Systems Engineering (KSE), 2021
In this paper, we improve the embedding output of the graph-based convolution layer by adding a number of transformer layers. The transformer layers, with their attention architecture, help discover frequent patterns in the embedding space, which increases the predictive power of the model on downstream tasks.
Recommended citation: T. L. Hoang, T. D. Pham and V. C. Ta, "Improving Graph Convolutional Networks with Transformer Layer in social-based items recommendation", KSE 2021.
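
A minimal PyTorch sketch of the architecture the abstract describes: GCN layers producing node embeddings that Transformer layers then refine. Layer counts, dimensions, and the use of `nn.TransformerEncoder` are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # adj_norm: normalized adjacency A_hat, shape (n_nodes, n_nodes)
        return torch.relu(adj_norm @ self.linear(x))

class GCNWithTransformer(nn.Module):
    """GCN embeddings refined by a stack of Transformer encoder layers."""
    def __init__(self, in_dim, hid_dim=64, n_heads=4, n_tf_layers=2):
        super().__init__()  # hid_dim must be divisible by n_heads
        self.gcn1 = GCNLayer(in_dim, hid_dim)
        self.gcn2 = GCNLayer(hid_dim, hid_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hid_dim, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, n_tf_layers)

    def forward(self, x, adj_norm):
        h = self.gcn2(self.gcn1(x, adj_norm), adj_norm)
        # Treat the node set as a sequence so self-attention can pick up
        # frequent patterns across the embedding space.
        return self.transformer(h.unsqueeze(0)).squeeze(0)
```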
Published in 14th International Conference on Knowledge and Systems Engineering (KSE), 2022
In this paper, we propose the use of cluster-based sampling to reduce the smoothing effect of a high number of layers in GNNs. Given that each node is assigned to a specific region of the embedding space, cluster-based sampling is expected to propagate this information to the node's neighbours, thus improving the nodes' expressivity.
Recommended citation: T. L. Hoang, and V. C. Ta, "Effect of Cluster-based Sampling on the Over-smoothing Issue in Graph Neural Network", KSE 2022.
Published in Pacific Rim International Conference on Artificial Intelligence (PRICAI), 2022
In this paper, we propose the Dynamic-GTN model, which is designed to learn node embeddings in a continuous-time dynamic graph. The Dynamic-GTN extends the attention mechanism of a standard GTN to include temporal information about recent node interactions. Based on the temporal interaction patterns between nodes, the Dynamic-GTN employs a node sampling step to reduce the number of attention operations in the dynamic graph. We evaluate our model on three benchmark datasets for learning node embeddings in dynamic graphs.
Recommended citation: T. L. Hoang, and V. C. Ta, "Dynamic-GTN: Learning an Node Efficient Embedding in Dynamic Graph with Transformer", PRICAI 2022.
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in its type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.