Evaluating Embedding Methods for Actions in a Goal Prediction Task
Description: Humans perform different tasks every day, and each task consists of a sequence of actions. From these actions, a neural network can predict the goal of the task. An action is often represented as a tuple (Subject, Predicate, Object), e.g. “[open] <human, microwave>”, meaning a human opens the microwave. To predict goals with a neural network, however, the action tuples must first be transformed into embedding vectors. Two approaches can be used. First, we can treat the tuple as natural language, obtain word embeddings from a language model, and then use a Neural Tensor Network (NTN) to learn an embedding of the action tuple. Second, we can convert the action sequence into a graph and use knowledge graph embedding methods such as TransE, RotatE, and DistMult to obtain action tuple embeddings. The goal of this work is to evaluate these two methods of obtaining action tuple embeddings on the task of predicting the goal from actions with a neural network. The research questions are: 1) does the NTN used for action embedding help goal prediction? 2) which of the two embedding methods performs better in goal prediction?
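As a rough illustration of the first approach, the sketch below (PyTorch) composes the word embeddings of subject, predicate, and object with Neural Tensor Network layers into a single action tuple embedding. This is a minimal sketch, not the exact architecture of Ding et al. [1]; the class names, embedding dimension, and number of tensor slices are illustrative assumptions.

```python
import torch
import torch.nn as nn


class NeuralTensorLayer(nn.Module):
    """Bilinear tensor layer combining two embedding vectors (NTN-style)."""

    def __init__(self, dim, k):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # k bilinear slices
        self.V = nn.Linear(2 * dim, k)                           # standard linear part (with bias)

    def forward(self, e1, e2):
        # Bilinear term: for each slice i, e1^T W_i e2  -> shape (batch, k)
        bilinear = torch.einsum('bd,kde,be->bk', e1, self.W, e2)
        linear = self.V(torch.cat([e1, e2], dim=-1))
        return torch.tanh(bilinear + linear)


class ActionTupleEncoder(nn.Module):
    """Compose (subject, predicate, object) word embeddings into one action embedding."""

    def __init__(self, dim, k):
        super().__init__()
        self.subj_pred = NeuralTensorLayer(dim, k)  # subject x predicate
        self.pred_obj = NeuralTensorLayer(dim, k)   # predicate x object
        self.combine = NeuralTensorLayer(k, k)      # merge the two intermediate vectors

    def forward(self, subj, pred, obj):
        # subj, pred, obj: (batch, dim) word embeddings, e.g. taken from a language model such as BERT [2]
        sp = self.subj_pred(subj, pred)
        po = self.pred_obj(pred, obj)
        return self.combine(sp, po)  # (batch, k) action tuple embedding
```

For the second approach, a knowledge graph embedding model can be trained on the (head, relation, tail) triples extracted from the action sequences. The sketch below shows the standard TransE scoring function with a margin ranking loss; class and parameter names are again illustrative, and RotatE or DistMult could be substituted in the same way.

```python
import torch
import torch.nn as nn


class TransE(nn.Module):
    """Minimal TransE: score(h, r, t) = -||h + r - t||, trained with a margin ranking loss."""

    def __init__(self, n_entities, n_relations, dim, margin=1.0):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.margin = margin
        nn.init.uniform_(self.ent.weight, -0.1, 0.1)
        nn.init.uniform_(self.rel.weight, -0.1, 0.1)

    def score(self, h, r, t):
        # Higher score = more plausible triple (negative L2 distance)
        return -(self.ent(h) + self.rel(r) - self.ent(t)).norm(p=2, dim=-1)

    def forward(self, pos, neg):
        # pos, neg: (batch, 3) LongTensors of (head, relation, tail) index triples
        pos_s = self.score(pos[:, 0], pos[:, 1], pos[:, 2])
        neg_s = self.score(neg[:, 0], neg[:, 1], neg[:, 2])
        return torch.relu(self.margin - pos_s + neg_s).mean()
```

In either case, the resulting action tuple embeddings can then be fed, for example, into a recurrent or transformer encoder followed by a classifier over goal labels; the exact goal prediction network is left to the project.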
Supervisor: Lei Shi
Distribution: 30% literature, 40% implementation and experiments, 30% analysis
Requirements: Strong programming skills in Python, experience with PyTorch.
Good knowledge of deep learning, basic knowledge of knowledge graphs and language models.
Literature:
[1] Ding, Xiao, et al. "Event representation learning enhanced with external commonsense knowledge." arXiv preprint arXiv:1909.05190 (2019).
[2] Devlin, Jacob, et al. "BERT: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
[3] Cheng, Dawei, et al. "Knowledge graph-based event embedding framework for financial quantitative investments." Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (2020): 2221-2230.