Curriculum-Guided Self-Supervised Representation Learning of Dynamic Heterogeneous Networks

Ahmed N, Rossi RA, Lee J, et al. Role-based graph embeddings. IEEE Trans Knowl Data Eng. 2020. https://doi.org/10.1109/tkde.2020.3006475.

Chen YC, Li L, Yu L, et al. UNITER: UNiversal image-TExt representation learning. In: Vedaldi A, Bischof H, Brox T, et al (eds) Computer Vision – Proceedings of the 16th European Conference on Computer Vision (ECCV 2020), Lecture Notes in Computer Science, vol 12375. Springer International Publishing, Glasgow, UK, 2020;104–20. https://doi.org/10.1007/978-3-030-58577-8_7

Chu G, Wang X, Shi C, et al. CuCo: graph representation with curriculum contrastive learning. In: Zhou Z (ed) Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI 2021). International Joint Conferences on Artificial Intelligence Organization, Virtual Event/Montreal, Canada, 2021;2300–06. https://doi.org/10.24963/ijcai.2021/317

Clark K, Luong M, Le QV, et al. ELECTRA: pre-training text encoders as discriminators rather than generators. In: Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). OpenReview.net, Addis Ababa, Ethiopia. 2020

Cui P, Wang X, Pei J, et al. A survey on network embedding. IEEE Trans Knowl Data Eng. 2019;31(5):833–52. https://doi.org/10.1109/tkde.2018.2849727.

Devlin J, Chang M, Lee K, et al. BERT: pre-training of deep bidirectional transformers for language understanding. In: Burstein J, Doran C, Solorio T (eds) Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019). Association for Computational Linguistics, Minneapolis, MN, USA, 2019;4171–86. https://doi.org/10.18653/v1/n19-1423

Dong Y, Chawla NV, Swami A. metapath2vec: Scalable representation learning for heterogeneous networks. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2017). ACM Press, Halifax, NS, Canada, 2017;135–44. https://doi.org/10.1145/3097983.3098036

Fu TY, Lee WC, Lei Z. HIN2Vec: explore meta-paths in heterogeneous information networks for representation learning. In: Lim E, Winslett M, Sanderson M, et al (eds) Proceedings of the 26th ACM International Conference on Information and Knowledge Management (CIKM 2017). ACM Press, Singapore, 2017;1797–806. https://doi.org/10.1145/3132847.3132953

Grover A, Leskovec J. node2vec: scalable feature learning for networks. In: Krishnapuram B, Shah M, Smola AJ, et al (eds) Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2016). ACM Press, San Francisco, CA, USA, 2016;855–64. https://doi.org/10.1145/2939672.2939754

Gupta S, Khare V. BlazingText: scaling and accelerating word2vec using multiple GPUs. In: Proceedings of the Machine Learning on HPC Environments Workshop (MLHPC 2017). Association for Computing Machinery, New York, NY, USA, 2017. https://doi.org/10.1145/3146347.3146354

Hamilton WL, Ying Z, Leskovec J. Inductive representation learning on large graphs. In: Guyon I, von Luxburg U, Bengio S, et al (eds) Advances in Neural Information Processing Systems 30: Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NeurIPS 2017), Long Beach, CA, USA, 2017;1024–34

Hoang VT, Jeon HJ, You ES, et al. Graph representation learning and its applications: a survey. Sensors. 2023;23(8):4168.

Hu Z, Dong Y, Wang K, et al. Heterogeneous graph transformer. In: Proceedings of The Web Conference 2020 (WWW 2020). ACM, 2020;2704–10.

Jeon HJ, Jung JJ. Discovering the role model of authors by research history embedding. J Inf Sci. 2021. https://doi.org/10.1177/01655515211034407 (To Appear).

Jeon HJ, Lee OJ, Jung JJ. Is performance of scholars correlated to their research collaboration patterns? Front Big Data. 2019;2:39. https://doi.org/10.3389/fdata.2019.00039

Jeon HJ, Choi MW, Lee OJ. Day-ahead hourly solar irradiance forecasting based on multi-attributed spatio-temporal graph convolutional network. Sensors. 2022;22(19):7179.

Lan Z, Chen M, Goodman S, et al. ALBERT: a lite BERT for self-supervised learning of language representations. In: Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). OpenReview.net, Addis Ababa, Ethiopia. 2020

Lee OJ, Jung JJ. Story embedding: learning distributed representations of stories based on character networks. Artif Intell. 2020;281:103235. https://doi.org/10.1016/j.artint.2020.103235.

Lee OJ, Jung JJ, Kim JT. Learning hierarchical representations of stories by using multi-layered structures in narrative multimedia. Sensors. 2020;20(7):1978. https://doi.org/10.3390/s20071978.

Lee OJ, Jeon HJ, Jung JJ. Learning multi-resolution representations of research patterns in bibliographic networks. J Informetr. 2021;15(1):101126. https://doi.org/10.1016/j.joi.2020.101126.

Lewis M, Liu Y, Goyal N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Jurafsky D, Chai J, Schluter N, et al (eds) Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020). Association for Computational Linguistics, Online, 2020;7871–80. https://doi.org/10.18653/v1/2020.acl-main.703

Li B, Drozd A, Guo Y, et al. Scaling word2vec on big corpus. Data Sci Eng. 2019;4(2):157–75.

Liu M, Liu Y. Inductive representation learning in temporal networks via mining neighborhood and community influences. In: Diaz F, Shah C, Suel T, et al (eds) Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021). ACM, Virtual Event, Canada, 2021;2202–06. https://doi.org/10.1145/3404835.3463052

Lu J, Batra D, Parikh D, et al. ViLBERT: pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In: Wallach HM, Larochelle H, Beygelzimer A, et al (eds) Advances in Neural Information Processing Systems 32: Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 2019;13–23

Narayanan A, Chandramohan M, Chen L, et al. subgraph2vec: learning distributed representations of rooted sub-graphs from large graphs, 2016. arXiv:1606.08928

Nguyen DQ, Nguyen TD, Phung D. A self-attention network based node embedding model. In: Hutter F, Kersting K, Lijffijt J, et al (eds) Proceedings of the 2020 European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2020), Lecture Notes in Computer Science, vol 12459. Springer International Publishing, Ghent, Belgium, 2020;364–77. https://doi.org/10.1007/978-3-030-67664-3_22

Perozzi B, Al-Rfou R, Skiena S. DeepWalk: online learning of social representations. In: Macskassy SA, Perlich C, Leskovec J, et al (eds) Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2014). ACM Press, New York, NY, 2014;701–10. https://doi.org/10.1145/2623330.2623732

Perozzi B, Kulkarni V, Chen H, et al. Don’t walk, skip!: online learning of multi-scale network embeddings. In: Diesner J, Ferrari E, Xu G (eds) Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2017). ACM, Sydney, Australia, 2017;258–65. https://doi.org/10.1145/3110025.3110086

Qu M, Tang J, Han J. Curriculum learning for heterogeneous star network embedding via deep reinforcement learning. In: Chang Y, Zhai C, Liu Y, et al (eds) Proceedings of the 11th ACM International Conference on Web Search and Data Mining (WSDM 2018). ACM, Marina Del Rey, CA, USA, 2018;468–76. https://doi.org/10.1145/3159652.3159711

Ren S, He K, Girshick R, et al. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2017;39(6):1137–49. https://doi.org/10.1109/tpami.2016.2577031.

Sachan M, Xing E. Easy questions first? A case study on curriculum learning for question answering. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Association for Computational Linguistics, Berlin, Germany, 2016. https://doi.org/10.18653/v1/p16-1043

Shervashidze N, Schweitzer P, van Leeuwen EJ, et al. Weisfeiler-Lehman graph kernels. J Mach Learn Res. 2011;12:2539–61.

Sinha A, Shen Z, Song Y, et al. An overview of microsoft academic service (MAS) and applications. In: Gangemi A, Leonardi S, Panconesi A (eds) Proceedings of the 24th International Conference on World Wide Web (WWW 2015). ACM Press, Florence, Italy, 2015;243–46. https://doi.org/10.1145/2740908.2742839

Su W, Zhu X, Cao Y, et al. VL-BERT: pre-training of generic visual-linguistic representations. In: Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). OpenReview.net, Addis Ababa, Ethiopia 2020.

Tang J, Qu M, Mei Q. PTE: predictive text embedding through large-scale heterogeneous text networks. In: Cao L, Zhang C, Joachims T, et al (eds) Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2015). ACM, Sydney, NSW, Australia, 2015a;1165–74. https://doi.org/10.1145/2783258.2783307

Tang J, Qu M, Wang M, et al. LINE: large-scale information network embedding. In: Gangemi A, Leonardi S, Panconesi A (eds) Proceedings of the 24th International Conference on World Wide Web (WWW 2015). ACM, Florence, Italy, 2015b;1067–77. https://doi.org/10.1145/2736277.2741093

Wang W, Liu J, Yang Z, et al. Sustainable collaborator recommendation based on conference closure. IEEE Trans Comput Soc Syst. 2019a;6(2):311–22. https://doi.org/10.1109/tcss.2019.2898198

Wang W, Xia F, Wu J, et al. Scholar2vec: vector representation of scholars for lifetime collaborator prediction. ACM Trans Knowl Discov Data. 2021a;15(3):Article No. 40. https://doi.org/10.1145/3442199

Wang X, Ji H, Shi C, et al. Heterogeneous graph attention network. In: Proceedings of The World Wide Web Conference (WWW 2019). ACM, 2019b;2022–32

Wang X, Lu Y, Shi C, et al. Dynamic heterogeneous information network embedding with meta-path based proximity. IEEE Trans Knowl Data Eng. 2020.

Wang Y, Wang W, Liang Y, et al. CurGraph: curriculum learning for graph classification. In: Proceedings of The Web Conference 2021 (WWW 2021). 2021b;1238–48.

Xue G, Zhong M, Li J, et al. Dynamic network embedding survey. Neurocomputing. 2022;472:212–22. https://doi.org/10.1016/j.neucom.2021.03.138.

Yang C, Xiao Y, Zhang Y, et al. Heterogeneous network representation learning: a unified framework with survey and benchmark. IEEE Trans Knowl Data Eng. 2020a. https://doi.org/10.1109/tkde.2020.3045924 (To Appear)

Yang L, Xiao Z, Jiang W, et al. Dynamic heterogeneous graph embedding using hierarchical attentions. In: Proceedings of the European Conference on Information Retrieval (ECIR 2020). Springer, 2020b;425–32

Yang Z, Dai Z, Yang Y, et al. XLNet: generalized autoregressive pretraining for language understanding. In: Wallach HM, Larochelle H, Beygelzimer A, et al (eds) Advances in Neural Information Processing Systems 32: Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 2019;5754–64

Yao L, Mao C, Luo Y. KG-BERT: BERT for knowledge graph completion, 2019. arXiv:1909.03193

Yin Y, Ji LX, Zhang JP, et al. DHNE: network representation learning method for dynamic heterogeneous networks. IEEE Access. 2019;7:134782–92.

Yun S, Jeong M, Kim R, et al. Graph transformer networks. In: Wallach HM, Larochelle H, Beygelzimer A, et al (eds) Advances in Neural Information Processing Systems 32: Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 2019;11960–970

Zhang C, Swami A, Chawla NV. SHNE: representation learning for semantic-associated heterogeneous networks. In: Proceedings of the 12th ACM International Conference on Web Search and Data Mining (WSDM 2019). 2019a;690–98

Zhang D, Yin J, Zhu X, et al. Network representation learning: a survey. IEEE Trans Big Data. 2020a;6(1):3–28. https://doi.org/10.1109/tbdata.2018.2850013

Zhang J, Zhang H, Xia C, et al. Graph-Bert: only attention is needed for learning graph representations, 2020b. arXiv:2001.05140

Zhang L, Mao Z, Xu B, et al. Review and arrange: curriculum learning for natural language understanding. IEEE/ACM Trans Audio Speech Lang Process. 2021;29:3307–32. https://doi.org/10.1109/taslp.2021.3121986.

Zhang Z, Han X, Liu Z, et al. ERNIE: enhanced language representation with informative entities. In: Korhonen A, Traum DR, Màrquez L (eds) Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Association for Computational Linguistics, Florence, Italy, 2019b;1441–51. https://doi.org/10.18653/v1/p19-1139
