Reading Log

December 2019

Papers on NLP (BERT, ERNIE, Transformer)

  1. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  2. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).
  3. Sun, Y., Wang, S., Li, Y., Feng, S., Chen, X., Zhang, H., ... & Wu, H. (2019). ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.
  4. Sun, Y., Wang, S., Li, Y., Feng, S., Tian, H., Wu, H., & Wang, H. (2019). ERNIE 2.0: A continual pre-training framework for language understanding. arXiv preprint arXiv:1907.12412.

A survey of optimizers in deep learning

  1. Ruder, S. (2016). An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.
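
Ruder's survey walks through the update rules behind common optimizers (SGD with momentum, RMSprop, Adam, and others). As a minimal sketch of one of them, here is the Adam update applied to a toy one-dimensional quadratic; the function and hyperparameter values are illustrative, not taken from the survey:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update step (Kingma & Ba), as summarized in Ruder's survey."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 5001):
    grad = 2 * (theta - 3)
    theta, m, v = adam_step(theta, grad, m, v, t)
print(f"theta ≈ {theta:.2f}")  # theta approaches the minimizer 3.0
```

The per-coordinate scaling by the second-moment estimate is what distinguishes Adam from plain SGD; the survey compares this family of adaptive methods in detail.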

November 2019

Papers on recommender systems

  1. Wang, X., Zhang, R., Sun, Y., & Qi, J. (2019). Doubly robust joint learning for recommendation on data missing not at random. In Proceedings of the 36th International Conference on Machine Learning (ICML).
  2. Wang, X., Zhang, R., Sun, Y., & Qi, J. (2018). KDGAN: Knowledge distillation with generative adversarial networks. In 32nd Conference on Neural Information Processing Systems (NIPS).
  3. Sun, Y., Yuan, N. J., Xie, X., McDonald, K., & Zhang, R. (2017). Collaborative intent prediction with real-time contextual data. ACM Transactions on Information Systems (TOIS), 35(4), 30.

Books on Python programming

1.《编写高质量代码 改善Python程序的91个建议》(Writing High-Quality Code: 91 Suggestions for Improving Your Python Programs; unfinished)

September 2019

Papers on NLP and machine learning

  1. Cook, P., & Stevenson, S. (2007). Automagically inferring the source words of lexical blends. In Proceedings of the Tenth Conference of the Pacific Association for Computational Linguistics (PACLING-2007) (pp. 289-297).
  2. Giyatmi, G., Wijayava, R., & Arumi, S. (2017). Blending Words Found In Social Media. JURNAL ARBITRER, 4(2), 65-75.
  3. Cook, P. (2012). Using social media to find English lexical blends. In Proc. of EURALEX (pp. 846-854).
  4. Chi, L., Lim, K. H., Alam, N., & Butler, C. J. (2016, December). Geolocation prediction in Twitter using location indicative words and textual features. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT) (pp. 227-234).
  5. Eisenstein, J., O'Connor, B., Smith, N. A., & Xing, E. P. (2010, October). A latent variable model for geographic lexical variation. In Proceedings of the 2010 conference on empirical methods in natural language processing (pp. 1277-1287). Association for Computational Linguistics.
  6. Jing, L. P., Huang, H. K., & Shi, H. B. (2002, November). Improved feature selection approach TFIDF in text mining. In Proceedings. International Conference on Machine Learning and Cybernetics (Vol. 2, pp. 944-946). IEEE.
  7. Rahimi, A., Cohn, T., & Baldwin, T. (2018). Semi-supervised user geolocation via graph convolutional networks. arXiv preprint arXiv:1804.08049.
  8. Cheng, Z., Caverlee, J., & Lee, K. (2010, October). You are where you tweet: a content-based approach to geo-locating twitter users. In Proceedings of the 19th ACM international conference on Information and knowledge management (pp. 759-768). ACM.

January–March 2020

Leisure reading

  1. 《至味在人间》, good for satisfying food cravings

  2. 《你当像鸟飞往你的山》 (Educated), a memoir; not yet finished

  3. 《挪威的森林》 (Norwegian Wood), a reread; it felt different this time

NLP in the brain

  1. Toneva, M., & Wehbe, L. (2019). Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). In Advances in Neural Information Processing Systems (pp. 14928-14938).

  2. Wehbe, L., Murphy, B., Talukdar, P., Fyshe, A., Ramdas, A., & Mitchell, T. (2014). Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. PLoS ONE, 9(11).

  3. Hale, J., Dyer, C., Kuncoro, A., & Brennan, J. R. (2018). Finding syntax in human encephalography with beam search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL).

April–May 2020

Leisure reading

  1. 《坏小孩》, even better than its TV adaptation 《隐秘的角落》

  2. 《东大爸爸写给我的日本史》

NLP with commonsense

  1. Bisk, Y., Zellers, R., Bras, R. L., Gao, J., & Choi, Y. (2019). PIQA: Reasoning about Physical Commonsense in Natural Language. arXiv preprint arXiv:1911.11641.

  2. Forbes, M., Holtzman, A., & Choi, Y. (2019). Do neural language representations learn physical commonsense? arXiv preprint arXiv:1908.02899.

  3. Malaviya, C., Bhagavatula, C., Bosselut, A., & Choi, Y. (2020). Commonsense knowledge base completion with structural and semantic context. In Proceedings of the AAAI Conference on Artificial Intelligence.

  4. Sap, M., Horvitz, E., Choi, Y., Smith, N. A., & Pennebaker, J. W. (2020). Recollection versus imagination: Exploring human memory and cognition via neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).

posted @ 2019-11-19 21:34  MrDoghead