200 Papers and Articles from 2024 H1 to Get You Started in AI Engineering: FT + KG + RAG + Agent
This post compiles the papers, blog posts, and WeChat articles on AI engineering that I collected in the first half of 2024, organized into four categories: Fine-Tuning (FT), Knowledge Graph (KG), RAG, and Agent. It should help you quickly catch up on, or review, the field's progress over the past six months.
If you need it (that is, if you want to dive in), help yourself [doge x 3].
FT
- Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning: https://arxiv.org/abs/2303.15647
- Towards a Unified View of Parameter-Efficient Transfer Learning:https://arxiv.org/abs/2110.04366
- 【LoRA】LoRA: Low-Rank Adaptation of Large Language Models:https://arxiv.org/abs/2106.09685
- 【AdaLoRA】AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning:https://arxiv.org/abs/2303.10512
- 【QLoRA】QLoRA: Efficient Finetuning of Quantized LLMs:https://arxiv.org/abs/2305.14314
- 【Prompt Tuning】The Power of Scale for Parameter-Efficient Prompt Tuning:https://arxiv.org/abs/2104.08691
- 【Prefix Tuning】Prefix-Tuning: Optimizing Continuous Prompts for Generation:https://arxiv.org/abs/2101.00190
- 【P-Tuning v1】GPT Understands, Too:https://arxiv.org/abs/2103.10385
- 【P-Tuning v2】P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks:https://arxiv.org/abs/2110.07602
- Finetuned Language Models Are Zero-Shot Learners:https://arxiv.org/abs/2109.01652
- Making Pre-trained Language Models Better Few-shot Learners:https://arxiv.org/abs/2012.15723
- How Does In-Context Learning Help Prompt Tuning?:https://arxiv.org/abs/2302.11521
- 【BitFit】BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models:https://arxiv.org/abs/2106.10199
- 【Adapters】Parameter-Efficient Transfer Learning for NLP:https://arxiv.org/abs/1902.00751
- 【RLHF / InstructGPT】Training language models to follow instructions with human feedback: https://arxiv.org/abs/2203.02155
- Deep reinforcement learning from human preferences:https://arxiv.org/abs/1706.03741
- Fine-Tuning Language Models from Human Preferences:https://arxiv.org/abs/1909.08593
- Learning to summarize from human feedback:https://arxiv.org/abs/2009.01325
- WebGPT: Browser-assisted question-answering with human feedback:https://arxiv.org/abs/2112.09332
- 【GopherCite】Teaching language models to support answers with verified quotes:https://arxiv.org/abs/2203.11147
- 【Sparrow】Improving alignment of dialogue agents via targeted human judgements:https://arxiv.org/abs/2209.14375
- ChatGPT: Optimizing Language Models for Dialogue:https://autogpt.net/chatgpt-optimizing-language-models-for-dialogue/
- Scaling Laws for Reward Model Overoptimization:https://arxiv.org/abs/2210.10760
- Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback:https://arxiv.org/abs/2204.05862
- Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned:https://arxiv.org/abs/2209.07858
- Dynamic Planning in Open-Ended Dialogue using Reinforcement Learning:https://arxiv.org/abs/2208.02294
- RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback:https://arxiv.org/abs/2309.00267
- Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers:https://arxiv.org/abs/2404.04925
- [Paper] "Take a Step Back" to See Further: Google's Step-Back Prompting: https://mp.weixin.qq.com/s/VUyHJ_-LiHnyFr-V4dZHPA
- Everything You Want to Know About LLM Fine-Tuning: https://blog.csdn.net/yulingzheng/article/details/131938155
- Fine-Tuning and Training Large Models: Concepts and Notes on the Tuning Series of Papers: https://www.wehelpwin.com/article/4091
- LLM Fine-Tuning: Theory and Practice: https://qiankunli.github.io/2023/10/29/llm_finetune.html
- LoRA (Low-Rank Adaptation of Large Language Models): A Prompt-Tuning-Style Adaptation Method for Large Models: https://www.cnblogs.com/LittleHann/p/17318509.html
- A 50,000-Word Survey! Prompt-Tuning: A Deep Dive into a New Fine-Tuning Paradigm: https://zhuanlan.zhihu.com/p/618871247
- Hugging Face: RLHF, the Algorithm Behind ChatGPT | With 12 Must-Read RLHF Papers: https://mp.weixin.qq.com/s?__biz=MzI4MDYzNzg4Mw==&mid=2247554744&idx=3&sn=58d27263f499a939cba817522840a9cb&chksm=ebb72e6cdcc0a77a135c55c297c3c8c5ee106780c92f072bbf821ea0f8a1e143a47034e69680&scene=27
- ChatTuGraph: "Talking to Graphs" with Large Models: https://mp.weixin.qq.com/s/rZdj8TEoHZg_f4C-V4lq2A
- RLHF for Generative Large Models (Part 1): Fundamentals: https://zhuanlan.zhihu.com/p/667636425
- LLM Fine-Tuning Techniques: Supervised Fine-Tuning (SFT), LoRA, P-Tuning v2, and Freeze: https://cloud.tencent.com/developer/article/2302701
- Multilingual Large Language Models: A Survey of Resources, Taxonomy, and Frontiers: https://mp.weixin.qq.com/s/jw1l4CKoGdZuW0Sy_5jo5g
- Data Is All You Need: 15 Trillion Tokens! FineWeb Opens a New Era for Public Datasets: https://mp.weixin.qq.com/s/klU1qtItu71181F40c8ZxA
- WildChat: 1 Million User-ChatGPT Conversations Open-Sourced!: https://mp.weixin.qq.com/s/oXy8iLFWwbGHypbXIK5hrQ
- 20K Stars! An Open-Source Workhorse for LLM Fine-Tuning: https://mp.weixin.qq.com/s/Z2izsHIZTyuBQ3IRC5YgVw
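Most of the PEFT papers in this section (LoRA, AdaLoRA, QLoRA) build on the same core idea: freeze the pretrained weight and learn a small low-rank additive update. A minimal numpy sketch of that idea follows; the shapes and hyperparameters are illustrative, not taken from any specific paper.

```python
import numpy as np

# LoRA in one equation: W' = W + (alpha / r) * B @ A, where only the
# low-rank factors A (r x d_in) and B (d_out x r) are trained, r << d.
d_out, d_in, r, alpha = 512, 512, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, initialized small
B = np.zeros((d_out, r))                   # trainable, initialized to zero

def lora_forward(x):
    # Base path plus scaled low-rank path; since B = 0 at init,
    # the adapted model starts out identical to the base model.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identity at initialization

full_params = d_out * d_in          # what full fine-tuning would train
lora_params = r * (d_out + d_in)    # what LoRA trains
print(f"trainable params: {lora_params} vs full fine-tune: {full_params}")
```

With rank 8 on a 512x512 layer, the update trains roughly 3% of the parameters a full fine-tune would touch, which is why these methods scale to large models.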
KG
- Zhejiang University & Ant Group | MyGO: A Framework for Improving Multimodal Knowledge Graph (MMKG) Completeness!: https://mp.weixin.qq.com/s/CVswirXSMI3satqqmy6Hmw
- Latest 2024 Survey | When Knowledge Graphs Meet Multimodal Learning: https://mp.weixin.qq.com/s/XrAossHKtsfEEEiQJqbjrQ
- TKDE | Triple Set Prediction: Knowledge Graph Completion from Scratch: https://mp.weixin.qq.com/s/xUF8rkqG4n1zkHTXgiQmIw
- EDC: Automating Knowledge Graph Construction, a New LLM-Based KG Construction Framework: https://mp.weixin.qq.com/s/vIUu0qj5jkgL7jf6hfBkgQ
- Graph Maker: Easily Turn Text into Knowledge Graphs with Open-Source LLMs and Discover New Knowledge!: https://mp.weixin.qq.com/s/Sn8m5gZyfpmFCCPXHPwWhQ
- A Plain-Language Explanation of Knowledge Graphs: https://www.cnblogs.com/huangyc/p/10043749.html
- Knowledge Graph Practice in the Age of Digital Transformation | Master Knowledge Graphs in 5 Steps: https://mp.weixin.qq.com/s/nb6GsrSwHWuQNGZoBmD1-w
- OpenRAG Notion:https://openrag.notion.site/Open-RAG-c41b2a4dcdea4527a7c1cd998e763595
- Chapter 11: Knowledge Graphs and Their Applications: https://mp.weixin.qq.com/s/fFJQsNDbsCZ3s4P7ntxhPA
- Opportunities and Challenges in Combining Large Models with Knowledge Graphs: Building High-Quality KGs with GPT: https://mp.weixin.qq.com/s/WRi0nLXJ1-Dwh4I6LLxs6w
- WeRead2Notion-Pro: A Powerful Book Management System: https://mp.weixin.qq.com/s/Wif_pyb5L3WlLLlj7iIPYw
- Large Models and Knowledge Graphs as Twin Engines: A Roadmap for Human-AI Collaborative SEO Content Generation, the WordLift Practice: https://mp.weixin.qq.com/s/FvxS8oSD28G7L1Tt437Hig
- OpenSPG v0.0.3 Released: New Unified LLM Knowledge Extraction and Graph Visualization: https://mp.weixin.qq.com/s/27kWp7Dycud7YMd5S2enlQ
- Where Large Models Fall Short, Knowledge Graphs Shine: https://mp.weixin.qq.com/s/A2EKHtBtXBqb0ADY34FjbA
- How to Convert Any Text Corpus into a Knowledge Graph: https://mp.weixin.qq.com/s/pxavUtB2i0SjQazpxf_jwA
- Docs2KG: Automatic Knowledge Graph Construction with Large Models, Lowering the Barrier for Enterprise KGs: https://mp.weixin.qq.com/s/tj1gyh1ljLdYGh2sOXwgwQ
- A Conversation with Professor Liu Ming: First Explorations in Multimodal Knowledge Graph Construction: https://mp.weixin.qq.com/s/Ra9y7TO58BPmAwCF2-JH3g
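Several of the articles above describe knowledge graphs as collections of (subject, predicate, object) triples. As a rough illustration of that data model, here is a toy in-memory triple store with wildcard pattern matching; the entity and relation names are made up for the example.

```python
# A toy triple store: a knowledge graph at its simplest is a set of
# (subject, predicate, object) facts that can be pattern-matched,
# much like a single SPARQL triple pattern.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard in any position
        return [(ts, tp, to) for (ts, tp, to) in self.triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

kg = TripleStore()
kg.add("LoRA", "is_a", "PEFT_method")
kg.add("QLoRA", "is_a", "PEFT_method")
kg.add("QLoRA", "extends", "LoRA")

# "Which methods are PEFT methods?"
methods = sorted(s for s, _, _ in kg.query(p="is_a", o="PEFT_method"))
print(methods)  # ['LoRA', 'QLoRA']
```

The LLM-based construction frameworks listed above (EDC, Docs2KG, Graph Maker) essentially automate the `add` step: prompting a model to extract triples like these from free text.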
RAG
- WWW2024 | Welcome to the Graph Foundation Models Tutorial: https://mp.weixin.qq.com/s/hx3swpM827IllHO8-EufLQ
- GFMPapers:https://github.com/BUPT-GAMMA/GFMPapers?tab=readme-ov-file
- Graphs Meet Large Language Models: A Survey of Progress and Future Directions: https://mp.weixin.qq.com/s/yzqFSVm3j-UsT3niJi8LLw
- Survey || How Well Do Large Models Understand Graph-Structured Data?: https://mp.weixin.qq.com/s/-d351EXOf96Ttury-CHGHg
- Awesome-LLMs-in-Graph-tasks:https://github.com/yhLeeee/Awesome-LLMs-in-Graph-tasks
- RAGCache: Efficient Knowledge Caching for Retrieval-Augmented Generation:https://arxiv.org/abs/2404.12457
- RAG + RAU: A Comprehensive, In-Depth Survey of Retrieval-Augmented Language Models (RALMs): https://mp.weixin.qq.com/s/YUgJ8uOrX2Zes1-mG-iteA
- From RAG to RALM: How Retrieval-Augmented Language Models Work and Their Latest Progress: https://mp.weixin.qq.com/s/aMjDVihuD3JrqDIl92paeg
- RAG and RAU: A Survey of Retrieval-Augmented Language Models in NLP: https://mp.weixin.qq.com/s/BHzcwydvSkTIvIQ4iLGzlA
- Alibaba's New RAG Framework R4: Reinforced Retriever-Reranker-Responder, Beating Self-RAG and Others on 5 Knowledge-Intensive Tasks!: https://mp.weixin.qq.com/s/Lsom93jtIr4Pv7DjpQuiDQ
- A Survey of Large-Scale Distributed Graph Algorithms: https://mp.weixin.qq.com/s/urlIwhyZc4hHGA4hYllHiw
- From Zero to Mastery in One Read: RAG, Retrieval Augmentation Past and Present: https://mp.weixin.qq.com/s/jlYrPRRw8kAeBLTEVFoBTg
- [LLM-RAG] A Survey of Retrieval-Augmented Generation for Content Generation: https://mp.weixin.qq.com/s/2b5_uuuKwAQIjRWi5x9FXQ
- RAT: A New Way to Break Through AI Hallucinations: https://mp.weixin.qq.com/s/TqmY4ouDuloE2v-iSJB_-Q
- RAT — Retrieval Augmented Thoughts:https://cobusgreyling.medium.com/rat-retrieval-augmented-thoughts-c7eb0cf5547c
- RAG+ Chain of Thought ⇒ Retrieval Augmented Thoughts (RAT):https://medium.com/@bijit211987/rag-chain-of-thought-retrieval-augmented-thoughts-rat-3d3489517bf0
- RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation:https://arxiv.org/pdf/2403.05313.pdf
- RAE: A Retrieval-Augmented Knowledge-Editing Framework Built for Multi-Hop QA with Large Models: https://mp.weixin.qq.com/s/R0N8yexAlXetFyCS-W2dvg
- Explainable Generative AI (GenXAI): Survey, Conceptualization, and Research Agenda: https://mp.weixin.qq.com/s/LfhWkp5qZkjsMxAxHpxsRA
- Knowledge-Graph-Enhanced RAG: Boosting LLMs with External Knowledge: https://mp.weixin.qq.com/s/uDOMTGTxSpsZQg2Fa3jR3w 【index is not only vector (graph hybrid)】
- A Survey of Retrieval-Based Text Generation in Large Language Models: https://mp.weixin.qq.com/s/slHksXsbqTDzY3XwGF4nrA
- AI's External "Memory": A Survey of Retrieval-Augmented Generation (RAG): https://mp.weixin.qq.com/s/FQzp3eUWH_ysTvQWNox4QA
- Just Out: The New Book "Large Language Models" from Renmin University Press, a 391-Page PDF: https://mp.weixin.qq.com/s/Jn-k95-IAa0N-utA4r3_Sg
- RAG Meets LLMs: Towards Retrieval-Augmented Large Language Models: https://mp.weixin.qq.com/s/EI5OI3WdJ09ecjyemWxClg
- From Local to Global: A Query-Focused Graph-Structured RAG Summarization Method: https://mp.weixin.qq.com/s/hiuhqo5ZI8cL125URX5yNQ
- Multimodal Retrieval-Augmented Generation (MM-RAG): https://mp.weixin.qq.com/s/wGar-qBfvjdi5juO1c0YxQ
- CRAG: A Corrective RAG Model for Better Retrieval and Generation Quality: https://mp.weixin.qq.com/s/HTN66ca6OTF_2YcW0Mmsbw
- Retrieval-Augmented Generation (RAG) Explained Clearly, in a Very Comfortable Format: https://mp.weixin.qq.com/s/yxLRUhKxqbm5VoCCXXmnOA
- A Comprehensive Survey of RAG System Evaluation, Plus the RGAR Analysis Framework: Retrieval, Generation, Additional Requirements!: https://mp.weixin.qq.com/s/z-zvn8WJHQqGNBD0Ic3uHw
- Ant Group's "Big Graph Model" Research Paves a New Road for Graph Intelligence Toward AGI: https://mp.weixin.qq.com/s/0eHTqqmz1OXdIhNl2Qky9A
- Implementing a Transformer from Scratch with PyTorch: https://mp.weixin.qq.com/s/XFniIyQcrxambld5KmXr6Q
- The Illustrated Transformer: How Attention Is Computed: https://mp.weixin.qq.com/s/pURSi89KAiJIJAYZ-kT-iQ
- Graph RAG: Retrieval Augmentation Combining Knowledge Graphs with LLMs: https://mp.weixin.qq.com/s/VJRG0MUaEGR6iM_xFRroyg
- hugegraph-ai Officially Released! A Deep Exploration of HugeGraph + LLM Scenarios: https://mp.weixin.qq.com/s/QnFo1IJrGqY5SObgBh245w
- What New Sparks Will AI and Database Technology Strike in the Era of Large Models?: https://www.infoq.cn/article/whmTRumNeD6js22SjfSv?utm_term=wxgroup
- RAG in Practice with LLMs (Part 32) | Evaluating RAG with RAGAs and LlamaIndex: https://mp.weixin.qq.com/s/y61bX6iAwKdhpcupi1UBKQ
- From Traditional RAG to GraphRAG: When Large Models Meet Knowledge Graphs: https://mp.weixin.qq.com/s/9FNPZqVuagFRPic1PeJ0tQ?poc_token=HBWYH2ajbHvxLsfsR4C7-2gVvDUebyRpHdSuUYQh
- RAGFlow: A Next-Generation RAG Engine Built on OCR and Document Parsing: https://mp.weixin.qq.com/s/oOr1tQqgBxV6W9XcoudcSg
- RAG Performance Evaluation: From Design to Practice: https://mp.weixin.qq.com/s/hqBkz7OCH03GzaKkBl_N9Q
- An OpenAI Co-Founder Explains Large Language Models in Plain Terms: https://mp.weixin.qq.com/s/VUxmkXlJxiYCu9YB1A_WLw
- A Human-AI Knowledge Copilot: On KG-RAG and Document RAG: https://mp.weixin.qq.com/s/waKnyIsmV0MKiUw0knuzsw
- LLM Visualization, Not to Be Missed!: https://mp.weixin.qq.com/s/-qG-RIaBC-Ddy2mOGbecew
- Retrieval-Augmented Fine-Tuning (RAFT): https://mp.weixin.qq.com/s/FBCvlvKWyXO5WmqDS_WCVA
- The "Seven Deadly Sins" of RAG: A Summary of Common RAG System Problems: https://mp.weixin.qq.com/s/A-BYTnYFafMVxv0P7xi6mA
- The LangChain Team's Latest Technical Report: Is RAG Really Dead?: https://mp.weixin.qq.com/s/nILm01bsR0tPfU_9dGVu7A
- Microsoft's Cross-Team GraphRAG Project: Significantly Better Comprehensiveness and Diversity Than Vanilla LLM RAG: https://mp.weixin.qq.com/s/c--xUR7Q_O7yAMsaAJRiKQ
- RAG 2.0: Finally Doing RAG Right!: https://mp.weixin.qq.com/s/11gdAT-oDHGtx_TEP5kdCA
- Project GraphRAG:https://www.microsoft.com/en-us/research/project/graphrag/overview/
- Graph-Based Metadata Filtering to Improve Vector Search Performance in RAG Applications: https://mp.weixin.qq.com/s/cyuAWdFH--gWvStFmNNnzw
- Neo4j Partners with Microsoft on GraphRAG to Strengthen GenAI Capabilities: https://mp.weixin.qq.com/s/Cp3M-y_8cTjvU3SkaU6MMg?poc_token=HIXURWajK80qRkgIC5AVH-YfO0y2J3ZM0ecOBePQ
- 7K Stars! Text2SQL Not Enough? Try RAG2SQL, an Open-Source Tool for Querying Data in Natural Language: https://mp.weixin.qq.com/s/sBPHDFGiWQW0Wxrp8gR4ZA
- Knowledge Graphs with langchain + neo4j + gpt-3.5-turbo-0125: https://mp.weixin.qq.com/s/aVRD4j_79zhm_FqVq5KDZg
- Beyond Text Retrieval: How Graph RAG Is Transforming LLM Content Generation: https://mp.weixin.qq.com/s/n6MO_loG52bcP1XvMubJag
- RAG 2.0: https://mp.weixin.qq.com/s/5_-Z0r0DfAfiJ4JuI1225g
- Startup Notes: Three Months of Building an LLM RAG System, Lessons and Reflections: https://mp.weixin.qq.com/s/Np-UUBtAGzZSE-hi5jfHrQ
- Microsoft's RAG in Practice: From Local to Global, a Graph RAG Approach to Query-Focused Summarization: https://mp.weixin.qq.com/s/bALlT4fx2aZGhlW-wgNxnw
- Query-Intent-Driven Router Logic in RAG: https://mp.weixin.qq.com/s/H_pJRGV_PYwfFOvgZnNe-g
- The Large Language Model Landscape: https://mp.weixin.qq.com/s/hMBsuC09X73d0ZttD9_hBA
- A Long Read: The RAG Encyclopedia: https://mp.weixin.qq.com/s/c-ZJDgpjl8ghl4KN5HPOXA
- Three Ways to Use GPT-4 Completely Free, All Sorted Out for You!: https://mp.weixin.qq.com/s/gdNL5OTEcLjgysDoaBdLBA
- Combining Text Embeddings and Knowledge (Graph) Embeddings in RAG Systems: https://mp.weixin.qq.com/s/NNJWM_F26Uy4TCdx0niowQ
- Forget RAG, the Future Is RAG-Fusion: https://mp.weixin.qq.com/s/H-et40jJo8cbJT87xDt6QA
- From RAG to GraphRAG: The Secrets of Real-World Deployment: https://mp.weixin.qq.com/s/e8qbRUg2-zFzBqQ7iTvuew
- Five Query-Rewriting Strategies for Advanced RAG Retrieval: https://mp.weixin.qq.com/s/55lJxm9rCZO8T8vD-VtGsg
- FlashRAG: Possibly the Most Complete, Fastest Open-Source Framework for Building RAG: https://mp.weixin.qq.com/s/XRfSqYwvuGB6sDJzRm0QVA
- Building RAG Applications with the FalkorDB Knowledge Graph: https://mp.weixin.qq.com/s/frMD-6tyG_2z8cTEVcM4pw
- GraphRAG: Design Patterns, Challenges, Recommendations:https://gradientflow.com/graphrag-design-patterns-challenges-recommendations/
- GraphRAG: Design Patterns, Challenges, and a Deployment Guide: https://mp.weixin.qq.com/s/jiuJsMCuglk0inz6xwquKw
- [Long Read] A Full Breakdown of the GraphRAG Stack, with Examples: https://mp.weixin.qq.com/s/wALg2ZgK314fDqwTfCnuuw
- GraphRAG: Unlocking LLM discovery on narrative private data:https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
- Full Fine-Tuning, PEFT, Prompt Engineering, and RAG: Which One Is Right for You?:https://deci.ai/blog/fine-tuning-peft-prompt-engineering-and-rag-which-one-is-right-for-you/
- Surprise: It Took Only 18 People to Build a World-Renowned Database: https://mp.weixin.qq.com/s/WYPNWw_c1EU9f68yyi7D4A
- Vector | Graph: A Design Walkthrough of Ant Group's First Open-Source Graph RAG Framework: https://mp.weixin.qq.com/s/WILvYFiKugroy9Q_FmGriA
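At its core, the RAG pipeline these articles discuss is: embed the query, retrieve the most similar documents, and insert them into the prompt. A minimal sketch follows, using bag-of-words cosine similarity in place of a real embedding model and vector index; the corpus and question are invented examples.

```python
import math
from collections import Counter

# Toy corpus standing in for a document store.
docs = [
    "LoRA adapts large language models with low-rank update matrices",
    "Knowledge graphs store facts as subject predicate object triples",
    "Retrieval augmented generation grounds LLM answers in retrieved text",
]

def vec(text):
    # Bag-of-words "embedding": token -> count
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query; return the top k
    q = vec(query)
    return sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

question = "how does retrieval augmented generation help LLM answers"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Swapping the bag-of-words vectors for dense embeddings and the sorted list for a vector database gives the standard production setup; the GraphRAG variants above additionally retrieve over a knowledge graph instead of (or alongside) the flat document list.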
Agent
- Agents Is All You Need: Strength in Numbers for Better LLM Results [Paper Express]: https://mp.weixin.qq.com/s/M8CsmdoBJzFhxJjDofLikw
- A Survey on Large Language Model based Autonomous Agents:https://arxiv.org/abs/2308.11432
- More Agents Is All You Need:https://arxiv.org/pdf/2402.05120.pdf
- [Paper] Alibaba's AgentScope: A Flexible and Powerful Agent Framework: https://mp.weixin.qq.com/s/Yu6FmgHMe3jg0LTAH9GceQ
- Large Language Model (LLM) Paper Survey, Part 3: https://zhuanlan.zhihu.com/p/668000465
- A Long-Form Survey: Progress and Challenges of LLM Multi-Agents: https://mp.weixin.qq.com/s/KU7ghWtVollZ4AZwDLjO8w
- Where Large Models Will Push in 2024: LLM Agents, with 6 Recent LLM Agent Research Results: https://mp.weixin.qq.com/s/AQTfg4vr3ZMIMsf4v-zLeQ
- A 2024 Survey of LLM Agent Tuning: Tool Augmentation, Long Sequences, Multi-Agent, ...: https://mp.weixin.qq.com/s/xewI7KUibsp77Eye5CQCpw
- Paper Guide | The Development and Challenges of Large Language Model Based Multi-Agents: https://mp.weixin.qq.com/s/yizk_hf0bPfESu5TkYTN-g
- Multi-Agent Thinking Markedly Improves Small Models' Tool-Calling Ability: https://mp.weixin.qq.com/s/0qYu8kuPFh-5EuWLq5BLGw
- Memory Sharing for LLM Multi-Agents: Enhancing In-Context Learning with Real-Time Memory Storage and Retrieval: https://mp.weixin.qq.com/s/n7WJfm8JB3MCQr5CYX9t2g
- Communicative Agents for Software Development:https://arxiv.org/abs/2307.07924
- AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation:https://arxiv.org/abs/2308.08155
- MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework:https://arxiv.org/abs/2308.00352
- A Survey of LLM Agent Planning: Taxonomy, Task Decomposition, Selection, Reflection, and Memory Augmentation: https://mp.weixin.qq.com/s/1POXDVJDv3ob1HqpKjb3Mg
- Understanding the planning of LLM agents: A survey:https://arxiv.org/abs/2402.02716
- AgentKit: Building an LLM Agent's Thought Process from Lego-Style Nodes to Solve Complex Tasks: https://mp.weixin.qq.com/s/SPKtKCUYAnEL6qxkxlAHXg
- ResearchAgent: Auto-Generating Paper Ideas with Agents, So You Never Run Out of Research Directions: https://mp.weixin.qq.com/s/imKZQDU9sB38JO957uV_hw
- Agent Series: An Analysis of the LLM Compiler Framework: https://mp.weixin.qq.com/s/-VJolvwOSnGKJVte0v2Irg
- Paper Guide | LLM-Based Multi-Agent Frameworks: https://mp.weixin.qq.com/s/C2g_FwBNx18TKedOXQoZ7g
- "A Survey of LLM-Driven Agents": A 45,000-Word Deep Dive into the Latest Agent Survey from Fudan NLP and miHoYo: https://mp.weixin.qq.com/s/B0GvDnFHcFdvmWxXTw87GA
- Plan-and-Solve Prompting: A Paper Walkthrough: https://mp.weixin.qq.com/s/NAXX128DDiRhLyZwWtShAA
- A Summary of LLM Multi-Agent Research at ICLR 2024: https://mp.weixin.qq.com/s/ROTFmXMarvKmbop4wT8gDw
- ERAGent: An Enhanced RAG Agent Integrating 5 Advanced Components and Techniques, with Clear Gains on 3 Types of QA Tasks: https://mp.weixin.qq.com/s/7O12bWxJYVRqLPcHQkpYoA
- Agent Planning with World Knowledge Model:https://mp.weixin.qq.com/s/kU8UHwnUrRNBPYW7QBnLtA
- A Survey of Chain-of-Thought Reasoning in Large Models: Progress, Frontiers, and the Future: https://mp.weixin.qq.com/s/X2lcVLFFlFgQCzacret4Vg
- ACL 2024 | SymbCoT: An Open-Source Framework That Gives Pure LLMs Human-Like Symbolic Logical Reasoning: https://mp.weixin.qq.com/s/dV98nDU13ewa_F8uGxrlyw
- Microsoft Research's MRP: Meta-Reasoning Prompting That Lets Large Models Dynamically Pick the Best Problem-Solving Strategy, More Effective Than CoT or ToT: https://mp.weixin.qq.com/s/Q1NO5a2nZkztLWNI19HCbg
- China's Most Noteworthy AIGC Companies and Products Announced! The First Application Landscape Map Released: https://mp.weixin.qq.com/s/-jvjxWxssQvfgToxfJbzvA
- 中国AIGC应用全景报告_量子位智库.pdf ("China AIGC Application Landscape Report", QbitAI Think Tank)
- 5 Top Open-Source Agent Frameworks You Must Know!: https://mp.weixin.qq.com/s/zf_BSlcmTgNKLbVu8Tq8Qg
- Should AI Agents Be More Fun or More Useful?: https://mp.weixin.qq.com/s/3oO0d5onGvNpgU_h0x_-gA
- Andrew Ng: AI Agent Workflows Will Make Huge Progress This Year, an Important Trend: https://mp.weixin.qq.com/s/mqMpW4dF367G8zH-PiJGqA
- 10 Architecture Patterns for Large Model Applications: https://mp.weixin.qq.com/s/-ECPED501nOqMUPBWf3sOw
- Synthetic data generation:https://python.langchain.com/docs/use_cases/data_generation/
- Zhu Xiaohu Tells a Realist Chinese AIGC Story: https://mp.weixin.qq.com/s/IXjlplabhMcEqAVPZyq9kg
- A Product Manager's Study Guide: Nine Agent Design Patterns (Diagrams + Code): https://mp.weixin.qq.com/s/9CRzuNgnwyq3-tkqnTA6TA
- From API to Agent: A Long-Form Look at LangChain's Engineering Design: https://mp.weixin.qq.com/s/9HtxRuyzavovC9NytzCDIg
- AutoPrompt: An AI Tool That Generates High-Quality Prompts: https://mp.weixin.qq.com/s/VI125TAJ_zm3YGOGlIGyHA
- The Philosophical Significance of AI Agents: https://mp.weixin.qq.com/s/qVkDZBdDgI1RrUjp8XFzlg
- Salesforce AI Research's Zhiwei Liu: Thinking Like an Agent | Agent Insights: https://mp.weixin.qq.com/s/liArrE8wlhMfHImwYAatdw
- In-Depth Report: AutoDev, an AI-Driven Development Framework for Complex Tasks: https://mp.weixin.qq.com/s/CQZrYS4lQWcThLyX2-xSlw
- A Unified Framework for LLM-Based AI Agent Architecture Design: https://mp.weixin.qq.com/s/aOsDviPOao90uEBw3YkABw
- Andrew Ng's Letters: Agentic Design Pattern 5, Multi-Agent Collaboration: https://mp.weixin.qq.com/s/EKUh0iU1t9FKhXWfeATeKQ
- Agent Survey: Comparing 19 Kinds of Agent Frameworks: https://mp.weixin.qq.com/s/9Wlut8KMD2pogDxcbGlo_Q
- A Long-Form Summary of AI Agents: Planning, Tools, Execution, Memory: https://mp.weixin.qq.com/s/muQtb6f5Zjlwpygl4qLALw
- Trying Flowith: Human-Computer Interaction's Shift from Chat Dialogue to Canvas-Style Knowledge Management: https://mp.weixin.qq.com/s/ZL_hIRItkF9QwfXjwTZnEg
- A New AI Agent Showdown: LangGraph vs. AutoGen: https://mp.weixin.qq.com/s/rh4z76U404YxjzhQwJodvg
- Why Do Workflows Matter for Agent Systems? (Shared in the WaytoAGI Community): https://mp.weixin.qq.com/s/kobWUoZvqwnweo2xAwxN7w
- Do You Know What AI Agents Already Exist?: https://mp.weixin.qq.com/s/YZLH4i-IeJUPltHdaTHuag
- Stunning Results! Quickly Building an LLM OS with GPT-4o: https://mp.weixin.qq.com/s/Xyaq8wdCZuQhKVORIw05_g
- MemGPT: 9.2K Stars! Build LLM Agents with Long-Term Memory and Custom Tools, Fully Open-Source!: https://mp.weixin.qq.com/s/egRyfHaYbzTV0_CIXD2KPw
- A Long Read! An Overview of AI Agent Architectures: Reasoning, Planning, and Tool Calling: https://mp.weixin.qq.com/s/cpfaJPo9ml0k90mhrsaRTA
- Agentic Workflow: How AI Reshaped My Workflow: https://mp.weixin.qq.com/s/XzEUpUbbWHazAq-OD4EbMA
- (Pure Substance) AI Agents: 7 Cognitive Frameworks Explained, with Code: https://mp.weixin.qq.com/s/LvClKo_T3h_E8RM_O3ikzA
- Agent Developers Come Clean: Pushing Forward Through a Tight Spot: https://mp.weixin.qq.com/s/kCXZN7Wli-RCvZXRb6mF7g
- Coze: A China-Based GPTs Alternative, a Usage Guide (Beginner Edition): https://mp.weixin.qq.com/s/-CgVCDAC1lszYewrIHEiHQ
- 11 Top Open-Source Agent Frameworks: The Future of Autonomous AI (2024 Update): https://mp.weixin.qq.com/s/c51C2mfUb9LvZIRMyYZGWA
- [Open Source in AI] 4.2K Stars! Reor: AI That Automatically Discovers Connections Between Your Notes: https://mp.weixin.qq.com/s/44QPN2zME2r8yvB5zBdxZA
- How to Design and Implement a Task-Oriented Dialogue Agent with LLMs: https://www.thoughtworks.com/zh-cn/insights/blog/machine-learning-and-ai/how-to-design-task-based-dialogue-Agent-with-LLM
- A 10,000-Word Interview with Geoffrey Hinton, the Father of Deep Learning: No Way Back in the China-US AI Race: https://mp.weixin.qq.com/s/W4x4WuorcGNbSWPtpEbwWg
- Essentials | IBM: A Systematic Review of Agent Architectures and Frameworks: https://mp.weixin.qq.com/s/wtlEGcESftf14xLqUFnzhw
- The LlamaIndex Team's Technical Report: "Agents Are the Endgame of RAG": https://mp.weixin.qq.com/s/wuyMN7CLAT9HGYlmjLWUtA
- A Personal Survival (and Slacking-Off) Guide for the AI Era, Beta: https://gamma.app/docs/AIGC-Dev-9y7n4vslcp2bol2?mode=doc
- Andrew Ng: From Agent to Agentic, the Next Generation of AI Beyond Foundation Models: https://mp.weixin.qq.com/s/Gl74YZn4ylxSHAkwUFB-FA
- Memory Sharing Between LLM-Based Agents: https://mp.weixin.qq.com/s/6JOYPPqlCySgIEqWw2c4nQ
- A Deep Long Read on Agents | Architecture, Exploration, and Applications of Agents: https://mp.weixin.qq.com/s/2QHKlyAffgDDWZAwHZVEoA
- [In-Depth] Andrew Ng: The 4 Most Common AI Agent Design Patterns: https://mp.weixin.qq.com/s/eDPZJdBjul5jdJedgIoZ4A
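Most of the agent frameworks and design patterns above revolve around one loop: think, call a tool, observe the result, and answer (the ReAct pattern). A minimal sketch follows, with a scripted stand-in for the LLM and a single calculator tool; the prompt format and tool API are illustrative assumptions, not any specific framework's interface.

```python
import re

def calculator(expression):
    # Deliberately restricted eval: digits, + - * / ( ) and spaces only
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))

TOOLS = {"calculator": calculator}

# Scripted "model" turns: first think and act, then answer
# once the tool observation has been appended to the transcript.
script = iter([
    "Thought: I need arithmetic.\nAction: calculator[(17 + 3) * 2]",
    "Final Answer: 40",
])

def fake_llm(prompt):
    return next(script)  # a real agent would call an LLM API here

def run_agent(question, max_steps=5):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(transcript)
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        match = re.search(r"Action: (\w+)\[(.+)\]", reply)
        if match:
            tool, arg = match.groups()
            observation = TOOLS[tool](arg)
            transcript += f"\n{reply}\nObservation: {observation}"
    return "gave up"

answer = run_agent("What is (17 + 3) * 2?")
print(answer)  # 40
```

Planning-heavy variants (Plan-and-Solve, LLM Compiler) and the multi-agent systems surveyed above elaborate this same loop: more tools, multiple cooperating transcripts, and memory that persists across runs.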
Author: Florian
This article is jointly copyrighted by the author and Cnblogs. Reposting is welcome, but without the author's consent you must keep this statement and give a clear link to the original on the article page; otherwise the author reserves the right to pursue legal liability.
If this article helped you, your follows and recommendations are what motivate me to keep sharing!