


The Synergy of AI and Web3

Sahara Research leads innovation in fair access to, and ownership of, global knowledge capital.
Sean Ren
AI development is at an inflection point. On many fronts, AI has begun to surpass human performance and to automate increasingly large portions of our lives. In this new era, it is critically important to protect user privacy, guarantee the provenance of data and models, and establish a decentralized, trustless network of AI and humans.
Sahara Research
Decentralized Learning
In the ever-progressing field of Large Language Models (LLMs), attention has shifted toward the critical concerns of data privacy and over-centralized model training. As these models grow more intricate and widely used, handling sensitive information securely and distributing the learning process become paramount.
Dataless Knowledge Fusion by Merging Weights of Language Models
FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks
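The aggregation step at the heart of federated learning setups like FedNLP can be sketched as federated averaging (FedAvg): each client trains locally, and only weights are merged, never raw data. The function and toy weights below are illustrative, not Sahara's implementation.

```python
def fed_avg(client_weights, client_sizes):
    """Average client model weights, weighted by local dataset size.

    client_weights: list of dicts mapping parameter name -> list of floats
    client_sizes:   number of training samples held by each client
    """
    total = sum(client_sizes)
    merged = {}
    for name in client_weights[0]:
        merged[name] = [
            sum(w[name][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0][name]))
        ]
    return merged

# Two clients with 100 and 300 local samples: the larger client
# contributes proportionally more to the merged parameters.
clients = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
print(fed_avg(clients, [100, 300])["w"])  # [2.5, 3.5]
```

Because only parameter averages cross the network, each client's sensitive text never leaves its device, which is the privacy property the paragraph above describes.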
AI Agents
LLM-powered knowledge agents offer unprecedented capabilities across applications ranging from personal assistance to autonomous research and data analysis. As these agents grow in complexity and ubiquity, their ability to solve problems and plan globally becomes key to the user experience.
SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks
Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance
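The fast-and-slow-thinking idea behind SwiftSage can be sketched as a two-tier control loop: a cheap reactive policy acts when it can, and a deliberate planner is invoked only when the fast path stalls. Every function here is a hypothetical stand-in (the planner would be an LLM call in practice), not the SwiftSage implementation.

```python
def fast_policy(state):
    # System-1 path: act directly if the goal object is already visible.
    if state["goal"] in state["visible"]:
        return f"grab {state['goal']}"
    return None  # fast path has no confident action

def slow_planner(state):
    # System-2 path: deliberate multi-step plan (stand-in for an LLM call).
    return [f"search for {state['goal']}", f"grab {state['goal']}"]

def act(state):
    action = fast_policy(state)
    if action is not None:
        return [action]         # cheap, reactive
    return slow_planner(state)  # slow, deliberate

print(act({"goal": "key", "visible": ["key", "door"]}))  # ['grab key']
print(act({"goal": "key", "visible": ["door"]}))         # ['search for key', 'grab key']
```

The design point is cost: the expensive planner runs only on the minority of steps where reactive behavior fails, which is what makes long interactive tasks tractable.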
Human-AI Collaboration
The intersection of human intelligence and artificial intelligence, particularly in the context of Large Language Models (LLMs), presents a landscape rich with opportunities and challenges. Understanding the dynamics of Human-AI collaboration is essential for leveraging the capabilities of LLMs while addressing potential risks.
Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty
How Far Are Large Language Models From Agents with Theory-of-Mind?
Refining Language Models with Compositional Explanations
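One concrete risk the work above studies is over-reliance on answers whose confidence the model never expresses. A minimal reliance policy, with made-up answers and confidence scores purely for illustration, might look like:

```python
def decide(answer, confidence, threshold=0.8):
    """Accept the model's answer only when its expressed confidence
    clears the threshold; otherwise escalate to a human."""
    if confidence >= threshold:
        return answer
    return "defer to human"

print(decide("Paris", 0.95))  # Paris
print(decide("Paris", 0.40))  # defer to human
```

Such a policy is only as good as the confidence signal itself, which is why calibrated uncertainty expression is a prerequisite for safe human-AI collaboration.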
Continuous Learning
The concept of continuous learning represents a pivotal advancement in the development and application of Large Language Models (LLMs), addressing the need for these models to adapt and evolve in response to ever-changing data landscapes. This area of research is particularly crucial in ensuring that LLMs remain relevant, accurate, and efficient in real-world applications.
Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora
On Continual Model Refinement in Out-of-Distribution Data Streams
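Continual adaptation on a data stream is often paired with experience replay to guard against forgetting earlier corpora. The sketch below stands in for that loop with a running token-frequency table as the "model"; a real system would fine-tune an LLM instead, and all names here are illustrative.

```python
import random

def continual_update(model, new_batch, replay_buffer, replay_k=2, seed=0):
    """Update the model on new data mixed with replayed old examples."""
    rng = random.Random(seed)
    replayed = rng.sample(replay_buffer, min(replay_k, len(replay_buffer)))
    for example in new_batch + replayed:
        for token in example.split():
            model[token] = model.get(token, 0) + 1
    replay_buffer.extend(new_batch)  # remember this batch for future replay
    return model

model, buffer = {}, []
continual_update(model, ["covid vaccine news"], buffer)   # emerging corpus 1
continual_update(model, ["election results"], buffer)     # emerging corpus 2
print(model["covid"])  # 2 — old-corpus tokens are revisited via replay
```

Mixing a few replayed examples into every update keeps earlier distributions represented while the model tracks the emerging stream.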