Trustworthy Large Language Models: A Multifaceted Strategy for Truthfulness, Fairness, and Privacy Preservation

  • XU, Jianliang (CoPI)
  • Zhang, Xiangyu (PI)
  • Li, Qing (CoPI)
  • King, Irvin (CoPI)
  • Jia, Xiaohua (CoPI)
  • Yang, Yu (CoPI)

Project: Research project

Project Details

Description

Large Language Models (LLMs) have become a key technology, transforming online services in areas such as e-commerce, healthcare, and finance. The global market for LLMs is growing rapidly, but these models still face challenges that undermine their reliability and trustworthiness: generating false or misleading information, exhibiting biases that treat certain groups unfairly, and posing privacy risks when handling sensitive data. This project aims to tackle these problems by developing new methods to improve the truthfulness, fairness, and privacy of LLMs. The researchers plan to create tools that help LLMs generate more accurate responses, avoid harmful biases, and protect user privacy. The goal is to integrate these improvements into one unified system, producing LLMs that are more reliable and ethically sound. The results could transform AI applications, boosting trust and making these models safer for use in critical sectors.
Status: Active
Effective start/end date: 30/06/25 – 29/06/28
