# llm-security

Here is 1 public repository matching this topic...

Performs safety checks on the inputs and outputs of generative large language models using classification and sensitive-word detection, so that risky content is identified as early as possible.

  • Updated Sep 9, 2024
  • Java
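The repository description outlines a two-part check applied to both prompts and model responses: a classifier plus sensitive-word matching. Below is a minimal, hypothetical Java sketch of the sensitive-word half of that idea; the class name, word list, and method are illustrative assumptions, not the repository's actual API.

```java
import java.util.List;
import java.util.Locale;

// Hypothetical sketch: scan LLM input/output text against a sensitive-word
// list before passing it along. A real system would also run a trained
// classifier over the same text, as the repository description suggests.
public class SensitiveWordFilter {

    private final List<String> sensitiveWords;

    public SensitiveWordFilter(List<String> sensitiveWords) {
        this.sensitiveWords = sensitiveWords;
    }

    /** Returns true if the text contains any configured sensitive word. */
    public boolean isRisky(String text) {
        String normalized = text.toLowerCase(Locale.ROOT);
        for (String word : sensitiveWords) {
            if (normalized.contains(word.toLowerCase(Locale.ROOT))) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Example word list; in practice this would be a curated lexicon.
        SensitiveWordFilter filter =
                new SensitiveWordFilter(List.of("credit card number", "home address"));

        String modelOutput = "Sure, here is the user's home address ...";
        if (filter.isRisky(modelOutput)) {
            System.out.println("Blocked: output flagged as risky");
        } else {
            System.out.println(modelOutput);
        }
    }
}
```

Running the same check on the user's prompt before it reaches the model, and again on the model's response before it reaches the user, is what lets risky content be caught as early as possible.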
