Hosted on MSN, 5 months ago

Study shows that LLMs could maliciously be used to poison biomedical knowledge graphs

"We suspect that these models can potentially generate malicious content that undermines medical knowledge graphs (KGs). We particularly aimed to investigate whether or not these models can be ..."