

A nuanced view of bias in language models

Translated into language models, statistical bias corresponds to how accurately the model predicts a text relative to what we would expect, and variance corresponds to how consistently its predictions match that expectation. Statistical bias therefore says something about how well the model writes from the data it was trained on, not about how well it describes the world we live in!
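To make the statistical terminology concrete, here is the standard bias-variance decomposition from statistics (a general textbook result written in generic notation, not a formula quoted from the article). For a quantity the model should predict, written y = f(x) + \varepsilon with noise \varepsilon of mean zero and variance \sigma^2, and a model prediction \hat{y} independent of the noise, the expected squared error splits as

\[
\mathbb{E}\big[(y - \hat{y})^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}[\hat{y}]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathrm{Var}(\hat{y})}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}} .
\]

In these terms, bias measures how far the model's average prediction lands from the expected text, and variance measures how much its individual predictions scatter around that average, which is the distinction the quote above is drawing.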

Shared by Torrey Trust

  • When people write and talk about bias in language models like ChatGPT, they usually mean the mismatch between what the model writes and what we would like it to write (the ideal world in the eyes of the individual?).
  • Language models tend to reproduce the inequalities present in their training datasets, thus perpetuating, for example, stereotypes and discrimination. If the data contains biases, the models are likely to do the same.

