Helium’s news bias wheel is a simple, empirical measure of a news source’s bias, powered by an AI (no human input) that identifies certain types of language across a large number of articles.
Using zero-shot learning, Helium probabilistically classifies news according to the following questions:
Do articles use emotionally charged language (trying to appeal to primitive emotions) instead of objective, descriptive reporting?
Do articles use prescriptive language (trying to disguise opinions as facts) as opposed to descriptive, epistemically humble language?
Do articles beg the question (selectively push a certain answer/narrative) as opposed to neutral, truth-seeking journalism?
Do articles appeal to authority/institutions as opposed to reason, facts, logic, and primary sources?
How often does a source publish about political topics? While articles about political topics aren’t biased per se, a higher proportion can indicate politicization rather than factual reporting.
Do articles use subjective, relativistic language as opposed to objective, balanced, fact-based language that acknowledges uncertainty/perspective?
Do articles use opinionated language, as opposed to informative/objective reporting?
Do articles use fearful and potentially manipulative language, as opposed to neutral/other emotions?
Do articles oversimplify nuance into reductionist categories as opposed to factful, context-specific language?
Do articles gossip about people as opposed to report impersonal/objective information?
Do articles use immature language as opposed to mature, truth-seeking reporting?
Do articles report on events themselves, or on how other people react/respond to events?
Do articles frame people as victims, as opposed to discussing events and responsibilities?
Do articles use circular reasoning as opposed to logical/deductive/inductive reasoning?
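The pipeline behind the questions above can be sketched in a few lines: each dimension of the wheel is a pair of competing labels, a zero-shot classifier scores every article against that pair, and the per-article probabilities are averaged into a per-source profile. Helium’s actual model is not public, so the `classify` stub below (and its keyword heuristic) is purely illustrative; in practice it would be an NLI-style zero-shot model.

```python
from statistics import mean

# Hypothetical stand-in for a zero-shot classifier: returns the
# probability that the article matches the "biased" label rather than
# the "neutral" one. A real model would score the text against both
# labels and normalize; this stub just counts a few charged words.
def classify(article: str, biased_label: str, neutral_label: str) -> float:
    charged = {"outrage", "disaster", "shameful", "terrifying"}
    words = article.lower().split()
    hits = sum(w.strip(".,!?") in charged for w in words)
    return min(1.0, hits / 3)  # crude probability in [0, 1]

# Each wheel dimension is a pair of competing candidate labels
# (a small subset of the questions above, for illustration).
DIMENSIONS = {
    "emotionally charged": ("emotionally charged language", "objective reporting"),
    "prescriptive": ("prescriptive language", "descriptive language"),
    "appeal to authority": ("appeal to authority", "appeal to primary sources"),
}

def bias_profile(articles: list[str]) -> dict[str, float]:
    """Average each dimension's probability over all of a source's articles."""
    return {
        dim: mean(classify(a, pos, neg) for a in articles)
        for dim, (pos, neg) in DIMENSIONS.items()
    }

articles = [
    "Officials released the quarterly budget report on Tuesday.",
    "Shameful disaster sparks outrage as terrifying scenes unfold.",
]
profile = bias_profile(articles)
```

The resulting `profile` maps each dimension to a score in [0, 1]; averaging over a large article sample is what makes the measure empirical rather than a judgment about any single story.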